Functionality related to Google Cloud Platform
Chat models
Gemini Models
Access Gemini models such as gemini-pro and gemini-pro-vision through the ChatGoogleGenerativeAI class, or, if using VertexAI, via the ChatVertexAI class.
GenAI
Install the @langchain/google-genai package with your preferred package manager (npm, Yarn, or pnpm):
npm install @langchain/google-genai
yarn add @langchain/google-genai
pnpm add @langchain/google-genai
Configure your API key.
export GOOGLE_API_KEY=your-api-key
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
const model = new ChatGoogleGenerativeAI({
  model: "gemini-pro",
  maxOutputTokens: 2048,
});
// Batch and stream are also supported
const res = await model.invoke([
  [
    "human",
    "What would be a good company name for a company that makes colorful socks?",
  ],
]);
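As the comment above notes, streaming works through the standard Runnable interface shared by all LangChain chat models. A minimal sketch (the prompt string is illustrative):
// Stream the response and log each chunk as it arrives.
const stream = await model.stream(
  "Write a short poem about colorful socks."
);
for await (const chunk of stream) {
  console.log(chunk.content);
}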
Gemini vision models support image inputs when providing a single human message. For example:
import fs from "node:fs";
import { HumanMessage } from "@langchain/core/messages";

const visionModel = new ChatGoogleGenerativeAI({
  model: "gemini-pro-vision",
  maxOutputTokens: 2048,
});

// Read the image and encode it as base64.
const image = fs.readFileSync("./hotdog.jpg").toString("base64");
const input2 = [
  new HumanMessage({
    content: [
      {
        type: "text",
        text: "Describe the following image.",
      },
      {
        type: "image_url",
        image_url: `data:image/jpeg;base64,${image}`,
      },
    ],
  }),
];
const res = await visionModel.invoke(input2);
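The result is a standard chat message; its content field holds the model's description of the image:
console.log(res.content);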
See the @langchain/google-genai integration docs for more details.
VertexAI
Install the @langchain/google-vertexai package with your preferred package manager (npm, Yarn, or pnpm):
npm install @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
Then, you'll need to add your service account credentials, either directly as a GOOGLE_VERTEX_AI_WEB_CREDENTIALS environment variable:
GOOGLE_VERTEX_AI_WEB_CREDENTIALS={"type":"service_account","project_id":"YOUR_PROJECT-12345",...}
or as a file path:
GOOGLE_VERTEX_AI_WEB_CREDENTIALS_FILE=/path/to/your/credentials.json
import { ChatVertexAI } from "@langchain/google-vertexai";
// Or, if using the web entrypoint:
// import { ChatVertexAI } from "@langchain/google-vertexai-web";
const model = new ChatVertexAI({
  model: "gemini-1.0-pro",
  maxOutputTokens: 2048,
});
// Batch and stream are also supported
const res = await model.invoke([
  [
    "human",
    "What would be a good company name for a company that makes colorful socks?",
  ],
]);
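Batch runs several inputs through the model in parallel, again via the standard Runnable interface. A minimal sketch (the prompts are illustrative):
// Send multiple prompts at once; results come back in the same order.
const results = await model.batch([
  "Name a company that makes colorful socks.",
  "Name a company that makes colorful hats.",
]);
console.log(results.map((r) => r.content));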
Gemini vision models support image inputs when providing a single human message. For example:
import fs from "node:fs";
import { HumanMessage } from "@langchain/core/messages";

const visionModel = new ChatVertexAI({
  model: "gemini-pro-vision",
  maxOutputTokens: 2048,
});

// Read the image and encode it as base64.
const image = fs.readFileSync("./hotdog.png").toString("base64");
const input2 = [
  new HumanMessage({
    content: [
      {
        type: "text",
        text: "Describe the following image.",
      },
      {
        type: "image_url",
        image_url: `data:image/png;base64,${image}`,
      },
    ],
  }),
];
const res = await visionModel.invoke(input2);
See the @langchain/google-vertexai integration docs for more details.
The value of image_url must be a base64-encoded image provided as a data URL (e.g., data:image/png;base64,abcd124).
Vertex AI (Legacy)
Vector Store
Vertex AI Vector Search
Vertex AI Vector Search, formerly known as Vertex AI Matching Engine, provides a high-scale, low-latency vector database. This kind of vector database is commonly referred to as a vector similarity-matching or approximate nearest neighbor (ANN) service.
import { MatchingEngine } from "langchain/vectorstores/googlevertexai";
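As a rough sketch of how the store might be wired up: the embeddings class, GCS docstore, bucket name, and index/endpoint IDs below are placeholders, and the import paths and option names should be verified against the MatchingEngine integration docs.
import { MatchingEngine } from "langchain/vectorstores/googlevertexai";
import { GoogleVertexAIEmbeddings } from "langchain/embeddings/googlevertexai";
import { GoogleCloudStorageDocstore } from "langchain/stores/doc/gcs";
import { Document } from "langchain/document";

// Placeholder resources: substitute your own bucket and index/endpoint IDs.
const embeddings = new GoogleVertexAIEmbeddings();
const store = new GoogleCloudStorageDocstore({ bucket: "YOUR_BUCKET" });
const engine = new MatchingEngine(embeddings, {
  index: "YOUR_INDEX_ID",
  indexEndpoint: "YOUR_ENDPOINT_ID",
  docstore: store,
});

// Documents added here are embedded and upserted into the index.
await engine.addDocuments([new Document({ pageContent: "Colorful socks" })]);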
Tools
Google Search
- Set up a Custom Search Engine, following these instructions
- Get an API key and Custom Search Engine ID from the previous step, and set them as environment variables GOOGLE_API_KEY and GOOGLE_CSE_ID respectively
A GoogleCustomSearch utility wraps this API. To import this utility:
import { GoogleCustomSearch } from "langchain/tools";
This wrapper can be loaded as a Tool for use with an agent:
const tools = [new GoogleCustomSearch({})];
// Pass this variable into your agent.
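The tool can also be invoked directly, outside of an agent, through the standard Runnable interface (the query string is illustrative):
// Runs a search and returns the result snippets as a string.
const result = await tools[0].invoke("What is LangChain?");
console.log(result);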