PineconeStore

Pinecone is a vector database that helps power AI for some of the world’s best companies.

This guide provides a quick overview for getting started with Pinecone vector stores. For detailed documentation of all PineconeStore features and configurations, head to the API reference.

Overview

Integration details

| Class | Package | PY support | Package latest |
| --- | --- | --- | --- |
| PineconeStore | @langchain/pinecone | ✅ | NPM - Version |

Setup

To use Pinecone vector stores, you’ll need to create a Pinecone account, initialize an index, and install the @langchain/pinecone integration package. You’ll also want to install the official Pinecone SDK to initialize a client to pass into the PineconeStore instance.

This guide will also use OpenAI embeddings, which require you to install the @langchain/openai integration package. You can also use other supported embeddings models if you wish.

yarn add @langchain/pinecone @langchain/openai @langchain/core @pinecone-database/pinecone

Credentials

Sign up for a Pinecone account and create an index. Make sure the dimensions match those of the embeddings you want to use (the default is 1536 for OpenAI’s text-embedding-3-small). Once you’ve done this, set the PINECONE_INDEX, PINECONE_API_KEY, and (optionally) PINECONE_ENVIRONMENT environment variables:

process.env.PINECONE_API_KEY = "your-pinecone-api-key";
process.env.PINECONE_INDEX = "your-pinecone-index";

// Optional
process.env.PINECONE_ENVIRONMENT = "your-pinecone-environment";
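
If you haven’t already created an index in the Pinecone console, you can also create one from code. Below is a minimal sketch using the official SDK’s createIndex method; the serverless cloud and region shown are assumptions, so swap in whatever matches your account:

import { Pinecone } from "@pinecone-database/pinecone";

const client = new Pinecone(); // Reads PINECONE_API_KEY from the environment

// 1536 dimensions matches OpenAI's text-embedding-3-small.
await client.createIndex({
  name: process.env.PINECONE_INDEX!,
  dimension: 1536,
  metric: "cosine",
  spec: { serverless: { cloud: "aws", region: "us-east-1" } },
});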

If you are using OpenAI embeddings for this guide, you’ll need to set your OpenAI key as well:

process.env.OPENAI_API_KEY = "YOUR_API_KEY";

If you want to get automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

// process.env.LANGCHAIN_TRACING_V2="true"
// process.env.LANGCHAIN_API_KEY="your-api-key"

Instantiation

import { PineconeStore } from "@langchain/pinecone";
import { OpenAIEmbeddings } from "@langchain/openai";

import { Pinecone as PineconeClient } from "@pinecone-database/pinecone";

const embeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-small",
});

const pinecone = new PineconeClient();
// Will automatically read the PINECONE_API_KEY and PINECONE_ENVIRONMENT env vars
const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX!);

const vectorStore = await PineconeStore.fromExistingIndex(embeddings, {
  pineconeIndex,
  // Maximum number of batch requests to allow at once. Each batch is 1000 vectors.
  maxConcurrency: 5,
  // You can pass a namespace here too
  // namespace: "foo",
});
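
To sanity-check that the client is pointed at the right index, you can fetch its stats with the SDK’s describeIndexStats method (the fields in the comment are indicative of the response shape, not exhaustive):

const stats = await pineconeIndex.describeIndexStats();
// e.g. { namespaces: { ... }, dimension: 1536, totalRecordCount: ... }
console.log(stats);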

Manage vector store

Add items to vector store

import type { Document } from "@langchain/core/documents";

const document1: Document = {
  pageContent: "The powerhouse of the cell is the mitochondria",
  metadata: { source: "https://example.com" },
};

const document2: Document = {
  pageContent: "Buildings are made out of brick",
  metadata: { source: "https://example.com" },
};

const document3: Document = {
  pageContent: "Mitochondria are made out of lipids",
  metadata: { source: "https://example.com" },
};

const document4: Document = {
  pageContent: "The 2024 Olympics are in Paris",
  metadata: { source: "https://example.com" },
};

const documents = [document1, document2, document3, document4];

await vectorStore.addDocuments(documents, { ids: ["1", "2", "3", "4"] });
[ '1', '2', '3', '4' ]

Note: After adding documents, there is a slight delay before they become queryable.
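
If you need to block until newly added vectors are visible, one approach is to poll the index stats until the record count catches up. This is a rough sketch: the totalRecordCount field comes from recent SDK versions, and the retry and delay values are arbitrary choices.

const waitForRecordCount = async (expected: number, retries = 10) => {
  for (let i = 0; i < retries; i += 1) {
    const { totalRecordCount } = await pineconeIndex.describeIndexStats();
    if ((totalRecordCount ?? 0) >= expected) return;
    // Wait a second before checking again.
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
};

await waitForRecordCount(4);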

Delete items from vector store

await vectorStore.delete({ ids: ["4"] });
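
Beyond individual ids, the underlying Pinecone API can clear an entire namespace at once. Whether your installed @langchain/pinecone version exposes the deleteAll flag in exactly this way is an assumption worth verifying:

// Remove every vector in the (hypothetical) "foo" namespace.
await vectorStore.delete({ deleteAll: true, namespace: "foo" });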

Query vector store

Once your vector store has been created and the relevant documents have been added, you will most likely want to query it while running your chain or agent.

Query directly

Performing a simple similarity search can be done as follows:

// Optional filter
const filter = { source: "https://example.com" };

const similaritySearchResults = await vectorStore.similaritySearch(
  "biology",
  2,
  filter
);

for (const doc of similaritySearchResults) {
  console.log(`* ${doc.pageContent} [${JSON.stringify(doc.metadata, null)}]`);
}
* The powerhouse of the cell is the mitochondria [{"source":"https://example.com"}]
* Mitochondria are made out of lipids [{"source":"https://example.com"}]
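
Pinecone metadata filters also support MongoDB-style operators such as $eq, $ne, $gt, $lt, $in, and $nin, which pass through to the index unchanged. For example, to match any of several sources:

// Matches documents whose `source` metadata is one of the listed values.
const operatorFilter = { source: { $in: ["https://example.com"] } };

const operatorResults = await vectorStore.similaritySearch(
  "biology",
  2,
  operatorFilter
);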

If you want to execute a similarity search and receive the corresponding scores, you can run:

const similaritySearchWithScoreResults =
  await vectorStore.similaritySearchWithScore("biology", 2, filter);

for (const [doc, score] of similaritySearchWithScoreResults) {
  console.log(
    `* [SIM=${score.toFixed(3)}] ${doc.pageContent} [${JSON.stringify(
      doc.metadata
    )}]`
  );
}
* [SIM=0.165] The powerhouse of the cell is the mitochondria [{"source":"https://example.com"}]
* [SIM=0.148] Mitochondria are made out of lipids [{"source":"https://example.com"}]
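
If you want results that trade some relevance for diversity, PineconeStore also supports maximal marginal relevance (MMR) search; the fetchK value below (how many candidates to fetch before re-ranking) is an arbitrary choice:

const mmrResults = await vectorStore.maxMarginalRelevanceSearch("biology", {
  k: 2,
  // Fetch more candidates than k, then re-rank them for diversity.
  fetchK: 10,
});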

Query by turning into a retriever

You can also transform the vector store into a retriever for easier usage in your chains.

const retriever = vectorStore.asRetriever({
  // Optional filter
  filter: filter,
  k: 2,
});

await retriever.invoke("biology");
[
  Document {
    pageContent: 'The powerhouse of the cell is the mitochondria',
    metadata: { source: 'https://example.com' },
    id: undefined
  },
  Document {
    pageContent: 'Mitochondria are made out of lipids',
    metadata: { source: 'https://example.com' },
    id: undefined
  }
]

Usage for retrieval-augmented generation

For guides on how to use this vector store for retrieval-augmented generation (RAG), see the RAG tutorials and how-to guides in the LangChain documentation.
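
As a minimal sketch of how the retriever from above slots into a RAG chain (the prompt wording and the gpt-4o-mini model name are illustrative choices, not requirements):

import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";
import { ChatOpenAI } from "@langchain/openai";

const prompt = ChatPromptTemplate.fromTemplate(
  "Answer the question using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
);

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });

const ragChain = RunnableSequence.from([
  {
    // Retrieve documents and join their text into a single context string.
    context: async (input: { question: string }) => {
      const docs = await retriever.invoke(input.question);
      return docs.map((doc) => doc.pageContent).join("\n\n");
    },
    question: (input: { question: string }) => input.question,
  },
  prompt,
  llm,
  new StringOutputParser(),
]);

await ragChain.invoke({ question: "What is the powerhouse of the cell?" });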

API reference

For detailed documentation of all PineconeStore features and configurations, head to the API reference.

