Ollama

caution

You are currently on a page documenting the use of Ollama models as text completion models. Many popular models available on Ollama are chat completion models.

You may be looking for this page instead.

This will help you get started with Ollama text completion models (LLMs) using LangChain. For detailed documentation on Ollama features and configuration options, please refer to the API reference.

Overview​

Integration details​

Ollama allows you to run open-source large language models, such as Llama 3, locally.

Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. It optimizes setup and configuration details, including GPU usage.
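For reference, a Modelfile is a small text file that tells Ollama how to build a model. Below is a minimal, illustrative sketch; FROM, PARAMETER, and SYSTEM are standard Modelfile instructions, but see the Ollama documentation for the full syntax:

# Hypothetical Modelfile: build a custom model on top of llama3
FROM llama3
PARAMETER temperature 0.7
SYSTEM "You are a concise assistant."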

This example goes over how to use LangChain to interact with an Ollama-run Llama 3 instance (the model used in the code below). For a complete list of supported models and model variants, see the Ollama model library.

Class | Package | Local | Serializable | PY support | Package downloads | Package latest
Ollama | @langchain/ollama | βœ… | ❌ | βœ… | NPM - Downloads | NPM - Version

Setup​

To access Ollama models you’ll need to follow these instructions to install Ollama, and then install the @langchain/ollama integration package.
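For example, once Ollama is installed you can pull the model used in the examples below and start the local server from your terminal (assuming you want llama3; any model from the Ollama library works the same way):

ollama pull llama3
ollama serve

If you run the Ollama desktop app, the server may already be running for you.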

Credentials​

Ollama models run locally, so no API keys are required. If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

# export LANGCHAIN_TRACING_V2="true"
# export LANGCHAIN_API_KEY="your-api-key"

Installation​

The LangChain Ollama integration lives in the @langchain/ollama package:

yarn add @langchain/ollama @langchain/core
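If you prefer npm or pnpm over yarn, the equivalent commands are:

npm install @langchain/ollama @langchain/core

pnpm add @langchain/ollama @langchain/core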

Instantiation​

Now we can instantiate our model object and generate text completions:

import { Ollama } from "@langchain/ollama";

const llm = new Ollama({
  model: "llama3", // Default value
  temperature: 0,
  maxRetries: 2,
  // other params...
});
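
The constructor also accepts connection options. For example, if your Ollama server is not on the default http://localhost:11434, you can pass baseUrl; the host below is a placeholder, shown only as a sketch:

import { Ollama } from "@langchain/ollama";

// Sketch: pointing the client at a non-default Ollama endpoint.
// "http://my-ollama-host:11434" is a placeholder; baseUrl defaults to http://localhost:11434.
const remoteLlm = new Ollama({
  model: "llama3",
  baseUrl: "http://my-ollama-host:11434",
});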

Invocation​

const inputText = "Ollama is an AI company that ";

const completion = await llm.invoke(inputText);
console.log(completion);
I think you meant to say "Olivia" instead of "Ollama". Olivia is not a well-known AI company, but there are several other AI companies with similar names. Here are a few examples:

* Oliva AI: A startup that uses artificial intelligence to help businesses optimize their operations and improve customer experiences.
* Olivia Technologies: A company that develops AI-powered solutions for industries such as healthcare, finance, and education.
* Olivia.ai: A platform that uses AI to help businesses automate their workflows and improve productivity.

If you meant something else by "Ollama", please let me know and I'll do my best to help!
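
You can also stream the completion token by token instead of waiting for the full response. LangChain text-completion LLMs expose this through .stream(), which yields string chunks; a minimal sketch:

const stream = await llm.stream("Ollama is an AI company that ");

let streamed = "";
for await (const chunk of stream) {
  streamed += chunk; // each chunk is a string fragment of the completion
}
console.log(streamed);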

Chaining​

We can chain our completion model with a prompt template like so:

import { PromptTemplate } from "@langchain/core/prompts";

const prompt = PromptTemplate.fromTemplate(
  "How to say {input} in {output_language}:\n"
);

const chain = prompt.pipe(llm);
const result = await chain.invoke({
  output_language: "German",
  input: "I love programming.",
});
console.log(result);
A programmer's passion!

In German, you can express your love for programming with the following phrases:

1. Ich liebe Programmieren: This is a direct translation of "I love programming."
2. Programmieren ist meine Leidenschaft: This means "Programming is my passion."
3. Ich bin total verliebt in Programmieren: This translates to "I'm totally in love with programming."
4. Programmieren macht mich glΓΌcklich: This phrase means "Programming makes me happy" or "I'm joyful when programming."

If you want to be more casual, you can use:

1. Ich bin ein Programmier-Fan: This is a playful way to say "I'm a fan of programming."
2. Programmieren ist mein Ding: This translates to "Programming is my thing" or "I'm all about programming."

Remember that German has different forms for formal and informal speech, so adjust the phrases according to your relationship with the person you're speaking to!
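
Because the chain is a standard LangChain runnable, you can also run it over several inputs at once with .batch(); a short sketch reusing the chain defined above:

// Sketch: translating the same sentence into two languages in one call.
const results = await chain.batch([
  { output_language: "German", input: "Good morning." },
  { output_language: "French", input: "Good morning." },
]);
console.log(results.length); // one completion string per input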

Multimodal models​

Ollama supports open source multimodal models like LLaVA in versions 0.1.15 and up. You can bind base64 encoded image data to multimodal-capable models to use as context like this:

import { Ollama } from "@langchain/ollama";
import * as fs from "node:fs/promises";

const imageData = await fs.readFile("../../../../../examples/hotdog.jpg");

const model = new Ollama({
  model: "llava",
}).bind({
  images: [imageData.toString("base64")],
});

const res = await model.invoke("What's in this image?");
console.log(res);
 The image shows a hot dog placed inside what appears to be a bun that has been specially prepared to resemble a hot dog bun. This is an example of a creative or novelty food item, where the bread used for the bun looks similar to a cooked hot dog itself, playing on the name "hot dog." The image also shows the typical garnishes like ketchup and mustard on the side.

API reference​

For detailed documentation of all Ollama features and configuration options, head to the API reference.

