How to track token usage

Prerequisites

This guide assumes familiarity with LLMs.

This notebook goes over how to track token usage for specific LLM calls. Note that token usage reporting is only implemented by some providers, including OpenAI.

Here's an example of tracking token usage for a single LLM call via a callback:

npm install @langchain/openai @langchain/core
import { OpenAI } from "@langchain/openai";

const llm = new OpenAI({
  model: "gpt-3.5-turbo-instruct",
  callbacks: [
    {
      handleLLMEnd(output) {
        console.log(JSON.stringify(output, null, 2));
      },
    },
  ],
});

await llm.invoke("Tell me a joke.");

/*
  {
    "generations": [
      [
        {
          "text": "\n\nWhy don't scientists trust atoms?\n\nBecause they make up everything.",
          "generationInfo": {
            "finishReason": "stop",
            "logprobs": null
          }
        }
      ]
    ],
    "llmOutput": {
      "tokenUsage": {
        "completionTokens": 14,
        "promptTokens": 5,
        "totalTokens": 19
      }
    }
  }
*/

API Reference: OpenAI from @langchain/openai

If this model is passed to a chain or agent that calls it multiple times, it will log an output each time.
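
If you want an aggregate count rather than a log per call, you can accumulate the usage numbers inside the callback instead. Below is a minimal sketch that sums totalTokens across calls; it assumes the provider populates llmOutput.tokenUsage as OpenAI does in the output above:

import { OpenAI } from "@langchain/openai";

// Running total, updated by the callback after each LLM call.
let totalTokens = 0;

const llm = new OpenAI({
  model: "gpt-3.5-turbo-instruct",
  callbacks: [
    {
      handleLLMEnd(output) {
        // llmOutput is provider-specific; OpenAI reports tokenUsage here.
        totalTokens += output.llmOutput?.tokenUsage?.totalTokens ?? 0;
      },
    },
  ],
});

await llm.invoke("Tell me a joke.");
await llm.invoke("Tell me another joke.");

console.log(`Total tokens used: ${totalTokens}`);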

Next steps

You've now seen how to get token usage for supported LLM providers.

Next, check out the other how-to guides in this section, like how to implement your own custom LLM.

