# Example: LangChain Agent
Copy this example into a Python file, install the dependencies, and set the environment variables listed below.
## What This Does
A LangChain agent that researches a topic and generates a summary, with automatic time and token tracking via Keito.
## Prerequisites
- Python 3.10+
- `pip install keito-python langchain-anthropic langchain-core`
- `KEITO_API_KEY`, `KEITO_ACCOUNT_ID`, and `ANTHROPIC_API_KEY` environment variables set
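The setup above as a shell fragment (the package names come from the prerequisites; the key values are placeholders, substitute your own):

```shell
# Install the SDK and LangChain dependencies
pip install keito-python langchain-anthropic langchain-core

# Credentials -- placeholder values, replace with your own keys
export KEITO_API_KEY="your-keito-api-key"
export KEITO_ACCOUNT_ID="your-account-id"
export ANTHROPIC_API_KEY="your-anthropic-api-key"
```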
## Full Code
```python
from keito import Keito
from keito.integrations.langchain import KeitoCallbackHandler
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Initialise Keito (reads the API key from the environment)
keito = Keito()

# Create the callback handler
handler = KeitoCallbackHandler(
    client=keito,
    project_id="prj_abc",
    task_id="tsk_research",
    agent_id="research-agent-01",
)

# Build the chain
llm = ChatAnthropic(model="claude-sonnet-4-6", temperature=0)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a research analyst. Provide concise, factual summaries."),
    ("user", "Research and summarise: {topic}"),
])
chain = prompt | llm | StrOutputParser()

# Run with automatic tracking
result = chain.invoke(
    {"topic": "AI agent billing best practices for consultancies"},
    config={"callbacks": [handler]},
)
print(result)
```
## How It Works
- Initialises the Keito client (reads API key from environment).
- Creates a `KeitoCallbackHandler` with project, task, and agent details.
- Builds a standard LangChain chain (prompt → LLM → output parser).
- Invokes the chain with the Keito handler as a callback.
- The handler automatically creates a time entry and LLM expense when the chain completes.
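Conceptually, the handler's lifecycle can be sketched as follows. This is a simplified stand-in, not Keito's actual implementation: the real handler plugs into LangChain's callback interface and calls the Keito API, but the shape of the run is the same — start a clock when the chain starts, accumulate token usage across LLM calls, and emit one time entry plus one expense when the chain ends:

```python
import time

class SketchHandler:
    """Simplified stand-in illustrating the KeitoCallbackHandler lifecycle."""

    def __init__(self, project_id, task_id, agent_id):
        self.project_id = project_id
        self.task_id = task_id
        self.agent_id = agent_id
        self.started_at = None
        self.tokens = 0

    def on_chain_start(self):
        # Start the clock when the chain run begins
        self.started_at = time.monotonic()

    def on_llm_end(self, input_tokens, output_tokens):
        # Accumulate token usage across every LLM call in the run
        self.tokens += input_tokens + output_tokens

    def on_chain_end(self):
        # Emit one time entry and one LLM expense for the completed run
        elapsed = time.monotonic() - self.started_at
        return {
            "time_entry": {"agent_id": self.agent_id, "seconds": elapsed},
            "llm_expense": {"tokens": self.tokens},
        }

handler = SketchHandler("prj_abc", "tsk_research", "research-agent-01")
handler.on_chain_start()
handler.on_llm_end(input_tokens=120, output_tokens=450)
record = handler.on_chain_end()
print(record["llm_expense"]["tokens"])  # 570
```

Because the token totals are accumulated per run, a single invocation with several internal LLM calls still produces exactly one time entry and one expense.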
## Customisation
- **Change the model:** Replace `claude-sonnet-4-6` with any supported model.
- **Add tools:** Convert to an agent with tools; the handler tracks all LLM calls within the agent run.
- **Batch processing:** Wrap the invocation in a loop to process multiple topics, each tracked separately.
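The batch pattern from the last bullet, sketched with a stand-in for the chain call (in the real script you would call `chain.invoke({"topic": topic}, config={"callbacks": [handler]})` from the full code above):

```python
def invoke_chain(topic):
    # Stand-in for: chain.invoke({"topic": topic}, config={"callbacks": [handler]})
    return f"summary of {topic}"

topics = [
    "AI agent billing best practices for consultancies",
    "Token-based pricing models",
]

results = {}
for topic in topics:
    # Each invocation is a separate chain run, so each is tracked separately
    results[topic] = invoke_chain(topic)

print(len(results))  # 2
```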
## Output in Keito
After running, you’ll see in the Keito web app:
- A time entry with a violet **Agent** badge, attributed to "research-agent-01"
- An LLM expense showing the token count and cost
- Both linked by a shared `session_id` in their metadata