# Python SDK: LangChain Integration
The Keito Python SDK includes a LangChain callback handler that automatically tracks time and token usage for every chain invocation.
## Setup

```python
from keito import Keito
from keito.integrations.langchain import KeitoCallbackHandler

client = Keito()
handler = KeitoCallbackHandler(
    client=client,
    project_id="prj_abc",
    task_id="tsk_001",
    agent_id="langchain-agent-01",
)
```
## Usage
Pass the handler as a callback to any LangChain chain or agent:
```python
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate

llm = ChatAnthropic(model="claude-sonnet-4-6")
prompt = ChatPromptTemplate.from_template("Review this code: {code}")
chain = prompt | llm

# Automatically tracks time and token usage
result = chain.invoke(
    {"code": code_to_review},
    config={"callbacks": [handler]},
)
```
## What Gets Tracked
For each chain invocation, the handler creates:
- A time entry with `source: "agent"` and duration based on wall-clock time.
- An LLM expense with token counts from the LangChain usage metadata.

Both entries share the same `session_id` in their metadata for correlation.
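As an illustration, the two records for one invocation might look like the following; everything beyond `session_id`, `source`, and the token counts is a hypothetical simplification, not the SDK's actual record shape:

```python
# Hypothetical simplified records for a single chain invocation.
time_entry = {
    "source": "agent",
    "duration": 0.04,  # hours, measured from wall-clock time
    "metadata": {"session_id": "sess_123"},
}
llm_expense = {
    "input_tokens": 512,
    "output_tokens": 256,
    "metadata": {"session_id": "sess_123"},
}

def correlate(records):
    """Group records by the session_id stored in their metadata."""
    sessions = {}
    for record in records:
        sessions.setdefault(record["metadata"]["session_id"], []).append(record)
    return sessions

grouped = correlate([time_entry, llm_expense])
print(len(grouped["sess_123"]))  # → 2: both records belong to one session
```

Grouping on the shared `session_id` is how you re-associate an invocation's time entry with its expense after the fact.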
## Configuration
| Option | Type | Default | Description |
|---|---|---|---|
| `auto_expense` | `bool` | `True` | Automatically log LLM expenses |
| `min_duration` | `float` | `0.01` | Minimum hours to record (skip very short calls) |
| `notes_template` | `str` | `None` | Template for entry notes |
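Note that `min_duration` is expressed in hours, so the default of `0.01` corresponds to 36 seconds. A minimal sketch of that threshold semantics (the `should_record` helper is illustrative, not part of the SDK):

```python
DEFAULT_MIN_DURATION = 0.01  # hours, i.e. 36 seconds

def should_record(elapsed_seconds: float,
                  min_duration: float = DEFAULT_MIN_DURATION) -> bool:
    """Return True if a call ran long enough to produce a time entry."""
    return elapsed_seconds / 3600.0 >= min_duration

print(should_record(5))    # False: a 5-second call falls below the threshold
print(should_record(120))  # True: a 2-minute call is recorded
```

Set `min_duration=0.0` if you want even sub-second calls to produce time entries.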
## Multi-Step Agents
For LangChain agents with multiple tool calls, each step is tracked individually:
```python
from langchain.agents import AgentExecutor, create_tool_calling_agent

agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)

result = executor.invoke(
    {"input": "Analyse the Q4 sales data"},
    config={"callbacks": [handler]},
)
```
The handler creates one time entry for the full agent run and separate LLM expenses for each LLM call within the run.
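Because every expense from a run carries the same `session_id`, per-run token totals can be aggregated afterwards. A minimal sketch, using hypothetical simplified expense records for a run with three LLM calls:

```python
# Hypothetical simplified expenses from one agent run (three LLM calls).
expenses = [
    {"input_tokens": 800,  "output_tokens": 150, "metadata": {"session_id": "sess_7"}},
    {"input_tokens": 950,  "output_tokens": 300, "metadata": {"session_id": "sess_7"}},
    {"input_tokens": 1200, "output_tokens": 90,  "metadata": {"session_id": "sess_7"}},
]

def run_token_totals(expenses, session_id):
    """Sum input and output tokens across all expenses in one run."""
    matching = [e for e in expenses
                if e["metadata"]["session_id"] == session_id]
    return (sum(e["input_tokens"] for e in matching),
            sum(e["output_tokens"] for e in matching))

print(run_token_totals(expenses, "sess_7"))  # → (2950, 540)
```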
## Related
- Agent Integration — manual integration patterns
- CrewAI Integration — CrewAI task callback