LangChain
Adds instrumentation for LangChain.
Import name: Sentry.langChainIntegration
The langChainIntegration adds instrumentation for LangChain: it automatically wraps LangChain operations to capture spans and records AI agent interactions with configurable input/output recording.
Enabled by default and automatically captures spans for LangChain SDK calls. Requires Sentry SDK version 10.28.0 or higher.
To customize what data is captured (such as inputs and outputs), see the Options in the Configuration section.
The following options control what data is captured from LangChain operations:
recordInputs

Type: boolean (optional)

Records inputs to LangChain operations (such as prompts and messages).

Defaults to true if sendDefaultPii is true.
recordOutputs

Type: boolean (optional)

Records outputs from LangChain operations (such as generated text and responses).

Defaults to true if sendDefaultPii is true.
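For example, these options can be set explicitly when registering the integration. A minimal sketch, assuming the options are named recordInputs and recordOutputs as in the SDK's other AI integrations:

```javascript
const Sentry = require("@sentry/node");

Sentry.init({
  dsn: "____PUBLIC_DSN____",
  tracesSampleRate: 1.0,
  sendDefaultPii: false,
  integrations: [
    Sentry.langChainIntegration({
      // Assumed option names: keep prompts/messages out of telemetry...
      recordInputs: false,
      // ...but still record generated text and responses
      recordOutputs: true,
    }),
  ],
});
```

Setting these explicitly overrides the sendDefaultPii-based defaults described above.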
Usage
Using the langChainIntegration:

```javascript
Sentry.init({
  dsn: "____PUBLIC_DSN____",
  // Tracing must be enabled for agent monitoring to work
  tracesSampleRate: 1.0,
  integrations: [
    Sentry.langChainIntegration({
      // your options here
    }),
  ],
});
```
By default, tracing support is added to the following LangChain SDK calls:
- Chat model invocations - Captures spans for chat model calls
- LLM invocations - Captures spans for LLM pipeline executions
- Chain executions - Captures spans for chain invocations
- Tool executions - Captures spans for tool calls
The integration automatically instruments the following LangChain runnable methods:
- invoke() - Single execution
- stream() - Streaming execution
- batch() - Batch execution
The automatic instrumentation supports the following LangChain provider packages:
- @langchain/anthropic
- @langchain/openai
- @langchain/google-genai
- @langchain/mistralai
- @langchain/google-vertexai
- @langchain/groq
Supported langchain versions: >=0.1.0 <2.0.0