OpenAI
Adds instrumentation for the OpenAI SDK.
Import name: Sentry.openAIIntegration
The openAIIntegration adds instrumentation for the openai SDK to capture spans by wrapping OpenAI SDK calls and recording LLM interactions.
This integration is enabled by default and automatically captures spans for OpenAI SDK calls. It requires Sentry SDK version 10.28.0 or higher.
To customize what data is captured (such as inputs and outputs), see the Options in the Configuration section.
Import name: Sentry.instrumentOpenAiClient
The instrumentOpenAiClient helper adds instrumentation for the openai SDK to capture spans by wrapping OpenAI SDK calls and recording LLM interactions with configurable input/output recording. You need to manually wrap your OpenAI client instance with this helper:
import OpenAI from "openai";

const openai = new OpenAI({
  // Warning: API key will be exposed in browser!
  apiKey: "your-api-key",
});

const client = Sentry.instrumentOpenAiClient(openai, {
  recordInputs: true,
  recordOutputs: true,
});

// Use the wrapped client instead of the original openai instance
const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});
To customize what data is captured (such as inputs and outputs), see the Options in the Configuration section.
The following options control what data is captured from OpenAI SDK calls:
recordInputs
Type: boolean (optional)
Records inputs to OpenAI SDK calls (such as prompts and messages). Defaults to true if sendDefaultPii is true.

recordOutputs
Type: boolean (optional)
Records outputs from OpenAI SDK calls (such as generated text and responses). Defaults to true if sendDefaultPii is true.
Usage
Using the openAIIntegration integration:
Sentry.init({
  dsn: "____PUBLIC_DSN____",
  // Tracing must be enabled for agent monitoring to work
  tracesSampleRate: 1.0,
  integrations: [
    Sentry.openAIIntegration({
      // your options here
    }),
  ],
});
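As a concrete sketch of filling in that options placeholder, the recordInputs and recordOutputs options described above can be set explicitly to capture prompts and responses even when sendDefaultPii is disabled:

```javascript
Sentry.init({
  dsn: "____PUBLIC_DSN____",
  tracesSampleRate: 1.0,
  integrations: [
    // Explicitly record prompts/messages and generated outputs,
    // independent of the sendDefaultPii setting.
    Sentry.openAIIntegration({
      recordInputs: true,
      recordOutputs: true,
    }),
  ],
});
```

Only set these to true when the captured prompts and completions are acceptable to store as telemetry, since they may contain user-provided data.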
Using the instrumentOpenAiClient helper:
const client = Sentry.instrumentOpenAiClient(openai, {
  // your options here
});
By default, tracing support is added to the following OpenAI SDK calls:
chat.completions.create() - Chat completion requests
responses.create() - Responses API requests
Streaming and non-streaming requests are automatically detected and handled appropriately.
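For the Responses API path, a minimal call through a wrapped client might look like the following sketch (the model name and input text are illustrative):

```javascript
// Assumes `client` is an OpenAI client wrapped with
// Sentry.instrumentOpenAiClient, as shown earlier.
const response = await client.responses.create({
  model: "gpt-4o",
  input: "Write a one-sentence summary of observability.",
});

// The SDK exposes the aggregated text of the response here.
console.log(response.output_text);
```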
When using OpenAI's streaming API, you must also pass stream_options: { include_usage: true } to receive token usage data. Without this option, OpenAI does not include prompt_tokens or completion_tokens in streamed responses, and Sentry will be unable to capture gen_ai.usage.input_tokens / gen_ai.usage.output_tokens on the resulting span. This is an OpenAI API behavior, not a Sentry limitation. See OpenAI docs on stream options.
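As a sketch of that requirement, a streamed chat completion that still reports token usage might look like this (model and prompt are illustrative; `client` is assumed to be a wrapped OpenAI client):

```javascript
const stream = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
  stream: true,
  // Required for token usage: OpenAI only emits a final chunk with
  // prompt_tokens / completion_tokens when this option is set.
  stream_options: { include_usage: true },
});

for await (const chunk of stream) {
  // The usage-bearing final chunk has an empty choices array,
  // so guard the delta access.
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```

With include_usage set, Sentry can read the final usage chunk and attach gen_ai.usage.input_tokens / gen_ai.usage.output_tokens to the span.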
Supported versions: openai: >=4.0.0 <7