OpenAI
Adds instrumentation for the OpenAI SDK.
Import name: Sentry.openAIIntegration
The openAIIntegration captures spans by wrapping OpenAI SDK calls and recording LLM interactions.
Enabled by default and automatically captures spans for OpenAI SDK calls. Requires Sentry SDK version 10.28.0 or higher.
To customize what data is captured (such as inputs and outputs), see the Options in the Configuration section.
The following options control what data is captured from OpenAI SDK calls:
recordInputs

Type: boolean (optional)

Records inputs to OpenAI SDK calls (such as prompts and messages).

Defaults to true if sendDefaultPii is true.

recordOutputs

Type: boolean (optional)

Records outputs from OpenAI SDK calls (such as generated text and responses).

Defaults to true if sendDefaultPii is true.
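As a sketch, the options above can be set explicitly to override the sendDefaultPii defaults. This assumes the option names recordInputs and recordOutputs; the values shown here are illustrative:

```javascript
Sentry.init({
  dsn: "____PUBLIC_DSN____",
  tracesSampleRate: 1.0,
  integrations: [
    Sentry.openAIIntegration({
      // Capture prompts/messages sent to the model
      recordInputs: true,
      // Do not capture generated text, even if sendDefaultPii is true
      recordOutputs: false,
    }),
  ],
});
```

Setting these explicitly decouples LLM data capture from the global sendDefaultPii setting, which also affects other PII such as IP addresses.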
Usage
Using the openAIIntegration:
```javascript
Sentry.init({
  dsn: "____PUBLIC_DSN____",
  // Tracing must be enabled for agent monitoring to work
  tracesSampleRate: 1.0,
  integrations: [
    Sentry.openAIIntegration({
      // your options here
    }),
  ],
});
```
By default, tracing support is added to the following OpenAI SDK calls:
- chat.completions.create() - Chat completion requests
- responses.create() - Responses API requests
Streaming and non-streaming requests are automatically detected and handled appropriately.
When using OpenAI's streaming API, you must also pass stream_options: { include_usage: true } to receive token usage data. Without this option, OpenAI does not include prompt_tokens or completion_tokens in streamed responses, and Sentry will be unable to capture gen_ai.usage.input_tokens / gen_ai.usage.output_tokens on the resulting span. This is an OpenAI API behavior, not a Sentry limitation. See OpenAI docs on stream options.
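A minimal sketch of a streamed request with usage reporting enabled, using the official openai Node package (the model name and prompt are placeholders; an API key is assumed to be configured):

```javascript
import OpenAI from "openai";

const client = new OpenAI();

const stream = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
  stream: true,
  // Without this, streamed responses omit token counts, so Sentry
  // cannot record gen_ai.usage.input_tokens / output_tokens.
  stream_options: { include_usage: true },
});

for await (const chunk of stream) {
  // When include_usage is set, the final chunk carries a `usage` object.
  if (chunk.usage) {
    console.log(chunk.usage.prompt_tokens, chunk.usage.completion_tokens);
  }
}
```

With the Sentry integration active, the wrapped call reads this usage object and attaches the token counts to the resulting span automatically; the loop above only illustrates where the data appears in the stream.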
Supported Versions
openai: >=4.0.0 <7