Mastra

Learn how to export Mastra AI tracing to Sentry.

Mastra is a framework for building AI-powered applications and agents with a modern TypeScript stack. The Mastra Sentry Exporter sends tracing data to Sentry using OpenTelemetry semantic conventions, providing insights into model performance, token usage, and tool executions.

Install the Mastra Sentry exporter package:

npm install @mastra/sentry@beta

The Sentry exporter can automatically read configuration from environment variables:

import { SentryExporter } from "@mastra/sentry";

// Reads from SENTRY_DSN, SENTRY_ENVIRONMENT, SENTRY_RELEASE
const exporter = new SentryExporter();

You can also configure the exporter explicitly:

import { SentryExporter } from "@mastra/sentry";

const exporter = new SentryExporter({
  dsn: process.env.SENTRY_DSN,
  environment: "production",
  tracesSampleRate: 1.0,
  release: "1.0.0",
});
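
To send traces, the exporter must be registered with your Mastra instance. The sketch below shows one way this registration can look; the exact observability config shape varies across Mastra versions, and `serviceName`/`my-service` here are illustrative, so check the Mastra docs for your release:

```typescript
import { Mastra } from "@mastra/core";
import { SentryExporter } from "@mastra/sentry";

// Sketch only: the observability config shape may differ across
// Mastra versions; "my-service" is an illustrative service name.
export const mastra = new Mastra({
  observability: {
    configs: {
      sentry: {
        serviceName: "my-service",
        exporters: [new SentryExporter({ dsn: process.env.SENTRY_DSN })],
      },
    },
  },
});
```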
Span Mapping

Mastra automatically maps its span types to Sentry operations for proper visualization in Sentry's AI monitoring dashboards:

Mastra Span Type       Sentry Operation
AGENT_RUN              gen_ai.invoke_agent
MODEL_GENERATION       gen_ai.chat
TOOL_CALL              gen_ai.execute_tool
MCP_TOOL_CALL          gen_ai.execute_tool
WORKFLOW_RUN           workflow.run
WORKFLOW_STEP          workflow.step
WORKFLOW_CONDITIONAL   workflow.conditional
WORKFLOW_PARALLEL      workflow.parallel
WORKFLOW_LOOP          workflow.loop
PROCESSOR_RUN          ai.processor
GENERIC                ai.span

Note: MODEL_STEP and MODEL_CHUNK spans are automatically skipped to simplify trace hierarchy. Their data is aggregated into parent MODEL_GENERATION spans.
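
The mapping and skip behavior above can be sketched as a simple lookup. This is an illustrative sketch mirroring the table, not the exporter's actual internals:

```typescript
// Illustrative lookup mirroring the span-mapping table above.
const SPAN_TYPE_TO_OP: Record<string, string> = {
  AGENT_RUN: "gen_ai.invoke_agent",
  MODEL_GENERATION: "gen_ai.chat",
  TOOL_CALL: "gen_ai.execute_tool",
  MCP_TOOL_CALL: "gen_ai.execute_tool",
  WORKFLOW_RUN: "workflow.run",
  WORKFLOW_STEP: "workflow.step",
  WORKFLOW_CONDITIONAL: "workflow.conditional",
  WORKFLOW_PARALLEL: "workflow.parallel",
  WORKFLOW_LOOP: "workflow.loop",
  PROCESSOR_RUN: "ai.processor",
  GENERIC: "ai.span",
};

// MODEL_STEP and MODEL_CHUNK produce no Sentry span of their own.
const SKIPPED_SPAN_TYPES = new Set(["MODEL_STEP", "MODEL_CHUNK"]);

// Returns the Sentry operation for a span type, or null if skipped.
function sentryOpFor(spanType: string): string | null {
  if (SKIPPED_SPAN_TYPES.has(spanType)) return null;
  return SPAN_TYPE_TO_OP[spanType] ?? "ai.span";
}
```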

Captured Data

The Sentry exporter captures comprehensive trace data following OpenTelemetry semantic conventions:

Common attributes:

  • sentry.origin: auto.ai.mastra (identifies spans from Mastra)
  • ai.span.type: Mastra span type

Model generation spans:

  • gen_ai.operation.name: Operation name (e.g., chat)
  • gen_ai.system: Model provider (e.g., OpenAI, Anthropic)
  • gen_ai.request.model: Model identifier
  • gen_ai.request.messages: Input messages/prompts (JSON)
  • gen_ai.response.model: Response model
  • gen_ai.response.text: Output text
  • gen_ai.response.tool_calls: Tool calls made during generation
  • gen_ai.usage.input_tokens: Input token count
  • gen_ai.usage.output_tokens: Output token count
  • gen_ai.usage.total_tokens: Total tokens used
  • gen_ai.request.stream: Whether streaming was used
  • gen_ai.request.temperature: Temperature parameter
  • gen_ai.completion_start_time: Time to first token

Tool execution spans:

  • gen_ai.operation.name: execute_tool
  • gen_ai.tool.name: Tool identifier
  • gen_ai.tool.type: function
  • gen_ai.tool.call.id: Tool call ID
  • gen_ai.tool.input: Tool input parameters
  • gen_ai.tool.output: Tool output result
  • tool.success: Success flag

Agent spans:

  • gen_ai.operation.name: invoke_agent
  • gen_ai.agent.name: Agent identifier
  • gen_ai.pipeline.name: Agent name
  • gen_ai.agent.instructions: Agent instructions/system prompt
  • gen_ai.response.model: Model from child generation
  • gen_ai.response.text: Output from child generation
  • gen_ai.usage.*: Token usage aggregated from child spans
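
Aggregating `gen_ai.usage.*` from child spans amounts to summing the token counters onto the parent. The following is an illustrative sketch of that roll-up, not the exporter's actual code:

```typescript
// Illustrative sketch: sum gen_ai.usage.* attributes from child
// generation spans to produce the parent agent span's usage.
interface Usage {
  "gen_ai.usage.input_tokens": number;
  "gen_ai.usage.output_tokens": number;
  "gen_ai.usage.total_tokens": number;
}

function aggregateUsage(children: Partial<Usage>[]): Usage {
  // Missing counters on a child count as zero.
  const sum = (key: keyof Usage) =>
    children.reduce((acc, c) => acc + (c[key] ?? 0), 0);
  return {
    "gen_ai.usage.input_tokens": sum("gen_ai.usage.input_tokens"),
    "gen_ai.usage.output_tokens": sum("gen_ai.usage.output_tokens"),
    "gen_ai.usage.total_tokens": sum("gen_ai.usage.total_tokens"),
  };
}
```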

exportTracingEvent()

Exports a tracing event to Sentry. Handles SPAN_STARTED, SPAN_UPDATED, and SPAN_ENDED events.

await exporter.exportTracingEvent(event);

flush()

Force flushes any pending spans to Sentry without shutting down the exporter. Waits up to 2 seconds for pending data to be sent. This is useful in serverless environments where you need to ensure spans are exported before the runtime terminates.

await exporter.flush();
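
In a serverless function, a common pattern is to flush in a `finally` block so spans are sent even when the handler throws. The sketch below stubs the exporter and the agent call (`runAgent` is a hypothetical placeholder) so the pattern is self-contained:

```typescript
// Sketch of the serverless flush pattern. `exporter` stands in for the
// SentryExporter configured earlier; it is stubbed here for illustration.
const exporter = {
  flush: async (): Promise<void> => {
    // Real flush() waits up to 2 seconds for pending spans.
  },
};

// Hypothetical placeholder for your agent invocation.
async function runAgent(input: string): Promise<string> {
  return `handled: ${input}`;
}

async function handler(input: string): Promise<string> {
  try {
    return await runAgent(input);
  } finally {
    // Ensure spans reach Sentry before the runtime freezes or terminates.
    await exporter.flush();
  }
}
```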

shutdown()

Ends all active spans, clears internal state, and closes the Sentry connection. Waits up to 2 seconds for pending data to be sent.

await exporter.shutdown();

For complete documentation on using Mastra with Sentry, see the Mastra Sentry Exporter documentation.

Supported Versions

  • @mastra/sentry: >=1.0.0-beta.2