Adding Custom Spans

Add custom instrumentation for visibility beyond auto-instrumentation and set up alerts.

You've got your Sentry SDK auto-instrumentation running. Now what?

Auto-instrumentation captures HTTP, database, and framework operations. But it can't see your business logic, third-party APIs that lack auto-instrumentation, or background jobs. This guide shows you where to add custom spans to fill those gaps.

The basic pattern:

```javascript
Sentry.startSpan(
  { name: "operation-name", op: "category" },
  async (span) => {
    span.setAttribute("key", value);
    // ... your code ...
  },
);
```

Numeric attributes become metrics you can aggregate with sum(), avg(), and p90() in Trace Explorer.

Start with these five areas and you'll have visibility into the operations that matter most.

Track the full journey through critical paths. When checkout is slow, you need to know which step is responsible.

```javascript
Sentry.startSpan(
  { name: "checkout-flow", op: "user.action" },
  async (span) => {
    span.setAttribute("cart.itemCount", 3);
    span.setAttribute("user.tier", "premium");

    await validateCart();
    await processPayment();
    await createOrder();
  },
);
```

Query in Explore > Traces: span.op:user.action grouped by user.tier, visualize p90(span.duration).

Alert idea: p90(span.duration) > 10s for checkout flows.
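To see which step is responsible, wrap each step in its own child span — nested Sentry.startSpan calls automatically become children of the enclosing span, so the trace waterfall shows per-step timing. A sketch using the same hypothetical checkout helpers:

```javascript
// Each nested startSpan becomes a child of "checkout-flow",
// so the trace waterfall breaks the flow down step by step.
async function checkoutFlow() {
  return Sentry.startSpan(
    { name: "checkout-flow", op: "user.action" },
    async (span) => {
      span.setAttribute("cart.itemCount", 3);

      await Sentry.startSpan({ name: "validate-cart", op: "user.action" }, () =>
        validateCart(),
      );
      await Sentry.startSpan({ name: "process-payment", op: "user.action" }, () =>
        processPayment(),
      );
      await Sentry.startSpan({ name: "create-order", op: "user.action" }, () =>
        createOrder(),
      );
    },
  );
}
```

Now a slow checkout shows you immediately whether validation, payment, or order creation is the bottleneck.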

Measure dependencies you don't control. They're often the source of slowdowns.

```javascript
Sentry.startSpan(
  { name: "shipping-rates-api", op: "http.client" },
  async (span) => {
    span.setAttribute("http.url", "api.shipper.com/rates");
    span.setAttribute("request.itemCount", items.length);

    const start = Date.now();
    const response = await fetch("https://api.shipper.com/rates");

    span.setAttribute("http.status_code", response.status);
    span.setAttribute("response.timeMs", Date.now() - start);

    return response.json();
  },
);
```

Query in Explore > Traces: span.op:http.client response.timeMs:>2000 to find slow external calls.

Alert idea: p95(span.duration) > 3s where http.url contains your critical dependencies.
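If you call several external APIs, you can factor this pattern into a small wrapper so every outbound call gets the same attributes and stays queryable the same way. A sketch — instrumentedFetch is a hypothetical helper, not a Sentry API:

```javascript
// Hypothetical helper: wrap any fetch call in a span with a
// consistent set of attributes for filtering in Trace Explorer.
async function instrumentedFetch(name, url, options) {
  return Sentry.startSpan({ name, op: "http.client" }, async (span) => {
    span.setAttribute("http.url", url);

    const start = Date.now();
    const response = await fetch(url, options);

    span.setAttribute("http.status_code", response.status);
    span.setAttribute("response.timeMs", Date.now() - start);
    return response;
  });
}

// Usage:
// const rates = await instrumentedFetch("shipping-rates-api", "https://api.shipper.com/rates");
```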

Auto-instrumentation catches queries, but custom spans let you add context that explains why a query matters.

```javascript
Sentry.startSpan(
  { name: "load-user-dashboard", op: "db.query" },
  async (span) => {
    span.setAttribute("db.system", "postgres");
    span.setAttribute("query.type", "aggregation");
    span.setAttribute("query.dateRange", "30d");

    const results = await db.query(dashboardQuery);
    span.setAttribute("result.rowCount", results.length);

    return results;
  },
);
```

Why this matters: Without these attributes, you see "a database query took 2 seconds." With them, you know it was aggregating 30 days of data and returned 50,000 rows. That's actionable.

Query ideas in Explore > Traces:

  • "Which aggregation queries are slowest?" Group by query.type, sort by p90(span.duration)
  • "Does date range affect performance?" Filter by name, group by query.dateRange
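The same attribute pattern generalizes across queries. One option is a small wrapper that tags any query with context attributes up front and records the row count after it returns — a sketch, where trackedQuery is a hypothetical helper:

```javascript
// Hypothetical helper: run any query inside a db.query span,
// setting the caller's context attributes before running it
// and result.rowCount after it returns.
async function trackedQuery(name, attributes, runQuery) {
  return Sentry.startSpan({ name, op: "db.query" }, async (span) => {
    for (const [key, value] of Object.entries(attributes)) {
      span.setAttribute(key, value);
    }
    const results = await runQuery();
    span.setAttribute("result.rowCount", results.length);
    return results;
  });
}

// Usage:
// const rows = await trackedQuery(
//   "load-user-dashboard",
//   { "db.system": "postgres", "query.type": "aggregation", "query.dateRange": "30d" },
//   () => db.query(dashboardQuery),
// );
```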

Jobs run outside of request context. Custom spans make them visible.

```javascript
async function processEmailDigest(job) {
  return Sentry.startSpan(
    { name: `job:${job.type}`, op: "queue.process" },
    async (span) => {
      span.setAttribute("job.id", job.id);
      span.setAttribute("job.type", "email-digest");
      span.setAttribute("queue.name", "notifications");

      const users = await getDigestRecipients();
      span.setAttribute("job.recipientCount", users.length);

      for (const user of users) {
        await sendDigest(user);
      }

      span.setAttribute("job.status", "completed");
    },
  );
}
```

Query in Explore > Traces: span.op:queue.process grouped by job.type, visualize p90(span.duration).

Alert idea: p90(span.duration) > 60s for queue processing.
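Failures are worth tagging too. If the callback throws, Sentry.startSpan finishes the span in an errored state; you can additionally set a job.status attribute before rethrowing so you can group completed vs. failed runs in Trace Explorer. A sketch, with a hypothetical runJob helper standing in for the actual work:

```javascript
async function processJob(job) {
  return Sentry.startSpan(
    { name: `job:${job.type}`, op: "queue.process" },
    async (span) => {
      span.setAttribute("job.id", job.id);
      try {
        await runJob(job); // hypothetical: the actual job body
        span.setAttribute("job.status", "completed");
      } catch (err) {
        span.setAttribute("job.status", "failed");
        throw err; // rethrow so the error is still reported
      }
    },
  );
}
```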

For AI workloads, use Sentry Agent Monitoring instead of manual instrumentation when possible. It automatically captures agent workflows, tool calls, and token usage.

If you're not using a supported framework or need custom attributes:

```javascript
Sentry.startSpan(
  { name: "generate-summary", op: "ai.inference" },
  async (span) => {
    span.setAttribute("ai.model", "gpt-4");
    span.setAttribute("ai.feature", "document-summary");

    const response = await openai.chat.completions.create({...});

    span.setAttribute("ai.tokens.total", response.usage.total_tokens);
    return response;
  },
);
```

Alert idea: p95(span.duration) > 5s for AI inference.

| Category | op Value | Example Attributes |
| --- | --- | --- |
| User flows | user.action | cart.itemCount, user.tier |
| External APIs | http.client | http.url, response.timeMs |
| Database | db.query | query.type, result.rowCount |
| Background jobs | queue.process | job.type, job.id, queue.name |
| AI/LLM | ai.inference | ai.model, ai.tokens.total |

Explore the Trace Explorer product walkthrough guides to learn more about the Sentry interface and discover additional tips.
