Kafka Logs and Traces

Learn how to forward traces and logs from Kafka to Sentry via the OpenTelemetry Protocol (OTLP).

This guide shows you how to consume telemetry data (traces and logs) from Kafka topics and forward it to Sentry using the OpenTelemetry Collector with the Kafka Receiver.

The Kafka Receiver is useful when you have applications publishing OTLP-formatted telemetry data to Kafka topics, allowing the OpenTelemetry Collector to consume and forward this data to Sentry.
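For context, the publishing side is often another Collector that receives OTLP from applications and writes it to Kafka with the Kafka exporter. A minimal sketch (topic names are examples and must match what the receiver consumes; the per-signal `logs`/`traces` topic fields follow recent Collector Contrib releases):

```yaml
# Producer-side Collector: receive OTLP from apps, publish to Kafka.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  kafka:
    brokers:
      - localhost:9092
    logs:
      topic: otlp_logs
    traces:
      topic: otlp_spans

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [kafka]
    traces:
      receivers: [otlp]
      exporters: [kafka]
```

Any application that can produce OTLP-protobuf messages directly to the topics works just as well; the Collector-to-Collector setup is simply the most common arrangement.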

Before you begin, ensure you have:

  • A Kafka cluster with telemetry data being published to topics
  • Network access to your Kafka brokers
  • A Sentry project to send data to

The Kafka Receiver is included in both the OpenTelemetry Collector Core and Contrib distributions.

Download the latest binary from the OpenTelemetry Collector releases page.

You'll need your Sentry OTLP endpoint and authentication header. These can be found in your Sentry Project Settings under Client Keys (DSN) > OpenTelemetry (OTLP).

Your OTLP endpoint:

___OTLP_URL___

Your authentication header:

x-sentry-auth: sentry sentry_key=___PUBLIC_KEY___

Create a configuration file with the Kafka Receiver and the OTLP HTTP exporter configured to send telemetry to Sentry.

For additional configuration options, see the Kafka Receiver Documentation.

This configuration consumes both logs and traces from Kafka and forwards them to Sentry:

config.yaml
receivers:
  kafka:
    brokers:
      - localhost:9092
    logs:
      topic: otlp_logs
      encoding: otlp_proto
    traces:
      topic: otlp_spans
      encoding: otlp_proto

processors:
  batch:
    send_batch_size: 1024
    send_batch_max_size: 2048
    timeout: "1s"

exporters:
  otlphttp/sentry:
    endpoint: ___OTLP_URL___
    headers:
      x-sentry-auth: "sentry sentry_key=___PUBLIC_KEY___"
    compression: gzip
    encoding: proto

service:
  pipelines:
    logs:
      receivers:
        - kafka
      processors:
        - batch
      exporters:
        - otlphttp/sentry
    traces:
      receivers:
        - kafka
      processors:
        - batch
      exporters:
        - otlphttp/sentry
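With the configuration saved as config.yaml, start the Collector against it (the binary name depends on which distribution you downloaded):

```shell
# Contrib distribution; use `otelcol` instead if you run the core distribution
./otelcol-contrib --config config.yaml
```

The Collector logs its active pipelines on startup, which is a quick way to confirm the Kafka receiver and Sentry exporter were wired up as expected.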

You can consume from multiple topics using regex patterns:

config.yaml
receivers:
  kafka:
    brokers:
      - localhost:9092
    logs:
      topic: "^logs-.*"
      exclude_topic: "^logs-(test|dev)$"
      encoding: otlp_proto
    traces:
      topic: "^traces-.*"
      encoding: otlp_proto

processors:
  batch:

exporters:
  otlphttp/sentry:
    endpoint: ___OTLP_URL___
    headers:
      x-sentry-auth: "sentry sentry_key=___PUBLIC_KEY___"
    compression: gzip
    encoding: proto

service:
  pipelines:
    logs:
      receivers:
        - kafka
      processors:
        - batch
      exporters:
        - otlphttp/sentry
    traces:
      receivers:
        - kafka
      processors:
        - batch
      exporters:
        - otlphttp/sentry

When traces are published to Kafka using the Kafka Exporter with include_metadata_keys configured, the Kafka Receiver automatically propagates Kafka message headers as request metadata throughout the pipeline. This preserves trace context information, allowing you to maintain distributed trace continuity across services that communicate via Kafka.
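On the publishing side, this pairs with a Kafka exporter configured roughly as follows (a sketch; the metadata key names are examples, and you should check the Kafka Exporter documentation for your Collector version):

```yaml
exporters:
  kafka:
    brokers:
      - localhost:9092
    traces:
      topic: otlp_spans
    # Copy these client metadata keys onto Kafka message headers,
    # so the receiver can pick them up downstream (example keys).
    include_metadata_keys:
      - tenant_id
      - x-request-id
```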

To extract specific headers and attach them as resource attributes, use the header_extraction configuration:

receivers:
  kafka:
    brokers:
      - localhost:9092
    traces:
      topic: otlp_spans
      encoding: otlp_proto
    header_extraction:
      extract_headers: true
      headers: ["traceparent", "tracestate"]

The Kafka Receiver supports various encodings for different signal types:

All signals (logs, traces):

  • otlp_proto (default): OTLP Protobuf format
  • otlp_json: OTLP JSON format

Traces only:

  • jaeger_proto: Jaeger Protobuf format
  • jaeger_json: Jaeger JSON format
  • zipkin_proto: Zipkin Protobuf format
  • zipkin_json: Zipkin JSON format

Logs only:

  • raw: Raw bytes as log body
  • text: Text decoded as log body
  • json: JSON decoded as log body
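The non-OTLP log encodings are useful when applications write plain log lines to Kafka rather than OTLP payloads. For example, a sketch that ingests JSON-formatted log messages (the topic name is an assumption):

```yaml
receivers:
  kafka:
    brokers:
      - localhost:9092
    logs:
      topic: app-json-logs
      # Each Kafka message is decoded as JSON and becomes the log body
      encoding: json
```

Make sure the encoding matches what producers actually write; an OTLP-protobuf topic read with `json` (or vice versa) will fail to decode.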

If telemetry isn't showing up in Sentry, check the following:

  • Verify the Kafka broker addresses are correct and accessible
  • Ensure the topic names match the topics where telemetry data is being published
  • Check that the encoding matches the format of data in your Kafka topics
  • If using authentication, verify your credentials and SASL mechanism
  • Confirm the consumer group has permissions to read from the configured topics
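To confirm that data is actually reaching a topic, you can peek at it with Kafka's standard console consumer (shipped with Apache Kafka; adjust the broker address and topic to your setup):

```shell
# Read one message from the traces topic;
# OTLP-protobuf payloads will render as binary, which is expected
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic otlp_spans --from-beginning --max-messages 1
```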
