Writing for LLMs

Developers increasingly use AI assistants to help them integrate Sentry. We design our documentation to work well for both human readers and LLMs.

Our existing documentation principles naturally support LLM consumption:

  • Minimalist content helps LLMs extract relevant information without noise
  • Technical accuracy ensures LLMs provide correct guidance
  • Self-contained pages allow LLMs to answer questions from a single page fetch
  • Clear structure (headings, code blocks, lists) helps LLMs parse and understand content

Content negotiation: All pages are available as markdown. Requests that send an Accept: text/markdown header, come from known AI user agents, or use .md URLs receive markdown instead of HTML.
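A minimal sketch of that decision rule, assuming a hypothetical helper name and an illustrative (not exhaustive) list of AI user agents:

```python
# Hypothetical sketch of the markdown content-negotiation rule:
# serve markdown when the request asks for it via the Accept header,
# comes from a known AI user agent, or targets a .md URL.
AI_USER_AGENTS = ("gptbot", "claudebot", "perplexitybot")  # illustrative only

def wants_markdown(accept: str, user_agent: str, path: str) -> bool:
    if "text/markdown" in accept.lower():
        return True
    ua = user_agent.lower()
    if any(agent in ua for agent in AI_USER_AGENTS):
        return True
    return path.endswith(".md")
```

A server would run a check like this before rendering, and respond with the page's markdown source and a text/markdown content type when it returns true.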

Copy page button: Every page includes a button that copies its markdown to the clipboard, along with direct links to open the page in ChatGPT or Claude.

Hierarchical navigation: Index pages include "Pages in this section" listings so LLMs can discover related content.
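One way such a listing could be generated, assuming each child page exposes a title, URL, and frontmatter description (all field and function names here are hypothetical):

```python
# Hypothetical sketch: build a "Pages in this section" listing
# from child pages' titles, URLs, and frontmatter descriptions.
def section_listing(children: list[dict]) -> str:
    lines = ["## Pages in this section", ""]
    for page in children:
        lines.append(f"- [{page['title']}]({page['url']}): {page['description']}")
    return "\n".join(lines)
```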

Clean markdown output: We strip navigation chrome, convert internal links to .md format, and preserve code blocks with syntax highlighting hints.
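The internal-link rewrite might look like this sketch, which appends .md to root-relative links while leaving external links and existing .md links untouched (the regex and function name are assumptions, not the actual implementation):

```python
import re

# Hypothetical sketch: rewrite internal markdown links like
# [text](/platforms/python/) to [text](/platforms/python.md),
# leaving external (http...) links and existing .md links alone.
INTERNAL_LINK = re.compile(r"\]\((/[^)#?]*?)/?\)")

def to_md_links(markdown: str) -> str:
    def repl(match: re.Match) -> str:
        path = match.group(1)
        if path.endswith(".md"):
            return f"]({path})"
        return f"]({path}.md)"
    return INTERNAL_LINK.sub(repl, markdown)
```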

When writing documentation, keep in mind:

  • Frontmatter descriptions appear in section listings and help LLMs understand page purpose
  • Code examples should be complete and runnable; LLMs often copy them directly
  • Avoid ambiguous references like "click the button above" that require visual context
  • Pages with noindex: true are excluded from search and LLM discovery
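The last point can be sketched as a filter over parsed frontmatter (the function name and page shape are hypothetical):

```python
# Hypothetical sketch: exclude pages whose frontmatter sets
# noindex: true from search and LLM discovery listings.
def discoverable(pages: list[dict]) -> list[dict]:
    return [p for p in pages if not p.get("noindex", False)]
```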