How We Paid 4 Months of Doc Debt in an Afternoon

This post covers how we refreshed four months of product documentation in an afternoon using Cursor with MCP integrations and a rules file that taught the agent how we write.
Half Your Docs Traffic is Already Agents
Last month, 48% of all docs visitors across Mintlify-powered sites were AI agents. That number grew from under 2% a year ago. Agents read your docs to make decisions on behalf of users.
Sequoia partner Sonya Huang calls this the shift from product-led growth to agent-led growth: "Your agent has infinite time to go and make these choices for you. It can go and read all the documentation, read all the user comments, and figure out what you need for your use case."
When Claude Code recommends Supabase for a database or Vercel for hosting, it's reading documentation and choosing based on what it finds. Outdated pages mislead the agents making recommendations on behalf of your potential users.
Our own research confirms the cost: pages not updated in over a year are more than 2x as likely to lose AI citations. Over 70% of pages cited by ChatGPT were updated within the past 12 months. Freshness is now a baseline requirement for visibility.
This raised the stakes for our own documentation. We'd shipped features for a few months without updating docs.airops.com to match. Pages had outdated screenshots and missing parameters. Entire features had no coverage. If an agent read our docs, it would form an incomplete picture of what AirOps can do.
The old approach to catching up: manually gather context from Linear, Slack and Notion, then update each page one by one over multiple days. We used Cursor, MCPs and our AirOps Brand Kit to compress that into an afternoon.
Updating Our Docs for Agent Readers
We used Cursor with MCP (Model Context Protocol) integrations to pull context from Linear, Slack and Notion directly into the IDE. The agent queries shipped features and reads specs without us leaving the editor.
The second piece was an AGENTS.md file in the repo root. This file contains our style guide and detailed rules for eliminating AI writing patterns, informed by our Brand Kit from AirOps. The agent reads the rules file automatically and applies them to every edit.
How It Works
- Pull context via MCPs. The Linear MCP lets you query issues and project updates. The Slack MCP searches channels for feature discussions. The Notion MCP provides access to specs and internal docs. For example, "What features shipped in Grids projects since September?" returns Linear issues with descriptions and linked Notion pages.
- Review existing docs. With the feature context loaded, the agent reads current doc pages and identifies gaps: outdated screenshots, missing parameters.
- Generate updates. The agent drafts updates following AGENTS.md rules. It knows our structure and terminology.
- Review diffs in GitBook. Our docs sync to GitBook via Git. Every change appears as a diff. We review proposed changes, approve or adjust, then merge.
What Made This Fast
- Single repo access. All docs live in one repository. The agent can read any page, understand cross-references, and maintain consistency across the entire docs site.
- MCP context injection. Instead of copy-pasting from Linear into prompts, the agent queries directly. A feature that shipped three months ago is just as accessible as one from last week.
- Iterative rule refinement. When output quality drifted, we added a rule. The fix applied to all future edits without re-prompting.
- Git-based review. Diffs showed exactly what changed. Accept or modify each change with full visibility into what the agent touched.
AGENTS.md is Key
Most AI writing sounds like AI writing: throat-clearing openers and formulaic structures repeat across outputs. We encoded anti-patterns directly into our rules file. We started with our AirOps Brand Kit, which already contained our voice guidelines and terminology, then expanded it with specific rules for the docs repo.
The file includes:
- Structure rules. Every page follows the same hierarchy: YAML frontmatter, single H1, introduction, then H2/H3 sections in a predictable order.
- Terminology enforcement. We use "Workflow" not "app," "Step" not "component," "Grid" not "spreadsheet." The agent follows these consistently.
- Anti-slop rules. A detailed section on phrases to remove ("Here's the thing:", "Let that sink in") and structures to avoid (binary contrasts, repetitive rhythms).
When we spotted a pattern we wanted to fix during the project, we added it to AGENTS.md. The next edit incorporated the new rule.
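To make this concrete, here is a condensed sketch of what such a rules file can look like. The section names and rules below are illustrative, drawn from the categories described above rather than copied from our actual file:

```markdown
# AGENTS.md

## Structure
- Every page follows the same hierarchy: YAML frontmatter, a single H1,
  a short introduction, then H2/H3 sections in a predictable order.

## Terminology
- Say "Workflow", never "app". Say "Step", never "component".
  Say "Grid", never "spreadsheet".

## Anti-slop
- Remove throat-clearing phrases: "Here's the thing:", "Let that sink in".
- Avoid binary contrasts and repetitive sentence rhythms.
```

Because the agent reads this file on every edit, adding one line here changes all future output without re-prompting.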
Constraints
- Screenshot automation is early. For pages needing screenshots, we experimented with the BrowserBase MCP, which uses computer use to navigate the product and capture screens. It handled simple user flows well, but complex interfaces like our Studio, and anything requiring drag-and-drop or multi-step interactions, still needed manual capture.
- Human review is essential. The agent gets terminology and structure right. It occasionally misses nuance or adds details that need verification. Git diffs make review fast, but you still need to read them.
Why This Matters
- Rules files scale. One person adds a rule, everyone benefits. The agent doesn't forget or have a bad day where it ignores the style guide. Consistency improves over time as the rules file grows.
- Context access is the bottleneck. Gathering what shipped and understanding the feature took most of the time. MCPs collapse the gathering phase, leaving more time for actual writing and review.
A semi-technical person with domain knowledge and access to these tools has significant leverage. This project took an afternoon. The old approach took multiple days.
What's Next
We're bringing AirOps context everywhere. Your Brand Kit and Knowledge Bases should be accessible to any tool you use, whether that's Cursor or your CMS. The patterns we tested here (rules files and context integrations) point toward that future. The AGENTS.md approach works because rules live where the work happens. We're exploring how to make this portable: your AirOps brain available in any environment where you create content.
How do I set up MCP integrations in Cursor for the first time?
Install the MCP server packages for your tools (Linear, Slack, Notion) via npm or Cursor's MCP directory, then configure authentication in Cursor's MCP settings. Each MCP requires API credentials from the respective service, which you add to the mcp.json configuration (in Cursor, typically `.cursor/mcp.json` in your project root, or `~/.cursor/mcp.json` for global use).
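A minimal sketch of that configuration follows. The `mcpServers` shape is Cursor's standard format; the package names, key values, and env var names below are placeholders you would replace with the ones from each provider's setup docs:

```json
{
  "mcpServers": {
    "linear": {
      "command": "npx",
      "args": ["-y", "example-linear-mcp-server"],
      "env": { "LINEAR_API_KEY": "lin_api_..." }
    },
    "notion": {
      "command": "npx",
      "args": ["-y", "example-notion-mcp-server"],
      "env": { "NOTION_TOKEN": "secret_..." }
    }
  }
}
```

After saving, Cursor lists each server under its MCP settings, where you can verify the connection before querying from the agent.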
What should I include in an AGENTS.md file for technical documentation?
Include YAML frontmatter requirements, heading hierarchy rules, approved terminology lists, and explicit anti-patterns to avoid. Add examples of good and bad output, specify tone guidelines, and list any domain-specific conventions like code formatting or API reference structures.
How often should product documentation be updated to maintain AI visibility?
Update documentation within 30 days of shipping new features and audit existing pages quarterly. Research shows pages not updated within 12 months are significantly less likely to receive AI citations, making regular freshness checks essential for agent-led discovery.
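A quarterly freshness audit is easy to script. The sketch below assumes each markdown page carries a `last_updated: YYYY-MM-DD` key in its frontmatter; that key name is an assumption, and GitBook- or Mintlify-backed repos may track dates differently (e.g. via git history):

```python
import re
from datetime import datetime, timedelta, timezone
from pathlib import Path

STALE_AFTER = timedelta(days=365)  # pages older than this lose AI citations

def find_stale_pages(docs_dir, now=None):
    """Return markdown pages whose 'last_updated' frontmatter date is
    older than STALE_AFTER, or missing entirely."""
    now = now or datetime.now(timezone.utc)
    stale = []
    for page in Path(docs_dir).rglob("*.md"):
        text = page.read_text(encoding="utf-8")
        match = re.search(r"^last_updated:\s*(\d{4}-\d{2}-\d{2})", text, re.MULTILINE)
        if not match:
            stale.append(page)  # no date recorded: flag for manual review
            continue
        updated = datetime.strptime(match.group(1), "%Y-%m-%d").replace(tzinfo=timezone.utc)
        if now - updated > STALE_AFTER:
            stale.append(page)
    return stale
```

Run it against your docs repo each quarter and feed the flagged pages to the agent workflow described above.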
Can AI agents accurately capture product screenshots for documentation?
AI agents using browser automation tools can capture straightforward UI flows and form interactions reliably. Complex interfaces requiring drag-and-drop, multi-step wizards, or canvas-based editors typically still require manual screenshot capture due to limitations in computer-use capabilities.