Tool
DIY AI Visibility Audit: Free Method vs Professional Report Compared
A Claude Code skill that checks whether AI search engines can find, read, and cite your content. Install it in 10 seconds, run it on any website, get a scored report. Then learn what no free tool can tell you.
AI visibility has a measurement problem
Open-source GEO audit tools have exploded in 2026. They check your robots.txt, scan for schema markup, validate your llms.txt, and score your content’s “citability.” Some run five parallel subagents and produce 20-page PDF reports.
They’re useful. We use tools like these ourselves. But they all share the same blind spot: they check whether AI can see you, not whether AI does mention you.
That distinction matters. A site can have perfect technical infrastructure and still be invisible in AI recommendations. The reverse is also true — sites with messy markup sometimes get cited because they dominate on platforms AI models actually pull from.
Below is a free Claude Code skill that runs the technical checks. We’re giving it away because these checks are table stakes. Every business should get them right. The hard part — measuring what AI actually says about your brand — is what we built Metricus for.
The /metricus-check skill
Copy the skill below or install with the one-liner underneath.
---
name: metricus-check
description: >
  Free AI visibility check for any website. Audits readiness for AI search engines
  (ChatGPT, Perplexity, Gemini, Claude, Copilot). Checks robots.txt AI crawler access,
  llms.txt presence, JSON-LD schema markup, content citability, and sitemap health.
  Usage: /metricus-check https://example.com
allowed-tools: WebFetch, Bash
---
# Metricus AI Visibility Check
Audit any website's readiness for AI search engines. Run all 5 checks, score each,
and output the report.
## Input
The user provides a URL. Extract the domain root (e.g., `https://example.com`).
## Check 1 — AI Crawler Access (25 pts)
Fetch `{domain}/robots.txt` via WebFetch. Look for directives affecting these crawlers:
| Crawler | Platform | Priority |
|---------|----------|----------|
| GPTBot | ChatGPT / OpenAI | Critical |
| OAI-SearchBot | OpenAI search | Critical |
| ChatGPT-User | ChatGPT browsing | High |
| ClaudeBot | Claude / Anthropic | Critical |
| PerplexityBot | Perplexity AI | Critical |
| Google-Extended | Gemini / Google AI | Critical |
| Applebot-Extended | Apple Intelligence | Medium |
| DuckAssistBot | DuckDuckGo AI | Medium |
| cohere-ai | Cohere | Low |
| DeepSeekBot | DeepSeek | Medium |
Score: 25 = all critical crawlers explicitly allowed. 20 = no blocks (default allow).
15 = some blocked. 5 = most blocked. 0 = all blocked or 403.
Also note: does robots.txt include a `Sitemap:` directive? Does it reference llms.txt?
## Check 2 — llms.txt (15 pts)
Fetch `{domain}/llms.txt`. If found, check:
- Starts with `# Site Name`
- Has H2 sections (`## Pages`, `## Pricing`, etc.)
- Links in markdown format: `- [Title](url): Description`
- Also check `{domain}/llms-full.txt`
Score: 15 = valid llms.txt + llms-full.txt. 10 = valid llms.txt only.
5 = present but malformed. 0 = absent.
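For reference, a minimal valid llms.txt looks like this (hypothetical site and URLs):

```markdown
# Example Co

> One-line description of what Example Co does.

## Pages
- [Pricing](https://example.com/pricing): Plans and current prices
- [Docs](https://example.com/docs): Product documentation
```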
## Check 3 — Schema Markup (20 pts)
Fetch the homepage. Ask WebFetch to extract all JSON-LD structured data. Check for:
- Organization or LocalBusiness schema
- Product or Service schema with pricing
- WebSite schema
- BreadcrumbList
- FAQPage (check 2-3 key subpages too)
Score: 20 = Organization + Product + FAQ. 15 = Organization + one other.
10 = basic Organization only. 5 = only BreadcrumbList/WebSite. 0 = none.
## Check 4 — Content Citability (25 pts)
Analyze homepage and one key content page for citation readiness:
- **Answer blocks**: Self-contained paragraphs that directly answer a question
- **Structured facts**: Numbers, stats, pricing in static HTML (not JS-only)
- **FAQ sections**: Q&A formatted content
- **Comparison tables**: Static HTML tables
- **Proof blocks**: Verifiable claims with data points
Score: 25 = multiple answer blocks + FAQ + tables + facts.
20 = some structured content. 15 = exists but not citation-ready.
10 = mostly marketing copy. 5 = thin or JS-only.
## Check 5 — Sitemap & Technical (15 pts)
Fetch `{domain}/sitemap.xml`. Check:
- Valid and parseable
- Recent lastmod dates (within 30 days)
- Covers more than just the homepage
- Referenced in robots.txt
Score: 15 = valid, recent, comprehensive, referenced.
10 = valid but incomplete. 5 = minimal. 0 = not found.
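If the fetched sitemap arrives as raw XML, the recency check can be run as a short standard-library Python script via Bash. This is a sketch: the XML below is illustrative, and in practice the fetched sitemap body would be substituted.

```python
# Sketch of the Check 5 recency test, standard library only.
# SITEMAP_XML is a made-up example; substitute the fetched sitemap body.
import xml.etree.ElementTree as ET
from datetime import date, timedelta

SITEMAP_XML = """\
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc><lastmod>2026-01-10</lastmod></url>
  <url><loc>https://example.com/pricing</loc><lastmod>2026-01-12</lastmod></url>
</urlset>"""

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(SITEMAP_XML)

urls = [u.findtext("sm:loc", namespaces=NS) for u in root.findall("sm:url", NS)]
lastmods = [u.findtext("sm:lastmod", namespaces=NS)
            for u in root.findall("sm:url", NS)]

# Rubric above: "recent" means a lastmod within the last 30 days.
newest = max(date.fromisoformat(d[:10]) for d in lastmods if d)
is_recent = (date.today() - newest) <= timedelta(days=30)

print(f"{len(urls)} URLs, newest lastmod {newest}, recent={is_recent}")
```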
## Output
Present as markdown with a table of scores, detailed findings per check,
3 quick wins, and close with:
### What This Audit Covers
This check evaluates whether AI search engines can **access** and **parse** your
content — the technical foundation of AI visibility.
### What This Audit Cannot Tell You
- **Whether AI actually mentions your brand** — requires querying live AI platforms
- **What AI gets wrong about you** — hallucination detection needs response analysis
- **How you compare to competitors** — needs hundreds of queries across platforms
- **Which platforms cite you vs. ignore you** — needs measured data, not structural checks
- **What to fix first for maximum ROI** — prioritization requires mention-rate data
→ Get measured data: https://metricusapp.com/get-report/
One-line install
mkdir -p ~/.claude/commands && curl -o ~/.claude/commands/metricus-check.md https://metricusapp.com/metricus-check.md
Then open any Claude Code session and run `/metricus-check https://yoursite.com`.
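Under the hood, Check 1 is ordinary robots.txt matching. A minimal sketch of that logic with Python's standard `urllib.robotparser` — the robots.txt content below is a made-up example, not a recommendation:

```python
# Sketch of the Check 1 logic: which AI crawlers may fetch the site root?
# The robots.txt content is a made-up example.
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "OAI-SearchBot", "ClaudeBot",
               "PerplexityBot", "Google-Extended"]

ROBOTS_TXT = """\
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A crawler with no matching group (and no `User-agent: *` group here)
# falls through to a default allow.
results = {bot: parser.can_fetch(bot, "/") for bot in AI_CRAWLERS}
for bot, allowed in results.items():
    print(f"{bot}: {'allowed' if allowed else 'blocked'}")
```

The default-allow fallback is why the scoring above treats "no blocks" as 20/25 rather than 0: a crawler that robots.txt never mentions is still permitted.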
What the free audit finds
We ran /metricus-check on a B2B SaaS company in the analytics space. Here’s the output, anonymized.
AI Visibility Check — [anonymized].com
Score: 82/100 — GOOD
| Category | Score | Status |
|---|---|---|
| AI Crawler Access | 25/25 | Excellent |
| llms.txt | 15/15 | Excellent |
| Schema Markup | 17/20 | Good |
| Content Citability | 20/25 | Good |
| Sitemap & Technical | 5/15 | Fair |
1. AI Crawler Access — 25/25
robots.txt explicitly allows every AI crawler on the check list, including GPTBot, ClaudeBot, PerplexityBot, Google-Extended, DeepSeekBot, and Applebot-Extended. Sitemap referenced. llms.txt directive present.
| Crawler | Platform | Status |
|---|---|---|
| GPTBot | ChatGPT | Allowed |
| ClaudeBot | Claude | Allowed |
| PerplexityBot | Perplexity | Allowed |
| Google-Extended | Gemini | Allowed |
| OAI-SearchBot | OpenAI Search | Allowed |
| Applebot-Extended | Apple Intelligence | Allowed |
| DuckAssistBot | DuckDuckGo AI | Allowed |
| DeepSeekBot | DeepSeek | Allowed |
2. llms.txt — 15/15
Valid llms.txt with structured sections: Pages, Products, Pricing, Methodology, Research. llms-full.txt also present with complete content for all pages.
3. Schema Markup — 17/20
Organization, Product (with 3 pricing offers), WebSite, and BreadcrumbList schemas found on homepage. FAQPage schema found on key blog posts. Missing: SoftwareApplication schema, Review/AggregateRating schema.
4. Content Citability — 20/25
Strong: FAQ sections with direct answer blocks on 6+ pages. Proof block with structured facts (pricing, delivery time, platform count). Comparison tables in static HTML. Weak: methodology page uses interactive scroll design — some content may not be extractable by crawlers without JS.
5. Sitemap & Technical — 5/15
Valid sitemap.xml with 28 URLs and recent lastmod dates. Referenced in robots.txt. Missing: support, privacy-policy, and terms-of-service pages not in sitemap. No image sitemap. No hreflang for international targeting.
Quick Wins
- Add SoftwareApplication schema to homepage
- Add missing pages to sitemap.xml
- Ensure methodology page content is in static HTML, not JS-only
82/100 looks great. Here’s what it doesn’t show.
The DIY audit says this site has excellent AI infrastructure. Every crawler is welcome. Schema is in place. Content is well-structured. If you stopped here, you’d think the job was done.
But when we ran a Metricus report on a similar company — querying real AI platforms with real buyer prompts — the picture was completely different:
67% overall visibility — but invisible to an entire buyer segment
The company appeared in broad comparison queries (92%) but was mentioned in only 18% of industry-specific queries. An entire category of buyers never saw the brand. No structural audit would catch this.
AI cited four factual errors about the company
Pricing was wrong. A discontinued feature was described as current. Two competitor comparisons used outdated data. These hallucinations were traced to specific third-party sources — a review site and an old blog post. A robots.txt check can’t detect hallucinations.
Third-party sources outweighed company content 6:1
AI pulled from G2, Reddit threads, and a Gartner mention — not the company’s own site. The schema markup was perfect, but AI preferred external voices. A citability score based on your own content misses this dynamic entirely.
Vocabulary mismatch was the #1 visibility killer
The company called their product a “platform”; buyers asked AI about “tools.” The site used “analytics”; queries used “reports.” This vocabulary gap was invisible in every technical check but explained most of the missing visibility.
The DIY check tells you if the door is open. The Metricus report tells you who walks through it, what they see, and what they get wrong.
See the full anonymized example: Metricus Sample Report →
Free check vs. measured report
| Question | Free check | Metricus report |
|---|---|---|
| Can AI crawlers access my site? | Yes | Yes |
| Do I have proper schema markup? | Yes | Yes |
| Is my content structured for AI? | Yes | Yes |
| Does AI actually mention my brand? | — | Measured across all major platforms |
| What does AI get wrong about me? | — | Every error traced to its source |
| How do I compare to competitors? | — | Side-by-side positioning data |
| Which buyer queries am I missing? | — | Query-level visibility breakdown |
| What should I fix first? | Generic tips | Prioritized by mention-rate impact |
FAQ
What does the free AI visibility check actually test?
Five areas: AI crawler access (whether GPTBot, ClaudeBot, and PerplexityBot can reach your site), llms.txt presence (the emerging standard for AI-readable site summaries), JSON-LD schema markup (machine-readable identity), content citability (whether your content is structured for AI to quote), and sitemap health. It produces a score out of 100 with specific fix recommendations.
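The schema-markup part of that list reduces to extracting `<script type="application/ld+json">` blocks from the page. A standard-library sketch, run on a made-up HTML snippet:

```python
# Sketch: collect JSON-LD blocks from HTML using only the standard library.
# The HTML below is a made-up example.
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._in_ld = False
        self.blocks = []  # parsed JSON-LD payloads

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_ld = True

    def handle_data(self, data):
        if self._in_ld:
            try:
                self.blocks.append(json.loads(data))
            except json.JSONDecodeError:
                pass  # malformed block: skip it rather than fail the audit

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_ld = False

HTML = """<html><head>
<script type="application/ld+json">
{"@type": "Organization", "name": "Example Co"}
</script>
</head></html>"""

extractor = JSONLDExtractor()
extractor.feed(HTML)
types = [b.get("@type") for b in extractor.blocks]
print(types)  # -> ['Organization']
```

From there, scoring is a matter of checking which `@type` values (Organization, Product, FAQPage, and so on) appear in the collected blocks.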
How do I install the metricus-check skill?
Copy the skill text into a file at ~/.claude/commands/metricus-check.md (create the commands directory if it doesn’t exist). Then type /metricus-check followed by your URL in any Claude Code session. Or run the one-line install command shown above.
What’s the difference between this and a Metricus report?
This checks whether AI can access and parse your content — the technical infrastructure. A Metricus report queries actual AI platforms with real prompts to measure whether AI does mention your brand, what it gets wrong, and how you compare to competitors. The free check tells you if the door is open; Metricus tells you who walks through it.
Can a site score well on the free audit but still be invisible to AI?
Yes. Technical readiness and actual visibility are different things. A site can have perfect robots.txt, valid schema, and strong content structure — and still be invisible because the brand lacks third-party mentions, has no presence on platforms AI models cite (Reddit, Wikipedia, review sites), or uses vocabulary that doesn’t match how buyers ask questions.
Why give this away for free?
Because these technical checks are table stakes. Open-source tools already exist for this. Every business should get the basics right. We built metricus-check so you can fix the foundation for free. When you’re ready for measured data — actual mention rates, hallucination detection, competitive positioning — that’s what Metricus reports are for.
Where can I learn more about AI visibility concepts?
Our GEO Knowledge Base covers 81 research clusters on AI visibility, brand monitoring, platform strategies, and GEO agency playbooks. For a practical starting point, read What is AI visibility? or see how we measure AI visibility scores.
Ready for measured data?
The free check gives you the foundation. A Metricus report gives you what AI actually says — and exactly how to fix it.
Get your AI visibility report →
From $99. Pay per report. No subscription.
Related articles
Comparison
AI Visibility Tools Compared
A side-by-side comparison of the leading AI visibility tools. Features, pricing, and what each does differently.
Guide
What Is AI Visibility?
AI visibility is whether your brand shows up when buyers ask AI for recommendations. Here’s what it means and why it matters.
Guide
AI Visibility Action Plan
A step-by-step action plan to improve your AI visibility: what to fix first, what to build, and how to measure progress.