Your Clients Are Asking About AI
The question has changed. Twelve months ago, clients asked their agency about SEO rankings, paid search performance, and social media reach. Those conversations still happen. But now there's a new one: "Why doesn't ChatGPT recommend us?"
This is not a fringe concern from early adopters. It has become a standard client expectation. A growing share of B2B buyers now consult AI chatbots during purchasing research, and that share is accelerating. When a prospect asks ChatGPT "what's the best [category] for [use case]" and the client's brand is absent from the answer, the client notices. And they ask their agency to explain why.
The challenge is that most agencies are not equipped to answer this question. Traditional SEO audits, rank tracking tools, and analytics platforms don't measure AI visibility. Google Search Console tells you nothing about how ChatGPT, Perplexity, or Gemini describe your client's brand. The data simply lives in a different system, with different inputs and different logic.
Agencies that can answer this question have a clear competitive advantage: they retain clients who would otherwise look elsewhere for AI expertise, and they open a new revenue line with an offering competitors haven't built yet. The question is how to deliver it without the overhead.
The Agency Problem
The obvious answer — build AI monitoring in-house — falls apart quickly when you look at the economics and technical requirements.
- Building in-house is expensive and technically complex. Monitoring AI visibility means running queries across multiple AI platforms, tracking which brands appear, identifying source citations, detecting factual errors, and doing this repeatedly over time. It requires API access (where available), browser automation (where it's not), natural language processing to interpret responses, and infrastructure to store and compare results. Most agencies don't have the engineering team to build this, and the ones that do have better uses for that team's time.
- Per-client dashboard tools get expensive fast. The emerging category of AI visibility tools — Otterly, Scrunch, and others — typically charge per-client monthly subscriptions. Otterly's Standard plan runs $189/month per client. Scrunch charges around $300/month per client. For an agency with 10 clients, that's $1,890–$3,000 per month in tooling costs alone, before you account for the analyst time to interpret the dashboards and turn them into client-facing deliverables.
- Clients don't want another dashboard. This is the part most tool vendors miss. Your clients don't want login credentials to yet another platform. They want a report: a document with findings, context, and recommendations they can act on. Dashboards create ongoing cost with ongoing obligation. Reports create a deliverable you can sell.
The agency model works on deliverables, not dashboards. You need data you can package, present, and bill for — not a subscription that drains margin every month.
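To make the first bullet above concrete, here is a minimal sketch of just one step in an in-house monitoring pipeline: parsing a single AI answer for tracked brands and cited sources. Everything here is illustrative (the function name, the naive substring matching, the URL regex), and a real system would still need per-platform query runners, scheduling, storage, and comparison over time on top of it.

```python
import re

def analyze_response(response_text: str, brands: list[str]) -> dict:
    """Check which tracked brands an AI answer mentions and which
    URLs it cites. This is one small step of a monitoring pipeline;
    running queries, scheduling, and storing history are all extra."""
    lowered = response_text.lower()
    mentioned = [b for b in brands if b.lower() in lowered]
    # Naive citation extraction: grab anything that looks like a URL.
    citations = re.findall(r"https?://[^\s)\]]+", response_text)
    return {"mentioned": mentioned, "citations": citations}

answer = (
    "For mid-market CRMs, HubSpot and Pipedrive are common picks "
    "(see https://example.com/best-crms)."
)
result = analyze_response(answer, ["HubSpot", "Pipedrive", "Acme CRM"])
# result["mentioned"] -> ["HubSpot", "Pipedrive"]
```

Even this toy version glosses over hard problems (brand-name disambiguation, paraphrased mentions, citations embedded in UI rather than text), which is why the build-in-house path consumes engineering time quickly.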
How Agencies Use Metricus
Metricus is built for the way agencies actually work: project-based, deliverable-focused, and cost-conscious per client.
Per-client reports, not per-client subscriptions
Buy one report per client audit. There is no monthly fee, no minimum commitment, and no unused subscription months when a client churns or pauses. You pay $99–$499 per client report depending on depth, and that report becomes your deliverable. Mark it up, present it in your own deck, and bill the client at your standard consulting rate.
Source attribution that drives recommendations
Every Metricus report shows exactly which third-party sources feed each AI platform's answer about your client's category. This is the piece that makes the report actionable: instead of telling a client "you're not visible in ChatGPT," you can tell them "ChatGPT is pulling from these three G2 comparison pages and this Reddit thread, none of which mention you, and here is how to fix that." Source attribution turns a vague problem into a specific optimization roadmap.
Action steps included
Each report includes a prioritized list of recommendations — fix this listing, add this schema, create this comparison page, update this vocabulary. The recommendations are specific enough to be your deliverable. Hand the report to the client as-is, or use it as the foundation for a broader engagement. Either way, you are not spending analyst hours figuring out what to recommend.
Re-audit as needed
Run a new report quarterly to show progress; you pay only when you need fresh data. If a client implements your recommendations and wants to see the impact 60 days later, you buy another report, show the before-and-after, and demonstrate measurable improvement. No ongoing cost between audits.
Pricing That Works for Agency Economics
The economics of AI visibility tooling look very different depending on whether you're paying per month or per report. Here's how the numbers compare for a typical agency:
| Clients | Otterly (Standard) | Scrunch | Metricus (Deep Dive) |
|---|---|---|---|
| 5 clients | $945/mo ($11,340/yr) | $1,500/mo ($18,000/yr) | $1,495 one-time |
| 10 clients | $1,890/mo ($22,680/yr) | $3,000/mo ($36,000/yr) | $2,990 one-time |
| Quarterly re-audit (10 clients) | Same monthly cost | Same monthly cost | $11,960/yr |
The difference is structural. With subscription tools, you pay whether or not you're actively using the data. With Metricus, you pay when you need a deliverable. For agencies that run AI audits as a project-based service rather than an always-on monitoring offering, the per-report model preserves margin and eliminates waste. Run your own numbers in the margin calculator.
And the margin math works in your favor. A Deep Dive report costs $299. Bill it to your client as part of a $1,500–$3,000 AI visibility audit engagement — which includes your analysis, presentation, and strategic recommendations — and the tooling cost is a fraction of the deliverable value.
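The table's figures reduce to simple arithmetic, and a short sketch makes it easy to run your own scenarios (function names are ours; the per-client prices are the ones quoted above):

```python
def annual_subscription_cost(clients, per_client_monthly):
    """Dashboard tools: billed monthly per client,
    whether or not the data is used that month."""
    return clients * per_client_monthly * 12

def annual_report_cost(clients, report_price, audits_per_year=1):
    """Per-report model: cost scales with deliverables actually ordered."""
    return clients * report_price * audits_per_year

otterly = annual_subscription_cost(10, 189)          # 22680
scrunch = annual_subscription_cost(10, 300)          # 36000
metricus_once = annual_report_cost(10, 299)          # 2990
metricus_quarterly = annual_report_cost(10, 299, 4)  # 11960
```

Swap in your own client count and audit cadence to see where the crossover sits for your book of business.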
What's in an Agency Report
Every Metricus report is generated from real AI platform interactions — actual queries run through ChatGPT, Perplexity, Gemini, Claude, Grok, DeepSeek, Copilot, and AI Overviews. Not API approximations, not simulated responses. Real queries, real answers, real source citations. Here's what your client receives:
- Visibility score by query type. How often the brand appears across category queries, comparison queries, pricing queries, and use-case queries. Broken down by AI platform so you can see where the brand is strongest and weakest.
- Factual error audit with source tracing. Every incorrect piece of information AI states about the brand — wrong pricing, outdated features, inaccurate descriptions — with the specific source that fed the error. This turns a vague "AI gets us wrong" complaint into a list of fixable items.
- Competitor comparison. How the client's visibility stacks up against 3–5 named competitors across the same query set. Shows exactly who is winning the AI recommendation and why.
- Citation sources. The complete list of third-party sources each AI platform references when answering queries about the client's category. This is the roadmap: these are the pages you need to be on.
- Vocabulary gap analysis. Where the client's own terminology diverges from how buyers phrase their questions. Identifies specific language mismatches that cause the brand to be invisible to AI.
- Prioritized action plan. A ranked list of recommendations — from highest-impact quick fixes to longer-term content and technical optimization. Each action item includes the specific change, why it matters, and which AI platforms it affects.
The report is designed to be client-ready. You can present it as-is or incorporate it into a broader strategy deck. Either way, the analysis, findings, and recommendations are done — your job is to add the strategic context and relationship layer that makes you the agency.
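For intuition on the first line item, a visibility score can be thought of as the share of queries whose answer mentions the brand, broken out by platform. This is an illustrative definition, not Metricus's actual scoring formula:

```python
from collections import defaultdict

def visibility_scores(results):
    """Illustrative scoring: per platform, the fraction of queries
    whose answer mentioned the brand. `results` is a list of
    (platform, query_type, brand_mentioned) tuples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for platform, _query_type, mentioned in results:
        totals[platform] += 1
        hits[platform] += int(mentioned)
    return {p: hits[p] / totals[p] for p in totals}

sample = [
    ("chatgpt", "category", True),
    ("chatgpt", "comparison", False),
    ("perplexity", "category", True),
    ("perplexity", "pricing", True),
]
scores = visibility_scores(sample)
# scores -> {"chatgpt": 0.5, "perplexity": 1.0}
```

The same tuple structure extends naturally to per-query-type breakdowns, which is what lets a report say where a brand is strongest and weakest.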
Getting Started
The process is straightforward. Go to the get-report page, select the tier that matches the depth your client needs, and enter the client's brand and category details. Reports are delivered promptly after order.
For agencies running multiple client audits, the workflow is simple: order one report per client, receive the deliverables, present to each client, and re-order quarterly when it's time to show progress. No contracts, no onboarding calls, no platform training.
If you want to see the format and depth before ordering for a client, view a sample report. It shows exactly what your client will receive, including the source attribution, error audit, and action plan sections.
The agencies that move first on AI visibility reporting will own this service line while competitors are still figuring out how to answer the question. The tools exist. The client demand exists. The only variable is whether you offer it or someone else does.
Agency resources
- Agency resource hub — all tools and templates for agency AI visibility work
- White-label report deck — 10-slide presentation template for client meetings
- Agency margin calculator — project your earnings reselling AI audits
Further reading
- What is AI visibility? — share this guide with clients who need background context
- AI visibility tools compared — how Metricus stacks up against monitoring subscriptions
- The 5-step action plan — the framework behind the action steps in every report