How widespread is this?

The moment a customer tells you that ChatGPT is saying something wrong about your product, the instinct is to assume it is an isolated glitch. It almost never is. When Metricus runs AI visibility reports, we find factual errors in AI-generated responses for 72% of the brands we audit. These are not subtle differences in tone or positioning. They are demonstrably wrong facts: incorrect prices, features attributed to the wrong product tier, fabricated company details, and discontinued products described as currently available.

The scale is growing in lockstep with AI adoption. ChatGPT now has over 800 million weekly users. Over 60% of consumers express high trust in AI-generated results, according to a 2026 Boston Consulting Group study, and 41% have purchased a product recommended by AI in the past six months. When AI gets your brand wrong, it is not a theoretical problem. It is a live channel feeding bad information to people who are ready to buy.

Why AI gets your brand wrong

AI models do not store verified facts in an editable database. They generate responses by pattern-matching across training data and web sources. That architecture creates three failure modes for brand information.

Conflicting sources. Your pricing page says one thing. A G2 review from 18 months ago says another. A comparison blog post says a third. AI has no reliable way to determine which source is authoritative, and it frequently picks the wrong one. LLMs cite Reddit and editorial sites for over 60% of brand information — not corporate websites.

Information gaps. When AI cannot find a specific fact about your company, it does not say "I don't know." It fills the gap with a plausible-sounding fabrication. This is how brands end up with invented founding dates, fabricated employee counts, and headquarters in cities they have never operated from.

Stale training data. AI models reflect your brand as it appeared months or years ago in their training corpus. If you have changed pricing, renamed a product, or undergone a rebrand, AI may still be serving the old version to every prospect who asks about you.

The error patterns we see most

  • Wrong pricing: AI cites prices from 12–24 months ago, typically sourced from cached G2 or Capterra listings. Pricing errors found on one AI platform exist on at least two others 60% of the time.
  • Feature conflation: AI merges features from different tiers or product lines into a single description, misrepresenting what each plan includes.
  • Fabricated details: AI invents founding dates, employee counts, or headquarters locations with no basis in any indexed source. These are pure hallucinations — confident and completely wrong.
  • Outdated information: AI recommends discontinued products, cites old partnerships, or describes deprecated features as current.
  • Competitive misattribution: AI attributes a competitor’s feature to your brand, or your feature to a competitor, typically sourced from comparison articles.

These error types cluster by platform. Industry research spanning hundreds of millions of prompts found that Google AI Overviews is 44% more likely to display negative brand information than ChatGPT, and that AI Overviews carries an error rate of roughly 10% across all queries. ChatGPT tends toward feature conflation and fabricated details because it relies heavily on training data. Perplexity, which uses more real-time web search, tends toward outdated pricing errors because it pulls from whatever stale sources rank well.

What's actually at stake

The business impact of AI misinformation is not abstract. 35% of brands report that inaccurate AI responses have already damaged their reputation, and one documented case tied hallucinated product specs to a 25% spike in product returns. When AI tells a prospect the wrong price, it creates mismatched expectations your sales team has to correct, or the prospect walks away without ever contacting you.

The compounding effect makes this worse over time. AI hallucinations about a brand tend to persist across model updates unless actively corrected. A fabricated detail embedded in one model’s training data can propagate to newer models, to AI-generated content on third-party sites, and from there back into future training data. Without intervention, hallucinations become self-reinforcing. 85% of brands now report experiencing AI-accelerated threats, including misinformation and misrepresentation.

And there is no correction portal. You cannot contact OpenAI, Google, or Anthropic and request that specific wrong information about your brand be fixed. AI models do not work that way. Correction requires identifying which errors exist, tracing them to their sources, and fixing those sources so that future model updates reflect accurate information.
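To make the first step concrete, here is a minimal sketch of what a manual spot check can look like in practice: ask an AI model a handful of questions about your brand and flag any answer that does not contain the value you know to be correct. This is an illustration, not the Metricus audit methodology; the brand name, questions, expected values, and model choice are hypothetical placeholders, and the sketch assumes the official OpenAI Python SDK with an API key available in the environment.

```python
# Spot-check a few brand facts against one AI model and flag mismatches.
# Illustrative only: ExampleCo, the questions, and the expected values
# are hypothetical placeholders, not real audit data.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Ground truth maintained by the brand team (hypothetical values).
brand_facts = {
    "What does ExampleCo's Pro plan cost per month?": "$49",
    "What year was ExampleCo founded?": "2015",
    "Where is ExampleCo headquartered?": "Austin",
}

for question, expected in brand_facts.items():
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    answer = response.choices[0].message.content or ""
    # Crude substring match; a real audit would normalize prices, dates, etc.
    status = "OK" if expected.lower() in answer.lower() else "MISMATCH"
    print(f"[{status}] {question}")
    print(f"  expected: {expected}")
    print(f"  model said: {answer[:200]}")
```

Repeating the same questions against other platforms, and rerunning them over time, is what turns a one-off spot check into the kind of cross-platform picture discussed below.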

What we found across audits

Companies with multiple product lines, frequent pricing changes, or recent rebrands show the highest hallucination rates. Brands with limited web presence outside their own domain show higher fabrication rates, because AI fills information gaps with plausible-sounding but invented details. Error rates climb with brand complexity, and the errors are rarely confined to a single platform.

The cross-platform pattern is one of the most important findings. When a pricing error appears on ChatGPT, it exists on at least two other AI platforms 60% of the time. This means the customer who told you ChatGPT got something wrong is probably only seeing part of the picture. Perplexity, Gemini, Claude, and Google AI Overviews may all be serving their own versions of misinformation about your brand, each sourced from different places, each wrong in different ways.

According to Salesforce, 75% of marketing teams now use AI, yet most have no formal process to verify what AI says about their own brand. The brands that discover the problem are the lucky ones. The ones that never check never learn why their lead quality shifted or why prospects arrive with wrong expectations.

The case for auditing before acting

The instinct after that panicked customer call is to start fixing things immediately — update your website copy, rewrite your FAQ, publish a blog post correcting the record. That instinct is understandable but premature. Without knowing exactly what AI gets wrong about your brand, which platforms have which errors, and which sources feed the misinformation, you are guessing at solutions.

An AI visibility report gives you the complete picture: every factual error across every major AI platform, traced to its likely source. That is the prerequisite to any correction strategy that actually works — because the fix for a pricing error sourced from a stale G2 listing is completely different from the fix for a fabricated detail with no source at all.

Last updated: April 2026