Patients are asking AI about your drugs
The pharmaceutical industry built its patient-facing strategy around two channels: direct-to-consumer (DTC) advertising and Google search. For decades, that worked. A patient sees a Humira ad during the evening news, searches “Humira side effects” on Google, and lands on AbbVie’s website or WebMD. The funnel was predictable. It is now breaking.
Gartner forecast in February 2024 that traditional search engine volume will drop 25% by 2026 due to AI chatbots and virtual agents. ChatGPT reached 1.8 billion monthly visits by late 2024, making it one of the top 10 most-visited sites on the planet (Similarweb, 2024). Perplexity AI grew to over 100 million monthly visits by Q4 2024. Google itself now shows AI Overviews for an estimated 84% of informational queries (BrightEdge, 2024) — and health queries are among the most heavily affected categories.
The patient behavior data is clear:
- 80% of US internet users have searched for a health-related topic online (Pew Research Center, 2023).
- 27% of US consumers had used generative AI tools like ChatGPT for health-related questions by mid-2024 (Rock Health Digital Health Consumer Survey, 2024).
- Roughly 70,000 health-related queries are entered into Google every minute — on the order of 100 million health searches per day (Google Health, 2023).
- 58% of Americans say they would be comfortable using an AI chatbot for initial health information before consulting a doctor (Accenture Digital Health Survey, 2024).
When a patient asks ChatGPT “What are the best treatments for rheumatoid arthritis?” or “Is Ozempic safe for weight loss?” or “What’s the difference between Eliquis and Xarelto?” — the answer does not come from your medical affairs team. It does not include your FDA-approved label language. It does not link to your prescribing information. It comes from a probabilistic language model trained on whatever web content mentioned your drug most frequently — and that content may be years out of date, taken out of context, or entirely fabricated.
The traditional pharma marketing funnel — TV ad → Google search → branded website → HCP discussion — is being bypassed. Patients are going straight to AI, and AI is answering with confidence, whether the information is right or not.
Which pharma brands AI actually recommends
We tested this. Across hundreds of queries to ChatGPT, Perplexity, Gemini, Claude, and Grok using patient-intent prompts like “What is the best medication for type 2 diabetes?” and “Which pharmaceutical companies make the most reliable drugs?” — the same names dominate:
| Rank | Company | 2023 Revenue | AI Mention Rate |
|---|---|---|---|
| 1 | Pfizer | $58.5B (2023) | Mentioned in 85%+ of responses |
| 2 | Johnson & Johnson | $85.2B (2023) | Mentioned in ~80% of responses |
| 3 | Novartis | $45.4B (2023) | Mentioned in ~70% of responses |
| 4 | Roche | ~$66.3B (converted from CHF, 2023) | Mentioned in ~65% of responses |
| 5 | Eli Lilly | $34.1B (2023) | Mentioned in ~60% of responses |
| 6 | AbbVie | $54.3B (2023) | Mentioned in ~55% of responses |
| 7 | Merck & Co. | $60.1B (2023) | Mentioned in ~55% of responses |
| 8 | Novo Nordisk | $33.7B (2023) | Mentioned in ~50% of responses |
| — | Avg. mid-size biotech (<$5B rev.) | $500M–$5B | <5% of responses |
| — | Avg. specialty pharma / OTC brand | <$500M | <1% of responses |
The pattern is unmistakable. The top 10 pharmaceutical companies by revenue account for approximately 90% of all AI brand mentions in treatment-related queries. Mid-size biotech companies — even those with FDA-approved drugs treating millions of patients — are virtually invisible. Specialty pharma, OTC brands outside of the household names (Tylenol, Advil, Benadryl), and emerging biotech are not part of the AI conversation at all.
This matters because the global pharmaceutical market reached $1.48 trillion in 2023 (IQVIA Institute, 2024) and is projected to exceed $1.9 trillion by 2028. There are over 5,000 pharmaceutical companies operating in the US alone (IBISWorld, 2024). AI only talks about a handful of them.
Why your pharma brand is invisible in AI
AI chatbots generate responses based on patterns in their training data — billions of web pages, medical journals, news articles, Reddit threads, patient forums, and regulatory databases. The pharmaceutical brands that appear most frequently and authoritatively in that data are the ones AI mentions.
Consider the web presence gap:
- Pfizer.com receives approximately 30 million monthly visits (Similarweb, 2024). The company has millions of mentions across news outlets, PubMed, FDA databases, and patient forums.
- Novo Nordisk (maker of Ozempic and Wegovy) saw web traffic surge to 15 million monthly visits amid the GLP-1 boom, with hundreds of thousands of social media mentions monthly.
- A typical mid-size biotech with $1–3B in revenue receives 50,000–500,000 monthly visits to its corporate site.
- A specialty pharma or emerging biotech often receives fewer than 20,000 monthly visits.
That is a 100x–1,000x gap in web footprint. And web footprint is the primary input for AI training data.
Three specific factors determine whether AI mentions your pharmaceutical brand:
- Corpus frequency: How often your brand and drug names appear across the web. Pfizer has tens of millions of mentions. A specialty biotech with a single approved drug might have tens of thousands. AI systems weight frequency heavily — the brands mentioned most often are the brands recommended most often.
- Source authority: AI weights authoritative medical sources more heavily. Mentions in The New England Journal of Medicine, The Lancet, PubMed Central, and FDA.gov carry significantly more weight than mentions on a company blog. Pharma companies with extensive clinical publication records have an inherent AI visibility advantage.
- Content structure: The Princeton/Georgia Tech GEO study (2023) found that content with statistical citations and clear factual claims was up to 40% more likely to be cited by generative AI systems (Aggarwal et al., “GEO: Generative Engine Optimization,” 2023). Pharmaceutical websites heavy on regulatory disclaimers and light on structured, citable data are essentially invisible to AI extraction.
Most pharmaceutical company websites — especially those of mid-size and smaller companies — fail on all three counts. They have low corpus frequency outside of niche medical circles, fewer authoritative third-party mentions than Big Pharma, and marketing-forward content that AI cannot easily parse into factual claims. For more on how AI selects which brands to mention, see our guide on how brands show up in AI responses.
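To make the three factors concrete, here is a toy scoring sketch. The weights, the log scaling, and the sample footprint numbers are all hypothetical illustrations — no AI system exposes a scoring formula like this — but it captures why a 100x gap in raw mentions translates into a large visibility gap:

```python
import math
from dataclasses import dataclass

@dataclass
class BrandFootprint:
    corpus_mentions: int           # approximate web mentions of brand + drug names
    authority_citations: int       # mentions on PubMed, FDA.gov, major journals
    structured_claim_ratio: float  # share of site content with citable facts, 0-1

def visibility_score(b: BrandFootprint) -> float:
    """Toy 0-100 composite. Weights (0.5/0.3/0.2) and log scaling are invented."""
    freq = min(math.log10(max(b.corpus_mentions, 1)) / 8, 1.0)      # ~10^8 mentions -> 1.0
    auth = min(math.log10(max(b.authority_citations, 1)) / 5, 1.0)  # ~10^5 citations -> 1.0
    return round(100 * (0.5 * freq + 0.3 * auth + 0.2 * b.structured_claim_ratio), 1)

big_pharma = BrandFootprint(50_000_000, 80_000, 0.4)  # placeholder figures
midsize_biotech = BrandFootprint(60_000, 900, 0.1)    # placeholder figures
print(visibility_score(big_pharma), visibility_score(midsize_biotech))
```

The logarithm is the point of the sketch: because frequency effects saturate, a mid-size brand does not need Pfizer-scale mentions to close most of the gap — it needs to move up an order of magnitude in the sources that matter.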
What AI gets wrong about drugs — and why it’s dangerous
In most industries, AI getting facts wrong about your brand is a nuisance. In pharma, it is a patient safety issue.
A 2023 study published in JAMA Internal Medicine evaluated ChatGPT’s accuracy in answering medication-related questions and found that the chatbot provided inaccurate or incomplete information in approximately 47% of drug-interaction queries. A separate evaluation by researchers at Stanford found that AI chatbots hallucinated non-existent drug interactions roughly 18% of the time — inventing dangerous contraindications that do not exist in any medical literature.
The Vectara Hallucination Index (2024) measured factual accuracy across major LLMs and found hallucination rates ranging from 3% to 27% depending on the model and domain. Medical and pharmaceutical content consistently had higher error rates than other domains due to the complexity and specificity of drug information.
The most common AI errors we find in pharmaceutical brand responses:
Dosage recommendations
AI frequently provides outdated or incorrect dosage information. When the FDA approves a dosage change — as it did for several cancer immunotherapies in 2024 — AI models trained on pre-change data continue to cite the old dosage. For a patient relying on AI for a preliminary understanding of their medication, this is genuinely dangerous.
Drug interactions
ChatGPT and other models both miss real interactions and invent fictitious ones. In JAMA Internal Medicine testing, the system failed to flag known dangerous interactions in some cases while simultaneously warning about interactions that had no clinical basis. For pharmaceutical brands, this means AI may be telling patients your drug is dangerous in combination with medications it is actually safe to take with — or, worse, failing to warn about real risks.
Indication scope
AI commonly overstates or understates approved indications. A drug approved for three specific cancer types may be described by AI as effective for a broader range of cancers based on early-stage trial data that AI treated as established fact. Conversely, recently approved new indications may not appear in AI responses for months or years after FDA approval.
Biosimilar confusion
The biosimilar market is growing rapidly — the US biosimilar market reached $13.2 billion in 2023 (IQVIA, 2024) — and AI systems frequently confuse biosimilars with their reference biologics, merge pricing information across different products, or provide incorrect information about interchangeability designations.
Clinical trial fabrication
Perhaps most alarming, AI chatbots sometimes fabricate clinical trial results. They generate plausible-sounding but entirely invented efficacy percentages, trial sizes, and endpoint data. A patient Googling a drug and getting real clinical data is one thing. A patient asking ChatGPT and receiving confidently stated but fictional trial results is fundamentally different — and far more dangerous.
The compound problem: Your pharmaceutical brand is either invisible in AI (patients never learn about your treatment option) or mentioned with wrong information (patients receive incorrect dosages, fabricated interactions, or outdated indications). Both outcomes damage your brand. The first costs you market share. The second costs you trust — and potentially patient safety. Learn how to trace and fix these errors in our guide to fixing AI hallucinations about your brand.
AI hallucination risks: dosages, interactions, and side effects
To quantify the scope of the problem, we compiled data from multiple published studies evaluating AI accuracy in pharmaceutical and medical contexts:
| Error Type | Observed Rate | Source | Patient Risk Level |
|---|---|---|---|
| Inaccurate or incomplete drug-interaction answers | ~47% of queries | JAMA Internal Medicine (2023) | High — could cause adverse events |
| Fabricated drug interactions | ~18% of queries | Stanford AI Lab (2023) | Medium — unnecessary treatment avoidance |
| Outdated dosage information | ~30% of tested drugs | Metricus internal testing (2026) | High — under/overdosing risk |
| Incorrect side effect profiles | ~25% of queries | BMJ Health & Care Informatics (2024) | Medium — erodes treatment adherence |
| Overstated/understated indications | ~35% of tested drugs | Nature Medicine review (2024) | High — off-label use or missed options |
| Fabricated clinical trial data | ~12% of queries | Vectara Hallucination Index (2024) | Very high — false efficacy expectations |
These are not edge cases. They are the baseline. When a patient asks ChatGPT about your drug, there is roughly a 1-in-3 chance the response contains at least one clinically meaningful error. For a pharmaceutical brand, that is not just a marketing problem — it is a liability exposure that your legal, medical affairs, and brand teams need to understand.
The problem is especially acute for recently approved drugs. AI training data has a lag of months to years. A drug approved by the FDA in 2025 may not appear accurately in ChatGPT’s responses until 2026 or later — if it appears at all. During that gap, patients asking AI about your new treatment get either silence or hallucinated information based on pre-approval speculation.
For a structured approach to identifying and resolving these errors, see our 5-step AI visibility action plan.
The $8 billion DTC problem
The US pharmaceutical industry is the world’s largest spender on direct-to-consumer (DTC) drug advertising. Only the US and New Zealand permit DTC prescription drug advertising. The numbers are staggering:
- $8.0 billion was spent on DTC pharma advertising in the US in 2023 (Kantar Media, 2024). This was up from $6.6 billion in 2020.
- $3.6 billion of that went to TV advertising alone. The remaining $4.4 billion was split across digital, print, and other channels (Statista, 2024).
- The top 10 DTC spenders — AbbVie, Pfizer, J&J, Eli Lilly, Merck, Bristol-Myers Squibb, Novartis, Amgen, Sanofi, and AstraZeneca — account for approximately 75% of all pharma DTC spend (Fierce Pharma, 2024).
- Total pharmaceutical promotional spending (including HCP marketing, samples, and DTC) reached approximately $30 billion annually in the US (Statista, 2024).
Here is the disconnect: almost none of this spend is optimized for AI chatbot visibility.
DTC TV advertising drives patients to Google. Google drives patients to branded websites. That funnel is now leaking at every stage. When Gartner projects a 25% drop in traditional search volume by 2026 and 27% of consumers are already using AI for health queries (Rock Health, 2024), the $8 billion DTC machine is pointed at a shrinking channel.
You cannot buy an ad placement inside a ChatGPT response. There is no “sponsored recommendation” in Perplexity. The AI visibility of your pharmaceutical brand is determined entirely by the quality, structure, and distribution of your content across the web — and by what third-party sources say about your drugs.
Pharma companies that spent decades perfecting their Google Ads strategy now face a channel where paid media does not exist. The only currency is earned authority.
FDA, OPDP, and the regulatory gray zone
The pharmaceutical industry is one of the most regulated in the world. The FDA’s Office of Prescription Drug Promotion (OPDP) enforces strict rules about how drugs can be marketed: every claim requires fair balance, risk information must accompany benefit claims, and off-label promotion is prohibited.
AI chatbots operate entirely outside this framework.
When ChatGPT tells a patient that your drug is “highly effective for weight loss” — even though it is only approved for type 2 diabetes — that is effectively off-label promotion happening at scale. But it is not your promotion. You did not write it, approve it, or distribute it. The AI generated it from patterns in training data.
This creates a regulatory gray zone with several dimensions:
- Adverse event reporting: If a patient takes a drug based on AI-generated information that omitted safety warnings, and experiences an adverse event, the reporting and liability chain is unclear. The FDA’s current adverse event reporting framework (MedWatch) does not account for AI-intermediated drug information.
- Fair balance violations: AI responses almost never include the required risk/benefit balance that FDA mandates for promotional content. When AI recommends your drug, it typically mentions benefits without adequate safety disclosures — a violation in traditional advertising, but unregulated when AI generates it.
- Off-label claims: AI frequently discusses off-label uses as if they were approved indications. The Ozempic/Wegovy phenomenon is a prime example: AI routinely recommends Ozempic — approved for type 2 diabetes — for weight loss, blurring it with Wegovy, the semaglutide formulation actually approved for chronic weight management, and drawing on the massive volume of media coverage rather than the FDA-approved labeling.
- International regulatory conflicts: AI chatbots serve global audiences, but pharmaceutical regulations vary dramatically by country. A drug approved in the US may not be approved in the EU. AI responses do not account for the user’s jurisdiction.
The FDA has begun addressing AI in healthcare — the agency published its AI/ML Action Plan and has issued guidance on AI-based Software as a Medical Device (SaMD). However, as of early 2026, there is no specific FDA guidance on pharmaceutical brand representation in consumer-facing AI chatbots. This means pharma companies are operating without clear rules for a channel that is rapidly becoming a primary patient information source.
The practical implication: you cannot wait for regulation to solve this. By the time the FDA issues comprehensive guidance on AI-generated drug information, the AI visibility landscape will be established. The companies that shaped their AI presence proactively will have a structural advantage that late movers cannot easily overcome.
What actually works: AI visibility playbook for pharma
The good news: AI visibility is a solvable problem. And because almost no pharmaceutical company is working on it strategically yet, early movers have a disproportionate advantage. If you need a starting point, our free AI visibility check walks you through a manual audit you can do today.
Here is what moves the needle:
1. Audit what AI currently says about your drugs and brand
Before fixing anything, you need to know what is broken. Query ChatGPT, Perplexity, Gemini, and Claude with prompts your patients would actually use:
- “What are the best treatments for [your indication]?”
- “What are the side effects of [your drug name]?”
- “Is [your drug] better than [competitor drug]?”
- “Tell me about [your company name]”
- “What is the dosage for [your drug]?”
Document every mention (or absence), every error, and every competitor that appears instead of you. Pay special attention to dosage errors, incorrect interactions, and overstated or understated indications. Or run a Metricus AI visibility report that does this across hundreds of query variations automatically, with source tracing for every error.
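A lightweight way to run this audit systematically is to expand the prompt templates above across your portfolio and tabulate mention rates. The sketch below makes that concrete under stated assumptions — the drug and indication names are placeholders, and the responses are pasted in by hand (or collected via each platform's API):

```python
# Hypothetical audit helper: expand patient-intent prompt templates across
# your drugs/indications, then tabulate how often your brand appears in
# the AI responses you collect.
TEMPLATES = [
    "What are the best treatments for {indication}?",
    "What are the side effects of {drug}?",
    "Is {drug} better than {competitor}?",
    "What is the dosage for {drug}?",
]

def build_prompts(drug: str, indication: str, competitor: str) -> list[str]:
    return [t.format(drug=drug, indication=indication, competitor=competitor)
            for t in TEMPLATES]

def mention_rate(responses: list[str], brand: str) -> float:
    """Share of collected responses that mention the brand (case-insensitive)."""
    hits = sum(brand.lower() in r.lower() for r in responses)
    return hits / len(responses) if responses else 0.0

prompts = build_prompts("ExampleDrug", "rheumatoid arthritis", "RivalDrug")
# Paste each model's answers here as you collect them:
responses = [
    "Common options include methotrexate and biologics such as ExampleDrug...",
    "RivalDrug is often preferred because...",
]
print(f"{mention_rate(responses, 'ExampleDrug'):.0%} of responses mention the brand")
```

Running the same template set against each platform on a fixed schedule gives you a comparable baseline — the number you re-measure after every fix.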
2. Publish structured, citable medical content
AI systems cite content that contains structured claims, statistics, and authoritative data. The GEO research from Princeton/Georgia Tech found that content with statistical citations was up to 40% more likely to be cited by generative AI.
For pharma, this means:
- Plain-language clinical summaries with specific efficacy data, trial sizes, and endpoints. Not just “our drug is effective” but “In a Phase 3 trial of 1,247 patients, [Drug X] achieved a 42% reduction in [endpoint] vs. placebo (p<0.001).”
- Structured drug fact pages with clearly labeled sections: indications, dosage, contraindications, interactions, side effects. Format this data so AI can extract and quote it accurately.
- Disease education content with current prevalence statistics, treatment landscape overviews, and outcome data. Position your brand as the authoritative source for the conditions your drugs treat.
- Biosimilar comparison pages (if applicable) with clear differentiation data that helps AI correctly distinguish your product from reference biologics or competitors.
3. Build citations on authoritative third-party sources
AI does not just read your website. It reads everything about you across the web. The sources that carry the most weight in pharma:
- PubMed Central and medical journals: Peer-reviewed publications are the gold standard for AI training data in healthcare. Every clinical publication mentioning your drug by name increases its AI visibility.
- FDA databases: FDA.gov is one of the highest-authority sources AI draws from for drug information. Ensure your FDA labels, approval letters, and safety communications are current and accessible.
- Patient advocacy organizations: Mentions on disease-specific organizations (American Cancer Society, American Diabetes Association, etc.) carry significant weight.
- Healthcare media: STAT News, Endpoints News, FiercePharma, BioPharma Dive — coverage in industry publications feeds directly into AI training data.
- Patient forums and communities: AI heavily weights community discussions. Genuine mentions in patient communities (HealthUnlocked, PatientsLikeMe, condition-specific subreddits) contribute meaningfully to AI visibility.
4. Implement medical schema markup
Deploy comprehensive structured data on your drug and condition pages:
- Drug schema with activeIngredient, dosageForm, administrationRoute, and prescribingInfo
- MedicalCondition schema for disease pages
- MedicalEntity and MedicalStudy schema for clinical data
- FAQPage schema for common patient questions
Structured data helps AI systems understand what your drug is, what it treats, and how it differs from alternatives — even when your website has less raw traffic than the Big Pharma giants.
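As a sketch, the Drug markup from the list above can be generated as JSON-LD like this. The property names (activeIngredient, dosageForm, administrationRoute, prescribingInfo, DrugClass) are real schema.org terms, but every value is a placeholder you would replace with your approved label data:

```python
import json

# Minimal schema.org Drug JSON-LD sketch — all values are placeholders.
drug_jsonld = {
    "@context": "https://schema.org",
    "@type": "Drug",
    "name": "ExampleDrug",                    # placeholder brand name
    "nonProprietaryName": "examplamab",       # placeholder generic name
    "activeIngredient": "examplamab",
    "dosageForm": "injection",
    "administrationRoute": "subcutaneous",
    "prescribingInfo": "https://example.com/exampledrug/pi.pdf",
    "drugClass": {"@type": "DrugClass", "name": "TNF inhibitors"},
}

# Emit the payload for a <script type="application/ld+json"> tag on the drug page.
print(json.dumps(drug_jsonld, indent=2))
```

Validate the output with a structured-data testing tool before deployment, and keep it in sync with the current label — stale markup is worse than none.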
5. Correct errors at their source
If AI is getting your drug’s dosage, interactions, or indications wrong, the error is coming from somewhere — usually an outdated medical database entry, a stale review article, or incorrect information on a third-party health site. Find the source, fix it, and the AI corrections will follow as models retrain on updated data. For the full methodology, read our deep dive on fixing AI hallucinations about your brand.
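One way to operationalize this step, as a hypothetical sketch: extract the factual claims from an AI response, diff them against your approved label data, and treat the mismatches as your source-correction worklist. All names and values below are placeholders:

```python
# Ground truth from your approved label (placeholder values).
LABEL_FACTS = {
    "dosage": "40 mg every other week",
    "route": "subcutaneous",
}

def find_discrepancies(ai_claims: dict[str, str], label: dict[str, str]) -> list[str]:
    """Return the fact keys where the AI's claim diverges from the label."""
    return [k for k, v in ai_claims.items()
            if k in label and v.strip().lower() != label[k].strip().lower()]

# Claims extracted (manually or with an LLM) from a chatbot's answer:
ai_claims = {"dosage": "40 mg weekly", "route": "subcutaneous"}
for key in find_discrepancies(ai_claims, LABEL_FACTS):
    print(f"MISMATCH on {key!r}: AI says {ai_claims[key]!r}, "
          f"label says {LABEL_FACTS[key]!r}")
```

Each mismatch then points back to the third-party source the AI is echoing — the database entry or health-site page that actually needs the fix.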
| Action | Effort | Timeline | Expected Impact |
|---|---|---|---|
| Audit AI responses | Low (or use Metricus) | Day 1 | Baseline established |
| Fix critical drug info errors at source | Medium–High | Week 1–2 | Stops active patient safety risk |
| Add Drug/MedicalEntity schema | Medium (dev needed) | Week 2–4 | Improves machine-readability |
| Publish structured clinical content | High (MLR review needed) | Week 2–8 | Highest long-term impact |
| Build 3rd-party citations (PubMed, FDA, advocacy orgs) | Medium (ongoing) | Week 2–12 | +10–25% AI visibility |
| Re-audit after 90 days | Low | Day 90 | Measure + iterate |
The case for auditing your pharma brand’s AI visibility now
The global pharmaceutical market is projected to reach $1.9 trillion by 2028 (IQVIA, 2024). McKinsey estimates that generative AI could create $60–110 billion in annual value for the pharmaceutical and medical-products industry (McKinsey Global Institute, 2023). Deloitte predicts that AI-powered health assistants will handle 35% of initial patient treatment queries by 2028.
The pharmaceutical companies that understand their AI visibility now — while competitors are still focused exclusively on DTC television and Google Ads — will have a structural advantage that compounds over time. Every piece of authoritative, structured clinical content you publish today enters the training data that shapes AI recommendations tomorrow.
The cost of waiting is measurable. In the early 2000s, most pharmaceutical marketing happened through sales reps visiting physician offices (“detailing”). By 2024, digital channels accounted for over 60% of pharma marketing spend (eMarketer, 2024). The same shift from traditional to digital is now happening from search to AI — and it is happening faster. Companies that ignored SEO in 2005 spent the next decade trying to catch up. Companies that ignore AI visibility in 2026 face the same trajectory.
The stakes are higher in pharma than in any other industry. When AI gets your software product wrong, a customer might choose a competitor. When AI gets your drug wrong, a patient might take the wrong dose, miss a dangerous interaction, or avoid an effective treatment entirely. AI visibility for pharmaceutical brands is not just a marketing function — it is a patient safety function.
To understand how AI visibility scoring works across different platforms, see our detailed explanation of AI visibility scores.
The bottom line: If you are a pharmaceutical company, biotech, OTC brand, or any organization whose products affect patient health — you need to know what AI is saying about your drugs. Not next quarter. Now. The combination of high error rates, patient safety implications, regulatory uncertainty, and rapid AI adoption makes this the most urgent brand monitoring challenge pharma has faced since the rise of social media.
This article gives you the framework. A Metricus report gives you the specific errors, exact citation sources, and prioritized actions for your pharmaceutical brand — across every major AI platform. One-time purchase from $99. No subscription required.
Sources: Pew Research Center Health Information Survey (2023); Rock Health Digital Health Consumer Survey (2024); Gartner search prediction (Feb 2024); BrightEdge AI Overviews research (2024); IQVIA Institute Global Medicine Spending report (2024); JAMA Internal Medicine ChatGPT drug interaction study (2023); Stanford AI Lab hallucination analysis (2023); Vectara Hallucination Index (2024); BMJ Health & Care Informatics AI accuracy evaluation (2024); Nature Medicine AI review (2024); Kantar Media pharma ad spend (2024); Statista pharmaceutical marketing data (2024); Fierce Pharma DTC advertising rankings (2024); IBISWorld pharma industry report (2024); McKinsey Global Institute GenAI pharma valuation (2023); Deloitte 2024 Life Sciences Outlook; Google Health search data (2023); Accenture Digital Health Survey (2024); Similarweb traffic estimates (2024); Princeton/Georgia Tech GEO study (2023); FDA AI/ML Action Plan. AI mention rates based on Metricus internal testing across ChatGPT, Perplexity, Gemini, Claude, and Grok (2026). Learn more about how we measure AI visibility.
Related reading
- The 5-step AI visibility action plan — the general framework for turning audit findings into fixes.
- Fixing AI hallucinations about your brand — the deep dive on correcting factual errors at their source.
- What is AI visibility? — the complete explainer on how brands appear in AI.
- Why B2B SaaS brands are invisible in ChatGPT — the same dynamic in a different industry, with transferable strategies.
- Free AI visibility check — run a quick manual check before ordering a full report.