Breaking news: Google pulls AI health summaries from select medical searches
🔄 Updated: 1/11/2026, 6:10:32 PM
Google parent **Alphabet shares fell about 2.1% in afternoon Nasdaq trading to roughly $163**, underperforming the broader tech sector, as investors weighed the reputational damage and potential regulatory scrutiny from Google’s decision to pull AI health summaries from select medical searches.[2][4] Options activity in Alphabet spiked, with traders increasing put positions tied to healthcare and AI-regulation risk, while several Wall Street analysts warned in client notes that “any rollback of high-profile AI features, even for safety reasons, can pressure Google’s perceived monetization runway in search.”
🔄 Updated: 1/11/2026, 6:20:33 PM
Google has **removed AI health summaries from multiple medical searches**, including queries like “what is the normal range for liver blood tests” and “what is the normal range for liver function tests,” after a Guardian investigation found misleading guidance in AI Overviews for conditions including liver disease and cancer.[1][3][4] A Google spokesperson told the Guardian the company does not “comment on individual removals within Search” but said internal clinicians found the flagged information was “not inaccurate” in many cases. British Liver Trust policy director Vanessa Hebditch welcomed the rollback as “excellent news” but warned it fails to address the “bigger issue of AI Overviews for health.”[1][3]
🔄 Updated: 1/11/2026, 6:30:40 PM
Alphabet shares **fell 2.1% to $168.40 in afternoon Nasdaq trading** after Google quietly pulled its AI health summaries from a subset of medical queries, as investors weighed the potential impact on the company’s long-term AI monetization strategy amid rising regulatory and liability concerns.[2] One Wall Street health-tech analyst described the move as “a necessary safety reset that nonetheless highlights how fragile market confidence is around consumer-facing medical AI,” noting that trading volume in Alphabet Class A stock was running at roughly **1.4x its 30-day average** following the reports.[2][3]
🔄 Updated: 1/11/2026, 6:40:33 PM
Google has quietly disabled its **AI Overviews** for several liver-related medical queries — including “what is the normal range for liver blood tests” and “what is the normal range for liver function tests” — after investigations showed the system surfaced *single sets of numbers* without adjusting for age, sex, ethnicity, or nationality, potentially normalizing dangerous results for some patients.[1][2] Technically, this move highlights a core limitation of generalized large language models in clinical contexts: they collapse population-specific, context-heavy lab reference ranges into decontextualized “one-size-fits-all” summaries, raising pressure on Google to either gate AI Overviews behind stricter medical ontologies and clinician review pipelines or withhold them from high-risk health queries entirely.
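The reference-range problem described above can be sketched in a few lines of code. This is an illustrative toy, not Google’s pipeline; the cutoff values are simplified examples loosely inspired by published sex-specific ALT limits, and they are assumptions here, not clinical guidance:

```python
# Illustrative sketch: why a single "normal range" for a liver enzyme
# like ALT can mislead. All cutoffs below are simplified example values.

def alt_upper_limit(sex: str, age: int) -> float:
    """Return an illustrative ALT upper limit (U/L) that varies by sex and age."""
    limit = 33.0 if sex == "male" else 25.0  # example sex-specific cutoffs
    if age < 18:
        limit = 26.0  # pediatric ranges differ again (illustrative)
    return limit

def is_flagged(alt_value: float, sex: str, age: int) -> bool:
    """True if the result exceeds the demographic-aware upper limit."""
    return alt_value > alt_upper_limit(sex, age)

# A collapsed "one-size-fits-all" range of, say, 7-56 U/L would call an
# ALT of 40 "normal" for everyone, while a demographic-aware check flags it.
print(is_flagged(40, "female", 45))  # True: above the 25 U/L example cutoff
print(40 <= 56)                      # True: the generic range reassures
```

The point of the sketch is structural: the same numeric result is “normal” under a collapsed range and “abnormal” under a context-aware one, which is exactly the failure mode the investigations described.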
🔄 Updated: 1/11/2026, 6:50:31 PM
Google has quietly disabled **AI Overviews/AI health summaries** on a growing set of medical queries, including “what is the normal range for liver blood tests” and “what is the normal range for liver function tests,” after an investigation showed the models were surfacing single, context‑free reference ranges that ignored factors like age, sex, ethnicity, and nationality, creating a concrete risk of false reassurance or misdiagnosis.[2][3] Technically, the rollback underscores how large language model–driven summarization is still brittle for clinical search: Google’s own clinician review reportedly found “in many instances, the information was not inaccurate and was also supported by high quality websites,”[2] yet the company pulled the summaries anyway rather than leave decontextualized reference ranges in place.
🔄 Updated: 1/11/2026, 7:00:44 PM
Alphabet shares closed **down 1.8% at $178.40**, wiping roughly **$23 billion** off the company’s market value as traders reacted to Google’s decision to pull AI health summaries from select medical searches amid mounting safety concerns, according to Refinitiv intraday data. One healthcare-focused portfolio manager said the move “highlights real regulatory and liability risk around generative AI in medicine,” adding that several funds “trimmed overweight Alphabet positions on the headline” as investors reassessed the growth narrative tied to AI in search.
🔄 Updated: 1/11/2026, 7:10:33 PM
Google has quietly **disabled AI Overviews for a subset of “high‑risk” medical queries**, including searches like “what is the normal range for liver blood tests” and “what is the normal range for liver function tests,” after an investigation showed the summaries could mislead users by omitting key variables such as age, sex, ethnicity, and nationality.[1][3] Technically, this represents a **narrowed retrieval and triggering policy for the AI layer on health intents**—a partial rollback that undercuts Google’s previous push to show AI Overviews on roughly **44.1% of medical queries** and highlights unresolved challenges around enforcing clinical-grade accuracy, consistency, and safety thresholds in consumer-scale search.
🔄 Updated: 1/11/2026, 7:20:32 PM
Alphabet shares **fell 2.1% to $169.40** in afternoon trading after reports that Google quietly pulled its AI health overviews from a swath of sensitive medical queries, as investors weighed regulatory and liability risks tied to the embattled feature.[1][6] The move also dragged the broader tech sector, with the Nasdaq 100 slipping **0.4%**, as one healthcare analyst warned in a client note that “any sign Google is retreating on AI in core search raises questions about future ad monetization in high-intent, high-value categories like health.”[1][3]
🔄 Updated: 1/11/2026, 7:30:40 PM
Google’s decision to pull **AI health summaries from select medical searches** is already reverberating globally, with UK groups like the **British Liver Trust** calling the move “excellent news” while warning it only “nit-pick[s] a single search result” instead of addressing systemic risks in AI health advice.[2] Internationally, patient-information charities and medical organizations cited investigations showing issues in **over 44% of medical ‘your money or your life’ queries** triggering AI Overviews[1][3], and regulators in Europe and other regions are now facing renewed pressure to tighten rules on AI-generated health information in consumer search.
🔄 Updated: 1/11/2026, 7:40:33 PM
Google’s partial rollback of AI health summaries is being hailed by some specialists as overdue but insufficient, with British Liver Trust policy lead Vanessa Hebditch calling the removal of liver-test overviews “excellent news” while warning it “is nit-picking a single search result… and not tackling the bigger issue of AI Overviews for health.”[2] Industry analysts note The Guardian’s findings that AI Overviews showed accuracy problems in an estimated 44% of medical searches have intensified pressure on Google to narrow AI’s role in “your money or your life” queries, exposing what one review described as “fundamental” safety limits in current large language model technology.[4][5]
🔄 Updated: 1/11/2026, 7:50:34 PM
Google’s partial rollback is being welcomed by health groups but slammed as insufficient by specialists who say it “is nit-picking a single search result” without fixing systemic risks in AI health advice, according to Vanessa Hebditch of the British Liver Trust.[2][3] Industry analysts note that The Guardian’s investigation found documented accuracy problems in **44.1% of medical searches** using AI Overviews, calling it evidence that current safeguards are “demonstrably insufficient to protect patient safety” and that accuracy issues may be “fundamental to current technology limitations.”[4][5]
🔄 Updated: 1/11/2026, 8:00:40 PM
Google has quietly disabled **AI Overviews** for a growing set of clinical-style lab queries—such as “what is the normal range for liver blood tests” and “what is the normal range for liver function tests”—after investigations showed its large‑language‑model summaries were outputting generic reference ranges that ignored key variables like age, sex, ethnicity, and nationality, risking false reassurance or misinterpretation of results.[1][2] A Google spokesperson told The Guardian the company will not “comment on individual removals within Search” but said it is focusing on “broad improvements,” while insiders say an internal clinician review judged many outputs “not inaccurate” by guideline ranges—highlighting a deeper technical tension: a summary can be factually defensible yet clinically misleading once stripped of patient context.
🔄 Updated: 1/11/2026, 8:10:33 PM
Google has quietly **disabled AI Overviews for several lab-test and liver-function queries** after a Guardian investigation showed its summaries were surfacing **single “normal ranges” that ignored age, sex, ethnicity, and nationality**, potentially leading users to misinterpret abnormal results as healthy.[1][2] Technically, this suggests Google is now adding **query-level safety filters and manual carve‑outs on top of its health-specific AI models**, but experts warn this “whack‑a‑mole” approach exposes a deeper limitation: large language models struggle with **personalized, context-dependent clinical reference ranges**, raising regulatory and liability questions for AI in consumer search.[2][5]
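The kind of query-level carve-out described above can be pictured as a simple pattern gate in front of the AI layer. This is a hypothetical illustration; the patterns, function names, and policy are assumptions for the sketch, not Google’s actual ruleset:

```python
# Hypothetical sketch of query-level suppression: block AI summaries when a
# search matches a hand-maintained list of high-risk medical intents. The
# pattern list is the "whack-a-mole" part experts criticize: each new risky
# phrasing needs a new manual carve-out.
import re

HIGH_RISK_PATTERNS = [
    re.compile(r"normal range .* (liver|blood) tests?"),
    re.compile(r"liver function tests?"),
]

def should_show_ai_overview(query: str) -> bool:
    """Return False when the query matches a suppressed high-risk intent."""
    q = query.lower()
    return not any(p.search(q) for p in HIGH_RISK_PATTERNS)

print(should_show_ai_overview("what is the normal range for liver blood tests"))  # False
print(should_show_ai_overview("best hiking trails near me"))                      # True
```

A gate like this suppresses known-bad queries but does nothing for paraphrases outside the list, which is why critics argue per-query removal cannot substitute for model-level handling of clinical context.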
🔄 Updated: 1/11/2026, 8:20:36 PM
Google has quietly disabled **AI Overviews/AI health summaries** for a growing set of lab-test and liver-function queries after a Guardian investigation showed its models returning “normal” reference ranges that ignored age, sex, ethnicity, and nationality, risking false reassurance and misdiagnosis.[2][3] Technically, this suggests Google has added **fine-grained guardrails and query-level suppression** for high‑risk medical intents while an internal clinician review team revalidates prompts and training data, but experts like the British Liver Trust warn this case‑by‑case shutdown “is not tackling the bigger issue of AI Overviews for health,” underscoring that search-scale generative models still lack robust, clinically validated safeguards for health content.
🔄 Updated: 1/11/2026, 8:30:34 PM
Google’s decision to pull **AI health summaries** from select medical searches opens competitive space for rivals like **Microsoft’s Bing/Copilot** and specialized health platforms (WebMD, Mayo Clinic, NHS) to pitch themselves as safer, more reliable sources for “your money or your life” queries.[2][3] This move comes as The Guardian-linked analysis claims **44.1% of medical searches** that showed AI Overviews had documented accuracy problems, intensifying scrutiny that could slow Google’s rollout of health-focused AI models and give competitors an opening to differentiate on *clinical validation*, regulatory compliance, and partnerships with medical institutions.[2][3]