Exposed: How Google Launched — Then Buried — Its AI Medical Crowdsourcing Tool

A detailed look at the rise and fall of Google’s “What People Suggest” reveals a pattern of optimistic launches followed by quiet retreats in the company’s health AI strategy. The feature used AI to surface community health advice from internet discussions. Google confirmed its discontinuation only after journalists inquired; three insiders had already reported that the tool was no longer active.
Announced at a health conference hosted by Google in New York in spring of last year, the feature was presented with considerable ambition. Then-chief health officer Karen DeSalvo described it as a way to help people benefit from the experiences of others who had navigated similar health conditions. The AI organized forum content into readable themes and provided links for further exploration.
The company’s explanation for removing the feature, search simplification, was met with skepticism after the public announcement it cited turned out not to mention the feature at all. Critics noted that the contrast between the enthusiastic launch and the muted, unexplained removal reflected poorly on Google’s approach to health AI accountability.
The incident sits within a larger context of Google’s difficulties with health misinformation. An investigation earlier this year found that AI Overviews on Google Search were displaying false health information to billions of users. Google made limited adjustments after the investigation, but comprehensive reform of its health AI systems has not materialized.
The upcoming “The Check Up” event will give Google another chance to reframe its health AI narrative. But observers will be looking for evidence that the company has learned from the “What People Suggest” episode — and that future health AI products will be developed and, if necessary, retired with greater care and transparency. That shift, if genuine, could begin to rebuild trust.
