18 March 2026
Google AI Overviews Prefer YouTube Over Medical Sites for Health Queries, Study Finds

Google’s AI Overviews feature, designed to answer health-related queries, cites YouTube more frequently than any medical website, according to a recent study. This revelation raises critical questions about the reliability of a tool accessed by over 2 billion users monthly.

The company has long maintained that its AI summaries, which appear prominently at the top of search results, are trustworthy and draw from reputable medical sources such as the Centers for Disease Control and Prevention and the Mayo Clinic. However, research conducted by SE Ranking, a search engine optimization platform, challenges this claim.

Study Reveals YouTube as Top Source

The study analyzed over 50,000 health queries conducted in Berlin, finding that YouTube was the most frequently cited source in AI Overviews. The video-sharing platform, which is the world’s second most visited website after Google, accounted for 4.43% of all citations. This percentage far exceeded any hospital network, government health portal, or academic institution.

Researchers expressed concern over this finding, noting, “YouTube is not a medical publisher. It is a general-purpose video platform where anyone, from board-certified physicians to wellness influencers, can upload content.”

Google’s Response and Study Limitations

In response to the findings, Google emphasized that AI Overviews are designed to highlight high-quality content from reputable sources, regardless of format. The company also pointed out that YouTube hosts content from various credible health authorities and licensed medical professionals.

The study, however, was limited to German-language queries within Germany, a country with a healthcare system regulated by both German and EU directives. Researchers suggested that if AI systems depend heavily on non-authoritative sources even in such a regulated environment, the issue might be more widespread.

Concerns Over Misinformation

The research follows a Guardian investigation that uncovered instances where Google AI Overviews provided misleading health information. In one alarming case, incorrect details about liver function tests were given, potentially leading individuals with serious liver conditions to believe they were healthy. Google subsequently removed AI Overviews for some medical searches.

Expert Opinions and Broader Implications

Hannah van Kolfschooten, an AI, health, and law researcher at the University of Basel, commented on the study, stating, “This study provides empirical evidence that the risks posed by AI Overviews for health are structural, not anecdotal. It becomes difficult for Google to argue that misleading or harmful health outputs are rare cases.”

She further noted, “The findings show that these risks are embedded in the way AI Overviews are designed. The reliance on YouTube rather than public health authorities or medical institutions suggests that visibility and popularity, rather than medical reliability, is the central driver for health knowledge.”

Google’s Defense and Future Outlook

Google defended the reliability of AI Overviews, pointing to the study’s own data showing that the most frequently referenced domains are reputable. The company highlighted that 96% of the most-cited YouTube videos came from medical channels run by licensed professionals or trusted health organizations.

However, researchers cautioned that these videos represented less than 1% of all YouTube links cited by AI Overviews. “Most of them (24 out of 25) come from medical-related channels like hospitals, clinics, and health organizations,” they wrote. “But it’s important to remember that these 25 videos are just a tiny slice of all YouTube links AI Overviews actually cite.”

The study underscores the need for Google to reassess how AI Overviews prioritize sources, particularly in the sensitive domain of health information. As AI plays a growing role in disseminating information, ensuring its accuracy and reliability remains paramount. The findings may prompt further scrutiny of, and calls for transparency in, AI-driven content curation.