8 January, 2026
Personalized Algorithms May Distort Learning and Reality, Study Finds

The same personalized algorithms that curate online content based on previous user choices on platforms like YouTube may also impair learning, a new study suggests. Researchers found that when an algorithm dictated which information participants saw about an unfamiliar subject, they narrowed their focus and explored only a limited subset of the available information.

As a result, participants frequently answered incorrectly when tested on the information they were supposed to learn, yet remained overly confident in their inaccurate answers. The findings are alarming, according to Giwon Bahg, who led the study as part of his doctoral dissertation in psychology at The Ohio State University.

Implications of Algorithmic Learning Bias

Most research on personalized algorithms, Bahg noted, has concentrated on how they might influence people’s beliefs about political or social issues with which users already have some familiarity.

“But our study shows that even when you know nothing about a topic, these algorithms can start building biases immediately and can lead to a distorted view of reality,” said Bahg, now a postdoctoral scholar at Pennsylvania State University.

The study, published in the Journal of Experimental Psychology: General, suggests that individuals may have little trouble taking the limited knowledge they gain from personalized algorithms and forming broad generalizations, according to study co-author Brandon Turner, professor of psychology at Ohio State.

“People miss information when they follow an algorithm, but they think what they do know generalizes to other features and other parts of the environment that they’ve never experienced,” Turner explained.

Understanding the Experiment

The researchers used a hypothetical scenario to illustrate how algorithmic personalization could lead to inaccurate generalizations during learning. Imagine a person unfamiliar with movies from a certain country who relies on an on-demand streaming service for recommendations. If the service suggests an action-thriller film first, the person may continue watching similar genres, skewing their understanding of the country’s cinematic landscape.

Bahg and his colleagues tested this phenomenon in an online experiment with 346 participants. In a fictional setup, participants learned to categorize crystal-like aliens that varied on six features. Some participants had to sample all of the features to form a complete picture, while others were guided by a personalization algorithm that encouraged selective sampling.
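To make the contrast concrete, here is a minimal sketch of how a rich-get-richer recommendation loop can narrow what a learner ends up seeing. The six-feature setup mirrors the study, but the specific rule (re-surfacing whichever features were inspected most often before), the trial count, and the per-trial viewing budget are illustrative assumptions, not the study's actual algorithm.

```python
import random
from collections import Counter

N_FEATURES = 6   # each "alien" varies on six features, as in the study
N_TRIALS = 60    # hypothetical number of learning trials
PER_TRIAL = 3    # hypothetical number of features inspected per trial

def broad_sampling(rng):
    """Forced broad sampling: features are inspected uniformly at random."""
    seen = Counter()
    for _ in range(N_TRIALS):
        seen.update(rng.sample(range(N_FEATURES), PER_TRIAL))
    return seen

def algorithm_guided(rng):
    """Toy personalization rule: the more often a feature has been inspected,
    the more likely it is to be surfaced again (a rich-get-richer loop)."""
    seen = Counter()
    for _ in range(N_TRIALS):
        # weight = 1 + past views, so early choices are increasingly reinforced
        weights = [1 + seen[f] for f in range(N_FEATURES)]
        picks = set()
        while len(picks) < PER_TRIAL:
            picks.add(rng.choices(range(N_FEATURES), weights=weights, k=1)[0])
        seen.update(picks)
    return seen

rng = random.Random(0)
print("broad sampling:   ", sorted(broad_sampling(rng).items()))
print("algorithm-guided: ", sorted(algorithm_guided(rng).items()))
```

With a fixed seed, the algorithm-guided counts tend to pile up on a few features while the broad-sampling counts stay roughly even, which is the kind of narrowing the study describes.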

The findings revealed that participants guided by the algorithm sampled fewer features, and did so in a consistently selective way. When tested on new items, they often miscategorized them based on their limited knowledge, yet remained confident in their incorrect assessments.

“They were even more confident when they were actually incorrect about their choices than when they were correct, which is concerning because they had less knowledge,” Bahg noted.

Broader Implications and Future Concerns

Turner highlighted the real-world implications of these findings, particularly for younger audiences engaging with algorithm-driven content online. He questioned the potential consequences for children genuinely trying to learn about the world while interacting with algorithms that prioritize content consumption over educational value.

“Consuming similar content is often not aligned with learning. This can cause problems for users and ultimately for society,” Turner stated.

Vladimir Sloutsky, professor of psychology at Ohio State and co-author of the study, echoed these concerns, emphasizing the need for further research into the long-term effects of algorithmic content personalization on learning and perception.

The study’s revelations underscore the importance of understanding how personalized algorithms influence not just what we believe, but how we learn and perceive the world around us. As digital platforms continue to evolve, the conversation around ethical algorithm design and its impact on society becomes increasingly critical.