Personalized algorithms, widely used on platforms like YouTube to tailor content to user preferences, may be impairing our ability to learn, a new study suggests. Conducted by researchers at The Ohio State University, the study found that when an algorithm dictated what information participants saw about an unfamiliar subject, they explored only a narrow slice of the available information.
As a result, participants often provided incorrect answers during tests on the material they were supposed to learn, yet they remained confident in their erroneous responses. The findings raise concerns, according to Giwon Bahg, who led the study as part of his doctoral dissertation in psychology at The Ohio State University.
Understanding Algorithmic Influence on Learning
While many studies have examined how personalized algorithms shape beliefs on political or social issues, Bahg’s research highlights a different concern. “Our study shows that even when you know nothing about a topic, these algorithms can start building biases immediately and can lead to a distorted view of reality,” said Bahg, now a postdoctoral scholar at Pennsylvania State University. The study was published in the Journal of Experimental Psychology: General.
Brandon Turner, a co-author and professor of psychology at Ohio State, noted the potential for these algorithms to encourage sweeping generalizations based on limited knowledge.
“People miss information when they follow an algorithm, but they think what they do know generalizes to other features and other parts of the environment that they’ve never experienced,” Turner explained.
Experimenting with Algorithmic Learning
The researchers illustrated the potential for inaccurate generalizations with an example involving a person exploring foreign films through an on-demand streaming service. If the algorithm recommends an action-thriller film and the person watches it, the system will suggest more of the same genre, potentially skewing the viewer’s understanding of that country’s cinema.
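To make that feedback loop concrete, here is a minimal Python sketch of a “more of the same” recommender. The catalog, titles, and ranking rule are all invented for illustration; real streaming services use far more sophisticated models, but the narrowing dynamic is the same: one early choice biases every later suggestion.

```python
import random
from collections import Counter

# Toy catalog: (title, genre) pairs for a hypothetical foreign-film library.
# All titles and genres are invented for illustration.
CATALOG = [
    ("Night Chase", "action-thriller"), ("Steel Run", "action-thriller"),
    ("Quiet Harvest", "drama"), ("Paper Lanterns", "romance"),
    ("The Long Court", "historical"), ("Blind Alley", "action-thriller"),
    ("Sea of Letters", "drama"), ("Festival Days", "comedy"),
]

def recommend(history, k=3):
    """Greedy 'more of the same': rank unseen films by how often
    their genre already appears in the viewer's watch history."""
    genre_counts = Counter(genre for _, genre in history)
    unseen = [film for film in CATALOG if film not in history]
    random.shuffle(unseen)  # break ties randomly among equally-ranked films
    return sorted(unseen, key=lambda film: -genre_counts[film[1]])[:k]

history = [("Night Chase", "action-thriller")]  # the viewer's first pick
for _ in range(4):
    next_film = recommend(history)[0]  # always accept the top suggestion
    history.append(next_film)

print([genre for _, genre in history])
# Typically dominated by 'action-thriller': a single early choice
# narrows nearly everything the viewer is shown afterward.
```

Even this crude rule, with no personalization model at all, quickly locks the viewer into one genre while the rest of the catalog goes unexplored.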
In their study, Bahg and colleagues tested this phenomenon with 346 participants using a fictional setup involving crystal-like aliens with six distinct features. Participants were tasked with identifying these aliens without knowing the total number of types. In one scenario, participants could explore all features, while in another, a personalization algorithm guided their choices.
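The paper’s exact procedure is not reproduced here, but a simplified simulation illustrates the contrast between the two conditions. The sampling budget, the number of aliens, and the assumption that the algorithm keeps serving whichever features the learner inspected first are all stand-ins chosen for illustration.

```python
import random

NUM_FEATURES = 6       # each alien has six observable features (as in the study)
SAMPLES_PER_ALIEN = 4  # assumed budget of feature checks per alien

def self_directed(rng):
    """Free exploration: the learner spreads samples across all features."""
    return rng.sample(range(NUM_FEATURES), SAMPLES_PER_ALIEN)

def algorithm_guided(rng, preferred):
    """Personalized condition: the algorithm keeps serving the features
    the learner happened to inspect first, so coverage stays narrow."""
    return [rng.choice(preferred) for _ in range(SAMPLES_PER_ALIEN)]

rng = random.Random(0)
preferred = rng.sample(range(NUM_FEATURES), 2)  # the two features seen earliest

free_coverage, guided_coverage = set(), set()
for _ in range(30):  # 30 simulated aliens
    free_coverage.update(self_directed(rng))
    guided_coverage.update(algorithm_guided(rng, preferred))

print(f"features seen, self-directed:    {len(free_coverage)}/6")
print(f"features seen, algorithm-guided: {len(guided_coverage)}/6")
# Self-directed sampling covers all six features almost immediately;
# the guided learner never leaves the two it started with.
```

Under these assumptions, the guided learner ends up classifying aliens from two features out of six, which is exactly the kind of incomplete evidence that invites confident but wrong generalizations.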
The study found that when participants relied on the algorithm, they consistently sampled fewer features and often miscategorized new information based on their limited exposure. Despite these errors, participants remained confident in their conclusions. “They were even more confident when they were actually incorrect about their choices than when they were correct,” Bahg noted.
Real-World Implications and Future Concerns
Turner emphasized the broader implications of these findings, particularly for young learners.
“If you have a young kid genuinely trying to learn about the world, and they’re interacting with algorithms online that prioritize getting users to consume more content, what is going to happen?” he asked. Turner warned that consuming similar content repeatedly is often misaligned with genuine learning, posing potential problems for both individuals and society at large.
Vladimir Sloutsky, another co-author and professor of psychology at Ohio State, echoed these concerns, suggesting that the study’s findings could have significant ramifications for educational practices and content consumption strategies.
As personalized algorithms continue to evolve and play a larger role in how information is curated and consumed, understanding their impact on learning and perception becomes increasingly crucial. This study serves as a reminder of the need for critical engagement with algorithmically delivered content and the importance of fostering diverse and comprehensive learning experiences.