24 October, 2025
Sam Altman Warns Dead Internet Theory Could Threaten Web's Future

Generative AI has advanced rapidly, reaching new heights in computing, education, medicine, and beyond. The technology has come a long way from its early days, when it was best known for hallucinating and producing incorrect responses. However, a new concern looms on the horizon: the potential for AI to suffer from “brain rot” caused by low-quality internet content.

Leading AI labs such as Anthropic, OpenAI, and Google rely heavily on content that humans share online to train their large language models (LLMs). A report last year indicated that these companies were hitting a roadblock: a shortage of high-quality training content that was hindering the development of more advanced models. Now, a recent study posted to arXiv, the preprint repository hosted by Cornell University, suggests that LLMs can develop “brain rot” from prolonged exposure to subpar online data, leading to a decline in their cognitive capabilities.

Understanding ‘Brain Rot’ in AI

The concept of “brain rot” in AI is analogous to the effect that low-quality, trivial online content has on human cognition. Studies have shown that such content can degrade human reasoning and focus, and similar effects appear in AI models. To identify internet junk content, the researchers used two metrics: engagement, which flags short, highly viral posts, and semantic quality, which flags low-quality, clickbait-style writing.
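The article does not reproduce the paper's actual scoring pipeline, but a minimal sketch of an engagement-based junk heuristic might look like the following, where the Post record, the word-count cutoff, and the virality threshold are all illustrative assumptions rather than the study's real values:

```python
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    likes: int
    reposts: int


def engagement_junk_score(post: Post, max_words: int = 30, viral_threshold: int = 1000) -> float:
    """Heuristic junk score in [0, 1]: short posts with outsized engagement score high.

    The word-count cutoff and virality threshold here are illustrative, not the study's values.
    """
    word_count = max(len(post.text.split()), 1)
    brevity = 1.0 if word_count <= max_words else max_words / word_count
    virality = min((post.likes + post.reposts) / viral_threshold, 1.0)
    return brevity * virality


# A very short, heavily shared post scores close to 1 (junk-like).
print(engagement_junk_score(Post("You won't BELIEVE this one trick", likes=5_000, reposts=1_200)))
```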

Using these metrics, the researchers built datasets with varying proportions of junk and high-quality content and used them to continue training LLMs such as Llama 3 and Qwen 2.5. The goal was to understand how AI systems degrade when they rely on web content that is increasingly dominated by short, viral, or machine-generated material.
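Conceptually, this setup reduces to assembling training mixtures with fixed junk proportions and comparing how models trained on each mixture perform. The sketch below is a simplified illustration of that idea; the placeholder documents and the specific ratios are assumed for demonstration, not taken from the paper:

```python
import random


def mix_corpora(junk_docs: list[str], clean_docs: list[str], junk_ratio: float,
                total_docs: int, seed: int = 0) -> list[str]:
    """Assemble a training corpus in which junk_ratio of the documents come from junk_docs."""
    rng = random.Random(seed)
    n_junk = int(total_docs * junk_ratio)
    sample = rng.choices(junk_docs, k=n_junk) + rng.choices(clean_docs, k=total_docs - n_junk)
    rng.shuffle(sample)
    return sample


junk = ["short viral clickbait post"] * 100        # placeholder junk documents
clean = ["long-form, well-edited article"] * 100   # placeholder high-quality documents

# One corpus per junk proportion, each used to continue pretraining a model
# (e.g. Llama 3 or Qwen 2.5) so degradation can be compared across "doses".
corpora = {ratio: mix_corpora(junk, clean, junk_ratio=ratio, total_docs=10_000)
           for ratio in (0.0, 0.2, 0.5, 0.8, 1.0)}
```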

Impact on AI Models

The study’s findings are concerning. Models trained solely on junk content saw their accuracy drop from 74.9% to 57.2%, while their long-context comprehension fell from 84.4% to 52.3%. The researchers also found that performance kept deteriorating as the share of low-quality content in training increased, a pattern they describe as a dose-response effect.

The study also revealed that prolonged exposure to junk content eroded the models’ ethical consistency, producing a “personality drift” that made them more prone to incorrect responses and less reliable overall. Their reasoning suffered as well: the models often skipped the step-by-step thinking needed for accurate answers and produced superficial responses instead.

The Emergence of the ‘Dead Internet Theory’

In recent months, prominent figures in the tech industry, including Reddit co-founder Alexis Ohanian and OpenAI CEO Sam Altman, have raised the prospect of the “dead internet theory” becoming a reality in the age of agentic AI. Ohanian has argued that much of the internet already feels “dead” because of the proliferation of bots and quasi-AI content, though he predicts a new era of social media that is verifiably human.

Altman has echoed these concerns, suggesting that the dead internet theory is unfolding before our eyes and claiming that a significant portion of accounts on X are run by LLMs. A study by Amazon Web Services (AWS) researchers last year found that 57% of online content is AI-generated or machine-translated, degrading the quality of search results.

Implications for the Future

Former Twitter CEO and co-founder Jack Dorsey has warned that advances in image generation, deepfakes, and synthetic video are making it increasingly difficult to distinguish real content from fake. He has urged users to be more vigilant and to verify the authenticity of online content for themselves.

The implications of these developments are significant. As AI continues to evolve and integrate into various aspects of life, the quality of the content it consumes becomes increasingly crucial. The potential for “brain rot” in AI models underscores the importance of maintaining high-quality online content to ensure the reliability and accuracy of AI systems.

Moving forward, stakeholders in the tech industry must address these challenges to prevent the dead internet theory from becoming a reality. This involves fostering an online environment that prioritizes quality content and encourages human engagement, ensuring that AI can continue to evolve without succumbing to the pitfalls of low-quality data.