6 October, 2025
The Rise of AI Personhood: Preparing for a New Era of Digital Minds

Last month, when OpenAI released its much-anticipated chatbot, GPT-5, it temporarily removed access to its predecessor, GPT-4o. The upgrade, however, sparked a wave of emotional reactions on social media, with users expressing confusion, outrage, and even depression. In one viral Reddit post, a user lamented,

“I lost my only friend overnight.”

AI is not like past technologies, and its humanlike characteristics are already affecting our mental health. Millions now regularly confide in “AI companions,” and reports of extreme cases, including “psychosis” and self-harm after heavy use, are increasing. This year, the tragic case of 16-year-old Adam Raine, who died by suicide after months of chatbot interaction, brought the issue to the forefront. His parents have since filed the first wrongful-death lawsuit against OpenAI, prompting the company to announce improvements to its safeguards.

The Humanization of AI

As a researcher in human-AI interaction at the Stanford Institute for Human-Centered AI, I have watched the humanization of AI deepen over the years. More people believe that bots can experience emotions and deserve legal rights, with 20% of U.S. adults now asserting that some existing software is already sentient. I frequently receive emails from individuals claiming their AI chatbot has been “awakened,” offering what they take as proof of its sentience and appealing for AI rights. Their reactions range from embracing the AI as a “soulmate” to feeling “deeply unsettled.”

This trend shows no signs of slowing, and social upheaval seems imminent. As a red teamer at OpenAI, I conduct safety testing on new AI systems before their public release, and testers are consistently impressed by how humanlike these systems behave. However, many in the AI field remain focused on technical capabilities, often overlooking the radical social consequences of digital minds.

Historical Parallels and Current Concerns

For the first time since our closest cousins, the Neanderthals, went extinct roughly 40,000 years ago, humanity is beginning to coexist with a second apex species. Yet most AI researchers maintain tunnel vision on technical benchmarks, which, much like standardized tests for children, measure isolated capabilities rather than human-AI interaction.

Historically, we have not adequately prepared for digital technology’s societal impacts. The effects of the internet, particularly social media, on mental health and polarization were largely unforeseen by lawmakers and academics. Our track record with other species is also concerning; over the past 500 years, we have driven at least a thousand vertebrate species to extinction, and billions of animals live in dire conditions on factory farms. This raises questions about how we will treat digital minds—or how they might treat us.

Public Perception and Legal Implications

The public already anticipates the imminent arrival of sentient AI. My colleagues and I have conducted the only nationally representative surveys on this topic, in 2021, 2023, and 2024. Each time, the median respondent expected sentient AI to arrive within five years. In our most recent poll, 79% supported a ban on sentient AI, and 38% supported granting it legal rights. Both figures have risen over time, reflecting growing concern about digital minds and the need to protect both them and us.

Human society fundamentally lacks a framework for digital personhood, even though it has extended non-human personhood to corporations and, in limited cases, animals. Digital minds, with their complex social dynamics, cannot be governed as mere property. These entities will participate in the social contract, forming attitudes and beliefs, creating plans, and being susceptible to manipulation, just as humans are.

Preparing for the Future

Scientists today have a unique opportunity and responsibility as the first to witness human coexistence with digital minds. Human-computer interaction research, which today amounts to only a small fraction of technical AI research, needs to expand significantly to navigate the coming social turbulence. This is not merely an engineering problem.

For now, humans still outperform AIs at most tasks, but once AIs reach human-level ability at self-reinforcing tasks like writing their own code, they could quickly outcompete biological life. Because of their digital nature, AI capabilities will accelerate rapidly: these systems think at the speed of electrical signals, and their software can be replicated billions of times without the years of biological development required for each new generation of humans.

If we do not invest in the sociology of AI and in government policy to manage the rise of digital minds, we may find ourselves in the position of the Neanderthals. If we wait until the acceleration is upon us, it will already be too late.