20 January 2026

Artificial Intelligence: A Normal Technology or a Revolutionary Force?

Opinions about artificial intelligence (AI) span a wide spectrum. On one end, some see AI as a catalyst for unprecedented economic growth and scientific advancement, potentially leading to human immortality. On the other, there are fears of AI causing massive job losses and even posing existential threats to humanity. However, a paper published earlier this year by Arvind Narayanan and Sayash Kapoor, computer scientists at Princeton University, offers a different perspective, treating AI as a “normal technology.”

This perspective has sparked significant debate among AI researchers and economists. Narayanan and Kapoor argue against the notion that AI is an unprecedented intelligence capable of determining its own future. Instead, they propose that AI will likely follow the trajectory of past technological revolutions, impacting adoption, jobs, risks, and policy in a more conventional manner.

Adoption versus Innovation

The authors contend that the pace of AI adoption has lagged behind the rate of innovation. Although many people use AI tools, the time spent with them still amounts to a small fraction of overall working hours in America. This lag is not unexpected: adapting to new technologies takes time for both individuals and companies.

Factors such as tacit, organization-specific knowledge, data format constraints, and regulatory challenges further slow down adoption. Historical parallels can be drawn with the electrification of factories a century ago, which required a complete overhaul of floor layouts and processes, taking decades to implement fully.

Moreover, the pace of AI innovation itself may be constrained by the need for extensive real-world testing in areas like drug development and self-driving cars. These processes are often slow and costly, particularly in safety-critical fields subject to strict regulations. Consequently, the economic impacts of AI are likely to be gradual rather than causing abrupt automation of significant economic sectors.

AI and the Future of Work

Even a gradual spread of AI will transform the nature of work. As more tasks become automatable, a greater share of human work will involve supervising and controlling AI systems. The shift is reminiscent of the Industrial Revolution, when workers moved from performing manual tasks to overseeing machines and handling situations beyond the machines’ capabilities.

Narayanan and Kapoor suggest that instead of AI eliminating jobs wholesale, it will lead to roles focused on configuring, monitoring, and controlling AI systems. Without human oversight, AI may be “too error-prone to make business sense,” they speculate.
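
To make the shape of such work concrete, here is a minimal sketch of a human-in-the-loop gate around an AI output. It is an illustration of the general pattern, not anything proposed in the paper; the model call (draft_invoice_email) and all other names are hypothetical.

```python
# Minimal sketch of "configuring, monitoring, and controlling" an AI system:
# the AI proposes, a human approves, and every decision is logged.
# draft_invoice_email stands in for a call to some AI model; all names
# here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class ReviewLog:
    """Keeps an audit trail of every AI proposal and the human verdict."""
    entries: list = field(default_factory=list)

    def record(self, proposal: str, approved: bool) -> None:
        self.entries.append({"proposal": proposal, "approved": approved})


def draft_invoice_email(customer: str) -> str:
    # Placeholder for an AI model call (hypothetical).
    return f"Dear {customer}, please find your invoice attached."


def send_with_oversight(customer: str, log: ReviewLog) -> None:
    draft = draft_invoice_email(customer)
    print("AI draft:\n" + draft + "\n")
    approved = input("Send this email? [y/N] ").strip().lower() == "y"
    log.record(draft, approved)
    print("Sent." if approved else "Held for human revision.")


if __name__ == "__main__":
    send_with_oversight("Acme Ltd", ReviewLog())
```

The point of the pattern is that the human, not the model, remains the decision-maker; the log is what makes the monitoring part auditable.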

Rethinking AI Risks

The authors also challenge the emphasis on “alignment” in AI models, which aims to ensure that outputs match the goals of their human creators. They argue that whether an AI output is harmful often depends on context that humans grasp but models lack. An AI asked to write a persuasive email, for instance, cannot tell whether it will be used for legitimate marketing or for phishing.

“Trying to make an AI model that cannot be misused is like trying to make a computer that cannot be used for bad things,” the authors write.

Instead, they argue that defenses against AI misuse, such as the creation of malware or bioweapons, should focus on strengthening existing cyber-security and biosafety measures. Hardening those downstream defenses would also build resilience against threats that do not involve AI at all.

Policy Implications and Future Outlook

The authors’ viewpoint suggests a range of policies to mitigate risks and bolster resilience. These include whistleblower protection, compulsory disclosure of AI usage, registration to track deployment, and mandatory incident-reporting, drawing parallels with existing practices in data protection, vehicle registration, and cyber-attack reporting.
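
As a rough illustration of what mandatory incident-reporting could involve, the sketch below defines a hypothetical report record. Every field name is an assumption for the sake of the example, not anything specified by the authors or by existing law.

```python
# Hypothetical sketch only: what a mandatory AI incident report might
# record, by analogy with cyber-attack reporting. The schema is an
# assumption, not drawn from the paper or any actual regulation.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class AIIncidentReport:
    deployer: str         # organisation operating the AI system
    system_id: str        # identifier from a deployment registry
    occurred_at: str      # ISO-8601 timestamp of the incident
    description: str      # what went wrong, in plain language
    harm_observed: bool   # whether concrete harm resulted
    users_notified: bool  # whether AI usage was disclosed to those affected


report = AIIncidentReport(
    deployer="Example Corp",
    system_id="reg-0042",
    occurred_at=datetime.now(timezone.utc).isoformat(),
    description="Chat assistant quoted incorrect refund terms to customers.",
    harm_observed=False,
    users_notified=True,
)

# Serialise for submission to a (hypothetical) incident registry.
print(json.dumps(asdict(report), indent=2))
```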

While the paper offers a grounded perspective on AI, it is not without critics. Some argue that it downplays potential labor-market disruption, underestimates the speed of AI adoption, and dismisses the risks of misalignment and deception too readily. The authors’ assertion that AI will not outperform trained humans at forecasting or persuasion strikes some as overly confident.

Nonetheless, many may find the rejection of AI exceptionalism refreshing. The middle-ground view, less dramatic than predictions of rapid AI dominance or apocalypse, often receives less attention. The authors believe articulating this position is valuable because “some version of our worldview is widely held.”

Amid current concerns about whether today’s AI investment is sustainable, their paper offers a sober alternative to AI hysteria, emphasizing that lessons from past technologies can guide sensible AI policy.