
Amazon’s marketplace is currently under scrutiny for selling books that claim to offer expert advice on managing ADHD but appear to have been written by AI tools such as ChatGPT. These AI-generated books are cheap and easy to publish, yet they often contain misleading or dangerous information, raising alarms among experts and consumers alike.
The issue is not isolated to ADHD guides. The platform has seen a surge in AI-produced content, including questionable travel guides and mushroom foraging books that could potentially encourage risky behaviors. This influx of AI-authored material highlights a significant gap in the regulation of digital marketplaces.
AI Detection and Expert Concerns
Originality.ai, a U.S.-based company specializing in detecting AI-generated content, examined samples from eight books available on Amazon’s site. The company reported a 100% AI detection score for each book, indicating a high confidence that these works were authored by chatbots. This finding has sparked concern among experts about the potential spread of misinformation.
Michael Cook, a computer science researcher at King’s College London, emphasized the risks posed by generative AI systems. “These systems are known to give dangerous advice, such as ingesting toxic substances or ignoring health guidelines,” Cook stated. He expressed frustration at the increasing presence of AI-authored books on health topics, warning that they could lead to misdiagnosis or exacerbate existing conditions.
“Generative AI systems like ChatGPT may have been trained on a lot of medical textbooks and articles, but they’ve also been trained on pseudoscience, conspiracy theories, and fiction,” said Cook.
The Ethical Dilemma for Online Marketplaces
Amazon’s business model, which profits from each book sold, whether reliable or not, has been criticized for incentivizing the proliferation of such content. Cook noted that the generative AI companies responsible for these products are not held accountable for the misinformation they may spread.
Prof. Shannon Vallor, director of the University of Edinburgh’s Centre for Technomoral Futures, highlighted Amazon’s ethical responsibility to prevent harm to consumers. However, she acknowledged the complexity of holding a bookseller accountable for all content, given the sheer volume of publications.
“Problems arise because the guardrails previously deployed in the publishing industry have been completely transformed by AI,” Vallor explained.
The current regulatory environment, described as a “wild west,” lacks meaningful consequences for those enabling harmful content, leading to a “race to the bottom,” Vallor added.
Consumer Experiences and Reactions
Richard Wordsworth, who recently received an adult ADHD diagnosis, experienced firsthand the pitfalls of AI-authored books. After his father recommended a book he had found on Amazon, Wordsworth quickly noticed its strange tone and inaccuracies. The book included a quote from conservative psychologist Jordan Peterson and was padded with random anecdotes and historical errors.
Some of the book’s advice was harmful, such as a chapter on emotional dysregulation that warned readers not to expect forgiveness from friends and family for impulsive anger. Wordsworth discovered that the supposed author had an AI-generated headshot and no discernible qualifications. Further searching turned up alarming warnings about his condition, which only deepened his distress.
“If he can be taken in by this type of book, anyone could be,” Wordsworth remarked of his father, expressing concern for others who might be misled by such content.
Amazon’s Response and Future Implications
In response to the controversy, an Amazon spokesperson stated, “We have content guidelines governing which books can be listed for sale and we have proactive and reactive methods that help us detect content that violates our guidelines, whether AI-generated or not.” The company emphasized its commitment to evolving its processes and guidelines to address changes in publishing.
As the debate over AI-authored books continues, the need for clearer regulations and accountability measures becomes increasingly apparent. While AI technology offers potential benefits, its application in sensitive areas like health and wellness requires careful oversight to protect consumers from misinformation.
Looking forward, industry experts and policymakers will need to collaborate to establish standards that ensure the integrity and safety of digital content, balancing innovation with ethical responsibility.