6 September, 2025
The Rising Threat of AI-Generated Research in Science

In the early 2000s, the American pharmaceutical giant Wyeth faced lawsuits from thousands of women who developed breast cancer after using its hormone replacement drugs. Court documents exposed the company’s use of “dozens of ghostwritten reviews and commentaries published in medical journals” to promote unsubstantiated benefits while downplaying the drugs’ risks. These articles, produced by a medical communications firm and published under the names of prominent doctors, masked Wyeth’s involvement, misleading medical professionals who relied on them for prescription guidance.

Wyeth, later acquired by Pfizer in 2009, defended the scientific accuracy of these articles, stating that hiring ghostwriters was a common industry practice. Ultimately, Pfizer paid over $1 billion in damages for the drugs’ harmful effects. This case exemplifies “resmearch”—research manipulated to serve corporate interests rather than scientific truth.

The Emergence of AI-Generated Research

In recent years, similar tactics have been observed across various industries, with soft drink and meat producers funding studies that minimize health risks associated with their products. The advent of AI tools now poses a new threat by drastically reducing the cost and time required to produce such research. Previously, crafting a single paper could take months; today, AI enables the creation of multiple seemingly valid papers within hours.

Public health literature is already witnessing a surge in studies that use AI to trawl large datasets for single-factor results. These studies typically link one factor, such as egg consumption, to a health outcome like dementia. But when a dataset covers thousands of individuals and thousands of variables, some misleading correlations are bound to emerge by chance alone.
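The chance-correlation problem is easy to demonstrate. The sketch below is purely illustrative (the sample size and factor count are invented): it generates an outcome and 200 candidate "factors" that are all pure noise, then counts how many nevertheless clear the conventional p < 0.05 significance bar.

```python
import random
import math

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(42)
n_people, n_factors = 500, 200                 # hypothetical survey size
outcome = [random.gauss(0, 1) for _ in range(n_people)]      # pure noise
factors = [[random.gauss(0, 1) for _ in range(n_people)]
           for _ in range(n_factors)]          # 200 unrelated "exposures"

# For n = 500, |r| > 1.96 / sqrt(n) roughly corresponds to p < 0.05
threshold = 1.96 / math.sqrt(n_people)
hits = sum(1 for f in factors if abs(pearson_r(f, outcome)) > threshold)
print(f"{hits} of {n_factors} unrelated factors look 'significant'")
```

With a 5% false-positive rate, roughly ten of the 200 unrelated factors will look "significant" in any given run, which is why a sufficiently large dataset can always be mined for a publishable single-factor result.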

Between 2014 and 2021, an average of four single-factor studies were published annually. In the first ten months of 2024 alone, 190 such studies were released.

While not all of these studies are driven by corporate motives—some may be academics seeking to bolster their publication records—the ease of AI-facilitated research presents a tempting opportunity for businesses aiming to promote products.

Regulatory Changes and Industry Incentives

In the UK, new government guidelines now require baby-food producers to back marketing claims of health benefits with scientific evidence. While well-intentioned, this policy may inadvertently encourage companies to generate AI-assisted “scientific evidence” to support their claims, increasing demand for such research.

Addressing the Issue

One significant problem is that not all research undergoes peer review before influencing policy. For instance, in 2021, US Supreme Court Justice Samuel Alito cited a briefing paper on gun use funded by a pro-gun nonprofit, the Constitutional Defence Fund. The survey data were never made public and the academic behind the paper declined to answer questions, making it impossible to verify the research's integrity; yet the paper has been cited in legal cases nationwide.

The lesson here is twofold: first, research that has not been subjected to peer review should be approached with caution. Second, the peer review process itself requires reform. Debate continues over how to handle the explosion of published research and how effectively reviewers can vet it.

Recent advancements include preregistration of research plans, transparent reporting of every analytical step, and, for single-factor papers, specification curve analyses, which test whether a result survives every reasonable way of analysing the data.
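The core idea behind a specification curve analysis is to estimate the same effect under every defensible combination of analytic choices and report the whole distribution of estimates rather than one cherry-picked model. A minimal sketch follows, on synthetic data (the effect size, sample size, and covariates are invented for illustration), varying only one analytic choice: which control variables enter the regression.

```python
import itertools
import random

def ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):                         # forward elimination
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            A[j] = [a - f * ai for a, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    coef = [0.0] * k
    for i in reversed(range(k)):               # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, k))) / A[i][i]
    return coef

random.seed(0)
n = 300
exposure = [random.gauss(0, 1) for _ in range(n)]
controls = {c: [random.gauss(0, 1) for _ in range(n)] for c in "abc"}
# Simulated truth: the exposure's effect on the outcome is 0.5
outcome = [0.5 * e + 0.3 * controls["a"][i] + random.gauss(0, 1)
           for i, e in enumerate(exposure)]

# Re-estimate the effect under every subset of control variables
estimates = []
for r in range(len(controls) + 1):
    for subset in itertools.combinations(controls, r):
        X = [[1.0, exposure[i]] + [controls[c][i] for c in subset]
             for i in range(n)]
        estimates.append(ols(X, outcome)[1])   # coefficient on exposure

estimates.sort()
print(f"{len(estimates)} specifications; "
      f"effect ranges {estimates[0]:.2f} to {estimates[-1]:.2f}")
```

Here a robust effect stays close to 0.5 across all eight specifications; a fragile, data-dredged result would instead swing wildly, or vanish, as the analytic choices change. Real specification curves also vary subsamples, outcome definitions, and model forms.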

Many journal editors have adopted these measures, updating rules to require authors to publish data, code, and materials used in experiments, and disclose conflicts of interest and funding sources. Some journals now mandate authors to cite all similar secondary analyses and disclose AI usage in their work.

The Need for Robust Peer Review

While some fields, such as psychology, have embraced these reforms more than others, like economics, the current system struggles to handle the influx of AI-generated papers. Reviewers must dedicate time and effort to scrutinize preregistrations, specification curve analyses, and other critical components.

Public trust in science remains high globally, a positive sign as the scientific method prioritizes truth and meaningful findings over popularity or profit. However, AI threatens to distance us from this ideal. To preserve science’s credibility, we must urgently incentivize rigorous peer review.