12 September 2025
AI-Driven Research Sparks Concerns Over Corporate Influence in Science

In the early 2000s, the American pharmaceutical company Wyeth faced a massive lawsuit from thousands of women who developed breast cancer after using its hormone replacement drugs. Court documents revealed that Wyeth had orchestrated the publication of “dozens of ghostwritten reviews and commentaries” in medical journals to promote the drugs’ benefits while downplaying their risks. These articles, produced by a medical communications firm and published under the names of leading doctors, misled medical professionals who, unaware of Wyeth’s involvement, relied on them to guide prescribing decisions.

Wyeth, acquired by Pfizer in 2009, defended the scientific accuracy of the articles and claimed that hiring ghostwriters was a common industry practice. Ultimately, Pfizer paid over $1 billion in damages for the drugs’ adverse effects. This case exemplifies “resmearch”—scientific research manipulated to serve corporate interests. While most researchers aim to uncover truth, resmearch prioritizes persuasion over accuracy.

The Rise of AI in Scientific Research

Recent years have seen similar instances, with companies in the soft drink and meat industries funding studies that downplay health risks associated with their products. A growing concern is that AI tools significantly reduce the cost and time required to produce such evidence. Previously, crafting a single paper could take months; now, AI allows an individual to generate multiple seemingly valid papers within hours.

The public health literature is already seeing a surge in AI-assisted studies built around single-factor results: claims that one factor, such as egg consumption, drives a health outcome such as dementia. When a dataset covers thousands of subjects and numerous candidate variables, some factors will correlate with any given outcome purely by chance, so these studies can easily produce misleading findings.
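
To see why, consider a minimal simulation in Python (hypothetical data and variable counts; a sketch of the statistical pitfall, not of any particular study). No factor genuinely influences the outcome, yet screening them all at the conventional p < 0.05 threshold still flags several:

```python
# Sketch: spurious "single-factor" findings from pure noise.
# All data are random, so every significant correlation is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_factors = 5000, 200

factors = rng.normal(size=(n_subjects, n_factors))  # e.g. dietary variables
outcome = rng.normal(size=n_subjects)               # e.g. a health score

# Count factors that pass an uncorrected significance test.
false_hits = sum(
    stats.pearsonr(factors[:, j], outcome)[1] < 0.05  # [1] is the p-value
    for j in range(n_factors)
)
print(f"{false_hits} of {n_factors} random factors 'significant' at p < 0.05")
```

At a 5% threshold, roughly ten of the 200 random factors will appear significant by chance, which is how large observational datasets can supply a steady stream of headline-ready single-factor results.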

Between 2014 and 2021, an average of four single-factor studies was published annually. In the first ten months of 2024 alone, 190 such studies were published.

While not all these studies are driven by corporate interests—some may result from academics seeking to enhance their publication records—the ease of AI-facilitated research presents a temptation for businesses aiming to promote their products.

Implications of AI-Driven Research

In the UK, new government guidance requires baby-food producers to support health claims with scientific evidence. Although well-intentioned, this policy may encourage firms to seek AI-generated “scientific evidence” that portrays their products favorably.

One significant issue is that research often informs policy without undergoing peer review. In 2021, US Supreme Court Justice Samuel Alito cited a briefing paper funded by a pro-gun nonprofit in a gun rights opinion. The survey data behind the paper were never made public, raising concerns about the validity of findings that lawyers have since used in cases nationwide.

The lesson is clear: relying on unreviewed research is risky. The peer review process itself also needs reform; the volume of published research has exploded, prompting debate over whether reviewers can keep pace.

Reforming the Peer Review Process

Over the past decade, researchers have made strides in developing methods that reduce the risk of misleading findings. Innovations include preregistration, where authors publish a research plan before starting their work, and transparent reporting of every step of the analysis. Reviewers are then tasked with verifying that the published work followed these procedures.

For single-factor studies, specification curve analysis re-runs the claimed analysis under every defensible combination of modeling choices to check whether the reported relationship is robust or merely an artifact of one favorable specification. Many journal editors have adopted these measures, requiring authors to publish data, code, and experimental materials, and to disclose conflicts of interest and funding sources.
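
As a rough illustration (synthetic data, hypothetical variable names, and a deliberately simplified sketch rather than a full specification curve toolkit), the idea is to re-estimate the headline regression under every combination of defensible control variables and inspect the spread of the resulting coefficients:

```python
# Sketch of a specification curve: rerun one regression under all
# combinations of plausible controls and collect the focal coefficient.
from itertools import combinations

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "egg_intake": rng.normal(size=n),
    "age": rng.normal(size=n),
    "income": rng.normal(size=n),
    "exercise": rng.normal(size=n),
})
# Outcome depends on age only; there is no true egg effect.
df["dementia_score"] = 0.3 * df["age"] + rng.normal(size=n)

controls = ["age", "income", "exercise"]
estimates = []
for k in range(len(controls) + 1):
    for subset in combinations(controls, k):
        formula = "dementia_score ~ egg_intake" + "".join(f" + {c}" for c in subset)
        fit = smf.ols(formula, data=df).fit()
        estimates.append(fit.params["egg_intake"])

# Sorted, these estimates form the "curve": a robust effect keeps its
# sign and size across specifications; a fragile one drifts toward zero.
print(sorted(round(e, 3) for e in estimates))
```

Plotting the sorted estimates, along with which controls each specification includes, lets reviewers see at a glance whether a reported effect depends on one favorable analysis choice.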

Some journals now require authors to cite all similar secondary analyses and to disclose AI’s role in their work.

Some fields, such as psychology, have adopted these reforms more readily than others, such as economics. A recent study that applied additional robustness checks to analyses published in the American Economic Review suggested that published studies often overstate the strength of their evidence.

Maintaining Scientific Integrity

The current system appears ill-equipped to handle the influx of AI-generated papers. Scrutinizing preregistrations, specification curve analyses, data, and code demands significant time and effort from reviewers, who currently receive little reward for it. A peer-review mechanism that rewards quality reviews is essential.

Global trust in science remains high, and society benefits when research serves truth rather than popularity or profit. AI-assisted resmearch threatens that ideal. To preserve scientific credibility, incentivizing meaningful peer review is crucial.

This article is adapted from material provided by The Conversation, edited for clarity, style, and length.