Tim could not believe the headline when he read it, all the more so because it sat atop his own story. The former Australian Community Media (ACM) journalist, whose name has been changed to protect his employment prospects, discovered that an internal generative artificial intelligence (AI) model had produced the headline for his news article, which was set to be published in the next day’s printed newspaper. According to Tim, the headline was legally problematic.
“It had generated something false from the story,” he said. He caught the error just in time, but wondered what might have happened had he not spotted it. “It made me feel frustrated and a little bit anxious because I wondered what else could have been possibly published in print that had gone unchecked.”
AI’s Role in Newsrooms Under Scrutiny
ACM reporter Terri, whose name has been changed due to fear of job repercussions, shared a similar concern. The AI model gave her legal advice about a news story, and she found the guidance troubling. The advice, she said, had “logic and thinking” but greatly overstated the legal risks the story might pose. “It was not right,” she said. “The AI returned a lot of information saying that the story posed a defamation risk [and] going through what it returned, I don’t think it was correct.”
Terri decided not to follow the AI’s legal advice, raising concerns about the precedent it set. “There’s a reason why you have paid professionals who have to do a lot of schooling to get the qualifications to make those calls,” she emphasized.
Generative AI ‘Experiments and Testing’
In a leaked email to staff on October 3, seen by the ABC, ACM management said that “AI experiments and testing” were underway in its newsrooms, including story editing and coaching, headline writing, and generating story ideas. However, the ABC understands that the technology is also being used to analyze a news story’s legal risk. The generative AI model in question is Google’s Gemini, adapted for ACM so that data on the platform would not be shared with Google.
Media, Entertainment and Arts Alliance (MEAA) director Cassie Derrick reported that union members within ACM claimed a directive had been issued to use Gemini for “all aspects of reporting.” “In those instances, Gemini is making things up, misattributing some pretty important facts like charges in court, and generally looking to replace journalists and the ethical practice of those journalists with a more unethical software,” she claimed.
“Gemini has attributed charges to the wrong person,” Derrick said, a mistake with consequences “not only for the journalist, but also for the person who had been wrongly accused.”
Fears of Job Losses and Ethical Concerns
ACM employee Sam, whose name has also been changed for anonymity, expressed fears that the technology could be used to justify job cuts. “AI won’t completely fill the hole that’s been left behind by the people who have left,” he said. Some ACM reporters told the ABC they refused to use the technology. ACM cut 35 jobs last year, blaming Facebook’s parent company, Meta, for withdrawing the funding for regional journalism it had previously provided under the News Media Bargaining Code.
Sam noted that while AI use in his newsroom was limited, the potential for increased reliance was concerning. “There is that fear [that] if we were to become increasingly reliant on it [AI], are we going to be churning out stuff that has mistakes?” he questioned.
AI Not a Replacement for Journalists
The ABC has found no evidence that any factual errors or legally problematic information generated by AI technology have been published in print or online by ACM. In response to questions from the ABC, an ACM spokesperson stated that the assertions about how generative AI was used in its newsrooms were “flawed.”
“We do not use Gemini to write stories or rely on it for legal advice,” the spokesperson said. “Humans make the decisions on every word we publish. ACM is cautiously, carefully, and openly exploring tools that can help us to better serve our communities. We will keep listening to our teams, providing information and training, and driving responsible innovation that supports our journalism.”
AI Use ‘Widespread’ in Journalism
RMIT University AI and media expert TJ Thomson noted that the technology’s use in newsrooms is growing. “I wouldn’t say that these issues or these fears are unique to ACM outlets. I think they’re very widespread,” Dr. Thomson said. “From behind-the-scenes uses to more public-facing uses, it is becoming more and more prevalent.”
However, he cautioned that seeking legal advice from AI could be particularly problematic due to geographic bias. “These models that they’re turning to have been trained primarily with information scraped from the World Wide Web, a lot of it from North America, which is a very different legal context,” he explained.
The ABC has introduced an internal generative AI tool, called ABC Assist, which helps journalists search for ABC archival information, summarize old content, and draft interview questions, among other things. “In the news division, all uses of AI tools in the production of audience-facing content must be referred to an editorial manager,” the ABC’s editorial guidance note states.
The New York Times has also updated information on its website about how its journalists use AI, including to analyze data and test headlines. “We don’t use AI to write articles, and journalists are ultimately responsible for everything that we publish,” the New York Times stated.
In Australia, News Corp has been hiring AI engineers to join an editorial team, signaling a broader trend in the industry. Google declined to comment on the situation.