4 December, 2025
Government urges AI content watermarking to combat misinformation

Artificial intelligence developers have been urged to “watermark” AI-generated content, ensuring it is clearly identifiable, as the Australian government intensifies efforts to prevent the technology being misused to mislead or harm the public. The recommendation comes amid growing concern over AI’s potential to create deceptive content, commonly referred to as deepfakes.

In a move to enhance transparency, the federal government has issued guidance suggesting that AI content should either carry labels indicating its origin or be embedded with information that allows its source to be traced. The latter process, known as watermarking, is considered more resistant to tampering than simple labels. Currently, there is no legal requirement to label AI-generated content, which has led to confusion between authentic and AI-created material.

Transparency and Trust in AI

Industry Minister Tim Ayres emphasized the importance of transparency in AI usage, stating, “AI is here to stay. By being transparent about when and how it is used, we can ensure the community benefits from innovation without sacrificing trust.” He further advocated for businesses to adopt the government’s guidance, highlighting the need to build trust, protect integrity, and instill confidence in the content consumed by Australians.

Some tech companies, such as Google, have already begun watermarking their AI content. However, the rapid proliferation of generative AI has sparked fears that the technology could be exploited for fraud, misinformation, or blackmail by producing convincing fake content that misrepresents individuals.

Legislative and Regulatory Developments

The eSafety Commissioner has reported that deepfake image-based abuse is occurring at least once a week in Australian schools, underscoring the urgency of addressing this issue. In response, independent senator David Pocock introduced a private senator’s bill aimed at prohibiting the use of digitally altered or AI-generated content depicting an individual’s face or voice without consent. Senator Pocock criticised the federal government for its slow response, noting that more than two years have passed since the review into responsible AI began.

Looking ahead, the government is preparing to release a National AI Plan, which will serve as the culmination of years of consultation. This plan is expected to introduce “mandatory guardrails” designed to mitigate the most severe impacts of AI. The plan will also address ideas from the government’s productivity roundtable held in August, where AI was a focal point in discussions on economic growth and wage increases.

Balancing Innovation and Regulation

While the government seeks to balance the risks associated with AI against its economic potential, the Productivity Commission has cautioned against implementing mandatory guardrails too hastily. During the productivity roundtable, the commission warned that such regulations could stifle a $116 billion economic opportunity, advocating instead for a pause in legislative action until legal gaps are thoroughly identified.

Despite these economic considerations, the government remains focused on addressing public safety concerns. Senator Ayres recently announced the establishment of an AI Safety Institute, which will monitor and respond to “AI-related risks” and work towards building public trust in AI technologies.

Former Industry Minister Ed Husic, who initiated the consultations on a federal response to AI growth, has called for a dedicated AI Act. This legislation could provide a flexible framework to adapt to the evolving nature of AI technology, ensuring that regulatory measures remain effective as the landscape changes.

The announcement of these measures represents a significant step in the government’s approach to managing AI’s impact on society. As the National AI Plan is set to be unveiled, stakeholders across various sectors will be watching closely to see how these guidelines and potential legislative actions shape the future of AI in Australia.