19 August 2025
Australian Law Chief Urges Comprehensive AI Impact Assessment

The head of the Australian Law Reform Commission (ALRC) has issued a stark warning about the current debate over artificial intelligence (AI) regulation, suggesting that without a comprehensive assessment of AI’s impacts, efforts to regulate the technology will be futile. Federal Court judge and ALRC president Mordy Bromberg made that case during a speech at the Australian Law Forum, calling for a thorough evaluation of AI’s effects.

Bromberg highlighted the transformative potential of AI, noting its capacity to influence human behavior on an unprecedented scale. He argued that every law seeks to regulate human behavior, and because AI’s reach is so pervasive, it will likely affect most legal frameworks. This, he suggested, demands deeper consideration than the current focus on superficial regulatory measures.

The Need for a Comprehensive Approach

In his address, Bromberg underscored the risks associated with AI, particularly as it becomes a dominant force in purchasing, transactions, and information dissemination. He advocated a detailed examination of AI’s unique properties, the allocation of responsibility for its risks, and whether a preventive or remedial approach to potential harms is more suitable. According to Bromberg, this represents “the largest and most complex law reform exercise that currently confronts Australia.”

Despite his caution, Bromberg is not opposed to AI. He acknowledged its potential to transform the legal profession positively, suggesting that AI could enhance legal services and improve productivity. However, he warned against conflating AI’s intelligence with the wisdom required in complex legal cases.

“Wisdom is a skill of a greater degree of order than mere intelligence. It is far harder to be wise than it is to be smart,” Bromberg stated, emphasizing that wisdom is derived from human experiences that AI cannot replicate.

Current Debate: Economic Focus and Historical Parallels

The current discourse around AI regulation is largely dominated by economic perspectives. Industry advocates often tout exaggerated productivity benefits, while trade unions express concerns about job losses. This economic focus is reminiscent of the 2008 regulatory discussions on social media, which overlooked significant issues like disinformation and mental health impacts.

As AI evolves, similar risks are emerging, particularly concerning mental health. The rapid development of AI parallels ongoing challenges in areas like privacy, where legal reforms have lagged. The historical difficulty in predicting new media’s societal impact underscores the need for a cautious and comprehensive approach to AI regulation.

Looking Forward: The Case for a Stocktake

Bromberg’s call for a basic stocktake of AI’s impacts is crucial. Without it, he warns, regulatory efforts may become a futile exercise in “chasing a tail that can never be caught.” As with social media, the benefits and harms of AI are still largely speculative, and a thorough assessment could provide a clearer regulatory path.

In conclusion, while AI holds the promise of significant advancements, its potential risks necessitate a careful and comprehensive approach to regulation. Bromberg’s insights highlight the importance of wisdom over mere intelligence in crafting policies that will shape the future of AI in Australia and beyond.