THE PRACTITIONER’S COMPANION
Tuesday 2 December 2025

National AI plan must move fast to protect Australians

A national AI road map "uses all the right language", but experts are questioning when the Albanese government's plan will be followed by concrete action.

Published December 2, 2025
The government has released an artificial intelligence plan but isn't setting up standalone laws.

AUSTRALIANS are being promised better protection against scams and AI-generated abuse.

The release of the federal government’s National AI Plan follows its declaration, made while consulting on copyright law changes, that artificial intelligence would become a national priority.

Key parts of the plan cover reskilling and supporting workers affected by AI, boosting investment in data centres, sharing productivity benefits across the economy, and criminalising technology-facilitated abuse such as deepfakes.

“This plan is focused on capturing the economic opportunities of AI, sharing the benefits broadly, and keeping Australians safe as technology evolves,” Industry Minister Tim Ayres said.

Nicholas Davis from the University of Technology Sydney’s Human Technology Institute welcomed the plan, saying it “uses all the right language”.

But he warned Australians could remain exposed to AI-related harms unless the promised reforms are implemented.

“(The plan) does a really good job of outlining the direction and commitments that are needed … and puts workers and communities at its centre,” Professor Davis told AAP. 

“The challenge now is to move urgently from commitment to action, particularly on critical areas such as privacy reform.”

Labor rejected establishing standalone AI legislation, an approach pushed by former minister Ed Husic, opting instead for existing laws to cover the evolving technology.

“The government is monitoring the development and deployment of AI and will respond to challenges as they arise, and as our understanding of the strengths and limitations of AI evolves,” the plan states.

The government has pledged $29.9 million to establish an AI Safety Institute in 2026 to monitor and respond to AI risks.

ACTU assistant secretary Joseph Mitchell said the AI Safety Institute would play an important role in holding tech companies accountable for products they were developing and ensuring they complied with Australian laws.

Prof Davis said most Australians were unaware of how often algorithms shaped prices, loans, services, or what they saw online, which made consumer protections outlined in the plan essential.

“A lot of the decision-making is invisible. You might not even know your data was collected, let alone used to deny you a service or charge you more,” he said.

He compared the shift to mid-20th-century product liability reforms, which stopped companies selling unsafe goods without consequence.

“That chain of liability is even more important in an AI-driven world,” he said.

“If people feel they’re being ripped off or manipulated, trust collapses, and Australians are already some of the least trusting in the world.”

The plan was also heralded as a significant step forward in protecting children, as it will strengthen national capability to detect and respond to harmful AI systems, including those used to generate abusive content involving children.

“Children are already growing up in an AI-enabled world,” International Centre for Missing and Exploited Children Australia’s head of government affairs Dannielle Kelly said.

“Our job is to make sure they can do that safely, not by shutting down innovation, but by putting clear guardrails and strong regulation in place.”

UNSW AI Institute director Sue Keay said the plan seemed like a “belated acknowledgement that Australia should probably start paying attention to this AI stuff”.

The framework listed everything the government should be doing but failed to commit to any real investment or convey a sense of urgency, Dr Keay said.
