Federal Innovation Minister Tim Ayres on Tuesday announced the creation of the Australian AI Safety Institute, which will be responsible for co-ordinating the nation's response to emerging AI technologies.
Organisations including Microsoft, trade unions and academics welcomed the move, which they said could help to protect Australian workers from creative theft and job losses.
But the industry is still waiting for the government to release a national AI policy, due before the end of 2025, and a study from Adobe found government departments remain unprepared for the AI revolution.
The Australian AI Safety Institute will be charged with evaluating AI innovations and recommending legal changes where needed, Mr Ayres said, as well as ensuring companies comply with relevant laws.
The institute will also provide technical assessments and work with international organisations to streamline a global approach to AI risks and opportunities, he said.
"Adopted properly and safely, AI can revitalise industry, boost productivity and lift the living standards of all Australians," Mr Ayres said.
"But there are two sides to this coin. While the opportunities are immense, we need to make sure we are keeping Australians safe from any malign uses of AI."
The institute would be key to identifying areas for greater AI regulation, Technology Assistant Minister Andrew Charlton said, and would work directly with government agencies.
"The Institute will help identify future risks, enabling the government to respond to ensure fit-for-purpose protections for Australians," he said.
Independent and expert advice would be critical to creating effective AI rules, Microsoft Australia national security officer Mark Anderson said. Australian Council of Trade Unions assistant secretary Joseph Mitchell said the institute could help to shut down bad-faith uses of the technology.
"Too many livelihoods have been stolen in the rapid development of these models," Mr Mitchell said.
"The first step in sharing the benefits is protecting against the potential harms."
The news emerged as Adobe released a study finding Australian government departments were not ready for citizens to embrace AI in large numbers and could unwittingly spread out-of-date and misleading information.
Its fourth annual Digital Government Index analysed digital readiness across 115 government departments globally and found Australia's government digital services improved by 2.5 per cent in 2025, although overall digital maturity was ranked as basic.
Australia's AI readiness across all government agencies also fell in the basic category, with a score of 61.7. Adobe Asia Pacific digital strategy director John Mackenney said the sector deserved greater attention and investment.
"Where we see a gap ... is in how government is represented within AI," he told AAP.
"While it's great that we have these new tools, there's certainly significant risk for both the government and citizens (about) whether we're getting the right information."
Rather than ranking government websites on their own use of AI, the readiness score measured how well the sites could share information with popular AI tools such as OpenAI's ChatGPT, Google Gemini and Microsoft Copilot.