Australia’s Department of Health Releases AI Transparency Statement for Safer Healthcare

Image Credit: Tyler Duston | Unsplash

The Australian Government Department of Health and Aged Care released its Artificial Intelligence Transparency Statement in May 2025, outlining its commitment to using AI safely and responsibly in healthcare. Mandated by the Digital Transformation Agency’s (DTA) Policy for the Responsible Use of AI in Government, effective September 1, 2024, the statement aims to foster public trust by detailing ethical AI adoption.

Context and Objectives

The statement aligns with Australia’s efforts to ensure ethical AI use across public sectors, following the 2023 Safe and Responsible AI consultation by the Department of Industry, Science and Resources. This consultation emphasized stronger governance to address regulatory gaps and build public confidence. The Department of Health’s statement supports these goals, adhering to the DTA’s policy and proposed mandatory guardrails for high-risk AI settings, with a designated official overseeing compliance.

AI Strategy and Commitments

The Department of Health plans to use AI for low-risk applications, such as data analytics and administrative tasks, to enhance healthcare operations. All AI processes will include human oversight to ensure accountability and minimize risks, and no AI applications currently affect the public without human intervention. The department is developing internal AI policies aligned with DTA guidelines and commits to the Voluntary AI Safety Standard, which includes ten guardrails covering areas such as transparency and risk management.

Benefits of the Approach

The transparency statement promotes public confidence by clearly outlining the department’s cautious approach to AI, prioritizing human oversight to prevent errors or ethical concerns. AI’s potential to improve healthcare efficiency, such as through faster data analysis, could enhance service delivery. Early publication of the statement reflects proactive governance, supporting Australia’s broader efforts to standardize ethical AI use.

Challenges and Risks

The statement’s lack of specific AI applications reflects the department’s early stage of adoption, which may limit public clarity on its plans. Developing internal policies introduces uncertainty, as their effectiveness is yet to be tested. Broader risks, including data privacy and algorithmic bias in healthcare AI, remain concerns, particularly for future high-risk applications. Collaboration with regulatory bodies like the Therapeutic Goods Administration (TGA) aims to mitigate these risks.

TheDayAfterAI News

We are a leading AI-focused digital news platform, combining AI-generated reporting with human editorial oversight. By aggregating and synthesizing the latest developments in AI — spanning innovation, technology, ethics, policy and business — we deliver timely, accurate and thought-provoking content.
