AI in Australian Healthcare: What the Regulations Say
Healthcare AI is moving fast. The regulatory landscape? Less so, though Australia has made more progress than most countries in establishing a framework that’s both practical and protective. If you’re running a clinic and considering AI tools, understanding the regulatory environment isn’t optional.
The TGA’s Role
The Therapeutic Goods Administration is the primary body overseeing medical devices in Australia, and many AI-powered healthcare tools fall under its jurisdiction. The key question: is the AI tool a medical device?
If an AI system is intended to diagnose, prevent, monitor, treat, or alleviate a disease, it’s likely a medical device under the Therapeutic Goods Act 1989. That means the product must be included in the Australian Register of Therapeutic Goods (ARTG) before it can be lawfully supplied.
For sleep medicine, the implications are direct. An AI algorithm that scores polysomnography data and generates diagnostic reports? Almost certainly a medical device. An AI tool that manages appointment scheduling? Probably not. The distinction hinges on clinical intent and potential for patient harm.
The TGA uses a risk-based classification system, running from Class I (low risk) to Class III (high risk). Most AI diagnostic tools fall into Class IIa or IIb, which require conformity assessment and quality management certification.
Software as a Medical Device (SaMD)
The TGA adopted the International Medical Device Regulators Forum (IMDRF) framework for SaMD classification in 2021. It considers two dimensions: the seriousness of the healthcare situation and the significance of the information provided to the healthcare decision.
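For readers who think in code, the IMDRF matrix reduces to a simple lookup across those two dimensions. The sketch below uses the category labels from the IMDRF framework (I through IV, with IV the highest impact); the TGA’s own device classes are related but not identical, so treat this as an orientation aid rather than a classification tool.

```python
# Sketch of the IMDRF SaMD categorisation matrix (IMDRF/SaMD WG/N12).
# Rows: state of the healthcare situation. Columns: significance of the
# information to the healthcare decision. Category IV is highest impact.

SAMD_CATEGORY = {
    # (healthcare situation, significance of information) -> category
    ("critical", "treat_or_diagnose"): "IV",
    ("critical", "drive_management"): "III",
    ("critical", "inform_management"): "II",
    ("serious", "treat_or_diagnose"): "III",
    ("serious", "drive_management"): "II",
    ("serious", "inform_management"): "I",
    ("non-serious", "treat_or_diagnose"): "II",
    ("non-serious", "drive_management"): "I",
    ("non-serious", "inform_management"): "I",
}

def samd_category(situation: str, significance: str) -> str:
    """Return the IMDRF SaMD category for a given combination."""
    return SAMD_CATEGORY[(situation, significance)]

# An AI scorer producing diagnostic output for a serious (arguably)
# but non-critical condition such as obstructive sleep apnoea:
print(samd_category("serious", "treat_or_diagnose"))  # -> "III"
```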
For clinic operators, the practical takeaway: before adopting any AI tool that touches clinical decision-making, verify its ARTG status. If the vendor can’t confirm regulatory compliance, that’s a red flag.
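One way to make that verification routine is to check vendor product names against ARTG data. The sketch below assumes a local CSV export of ARTG entries; the column names ("artg_id", "product_name", "sponsor") are assumptions for illustration, not the TGA’s actual export schema, so treat this purely as a picture of the workflow.

```python
# Hypothetical sketch: checking a vendor's product against a local CSV
# export of ARTG entries obtained via the TGA's online ARTG search.
# Column names below are illustrative assumptions, not the real schema.
import csv

def find_artg_entries(csv_path: str, product_name: str) -> list[dict]:
    """Return ARTG rows whose product name contains the search term."""
    matches = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if product_name.lower() in row["product_name"].lower():
                matches.append(row)
    return matches

entries = find_artg_entries("artg_export.csv", "SleepScore AI")
if not entries:
    print("No ARTG entry found: ask the vendor for their ARTG ID.")
for e in entries:
    print(e["artg_id"], e["product_name"], e["sponsor"])
```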
Privacy Law and AI
The Privacy Act 1988, administered by the Office of the Australian Information Commissioner (OAIC), governs how health information is collected, used, disclosed, and stored. Several Australian Privacy Principles (APPs) become particularly relevant when AI processes patient data:
APP 3 — Collection. Health information can be collected only when reasonably necessary for the organisation’s functions or activities, and generally only with the individual’s consent. An AI tool collecting more data than it needs is a compliance issue.
APP 6 — Use and disclosure. Information collected for one purpose generally can’t be repurposed without consent. An AI scheduling tool can’t redirect patient data to marketing analytics.
APP 8 — Cross-border disclosure. If the AI tool processes data overseas, the organisation remains accountable for equivalent privacy protection.
APP 11 — Security. Organisations must protect health information from misuse, interference, and unauthorised access. For AI systems, that means encryption, access controls, and regular security assessments.
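To make APP 11 concrete, here is a minimal sketch of encrypting a patient record at rest with the Python cryptography package. The record fields are illustrative, and a real deployment would keep the key in a managed key store rather than in application code; key management, not the encryption call itself, is usually the hard part.

```python
# Minimal sketch of encrypting a patient record at rest using the
# "cryptography" package (pip install cryptography). In production the
# key would live in a key management service, never beside the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # store in a KMS, not on disk
fernet = Fernet(key)

# Illustrative record only; field names are assumptions.
record = b'{"patient_id": "12345", "ahi": 18.2, "study_date": "2024-05-01"}'
token = fernet.encrypt(record)   # authenticated encryption (AES-CBC + HMAC)
assert fernet.decrypt(token) == record
```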
The Notifiable Data Breaches scheme adds another layer: if patient data processed by an AI system is compromised in a way that is likely to result in serious harm, the clinic must notify both the OAIC and the affected individuals.
The AI Ethics Framework
The Australian Government’s AI Ethics Framework sets out voluntary principles covering fairness, accountability, transparency, and human oversight, and these increasingly inform procurement decisions. The Australian Digital Health Agency has also published guidance on AI adoption in healthcare settings.
For clinics evaluating AI vendors, these ethical frameworks help structure implementation decisions and test vendor claims about AI capabilities.
What Clinics Should Do Now
Audit your current AI tools. You might already be using AI without realising it; many practice management systems incorporate AI features. A simple inventory sketch follows this list.
Check ARTG registration. For any tool functioning as a medical device, confirm it’s on the register.
Review vendor data handling. Where is data stored? Who has access? Is there a compliant data processing agreement?
Maintain human oversight. No AI tool should operate without human verification of critical outputs. Automated sleep study scoring should be reviewed by a qualified sleep scientist.
Document consent. Ensure your consent processes explicitly cover AI processing of patient data.
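As a starting point for the inventory mentioned in the first step, the sketch below shows one hypothetical way to record each tool and flag the gaps this checklist targets. The field names and flag logic are assumptions for illustration, not a compliance standard.

```python
# Hypothetical sketch of an AI-tool inventory for the audit above.
# Field names and flag logic are illustrative, not a compliance standard.
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    is_medical_device: bool      # touches diagnosis or treatment?
    artg_id: str | None          # ARTG entry number, if registered
    data_stored_onshore: bool    # relevant to APP 8
    human_review_required: bool  # critical outputs verified by a person?
    consent_covers_ai: bool      # do consent forms mention AI processing?

def audit_flags(tool: AITool) -> list[str]:
    """Return the compliance gaps this checklist would surface."""
    flags = []
    if tool.is_medical_device and not tool.artg_id:
        flags.append("medical device with no ARTG entry")
    if not tool.data_stored_onshore:
        flags.append("offshore data: check APP 8 safeguards")
    if not tool.human_review_required:
        flags.append("no human verification of critical outputs")
    if not tool.consent_covers_ai:
        flags.append("consent process does not cover AI")
    return flags

scorer = AITool("AutoScore PSG", True, None, True, True, False)
for flag in audit_flags(scorer):
    print(f"{scorer.name}: {flag}")
```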
The Direction of Travel
Australia’s regulatory approach will continue to evolve. The TGA has signalled interest in specific guidance for AI/ML-based medical devices, particularly around algorithms that update over time. Traditional regulation assumes a fixed product; adaptive AI doesn’t fit that model neatly.
The regulatory landscape isn’t a barrier to AI adoption in healthcare. It’s a guardrail. And guardrails exist because the road is worth driving on.