The Department of Industry, Science and Resources has released a Proposals Paper for consultation on Introducing mandatory guardrails for AI in high-risk settings, alongside a Voluntary AI Safety Standard (Voluntary Standard) which can be adopted immediately. Together, the proposal and the Voluntary Standard signal the principles likely to underpin Australia’s AI regulatory landscape.
AI and the need for a distinct regulatory approach
At the heart of the Proposals Paper is the acknowledgment that recent AI developments represent a shift in technological capability that transcends conventional regulatory frameworks. The unique characteristics of AI, particularly its capacity for autonomy, opacity, scalability, and adaptability, distinguish it from other forms of technology that can be more easily constrained by traditional legal principles. In particular, General-Purpose AI (GPAI) models1, those capable of performing a broad range of cognitive functions rather than being limited to specific tasks (ChatGPT being a prominent example), pose unprecedented challenges. Such systems are not only inherently dynamic but also capable of functioning with minimal human oversight, raising acute concerns about accountability, transparency, and safety.
The Proposals Paper identifies that, while AI’s potential to enhance economic and social welfare is vast, the risks it presents, especially in sensitive, high-risk sectors such as healthcare, law enforcement, and financial services, are equally profound. Consequently, the paper advances a risk-based approach to regulation, which seeks to strike a balance between fostering innovation and pre-emptively addressing potential harms.
The key development put forward by the Proposals Paper is a set of mandatory guardrails that would have to be followed in high-risk applications of AI. Released at the same time as the Paper was a broadly consistent set of voluntary guardrails (the Voluntary Standard) for businesses seeking to adopt best practice today, and for use in lower-risk settings going forward.
Responses to the Proposals Paper are sought by 4 October 2024.
Defining high-risk AI and the scope of regulation
A pivotal aspect of the Proposals Paper is its principles-based approach to defining high-risk AI. This approach eschews rigid categorisations in favour of a flexible, context-dependent analysis of risk. High-risk AI applications, as envisioned by the Proposals Paper, are those with the potential to cause significant harm to individuals or communities, particularly in domains where decisions made by AI systems could impact human rights, public safety, or legal entitlements. For example, AI applications in criminal justice (such as risk assessment algorithms used in parole decisions) or in healthcare (where AI-powered diagnostic tools can directly influence treatment outcomes) are understandably classified as high-risk.
In defining high-risk AI, the government has also incorporated GPAI models into the framework, recognising their versatility as both a strength and a vulnerability. GPAI systems, capable of performing a wide range of tasks across different sectors, introduce a level of unpredictability that makes them particularly susceptible to unintended misuse. Their ability to autonomously generate content, analyse data, and interact with users across various applications requires guardrails, particularly given the difficulty of anticipating all potential use cases and the associated risks.
The guardrails: A framework for AI accountability
At the core of the Proposals Paper is the introduction of ten mandatory guardrails — a set of measures designed to ensure that AI systems operating in high-risk settings are subject to oversight at every stage of their lifecycle. These guardrails would form the bedrock of the proposed regulatory regime and seek to establish an architecture within which AI can be developed, deployed, and monitored.
Guardrail 1: Accountability process
Organisations would be required to establish, implement, and publicly document an accountability process. This involves clear governance structures, including policies on risk management and data handling, alongside designated roles and responsibilities. The aim is to ensure regulatory compliance and effective oversight, with accountability sitting ultimately at the organisational level.
Guardrail 2: Risk management process
Organisations would be required to implement a risk management process to identify, assess, and mitigate risks arising from high-risk AI systems. This process must consider not only technical risks but also societal impacts, such as bias and discrimination.
Standards such as AS ISO/IEC 23894:2023 (Information Technology – Artificial Intelligence – Guidance on risk management) are pertinent here, as they provide frameworks for integrating AI risk management into broader organisational processes. The aim is to eliminate foreseeable risks or, where elimination is not feasible, to mitigate them, ensuring these systems do not pose undue harm.
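By way of illustration only, a risk management process of this kind is often supported by a simple internal risk register. The sketch below shows what such a register might look like in Python; the field names, rating scale, and escalation threshold are assumptions made for the purpose of the example and are not prescribed by the Proposals Paper or AS ISO/IEC 23894:2023.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in a hypothetical AI risk register."""
    description: str     # e.g. "model under-performs for a demographic group"
    category: str        # "technical" or "societal" (bias, discrimination, etc.)
    likelihood: int      # 1 (rare) .. 5 (almost certain) -- illustrative scale
    impact: int          # 1 (negligible) .. 5 (severe)
    mitigation: str      # planned or implemented control

    @property
    def rating(self) -> int:
        return self.likelihood * self.impact

def requires_escalation(risk: Risk, threshold: int = 12) -> bool:
    """Flag risks whose combined rating exceeds an (assumed) escalation threshold."""
    return risk.rating >= threshold

register = [
    Risk("Biased outcomes for under-represented groups", "societal", 3, 5,
         "Re-balance training data; fairness testing before release"),
    Risk("Model drift after deployment", "technical", 4, 3,
         "Continuous monitoring with monthly review"),
]

for r in register:
    print(f"{r.description}: rating {r.rating}, escalate={requires_escalation(r)}")
```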
Guardrail 3: Data governance
To address the quality, legality, and security of data, organisations would be required to implement robust data governance measures. This involves ensuring that data used in AI systems is fit for purpose, that any biases and associated mitigations are identified, and that data is stored securely.
The focus here includes ensuring compliance with Australian copyright law, Indigenous Cultural and Intellectual Property, and privacy obligations. Transparent data provenance is essential, and this guardrail builds on obligations in the Privacy Act 1988 and the Security of Critical Infrastructure Act 2018.
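Again for illustration only, transparent data provenance might be supported by keeping a structured record for each dataset an AI system relies on. The sketch below reflects the themes of this guardrail (fitness for purpose, provenance, identified biases, privacy and Indigenous Cultural and Intellectual Property flags, secure storage); the schema and field names are assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """Hypothetical provenance record for a dataset used by an AI system."""
    name: str
    source: str                     # where the data came from
    licence: str                    # copyright / licensing basis for use
    contains_personal_info: bool    # flags Privacy Act 1988 considerations
    contains_icip: bool             # flags Indigenous Cultural and Intellectual Property
    known_biases: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    storage_location: str = "encrypted object store"   # assumed security control

record = DatasetRecord(
    name="loan-applications-2023",
    source="internal CRM export",
    licence="first-party data, collected under privacy policy v3",
    contains_personal_info=True,
    contains_icip=False,
    known_biases=["under-representation of rural applicants"],
    mitigations=["stratified sampling before training"],
)
print(record)
```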
Guardrail 4: Testing and monitoring
Prior to deployment, AI systems would be required to undergo testing to ensure they meet defined performance metrics, followed by continuous monitoring once deployed. The purpose is to detect any changes in performance, such as model drift, or emergent risks over time.
This is analogous to the post-market monitoring requirements under the EU AI Act and would likely be guided by standards such as ISO/IEC TR 29119-11 and SA TR ISO/IEC 24027, which outline methodologies for evaluating the performance and biases of AI systems.
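As a purely illustrative sketch of what continuous monitoring can involve in practice, the snippet below compares a model's recent accuracy against the accuracy recorded at deployment and flags possible drift once performance degrades beyond an assumed tolerance. The metric and threshold are placeholders; in practice monitoring would be defined against the performance metrics set before deployment.

```python
def accuracy(predictions: list[int], labels: list[int]) -> float:
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def drift_alert(baseline_accuracy: float,
                recent_accuracy: float,
                tolerance: float = 0.05) -> bool:
    """True if performance has dropped more than an (assumed) tolerance below baseline."""
    return (baseline_accuracy - recent_accuracy) > tolerance

# Accuracy recorded during pre-deployment testing (the "defined performance metrics").
baseline = 0.92

# Accuracy measured on a recent batch of production decisions with known outcomes.
recent = accuracy(predictions=[1, 0, 1, 1, 0, 1, 0, 0],
                  labels=[1, 0, 0, 1, 0, 0, 0, 1])

if drift_alert(baseline, recent):
    print(f"ALERT: accuracy fell from {baseline:.2f} to {recent:.2f}; trigger review")
```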
Guardrail 5: Human oversight
This guardrail ensures that AI systems, particularly in high-risk settings, remain subject to meaningful human oversight. Even where real-time human intervention is not feasible, human operators must still be able to review and, where necessary, reverse decisions made by AI systems.
The principle of human oversight serves as a critical safeguard against the risks posed by autonomous AI systems, reinforcing human accountability and ensuring that AI remains an augmentation rather than a replacement for human decision-making.
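To make the idea of "review and reverse" concrete, the sketch below shows one common human-in-the-loop pattern: low-confidence outputs are routed to a human reviewer, and every automated decision remains reversible by a human operator. This is a generic illustration rather than a pattern mandated by the Proposals Paper, and the confidence threshold is an assumption.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str          # e.g. "approve" / "decline"
    confidence: float     # model's self-reported confidence, 0..1
    final: bool = False   # becomes True only after human sign-off or auto-finalisation

def route(decision: Decision, review_threshold: float = 0.8) -> Decision:
    """Send low-confidence decisions to a human; auto-finalise the rest."""
    if decision.confidence < review_threshold:
        print(f"{decision.subject_id}: queued for human review")
    else:
        decision.final = True
    return decision

def reverse(decision: Decision, reviewer: str, new_outcome: str) -> Decision:
    """A human operator can overturn any decision, even a finalised one."""
    print(f"{decision.subject_id}: '{decision.outcome}' reversed to '{new_outcome}' by {reviewer}")
    decision.outcome, decision.final = new_outcome, True
    return decision

d = route(Decision("applicant-42", "decline", confidence=0.63))
d = reverse(d, reviewer="credit officer", new_outcome="approve")
```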
Guardrail 6: End-user transparency
Organisations would be required to inform end-users when AI is used to make decisions affecting them and ensure that AI-generated content is clearly identifiable. This involves communicating the role of AI in a clear and accessible manner, enabling users to exercise their rights where necessary. Techniques such as content labelling or watermarking of AI-generated outputs may be required to meet this obligation in certain circumstances.
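One simple way of making AI-generated content identifiable is to attach a machine-readable label to each output, as in the hypothetical sketch below. The label fields are assumptions; in practice organisations may instead rely on emerging provenance techniques such as cryptographic watermarking or content credentials.

```python
import json
from datetime import datetime, timezone

def label_output(text: str, model_name: str) -> dict:
    """Wrap AI-generated text with a minimal, machine-readable provenance label."""
    return {
        "content": text,
        "ai_generated": True,                  # the disclosure itself
        "model": model_name,                   # which system produced the content
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

labelled = label_output("Your application has been assessed as eligible.", "acme-gpai-v2")
print(json.dumps(labelled, indent=2))
```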
Guardrail 7: Contestability of AI decisions
To safeguard individuals’ rights, organisations would be required to provide mechanisms for people negatively impacted by AI systems to challenge decisions or lodge complaints. Internal complaint handling procedures must be in place, and organisations must ensure that impacted individuals have access to sufficient information to be able to effectively contest decisions.
Guardrail 8: Supply chain transparency
Transparency would be required across the AI supply chain, with developers providing deployers with all relevant information regarding the AI system’s data, design, and risks. Deployers, in turn, would need to provide feedback on any adverse incidents to the developers.
This guardrail addresses the issue of opacity in AI systems, particularly with advanced models where explainability is limited. It mirrors obligations in the EU AI Act, requiring the disclosure of key system characteristics to ensure that risks are appropriately mitigated across the AI lifecycle.
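Disclosures of this kind are often packaged as a "model card" or "system card" passed from developer to deployer. As a hypothetical illustration, the sketch below captures the data, design, and known risks of a system, together with a channel for adverse-incident feedback flowing back to the developer; none of these fields are prescribed by the Proposals Paper.

```python
from dataclasses import dataclass, field

@dataclass
class SystemDisclosure:
    """Hypothetical developer-to-deployer disclosure for an AI system."""
    system_name: str
    intended_use: str
    training_data_summary: str
    known_limitations: list[str]
    known_risks: list[str]
    incident_contact: str                        # where deployers report adverse incidents
    adverse_incidents: list[str] = field(default_factory=list)

    def report_incident(self, description: str) -> None:
        """Deployer feedback loop back to the developer."""
        self.adverse_incidents.append(description)

card = SystemDisclosure(
    system_name="acme-triage-model",
    intended_use="prioritising customer complaints",
    training_data_summary="de-identified complaint records, 2019-2023",
    known_limitations=["poor performance on non-English text"],
    known_risks=["may deprioritise complaints from minority-language speakers"],
    incident_contact="ai-incidents@developer.example",
)
card.report_incident("Complaint misrouted due to language detection failure")
```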
Guardrail 9: Record-keeping
Organisations would be required to maintain comprehensive records about the design, deployment, and performance of AI systems to ensure compliance with the mandatory guardrails. These records would need to be accessible to regulators upon request, ensuring accountability and enabling effective external oversight.
At a high level, these records would include the following (an illustrative structure is sketched after the list):
- a general description of the AI system;
- design specifications from the development phase, including testing methodology and results;
- a description of datasets used and their provenance;
- assessment of human oversight measures;
- a detailed description of the capabilities and limitations of the AI system; and
- the risk management processes, and mitigation measures implemented.
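Purely for illustration, those items map naturally onto a single structured record per AI system. The sketch below mirrors the list above; the structure and field names are assumptions, not a prescribed format.

```python
from dataclasses import dataclass, asdict

@dataclass
class ComplianceRecord:
    """Hypothetical record-keeping structure mirroring the items listed above."""
    system_description: str
    design_specifications: str           # development-phase design, testing methodology, results
    datasets_and_provenance: list[str]
    human_oversight_assessment: str
    capabilities_and_limitations: str
    risk_management_and_mitigations: str

    def to_regulator_export(self) -> dict:
        """Produce a copy of the record suitable for providing to a regulator on request."""
        return asdict(self)
```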
Guardrail 10: Conformity assessments
Prior to deployment, organisations would need to carry out conformity assessments to certify compliance with the mandatory guardrails. These assessments could be conducted internally, by third parties, or by government authorities, and must be repeated if there are significant changes to the AI system.
This requirement would establish a quality assurance mechanism, ensuring that AI systems meet regulatory standards before they are introduced into high-risk settings. The conformity assessment process would be a cornerstone of the compliance regime.
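At its simplest, an internal conformity assessment can be thought of as a documented check against the other guardrails, repeated whenever the system changes significantly. The sketch below is a hypothetical illustration of that idea only; it does not reflect how assessments would actually be certified by third parties or government authorities.

```python
GUARDRAILS = [
    "accountability process",
    "risk management process",
    "data governance",
    "testing and monitoring",
    "human oversight",
    "end-user transparency",
    "contestability",
    "supply chain transparency",
    "record-keeping",
]

def conformity_assessment(evidence: dict[str, bool]) -> bool:
    """Pass only if documented evidence exists for every guardrail."""
    missing = [g for g in GUARDRAILS if not evidence.get(g, False)]
    for g in missing:
        print(f"Non-conformity: no evidence for '{g}'")
    return not missing

# Re-run the assessment after any significant change to the AI system.
evidence = {g: True for g in GUARDRAILS}
evidence["testing and monitoring"] = False      # e.g. monitoring plan not yet documented
print("Conformant:", conformity_assessment(evidence))
```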
Alignment between the Voluntary Standard and mandatory guardrails
Unlike the proposed mandatory guardrails, which focus on high-risk AI, the Voluntary Standard is designed to apply to AI systems of any risk level, providing organisations with the opportunity to align their practices in preparation for future regulation. Organisations seeking to follow best practice and minimise risk can benefit from adopting the Voluntary Standard and, by doing so, lay a solid foundation for meeting the mandatory guardrails once they are in force.
Key features of the Voluntary Standard:
- Alignment with mandatory guardrails: The Voluntary Standard closely mirrors the proposed mandatory guardrails, particularly the first nine, which address governance, risk management, data protection, testing, transparency, and record-keeping.
- Stakeholder engagement vs conformity assessments: Notably, guardrail 10 of the Voluntary Standard emphasises stakeholder engagement in contrast to the conformity assessments required under the mandatory regime. This underscores the importance of early dialogue with those impacted by AI systems, including consumers, clients, and other key stakeholders.
- Procurement guidance: Recognising that many organisations procure AI systems from third parties rather than develop them in-house, the Voluntary Standard provides guidance to assist organisations in the procurement process, ensuring accountability across the supply chain.
Regulatory models: Adapting existing frameworks or creating a new legal regime?
In order to implement the mandatory guardrails, the Proposals Paper offers three distinct options, each reflecting a different approach to integrating AI regulation into the existing legal landscape. These options warrant close consideration, as each presents unique challenges and opportunities.
- Option 1 – Domain-specific approach: This approach suggests adapting existing regulatory frameworks in specific sectors, such as healthcare, finance, and law enforcement, to incorporate AI-specific guardrails. This method has the advantage of leveraging existing regulatory bodies’ expertise and frameworks, reducing the burden of creating new oversight structures from scratch. However, it risks fragmenting AI regulation across sectors, leading to regulatory arbitrage, where companies might exploit gaps or inconsistencies between sectors to avoid stringent oversight.
- Option 2 – Framework legislation: A framework approach would introduce overarching AI legislation that applies across all sectors but leaves room for sector-specific amendments. This model balances flexibility with consistency, ensuring that all AI systems are subject to the same baseline standards, while allowing for bespoke modifications tailored to the needs of particular industries. This approach would likely foster greater harmonisation between sectors and ensure that high-risk AI applications are treated with consistent levels of scrutiny.
- Option 3 – Whole-of-economy AI Act: The most ambitious of the proposed models is a comprehensive AI Act — a stand-alone piece of legislation that governs all AI applications across the economy. This model mirrors the EU AI Act, providing a centralised legal framework that clearly defines the boundaries of AI use, sets out mandatory obligations for developers and deployers, and establishes enforcement mechanisms. Such an approach ensures consistency and clarity but would require significant legislative reform and the creation of a new regulatory body, akin to the eSafety Commissioner, to oversee AI governance.
The international context: Aligning with global AI regulation
As the paper observes, AI is a global phenomenon, and any regulatory framework must be interoperable with international standards to be effective. Australia’s proposals are closely aligned with global initiatives, particularly the EU’s AI Act, the UK’s AI Bill, and Canada’s Artificial Intelligence and Data Act. This alignment reflects a broader trend toward international harmonisation, recognising that AI technologies, by their very nature, transcend national borders.
The Proposals Paper also highlights the importance of considering Indigenous Cultural and Intellectual Property in the context of AI regulation. This reflects a uniquely Australian dimension to the regulatory debate, one that seeks to protect Indigenous knowledge and cultural expressions from exploitation by AI systems. While this issue is not as prominent in other jurisdictions, its inclusion in the Australian framework underscores the government’s commitment to ensuring that AI development respects local cultural and ethical norms.
Toward a balanced regulatory future for AI
The Proposals Paper is a critical step in Australia’s journey toward a comprehensive AI regulatory framework. By introducing mandatory guardrails for high-risk AI systems and offering a clear path forward through three regulatory options, the paper lays the groundwork for a legal architecture that both encourages innovation and mitigates risk.
However, as the paper itself acknowledges, the path forward is fraught with challenges. The complexity of AI technologies, their cross-sectoral applications, and the speed at which they are developing require a regulatory framework that is not only flexible but also proactive in addressing emerging risks. The government’s focus on risk-based regulation, built around testing, transparency, accountability, and consistency with developments in other jurisdictions, provides a solid foundation for this framework.
As the consultation process unfolds, it will be crucial for stakeholders across industry, academia, and society to engage with these proposals to ensure that Australia’s AI regulation is both forward-looking and fit for purpose in an era where AI’s influence will only continue to grow. The proposals represent not just a regulatory response to current challenges but also a blueprint for a legal regime capable of evolving alongside the technologies it seeks to govern.
This article was written by Daniel Kiley, Partner, and Christopher Power, Law Graduate.
1 Not to be confused with ‘Artificial General Intelligence’ or ‘AGI’, a hypothetical future class of AI system that rivals human intelligence.