
Can I speak with a human please? Preparing for the new automated decision-making disclosure requirements

Market Insights

From 10 December 2026, the Privacy Act 1988 (Cth) (Privacy Act) will require organisations to disclose in their privacy policies whether, and how, they use automated decision‑making (ADM) technologies in decisions that may have significant consequences for individuals. These reforms were introduced under the Privacy and Other Legislation Amendment Act 2024 (Cth) (POLA Act).

The rules define ADM broadly. Many routine software-enabled processes — even where the software exercises no independent or autonomous judgment — may fall within scope. The growing capability and adoption of frontier AI tools also has the potential to further expand the types of decisions captured.

This article discusses some of the key takeaways for organisations looking to understand the scope of the new rules and update their privacy policies ahead of the 10 December deadline.

Overview of the requirements

The ADM disclosure rules apply to organisations that use ADM technology to replace (or substantially assist) human decision-makers on decisions that could “reasonably be expected to significantly affect the rights or interests of an individual” – for example, decisions impacting an individual’s employment, rights / entitlements under a contract, or access to essential services.

In summary, organisations that use ADM technology in this manner must update their privacy policies to explain:

  • what kinds of decisions are being automated or substantially assisted by ADM systems; and
  • what kinds of personal information are being provided to those systems to make (or substantially assist a human decision-maker to make) these decisions.

A more detailed discussion of the key terminology and concepts can be found in our previous article.

Takeaway 1: “ADM” covers a broad range of technologies; it includes (but isn’t limited to) AI

Throughout consultation on the POLA Act, many stakeholders commented on the breadth of the new rules and the wide range of technologies that may potentially be captured.

While it is well understood that the new rules apply to fully autonomous decision-making technologies (where the decision-making process is essentially handed over to the software), the rules can also extend to software performing any component or stage of the decision-making process, provided that component or stage is substantially and directly related to the final decision.

Importantly, the software need not exercise independent judgment or autonomy. Software that executes human‑defined rules, workflows or business processes may be captured if it performs tasks within the decision-making chain that are substantially and directly related to the final decision. This recognises that even simple forms of automation can pose risks for the individuals affected by those decisions, due to the mechanical application of rules and the reduced level of direct human oversight of individual cases.

Many organisations already use software in this way to improve efficiency, to implement business processes and rules at scale, and to improve accuracy and consistency in their application.

This can include common, and seemingly “mundane”, examples such as:

  • a human decision-maker using Excel to generate a score about an individual (using criteria selected by the decision-maker and programmed into the software), which is then used as a key factor in the final decision; and
  • automating business workflows using pre-defined business logic (for example, programming software to automatically issue notices or warnings to a customer when trigger conditions are met).

The new rules aim to encourage organisations to think critically about how software is embedded in their material decision-making processes, and to ensure that appropriate transparency is provided to the public about ADM usage in areas that may have a significant impact on individuals (whether beneficial or adverse).

The explanatory memorandum clarifies that the focus of these new rules is the use of software to facilitate decision-making. Software used “for purposes other than facilitating the decision-making (such as using a word processing program to document a decision)” will not be captured (emphasis added).

Takeaway 2: GDPR-aligned compliance models may not be sufficient

The scope of the new ADM rules is notably broader than Article 22 of the GDPR, which concerns decisions based solely on automated processing.1

Organisations which have GDPR-based compliance models should not assume alignment, as the Privacy Act’s ADM disclosure rules will likely capture a broader range of use cases. Existing audits or assurance processes based on GDPR standards may need to be supplemented to meet the Privacy Act’s broader disclosure requirements.

Takeaway 3: Shadow IT (including shadow AI) remains a significant challenge

The ADM disclosure rules apply regardless of whether the relevant software is owned or operated by the organisation itself, or by a third party (such as a cloud service provider).

The problem of “shadow IT” — the use by staff of software, IT hardware or cloud services without the knowledge or approval of the organisation’s IT department — is well understood to create data governance and cyber security risks. The rapid uptake of commercially-available AI platforms (which can typically be accessed via browsers and mobile applications) has further exacerbated these risks.

The new ADM disclosure rules provide a further impetus for organisations to implement appropriate controls to maintain visibility of what technology their staff are using, and to mitigate the risk of “shadow IT”.

Under Australian Privacy Principle (APP) 1.2, organisations must implement practices, procedures and systems to ensure compliance with the APPs. This will now include ensuring compliance with the new ADM disclosure requirements (to be incorporated into APP 1.7).

Takeaway 4: Disclosures should focus on transparency and accountability

The OAIC has stated that it will publish detailed guidance on the ADM disclosure requirements later this year, which we anticipate will provide further information about the OAIC’s expectations regarding the level of detail that should be included in privacy policies.

In the interim, the OAIC has referred Australian organisations to the POLA Act’s explanatory memorandum (EM) for guidance.

The EM explains that the ADM disclosure rule is “intended to increase transparency about the use of personal information in the operation of computer programs which solely make decisions, or which substantially and directly make decisions, that could reasonably be expected to significantly affect individuals’ rights or interests. The obligation on entities to include information in their privacy policy about the kinds of personal information used in such computer programs and the kinds of such decisions is not expected to include commercial-in-confidence information about automated decision-making systems”.

Consistent with the OAIC’s existing guidance on APP 1, we anticipate that the OAIC will not expect a level of disclosure that would compromise the effectiveness of ADM systems which are used for threat detection or cyber security purposes.

Notably, the ADM disclosure rules capture any use of ADM systems that may have a significant impact on the rights or interests of individuals – whether that impact is beneficial or adverse. This encourages organisations to adopt a broad lens when investigating their current ADM footprint and may also inform the way in which those ADM use cases are disclosed in the privacy policy.

While the current focus of the ADM disclosure rules is on transparency and accountability, the Government has indicated that the upcoming second tranche of Privacy Act reforms will likely introduce a new right for individuals to “request meaningful information about how substantially automated decisions with legal or similarly significant effect are made”. It remains to be seen whether this will influence the OAIC’s expectations regarding the level of detail required in privacy policies.

Takeaway 5: Barriers to enforcement have been lowered

The POLA Act has also provided the OAIC with new enforcement powers (see our previous article), including the ability to issue infringement notices for non-compliant privacy policies up to a maximum penalty of 200 penalty units (currently $66,000) per contravention.

The OAIC has signalled that it will take an active stance on enforcing the rules for privacy policies. In 2026 alone, the OAIC has already:

  • conducted a targeted compliance sweep of privacy policies in selected industries2; and
  • published a review of automated decision-making transparency by Australian Government agencies under existing FOI rules.3 The review methodology included investigating external evidence of undisclosed ADM use (e.g. media articles, independent reviews, submissions or reports).

How can you prepare?

To comply with the ADM disclosure rules, organisations need visibility of two key issues:

  • what areas of the business are required to make decisions that significantly impact the rights or interests of individuals; and
  • how technology is deployed across the enterprise and incorporated into decision-making processes. (Where the enterprise delegates decision-making responsibilities to agents or outsourced service providers, it would be prudent to maintain visibility of ADM usage by these third parties as well.)

Some organisations may be able to leverage existing governance structures and reporting lines to obtain this information, while other organisations may need to conduct more specific inquiries.

Certain business functions (such as hiring, staff disciplinary processes, price-setting and complaints / claims management) are more likely to be utilising ADM in ways that trigger the disclosure requirements, but a holistic enterprise‑wide view is essential. The privacy policy must address the organisation’s operations as a whole.

Forthcoming OAIC guidance may provide further examples of common ADM use cases, and a more detailed explanation of the OAIC’s views on the “significant impact” threshold. This may assist organisations to prioritise their efforts.

More broadly, the ADM disclosure rules reinforce the importance of investing in appropriate technology and data governance initiatives that mitigate risks such as “shadow IT”, promote AI safety outcomes, and maintain appropriate visibility over key aspects of the organisation’s technology footprint.

These initiatives may include:

  • reviewing and strengthening IT acceptable use policies;
  • preparing AI governance frameworks and policies;
  • privacy, cyber security and AI training for staff; and
  • reworking the governance frameworks for technology procurement and business process design to account for these additional considerations.

Investment in these areas will support compliance now and position organisations for future privacy and AI regulatory developments.

Please reach out to the authors or a member of the HWLE Lawyers’ Privacy, Data Protection and Cyber Security team if you’d like to discuss how we can assist your organisation with any of the matters in this article.

This article was written by Matthew Craven, Partner and Tim Lee, Special Counsel.


1 Article 22 gives data subjects the right “not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”.

2 https://www.oaic.gov.au/news/media-centre/privacy-compliance-sweep-to-put-privacy-policies-under-the-spotlight.

3 https://www.oaic.gov.au/freedom-of-information/information-commissioner-decisions-and-reports/foi-reports/Automated-decision-making-and-public-reporting-under-the-Freedom-of-Information-Act.

Important Disclaimer: The material contained in this publication is of general nature only and is based on the law as of the date of publication. It is not, nor is intended to be legal advice. If you wish to take any action based on the content of this publication we recommend that you seek professional advice.
