The Privacy and Other Legislation Amendment Bill 2024 (Bill) proposes new transparency requirements for businesses and government agencies where:
- the organisation uses a “computer program” to either:
  - perform decision-making functions in a fully automated manner (without a human decision-maker); or
  - substantially and directly assist human staff to make decisions;
- the computer program uses personal information about an individual to perform its function (ie to either make the decision, or to assist the human decision-maker); and
- the decision could “reasonably be expected to significantly affect the rights or interests of an individual”.
While these reforms have obvious implications for emerging technologies such as autonomous AI agents, they also have the potential to capture a broad range of simpler automation use cases that are already widely used, such as:
- software that assesses input data against pre-defined objective criteria and then applies business rules based on those criteria (eg whether to approve or reject an application; see the sketch below this list);
- software that processes data to generate evaluative ratings or scorecards, which are then used by human decision makers (eg predictive analytics); and
- robotic process automation (which uses software to replace human operators for simple and repetitive rule-based tasks, such as data entry, data extraction and form filling).
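To make the first of these categories concrete, the following is a minimal, purely hypothetical sketch of a rule-based program that makes a fully automated decision using an individual’s personal information. The criteria, thresholds and field names are invented for illustration and are not drawn from the Bill, the explanatory memorandum or any real system.

```python
# Hypothetical illustration only: a simple rule-based "computer program"
# that makes a fully automated decision (no human decision-maker) using
# an individual's personal information. All criteria and thresholds are
# invented for this sketch.

from dataclasses import dataclass

@dataclass
class Applicant:
    age: int             # personal information
    annual_income: int   # personal information
    prior_defaults: int  # personal information (credit history)

def assess_application(applicant: Applicant) -> str:
    """Applies pre-defined objective criteria; no human reviews the outcome."""
    if applicant.age < 18:
        return "reject"
    if applicant.annual_income < 30_000 or applicant.prior_defaults > 2:
        return "reject"
    return "approve"

decision = assess_application(Applicant(age=34, annual_income=55_000, prior_defaults=0))
print(decision)  # "approve"
```

Even logic this simple could fall within the new rules if the resulting decision could reasonably be expected to significantly affect the individual’s rights or interests.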
In addition to these new transparency requirements, the Bill also proposes reforms in various other areas of Australian privacy law. For an overview of the Bill and access to our other articles on the reforms, please click here.
What types of technology are captured?
The new rules are defined broadly to capture any type of “computer program” as long as the decision that is being automated (or assisted) meets the materiality threshold (ie it could “reasonably be expected to significantly affect the rights or interests of an individual”).
The term “computer program” is not defined. The explanatory memorandum explains that the new rules are intended to “encompass a broad range of matters, including pre-programmed rule-based processes, artificial intelligence and machine learning processes”.
The memorandum also cites “(using) Microsoft Excel…to generate a score about an individual that (is) a key factor in a human decision-maker making the decision” as an example of a use case that may meet the materiality threshold.
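The same pattern is straightforward to express in code. Below is a minimal, hypothetical sketch of a weighted score of the kind that could equally be produced in a spreadsheet, where the program does not make the decision itself but its output is a key factor for a human decision-maker. The weights and inputs are invented for illustration.

```python
# Hypothetical illustration only: a weighted score, computed from personal
# information, that substantially and directly assists a human decision-maker.
# The weights and field choices are invented for this sketch.

def risk_score(age: int, annual_income: int, prior_defaults: int) -> float:
    """Returns a score that a human decision-maker treats as a key factor."""
    return (0.2 * min(age / 100, 1.0)
            + 0.5 * min(annual_income / 100_000, 1.0)
            - 0.3 * prior_defaults)

score = risk_score(age=34, annual_income=55_000, prior_defaults=0)
# The program does not decide; a human weighs the score with other factors.
print(f"score for human review: {score:.2f}")
```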
Given the wide ambit of the proposed rules, businesses and government agencies will need to think broadly about how technology is incorporated in (and used to support) existing business processes and workflows, in order to identify relevant use cases and assess them against the materiality threshold.
The Robodebt scheme (which the Government has acknowledged is one of the drivers for these new rules) provides a stark example of how even relatively simple forms of automation (without any artificial intelligence component) can have significant and adverse impacts for individuals when deployed at scale and in high-risk settings.
When will a decision “significantly affect the rights or interests of an individual”?
Whether a decision could “reasonably be expected to significantly affect the rights or interests of an individual” will be assessed on a case-by-case basis, depending on the individual’s circumstances.
The impact can be either adverse or beneficial, but the effects must be “more than trivial” and must have the potential to “significantly influence the circumstances of the individual”. The significance of a decision may be amplified where that decision impacts a member of a vulnerable group (such as a child or a person with disabilities).
The term “interests” is not defined, which leaves scope for a broader range of factors to be considered when assessing the overall impact of a decision on the individual.
The Bill provides a non-exhaustive list of examples of the kinds of decisions that may meet this threshold, including:
- decisions made under an Act as to whether to grant a benefit to an individual;
- decisions that affect the individual’s rights under a contract, agreement or arrangement (eg a decision regarding a life insurance policy); and
- decisions that affect the individual’s access to a significant service or support (eg differential pricing for, or decisions on whether to offer, healthcare services).
It is intended that this list will be supplemented by further guidance from the Office of the Australian Information Commissioner (OAIC).
In the interim, another potentially useful source of guidance is the Department of Industry, Science and Resources’ Proposals Paper for Introducing mandatory guardrails for AI in high-risk settings (see our article on the Proposals Paper here).
The Proposals Paper provides the following examples of artificial intelligence use cases that have been identified as “high-risk” in other countries due to their potential impact on individuals (although it is important to note that artificial intelligence is only one form of automation technology).
| Domain area | General description |
|---|---|
| Biometrics | AI systems used to identify or categorise individuals, assess behaviour or mental state, or monitor and influence emotions. |
| Education/Training | AI systems used in determining admission to education programs, evaluating learning outcomes or monitoring student behaviour. |
| Employment | AI systems used in employment matters including recruitment, referral, hiring, remuneration, promotion, training, apprenticeship, transfer or termination. |
| Access to essential public services and products | AI systems used to determine access to, and the type of, services provided to individuals, including healthcare, social security benefits and emergency services. |
| Access to essential private services | AI systems used to make decisions that affect access to essential private services, including credit and insurance, in a manner that poses significant risk. |
| Products and services affecting individual and public health and safety | AI that is intended to be used as a safety component of a product, is itself a safety product, or otherwise impacts individual and public health and safety. This includes AI-enabled medical devices, food products and other goods and services. |
| Law enforcement | AI systems used in aspects of law enforcement, including profiling of individuals, assessing offender recidivism risk, polygraph-style technologies or evaluating evidence. |
| Administration of justice and democratic processes | AI systems used for making a determination about an individual in a court or administrative tribunal, such as systems used for evaluating facts, evidence and submissions in proceedings. With regard to democratic processes, this may include any system that can influence the voting behaviour of individuals or the outcome of an election or democratic process. |
The new rules will be most relevant for organisations that operate in highly regulated sectors such as healthcare, financial services, law enforcement and government services.
However, businesses in other sectors may also have obligations if they use automation technologies as part of general organisational functions such as:
- recruitment (eg software that places targeted job advertisements, analyses and filters job applications, and evaluates candidates); and
- physical and digital security (eg facial recognition, video surveillance analysis and online fraud detection).
The automation footprint of organisations will also naturally expand across both the private and public sectors as the capabilities and uptake of artificial intelligence technologies increase.
There are some areas where the coverage of the new rules is potentially unclear and may be impacted by the outcome of the second tranche of privacy reforms that the Government has flagged will follow on from the Bill. For example:
- if the employee records exemption is not repealed, there may be ambiguity as to whether the use of automated decision-making technologies to monitor and evaluate employees will be covered; and
- if the Privacy Act’s current definition of “personal information” is not amended, there may be ambiguity as to how the rules apply to automated decision-making technologies that utilise categories of data that may fall partially or completely outside of the existing definition of “personal information” (such as metadata, location data and biometric information/biometric templates from which the individual is not reasonably identifiable).
What needs to go into your privacy policy?
If an organisation has automation use cases that meet the materiality threshold, the organisation’s privacy policy will need to be updated to include the following information:
- what kinds of personal information are used in the operation of the relevant computer programs;
- what kinds of decisions are made solely by the operation of the computer programs (ie the decision-making processes that are fully automated); and
- what kinds of decisions are made by human decision-makers but with substantial and direct assistance from the computer program.
The Bill combines these new privacy policy requirements with other reforms that will give the OAIC the power to issue infringement notices for non-compliant privacy policies (and certain other specified categories of contravention). An infringement notice may impose a penalty of up to 200 penalty units (currently $62,600) per contravention.
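For context, the quoted dollar figure reflects a Commonwealth penalty unit value of $313: 200 penalty units × $313 per unit = $62,600. The penalty unit value is adjusted from time to time, so the maximum dollar amount will change.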
Will privacy collection notices also need to be updated?
The Bill does not make any corresponding amendment to APP 5.2, which sets out the matters that must be notified to individuals at the point of collecting their personal information.
The existing requirements of APP 5.2 may require organisations to address the use of automation and artificial intelligence technologies in some circumstances, such as where the organisation needs to disclose the individual’s personal information to the supplier of the automation technology in order to use that technology.
How to prepare and further reforms on the horizon
If the Bill passes, there will be a 24-month grace period following Royal Assent before these new privacy policy requirements come into force. There are different commencement timeframes for the other reforms in the Bill.
The new privacy policy requirements will apply to any existing automation use cases that are in place when the reforms come into force (that is, regardless of whether the organisation started using the computer program before or after the commencement date, and whether the relevant personal information was acquired before or after the commencement date).
Organisations can prepare for these changes (and other upcoming reforms in AI regulation) by ensuring that their governance and risk-assessment frameworks:
- provide visibility of how automation technologies are being deployed to support (or fully automate) existing workloads and business processes; and
- appropriately account for the unique risks and challenges posed by automation and artificial intelligence, such as algorithmic bias, fairness, transparency and accountability issues.
Procurement and business transformation teams will play an important role in ensuring that new use cases can be identified early. Operational teams and management will need to work closely to determine how an appropriate level of human oversight can be maintained (and human accountability can be assigned) for workloads and business processes that are delegated to automated systems.
The Department of Industry, Science and Resources’ Proposals Paper for Introducing mandatory guardrails for AI in high-risk settings may provide a useful starting point for organisations that are looking to adapt and uplift their governance and risk-assessment frameworks to account for the specific challenges posed by AI and automation. See our article on the Proposals Paper here.
Making these investments in governance frameworks will also help organisations to better position themselves for future law reforms and regulatory developments in the automation and artificial intelligence space. As part of its response to the Privacy Act Review Report (see our article here), the Australian Government:
- accepted a recommendation to create a new right for individuals to request information about substantially automated decisions that impact them; and
- accepted in principle a recommendation to make privacy impact assessments mandatory for “high risk” activities.
These reforms could be included as part of a second tranche of reforms to the Privacy Act, or in separate legislation.
This article was written by Matthew Craven, Partner, and Tim Lee, Special Counsel.