AI and the Australian Consumer Law: Government considers changes to address risks posed by ‘smart’ products

11 November 2024

With recent advances in artificial intelligence systems, an increasing number of consumer products tout AI features. ‘Dumb’ appliances continue to be replaced by ‘smart’ equivalents, with features like automated decision making, or voice control backed by large language models (LLMs). These systems offer a range of potential advantages but bring with them new kinds of risk.

Recent leaps in AI have typically involved a technique known as machine learning, in which a system is trained on a pre-existing set of example materials and can then extrapolate to new scenarios, but it is difficult to predict exactly how such a system will operate. Many AI systems are also ‘nondeterministic’, meaning that the same input can produce varying outputs, and the ‘logic’ employed by AI systems is often opaque, making it hard for manufacturers to definitively vouch for the way in which their products will function.
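By way of illustration (this toy sketch is ours, not the discussion paper's, and all names in it are invented), sampling-based systems such as LLMs can return different outputs for the same input unless the randomness is removed:

```python
import random

# Toy stand-in for an LLM-backed appliance: it samples its response from a
# set of candidates, as many language models sample from a probability
# distribution over tokens.
def toy_model(prompt, temperature=1.0, rng=None):
    rng = rng or random.Random()
    candidates = ["turn on the oven", "preheat to 180C", "set a timer"]
    if temperature == 0:
        # 'Greedy' decoding: always pick the most likely candidate,
        # so the same input always yields the same output.
        return candidates[0]
    # With temperature > 0 the output is sampled, so repeated calls with
    # the identical input can yield different outputs.
    return rng.choice(candidates)

print(toy_model("start cooking", temperature=0))  # always "turn on the oven"
rng = random.Random(42)
outputs = {toy_model("start cooking", rng=rng) for _ in range(50)}
print(sorted(outputs))  # typically several distinct outputs for one input
```

This is the sense in which such products are ‘nondeterministic’: the variability is a designed-in feature of the sampling step, not a fault, which is precisely what complicates a manufacturer's ability to warrant how the product will behave.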

AI systems are often reliant on third party servers, and any faults with those servers can cause issues for connected appliances. Worse yet, if vendors shut down those servers, dependent products can be rendered entirely non-functional.

That is not to say that ‘smart’ appliances are always riskier than their ‘dumb’ equivalents. A smart cooking appliance may be better at controlling its temperature, reducing the risk of overheating, and an appliance that accepts voice commands might be easier to use for someone with mobility issues. Rather, the distinctive traits of AI systems may give rise to different types of risk.

Against that backdrop, the Federal Treasury has released a discussion paper which seeks to further explore the application of the Australian Consumer Law (ACL) to AI-enabled goods and services, including issues such as:

  • how well adapted current ACL mechanisms are to support consumers and businesses to manage potential consumer law risks of AI-enabled goods and services;
  • the mechanisms for allocating manufacturer and supplier liability, and remedies available to consumers of AI-enabled goods and services; and
  • potential regulation reforms to better reflect AI-enabled goods and services within the ACL.

The discussion paper forms part of the Australian Government’s ongoing work to clarify and strengthen existing laws to address AI-related risks while considering if additional AI specific frameworks may be required. The release of the Treasury discussion paper signals that the ACL may soon become a tool to control business use of AI to maintain consumer protections and guarantees.

As we’ve previously discussed, businesses need to be careful with allowing confidential or proprietary information to be used as inputs in public LLMs. We are also anticipating enhanced privacy protections relating to the use of AI in automated decision making. This article provides an overview of the Treasury discussion paper and the issues businesses should consider when implementing AI-enabled goods and services.

Current ACL provisions regulating AI-enabled goods and services

The discussion paper defines AI-enabled goods and services as those which, when made available to consumers, involve a consumer directly interacting with an AI system (eg a security system which uses facial recognition or an online chatbot to assist with consumer queries). Interestingly, this omits the many products and services in which AI systems sit in the background, producing efficiencies for the supplier of a consumer product, such as a food delivery service which uses AI tools to allocate and prioritise delivery runs.

The discussion paper requests feedback on whether the current ACL adequately regulates AI-enabled goods and services. The technology-neutral language of the ACL allows it to be applied to AI-enabled goods and services through its provisions on misleading or deceptive conduct, unconscionable conduct, unfair contract terms, false or misleading representations, and the consumer guarantees.

The ACL consumer guarantees contain several principles-based provisions which can be extended to AI-enabled goods and services. However, further clarity may be needed for the application of concepts such as ‘fitness for purpose’, ‘acceptable quality’ and ‘due care and skill’ to AI-enabled goods and services.

Similarly, manufacturers continue to be liable for damage caused by safety defects in consumer goods, and this continues to apply whether those products use AI or not. There is also scope for the Government to impose mandatory safety standards that set specific requirements for particular products, which could be used to impose safety standards for AI-enabled consumer goods.

Liability for product safety issues is typically subject to a number of defences set out in the ACL. One such defence applies if the relevant safety defect did not exist at the time the goods were supplied by their actual manufacturer, given that the manufacturer typically has no control over goods once supplied. The paper points out that this may not necessarily be the case where a product continues to receive software updates over its lifespan.

Similarly, a manufacturer will not be liable for a safety defect if the state of scientific or technical knowledge at the time the goods were supplied was not such as to enable that safety defect to be discovered. The paper suggests that it may not be appropriate for manufacturers to rely on this defence where they know their AI systems will continue to learn and evolve over time but have not included appropriate guardrails to prevent this manifesting in unsafe ways. In this respect we note that it is not always the case that AI systems (even machine learning systems) will continue to ‘learn’ over the course of their use – instead, many machine learning systems rely on an intense process of initial training, before the resulting model is fixed in place and deployed.

Stakeholders have raised the following concerns about the adaptability of the ACL to AI-enabled goods and services:

  • AI-enabled goods and services may amplify existing ACL risks for both consumers and businesses because AI is less ‘controllable’ than traditional goods and services. These concerns stem from AI systems’ capacity to adapt to data over time, make autonomous decisions, and produce opaque outputs;
  • Contract terms excluding supplier and manufacturer liability for AI-enabled goods may be unfair contract terms. If a term limiting or excluding liability for AI-enabled goods and services is deemed unfair, businesses may face heavy financial penalties under the new provisions introduced by the Treasury Laws Amendment (More Competition, Better Prices) Act 2022 (Cth);
  • It may be difficult for consumers to demonstrate that manufacturers are responsible for faults in AI-enabled goods and services: the opacity, autonomous behaviour, and complexity of AI systems may make it harder for consumers to establish the causal link between a safety defect and the injury suffered; and
  • AI-enabled goods and services can be a mix of ‘goods’ and ‘services’, which may create issues because the ACL provides separate consumer guarantees and remedies for goods and for services. Further clarity may be required as to whether an AI-enabled offering falls under the ACL as a ‘good’ or a ‘service’.

Proposed regulations to manage AI-related risk

The discussion paper asks stakeholders whether new AI-specific regulation under the ACL is needed. There are already existing initiatives, such as the 2023-2030 Australian Cyber Security Strategy, the privacy changes noted above, proposed mandatory guardrails for AI in high-risk settings, and a voluntary AI safety standard on how to use AI safely and responsibly. Different options for introducing mandatory guardrails have been suggested, including a domain-specific approach adapting existing frameworks, a new framework approach implementing framework legislation, or a whole-economy approach involving a new cross-economy AI Act.

The Treasury discussion paper’s proposed changes to the ACL include:

  • a new class of digital goods, such as AI smart devices, with unique consumer guarantees that adequately reflect the characteristics of AI;
  • additional guarantees related to cyber-security, interoperability, and the requirement for manufacturers to provide software updates for a reasonable period; and
  • a similar approach to the proposed European ‘presumption of causality’ principle to reduce the burden of AI-caused harm on individuals by shifting the onus to manufacturers to demonstrate no causal link exists.

The paper has a short turnaround for submissions, with input sought by 12 November. However, the issues raised will continue to prove relevant whether or not legislative change results.

HWL Ebsworth’s Intellectual Property and Technology team has experience in advising businesses on the integration and risk management of AI-enabled goods and services. If you have any concerns or are seeking advice on how proposed changes to the ACL may affect your use of AI, please do not hesitate to contact us for further information.

This article was written by Daniel Kiley, Partner, Nikki Macor Heath, Special Counsel and Bellarose Watts, Law Clerk.
