Legal issues raised by machine learning systems

15 August 2019

A version of this article first appeared in the Law Society of South Australia Bulletin.

Artificial intelligence is a broad concept, used to describe any technique by which machines exhibit some form of human-like intelligence. Many recent developments in AI have focussed on a particular field known as ‘machine learning’, where systems evolve as they learn from examples or experience, giving rise to some interesting legal questions.

One machine learning technique in particular is the ‘neural network’. These systems seek to mimic, in a very rudimentary way, the way human brains make decisions, with software approximations of neurons and synapses. A neural network is trained by providing sets of known examples, with the difference between its output and the expected output fed back into the system to tweak the interactions of the artificial neurons.

A neural network might be trained to identify cats, for example, by supplying it with hundreds of categorised animal photographs. Unlike some other approaches to classification, which might involve attempts to specifically identify features like whiskers and ears, the network takes a fuzzier approach to its understanding. In much the same way as a human will simply recognise a cat as such, rather than running through a checklist of essential features of a cat, the neural network will form its view without being able to provide coherent reasons.
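By way of illustration only, the following is a minimal sketch in Python (using the PyTorch library) of the kind of training loop described above. The ‘photographs’ here are random placeholder data rather than a real dataset, and the network is far simpler than anything used in practice; it is not intended to represent any particular commercial system.

```python
import torch
from torch import nn

# Placeholder "photographs": 200 images of 32x32 greyscale pixels, each labelled
# 1 (cat) or 0 (not a cat). In a real system these labels come from a curated dataset.
images = torch.rand(200, 32 * 32)
labels = torch.randint(0, 2, (200,)).float()

# A small feed-forward neural network: layers of artificial "neurons" whose
# weighted connections are adjusted during training.
model = nn.Sequential(
    nn.Linear(32 * 32, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)
loss_fn = nn.BCEWithLogitsLoss()   # measures how far the output is from the expected label
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(10):
    logits = model(images).squeeze(1)   # the network's current guesses
    loss = loss_fn(logits, labels)      # compare guesses with the known answers
    optimiser.zero_grad()
    loss.backward()                     # feed the error back through the network...
    optimiser.step()                    # ...and tweak the connection weights accordingly
```

Nothing in the trained network records *why* a given image is classified as a cat; the ‘reasons’ exist only as thousands of numerical weights, which is the opacity discussed below.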

Even pocket-sized devices now rely upon these machine learning techniques for fundamental functionality, such as recognising faces for biometric authentication.

The University of Adelaide’s newly established Australian Institute for Machine Learning has been applying machine learning to pathology samples. Its system takes images of pathological culture plates used for screening, and analyses and interprets microbial growth in conjunction with patient data to formulate a diagnosis. The Institute hopes that the system will save time, cost and lives, particularly in rural or under-resourced hospitals, where traditional pathology labs are less readily available.

While machine learning techniques such as neural networks have proven to be effective and useful in a range of different scenarios, some of their characteristics can lead to interesting legal consequences. In particular:

  • A machine learning system will be reliant on data supplied in order to learn and develop. As a result, inadequacies, flaws or biases in that data may be learnt by the system, and become manifest in its functionality;
  • When a machine learning system does make a decision, it is typically difficult to ascertain how or why that decision was made; and
  • Some machine learning systems will continue to evolve over time, which may result in outcomes very different from those observed when they were first deployed. Others may be trained on an initial set of data, and then locked in place, potentially lacking in flexibility as a result.

This potential for unpredictability, opacity and fluidity can run up against areas of the law focussed on the foreseeable or transparent.

Under the Civil Liability Act 1936 (SA), for example, a person will not be found negligent in failing to take precautions against a risk of harm unless the risk was foreseeable.1 In circumstances where the behaviour of a software system can change over time, it may be difficult to foresee precisely what risk may be involved as a result. However, although it may be that specific risks are not necessarily foreseeable, there is an obvious inherent risk of some malfunction associated with machine learning systems. In many applications this may not have major consequences, but where a machine learning system controls significant physical elements (for example, in autonomous machinery) or is to be relied upon for important decisions (for example, medical diagnosis or other professional advice, which may be subject to a higher standard at law), the potential for injury or loss could be significant.

The Civil Liability Act does provide that no person will be held liable for harm suffered as a result of the materialisation of an inherent risk, but only where such risk cannot be avoided by the exercise of reasonable care and skill, and without excluding a duty to warn of risk. This may help developers of machine learning systems avoid being potentially liable for some issues, but only to the extent that the risk could not have been guarded against by taking reasonable steps, such as including ‘fail safe’ checks on the output of their software.
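As an illustration only (the threshold and function names below are hypothetical, not drawn from any particular system), a ‘fail safe’ check of this kind might be as simple as refusing to act automatically on low-confidence output and routing it to a human reviewer instead:

```python
# Hypothetical sketch of a 'fail safe' check on machine learning output.
CONFIDENCE_THRESHOLD = 0.95  # illustrative value only

def apply_diagnosis(prediction: str, confidence: float) -> str:
    """Decide whether a model prediction can be acted on or must be reviewed."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-accepted: {prediction}"
    return f"flagged for manual review: {prediction} (confidence {confidence:.2f})"

print(apply_diagnosis("no microbial growth detected", 0.98))
print(apply_diagnosis("possible pathogen present", 0.61))
```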

There is also difficulty in determining which party is responsible for the conduct of machine learning systems. Arguably a range of different parties could be in a position to take steps to guard against foreseeable risks arising from machine learning systems, including:

  • The developers of those systems;
  • The manufacturers of products that incorporate those systems;
  • The users adopting those systems; and
  • The persons training those systems.

In addition, where consumer products rely upon continual machine learning for their functionality (for example, to implement ‘smart’ features in household appliances), there may be issues in meeting the standards required by the consumer guarantees set out in the Australian Consumer Law (ACL). One such guarantee requires that consumer goods are of acceptable quality, including by being fit for purpose, free from defects, safe, and durable.2 Relevant products that continue to use machine learning to adapt over time have the potential to vary in these respects, or even to become defective or unsafe at a later stage. The time at which acceptable quality is determined is when the goods are supplied to the customer,3 but a product prone to fault might not be deemed sufficiently durable. However, determinations as to acceptable quality must also take into account any statements and representations by the manufacturer, and so developers may need to ensure that they accurately convey the limitations of their machine learning functionality.

In other situations, where a product has been trained on a set of data and its functionality locked in place, its ability to deal with circumstances outside the scope of its initial training could be limited. Manufacturers will need to ensure that the data used to train systems such as these is sufficiently broad and varied that those products will stand up to real world usage, and that they continue to function in a safe manner where they encounter the unexpected.

Some goods that may incorporate machine learning elements are also subject to licensing or certification schemes in other contexts. For example, there are already medical devices that benefit from machine learning techniques, which would need to be registered on the Australian Register of Therapeutic Goods (ARTG).

Under the Therapeutic Goods Act 1989 (Cth), medical devices are classified and regulated in accordance with their potential to cause harm. Those with the lowest risk of causing harm are deemed ‘Class I’ and do not require third party oversight prior to inclusion on the ARTG. As the regulations only account for possible harm caused by physical interactions, software as a medical device is currently treated as Class I and requires little oversight. In a February 2019 consultation paper, the Therapeutic Goods Administration noted concern that software incorporating machine learning capabilities is inadequately classified, and that software capable of ‘learning’ and changing over time may need to be subject to ongoing performance monitoring.4

Even where machine learning systems are functioning without overt failures, and accurately following their training, they can reflect flaws in what they have been taught.

Last year Reuters reported that Amazon had abandoned efforts to have machine learning technology rank job candidates after its system was found to have begun discriminating against female candidates. The system was trained on Amazon’s past job applications and the resulting hires, and reportedly began to overtly reflect the historically male skew in the industry.5

Organisations relying upon machine learning systems in order to make decisions about individuals will need to be vigilant to ensure that they do not breach any anti-discrimination legislation, such as the Equal Opportunity Act 1984 (SA), Racial Discrimination Act 1975 (Cth), Sex Discrimination Act 1984 (Cth), Age Discrimination Act 2004 (Cth), and relevant provisions of the Fair Work Act 2009 (Cth). Because systems based on technologies like neural networks do not readily provide reasons for their decisions, it may be difficult to eliminate the possibility that prohibited matters have been taken into account.
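One practical mitigation, sketched below in Python with invented records, is to monitor a system’s decisions for statistical disparities between groups. A check of this kind cannot by itself establish or exclude unlawful discrimination, but it can flag outcomes warranting closer scrutiny:

```python
# Hypothetical sketch of monitoring model decisions for disparate outcomes.
from collections import defaultdict

# Invented decision log; real monitoring would use the organisation's own records.
decisions = [
    {"group": "A", "selected": True},
    {"group": "A", "selected": True},
    {"group": "A", "selected": False},
    {"group": "B", "selected": True},
    {"group": "B", "selected": False},
    {"group": "B", "selected": False},
]

counts = defaultdict(lambda: {"selected": 0, "total": 0})
for record in decisions:
    counts[record["group"]]["total"] += 1
    counts[record["group"]]["selected"] += int(record["selected"])

rates = {group: c["selected"] / c["total"] for group, c in counts.items()}
print("selection rates by group:", rates)

# Flag any group selected at less than 80% of the highest group's rate
# (a common rule-of-thumb threshold, used here purely as an example).
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"potential disparity: group {group} selected at {rate:.0%} vs best {best:.0%}")
```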

When machine learning is applied in a public decision-making context, the lack of transparency becomes even more acute, given obligations to afford procedural fairness.

Perhaps one of the highest profile examples has been in the United States, where the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm is being used by a number of US courts to provide judges with a prediction of the likelihood of criminal reoffending, for use in sentencing and probation decisions. Third party research has alleged that the algorithm is more likely to incorrectly classify African Americans as high-risk repeat offenders of violent crimes than Caucasians.6 These findings were denied by COMPAS’s developer, but the software is proprietary and not readily able to be examined. Even if this were not the case, the system may not be capable of providing logical reasoning for its decisions.

Businesses training and deploying machine learning systems will also need to be aware of their obligations under the Privacy Act 1988 (Cth). Where machine learning systems make assessments about individuals, it will be tempting to acquire as much personal information as possible about a broad range of people, and use this as training data. However, any such collection and use of personal information must first be considered against the obligations of the Australian Privacy Principles (APPs). Even if this use is permissible, the quality of that personal information will also be relevant, as a developer will also be required by APP 10.2 to take reasonable steps to ensure that the information it is using for that purpose is ‘accurate, up to date, complete and relevant’.

In using a machine learning system that makes assessments about individuals, an organisation will also potentially be generating opinions about those people, which would fall within the Privacy Act definition of ‘personal information’, even if those opinions do not ultimately prove to be true. This has a number of consequences, including, for example, that individuals will be entitled to request access to those opinions under APP 12.

Notwithstanding these potential issues, the frequently impressive results of modern machine learning techniques will continue to see them increasingly used, even as we wait for the law to catch up. In the interim, those developing and deploying these systems should ensure that they are clear about their limitations, that care is taken in the manner in which they are trained, and that the output of these systems is subject to manual review or other robust protections against anomalous results.

This article was written by Luke Dale, Partner, Daniel Kiley, Special Counsel and Stephanie Leong, Law Graduate.

Luke Dale

P: +61 8 8205 0580

E: lcdale@hwle.com.au

Daniel Kiley

P: +61 8 8205 0567

E: dkiley@hwle.com.au

1 Civil Liability Act 1936 (SA) s 32(1).
2 ACL s 54.
3 Medtel Pty Ltd v Courtney [2003] FCAFC 151.
4 Therapeutic Goods Administration, ‘Consultation: Regulation of software, including Software as a Medical Device (SaMD)’ (Australian Government, Department of Health, February 2019) 5.
5 Jeffrey Dastin, ‘Amazon scraps secret AI recruiting tool that showed bias against women’, Reuters (online, 10 October 2018) https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.
6 Jeff Larson et al, ‘How We Analyzed the COMPAS Recidivism Algorithm’, ProPublica (23 May 2016) https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm.
