In May 2023, Perth’s South Metropolitan Health Service learned that a doctor had used artificial intelligence (AI) tools to generate a patient discharge summary. The Service’s Chief Executive wrote to all staff in the district, cautioning doctors not to input any patient details into AI technology to create medical notes, and emphasised that AI tools such as ChatGPT do not guarantee patient confidentiality and that the security risks of entering this information into public websites are not yet fully understood.
Text-based AI programs of this kind are known as large language models. These programs may store the information users enter and use it to train the model to provide more advanced and accurate responses. ChatGPT is the most famous example, and many users worldwide are already entering their personal and work information into the program. ChatGPT reached approximately 100 million users within two months of its release in November 2022.1
Global Responses to AI Privacy Risks
The increased prevalence and sophistication of AI tools has sparked investigations into data security and the implementation of regulatory measures around the world.
On 14 June 2023, the European Parliament adopted its negotiating position on the Artificial Intelligence Act, with the aim of reaching a final agreement with member states by the end of 2023. The proposed recommendations include government regulation of AI system training, validation and testing in an AI regulatory sandbox for programs handling personal health data.2 This system would involve establishing secure and anonymised datasets for training AI tools intended to handle personal health information in a functionally separate, isolated and protected environment outside of public use.3
On 21 September 2023, the New Zealand Privacy Commissioner published guidance on AI and how it relates to New Zealand’s Information Privacy Principles.4 The guidance reminds users that training AI tools with personal information is likely to breach those principles, and that users must implement measures to track and secure personal information. If a user cannot guarantee how personal information will be stored or used, the guidance recommends that no personal information be input at all.
On 17 January 2024, the Australian Government published its interim response on Safe and Responsible AI in Australia.5 The response recognises the risk posed by AI models being used in health care organisations and the need for updated privacy legislation to eliminate clinical safety risks. Proposed reforms include an in-principle agreement to expand the requirement for non-government entities to conduct privacy impact assessments for activities with high privacy risks, in order to minimise the risk of harm caused by AI. The response also includes a commitment by the National Artificial Intelligence Centre to collate industry guidelines and frameworks into a single, best-practice, voluntary risk-based AI safety framework for the responsible adoption of AI by Australian businesses.
Australian Medical Association Position Statement
On 8 August 2023, the Australian Medical Association (AMA) published its position statement on AI in healthcare.6 The AMA proposes that, as with any other new technology involved in patient treatment or diagnosis, government regulation is essential. The position statement calls for legislation embedding regulatory principles that ensure the following:
- safety and quality of care provided to patients;
- patient data privacy and protection;
- appropriate application of medical ethics;
- equity of access and equity of outcomes through elimination of bias in AI and machine learning;
- transparency in how algorithms used by AI are developed and applied; and
- that the final decision on treatment always rests with the patient and the medical professional, while recognising instances where responsibility will have to be shared between AI manufacturers, medical professionals and service providers (hospitals or medical practices).
The AMA has highlighted the privacy of patient data as a key issue with AI technology. Sensitive patient information should only be shared with AI technology with the express consent of the patient, and only when doing so makes a genuine contribution to improving the patient’s health outcomes.
As AI technologies gain popularity, the risks of sharing personal information will grow without tight regulation. Nations around the world are beginning to consider the data privacy implications of sharing personal information with AI tools, and strategies to protect that information are already emerging.
This article was written by Scott Chapman, Partner and Angela Pale, Senior Associate and assisted by James Condren, Graduate at Law.
1ChatGPT statistics 2023: trends and the future perspectives. Gitnux. 2023 Mar 01. URL: https://blog.gitnux.com/chat-gpt-statistics/ [accessed 2023-10-13]
2Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), EUR-Lex 52021PC0206 (europa.eu), (45).
3Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), EUR-Lex 52021PC0206 (europa.eu), Article 54.
4Artificial Intelligence and the IPPs. Office of the Privacy Commissioner, New Zealand (privacy.org.nz).
5Supporting responsible AI: discussion paper. Australian Government consultation hub (industry.gov.au).
6Artificial Intelligence in Healthcare. Australian Medical Association (ama.com.au).