The use of Artificial Intelligence (AI) is increasing at a rapid rate. Not only has AI become a part of everyday life, but businesses have come to rely heavily on it to increase productivity in the workplace. In fact, studies have shown that approximately 35% of businesses worldwide are using AI.1 This figure is expected to grow significantly in the next few years.
However, many have also wondered about the downsides of relying on this technology. One of the preconditions for AI to improve and evolve its decision-making abilities is access to data, including personal data.2 Naturally, the idea that an AI model may have access to personal data raises questions about security, data protection and other risks associated with the use of that data.
These concerns have come to the forefront of global news following the release of ChatGPT. In respect of privacy laws, some countries have already taken steps to restrict or outright prohibit the use of ChatGPT.
At this stage, there have not been any legislative or policy changes in Australia since the release of ChatGPT. However, given the measures that other countries have put in place, international privacy laws are likely to have some role to play in guiding Australia’s response to ChatGPT and the effects on privacy and security globally.
What is ChatGPT?
ChatGPT is a generative artificial intelligence model, which means that it collects and processes information for the purpose of interacting with users in a responsive and conversational manner. The dialogue format enables ChatGPT to answer follow-up questions, challenge incorrect premises, reject inappropriate requests, and even admit its mistakes.3 Put simply, a user may pose a question to ChatGPT, and ChatGPT will then rely upon information that has been published on the internet and provided by its users to form a response to that question.
ChatGPT was released free of charge to attract users and gather feedback, and this ease of access has contributed to its popularity and, consequently, to the amount of data being fed to the model.
International developments
The release of ChatGPT has had an impact on a global scale, resulting in major developments in the privacy laws of other countries.
Notably, some countries have imposed restrictions on the use of ChatGPT amid growing concerns about the impact on users. For example, Italy was the first Western country to block the use of ChatGPT, with its privacy regulator imposing a temporary ban. The regulator's concerns related to the way ChatGPT processes personal data, the absence of a legal basis for gathering user data, and the lack of age verification for users.4
Italy has since lifted its ban. Despite this, it is clear that the AI model is still being approached with apprehension, leading certain jurisdictions to make significant reactive changes to their privacy laws.
Impact on Australian businesses
So, how does this impact Australia?
Firstly, many Australian businesses have an established presence in the European Union (EU), which means that the EU's privacy laws may apply to those businesses.
Even where the General Data Protection Regulation (GDPR) would not apply to an Australian business, it is important to take note of ChatGPT's global impact and the trends in regulatory reform, as similar measures are expected to be introduced in other jurisdictions, including Australia.
What are the EU Privacy Laws?
EU Privacy Laws are governed by the GDPR. The GDPR sets out a number of principles in relation to the use of data, which include:
- restrictions on the collection, use, storage, disclosure, dissemination or destruction of personal data, particularly without consent;
- prohibitions on the collection of the personal data of minors without parental consent; and
- disclosure requirements, including that data subjects be notified when their personal data is collected or used, and that their data be provided to them upon request.
Given the way in which ChatGPT operates, these principles could potentially be contravened by its processing and publication of information.
Additionally, in response to the unprecedented concerns raised by the arrival of ChatGPT and similar AI models, the EU has proposed legislation to deal specifically with AI, being the Artificial Intelligence Act (EU) (the AI Act).5
The proposed AI Act sets out rules and obligations for AI systems, including transparency obligations and additional rules for AI systems that are classified as ‘high-risk’. The AI Act also sets out a number of “prohibited” AI practices, including the use of subliminal techniques or the targeting and exploitation of vulnerabilities (such as due to age or disability).
Additionally, concerns relating to user protection, data protection, privacy and public safety have led to calls for the EU and national authorities within the EU to launch an investigation into generative AI.6 These investigations will likely inform the proposed AI Act as well as steps taken by regulatory bodies globally.
The relevance to Australia
In comparison to Australia's current privacy regime, the GDPR is considered to impose broader and stricter obligations on bodies that process the data of EU residents. However, a recent review of Australian privacy laws has led to calls for the Privacy Act 1988 (Cth) to adopt a number of the privacy principles set out in the GDPR.
The Privacy Act Review Report 2022 (the Report) (helpfully summarised here) has foreshadowed amendments to the Privacy Act including broadening the definition of “personal information” to replicate the GDPR definition of personal data.
Additionally, the Report contemplates the need to regulate AI technology through privacy laws, particularly with respect to automated information processing (such as processing by AI models like ChatGPT).7
Should we be concerned?
ChatGPT is based on a language model which requires large amounts of data to function, improve and evolve. Given the volume of data that is being fed to the model and its ease of access to information via the internet, there are inherent risks associated with the processing of that data. This includes the risk of data breaches and cyber-attacks, as well as an increase in disputes relating to the ownership and control of data.
Australia’s privacy laws are yet to deal specifically with generative AI; however, the rapidly evolving industry and mounting international developments point to gaps in Australia’s regulatory framework that are likely to be filled. Watch this space!
If you have any concerns or questions about how new generative AI technologies may impact your business, please reach out to our team.
This article was written by Simone Basso, Solicitor and Alexandra Beal, Solicitor, and reviewed by Peter Campbell, Partner.
1 Nick G, ‘101 Artificial Intelligence Statistics [Updated for 2023]’, TechJury (Online) <https://techjury.net/blog/ai-statistics/#gref>.
2 ‘Big Data AI’, Qlik (Web Page) <https://www.qlik.com/us/augmented-analytics/big-data-ai>.
3 ‘Introducing ChatGPT’, OpenAI (Web Page) <https://openai.com/blog/chatgpt>.
4 Ashley Belanger, ‘OpenAI gives in to Italy’s data privacy demands, ending ChatGPT ban’, Ars Technica (Online, 5 February 2023) <https://arstechnica.com/tech-policy/2023/05/openai-gives-in-to-italys-data-privacy-demands-ending-chatgpt-ban/>.
5 Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (Online) <https://artificialintelligenceact.eu/the-act/>.
6 The European Consumer Organisation, ‘Investigation by EU authorities needed into ChatGPT technology’ (Press Release, 30 March 2023) <https://www.beuc.eu/press-releases/investigation-eu-authorities-needed-chatgpt-technology>.
7 Privacy Act Review Report 2022, page 12.