Behind code doors – can AI be trusted to keep a secret?

18 March 2024

With generative artificial intelligence (AI) tools like ChatGPT taking a form that looks like a private messaging thread, it is easy to slip into a conversational exchange, and treat it like a close confidant. But can telling an AI system something confidential pose an issue?

In recent articles, we have discussed how machine learning processes used to train large language models (LLMs) can have intellectual property implications, as well as intellectual property issues that can arise from the output of LLMs, plus associated privacy considerations.

In this piece, we consider confidentiality issues associated with using AI tools, and whether telling the wrong thing to an AI system can cause real headaches.

At a very basic level, most of the current popular LLM services are offered via the cloud, primarily out of necessity given the high computational demands involved. As a consequence, any use of these tools inherently involves the user sending their queries to a third party, who may be located overseas.

Even leaving aside any technical considerations about machine learning processes, this involves a large degree of trust as to what the vendor will do with that input, and careful thought about how providing the information will affect both the user and the information itself.

There is also the possibility that vendors will use user input to train future versions of their systems. Many of the current popular LLMs do not engage in ‘live’ machine learning; instead, the models are ‘fixed’ following an initial training period, and therefore do not absorb information from user input in real time. However, user input can be (and often is) a valuable source of material for training the next iteration of the model. With generative AI tools known to occasionally regurgitate their training data verbatim, there is a real possibility that information input by one user may later be shown to another.

We will look at what happens if you provide an LLM system with the wrong data, such as your trade secrets, personal information, or privileged legal advice.

Can you provide trade secrets to an AI?

Trade secrets are a nebulous concept that is not specifically defined under Australian law. However, that does not mean that they are not protected.

Australian courts will protect a trade secret provided that:

  1. you can identify specifically what the trade secret is;
  2. the information is confidential;
  3. the entity that the trade secret was disclosed to knew it was confidential; and
  4. there is actual or threatened misuse of the trade secret without your consent.

If trade secrets are input in the course of using AI, two key questions arise.

Is the AI, or more specifically the entity that owns or runs the AI, bound to keep the trade secret confidential? And does inputting the trade secret into an AI cause the trade secret to lose its confidential nature, and stop being a trade secret?

Is an AI bound to keep a trade secret confidential?

Whether a party is bound to keep a trade secret confidential depends on whether that party knew (or ought to have realised) that the trade secret was confidential. In assessing this, a court will consider whether a reasonable person in the position of the recipient would have realised that the information was given in confidence. It is hard to say whether this is the case for the vendor of an AI tool without looking at the specific situation. However, we can examine the issue using a hypothetical: asking a question of ChatGPT on the OpenAI website that includes a trade secret.

In that scenario, there is nothing in the relationship between the parties that indicates a duty of confidentiality. The user has simply used the generally available version of the software, without changing any settings or putting any additional protections in place. Such use is accordingly unlikely to give rise to any confidentiality obligations on OpenAI’s part.

However, this could be different if you were using a solution like ChatGPT Enterprise. ChatGPT Enterprise is available for businesses and promises “enterprise grade security & privacy”. ChatGPT Enterprise provides increased privacy features, such as encrypting all data, and preventing OpenAI from training ChatGPT on your data. The additional focus on privacy makes it far more likely that a reasonable person would realise that the information was given in confidence and that OpenAI is therefore bound by a duty of confidentiality.

Does disclosing to an AI mean a trade secret is no longer confidential?

If you pass on a trade secret to an AI system vendor, and that vendor is not bound by an obligation of confidentiality, does that rob the information of its confidential nature – and therefore its status as a trade secret?

When considering whether a trade secret is indeed confidential, a court will consider factors including:

  1. the extent to which the information is known outside the business, especially by persons who are not bound by any obligations of confidentiality; and
  2. the extent to which the information is treated as confidential by relevant parties, especially the business itself.

It is possible that the disclosure of a trade secret to an AI would mean that the trade secret is known outside of the business.

While generative AI tools are not intended to simply reproduce their training data verbatim, we have previously discussed examples of AI doing exactly that. If your input can be used as training data, this poses a real risk to your trade secrets: where a trade secret has been provided to a vendor whose systems could regurgitate it to someone else, it becomes hard to argue that the information is not known outside of the business.

Some vendors are aware of user concerns in this respect. For example, OpenAI advises that:

  • ‘We don’t use content from our business offerings such as ChatGPT Team, ChatGPT Enterprise, and our API Platform to train our models’; but
  • ‘When you use our services for individuals such as ChatGPT or DALL·E, we may use your content to train our models’ but ‘You can opt out of training through our privacy portal by clicking on “do not train on my content,” or to turn off training for your ChatGPT conversations, follow the instructions in our Data Controls FAQ. Once you opt out, new conversations will not be used to train our models.’

This provides some protection for enterprise users, but individuals may not be aware of the need to opt out when using the free version.

However, even without the risk of your input forming part of the training data, it is also possible that disclosing a trade secret to an AI would demonstrate that you are not treating the information as confidential. Courts may be unwilling to enforce protection of information in circumstances where its owner has been flippant about its confidentiality in the past.

Third party obligations

In some circumstances, an organisation will be privy to particular information under a non-disclosure agreement, or a confidentiality provision in a broader agreement.

Inputting that protected information into an AI system (and thereby disclosing it to the relevant vendor) could potentially put the organisation in breach of its contractual obligations, though whether it does will depend heavily on the terms of the underlying contractual provisions.

Can you provide personal information to an AI?

Most businesses in Australia are required to meet the standards set in the Australian Privacy Principles (APPs) set out in the Privacy Act 1988 (Cth) (Privacy Act). There may be circumstances where inputting personal information into an AI system (and thereby disclosing personal information to its vendor) may be a breach of the APPs.

The APPs place limitations on when an organisation can disclose personal information without consent, and impose an additional layer of obligations when disclosing personal information outside of Australia.

Disclosure of personal information

Entering personal information about customers, staff, suppliers or marketing targets into an AI tool has the potential to constitute a ‘disclosure’ of personal information for the purpose of the APPs.

Disclosure occurs whenever another entity is shown or given access to personal information. Providing personal information to an AI is likely to be disclosing personal information unless you are using a private instance of an AI that is stored and operated solely in your internal environment.1 Providing personal information to an enterprise solution with greater security and confidentiality, like ChatGPT Enterprise, could still constitute disclosure of personal information, irrespective of any assurances from the vendor about how it will handle the information it receives, because the personal information is provided to another entity.

Disclosure of personal information is not automatically impermissible under the APPs. Under APP 6, an organisation is generally able to disclose personal information within Australia for the primary purpose for which the information was collected, for a related secondary purpose that the individual would reasonably expect, or for other purposes that the individual has consented to.

This makes it necessary to assess, on a case-by-case basis, the purposes for which the personal information was collected and the purpose for which it is being entered into the AI tool (and thereby disclosed to the vendor of that tool), before it is entered into that tool. While there may be scenarios where this is permissible, an organisation will plainly need careful processes in place to ensure that any such disclosure is limited to those instances, and that personal information is not disclosed in an uncontrolled manner.

What if the AI is based outside of Australia?

In addition to the requirements of APP 6 discussed above, the APPs place additional restrictions on businesses looking to disclose personal information outside of Australia.

APP 8 sets out the specific requirements associated with disclosure of information outside of Australia. Without going into all of the detail, this typically requires the Australian entity proposing to make such a disclosure to first either:

  • take reasonable steps to ensure that overseas recipients of personal information do not breach the APPs;
  • undertake an analysis of the laws of the foreign country involved, and ultimately conclude that the overseas recipient of the information is bound by an appropriate privacy law which can be enforced by the Australian individual to whom the information relates; or
  • obtain very specific consent to this disclosure taking place.

In addition, under section 16C of the Privacy Act, the Australian entity making the disclosure can often also be held responsible for any breach of the APPs by the overseas recipient, even if it took reasonable steps to avoid such a breach, such as putting contractual protections in place.

Noting that many of the major AI systems are delivered by vendors outside of Australia, this plainly introduces significant complexity if the input being entered includes personal information.

It is also highly unlikely that the default contractual terms of major vendors located outside of Australia include express obligations on the vendor to comply with the APPs, making it difficult to demonstrate the reasonable steps to ensure compliance required by APP 8.1.

Can you provide copies of legal advice to an AI?

Confidential communications between lawyer and client that are brought into existence for the dominant purpose of giving legal advice are privileged. This means that a client is generally not required to disclose privileged communications in court proceedings, or in response to a regulatory notice. However, this privilege can be waived if the client acts in a way that is inconsistent with the privileged communication remaining confidential. For example, this can occur where a client either intentionally or inadvertently discloses the substance of a privileged communication to a third party.

There are some circumstances where a disclosure of privileged communications will not amount to a waiver of privilege: for example, where legal advice is shared internally within a company, where disclosure is made to a third party with whom the holder of the privilege shares a sufficient common interest, or where disclosure is made but adequate restrictions to preserve the confidentiality of the communication are put in place (such as an express agreement which sets out the basis for the disclosure and the limitations upon its further use). Ultimately, whether disclosure amounts to a waiver of privilege will depend on all the circumstances surrounding the particular disclosure. An analysis of the factors and circumstances that will usually result in a waiver is beyond the scope of this article.

Broadly though, if a person wishes to maintain privilege over a communication, they must ensure that they act in a way, and handle the communication in a way, that is consistent with maintaining the confidentiality of the privileged communication. Deliberately disclosing that information to an LLM is unlikely to be viewed favourably in that respect.

Other issues

There are a range of other circumstances where laws may limit the ability to disclose information to a third party. While too numerous to cover in this article, these could include:

  • classified materials;
  • suppression orders; or
  • laws which limit sharing of certain information, such as ‘protected information’ under the Security of Critical Infrastructure Act 2018 (Cth).

There may also be other instances where disclosure of information can undermine a party’s own rights. For example, an invention is only patentable if it is novel, which can be prejudiced by the inventor’s own prior publications.

Conclusion

Even though it is tempting to treat AI systems like private confidants, users need to exercise caution when inputting material which is confidential, personal or privileged.

Close consideration of the relevant vendor’s terms, and of any user settings available, may help to reduce some of those risks, but there may still be many cases where disclosure of specifics is inadvisable.

‘On device’ or ‘on premises’ implementations of generative AI tools may remove many of these risks by eliminating third party access to the information, but such implementations are likely to be technically inferior to the models that can be run on vast cloud computing resources.

You can reach out to the team at HWL Ebsworth for specific advice on how to best implement AI to maintain confidentiality and meet privacy obligations.

This article was written by Daniel Kiley, Partner, Caitlin Surman, Special Counsel and Max Soulsby, Solicitor.


1 It should be noted that using an AI stored on your internal environment would still constitute use of the personal information, even if the personal information was not disclosed, which would also need to be assessed against the APPs.

