With the rapid innovation in and adoption of AI, regulators are increasingly turning their attention to compliance risks arising from its use. This article highlights ASIC’s recent findings on emerging risks associated with the use of AI in the financial services and credit context.
ASIC Report 798
On 29 October 2024, ASIC released Report 798 – Beware the gap: Governance arrangements in the face of AI innovation (Report). The Report summarises ASIC’s findings (extracted at the end of this article) following a review of AI use amongst financial services and credit licensees. It also provides insight into ASIC’s expectation that licensees will develop stronger AI governance frameworks to address the specific issues set out in the Report.
The Report reiterates the need for licensees to consider, in the context of AI use, their compliance arrangements addressing any applicable general licensee obligations, consumer protection provisions and directors’ duties. The relevant obligations and considerations include:
- doing all things necessary to ensure financial services or credit activities are provided efficiently, honestly and fairly;
- having adequate risk management systems;
- having adequate technological and human resources;
- having measures to comply with their obligations, including for any outsourced functions;
- not making false or misleading representations or engaging in unconscionable conduct; and
- company directors discharging their duties with a reasonable degree of care and diligence.
Practical steps to assist with compliance
Below are some practical steps that financial services entities and credit providers could take now to assist with compliance when using AI technologies:
- for AFSL and ACL licensees, review and uplift existing compliance arrangements to ensure their licensee obligations are met;
- assess any risks of breach of consumer protection provisions and implement measures to address such risks;
- ensure the organisation’s privacy policies are up-to-date, and consider the guidance by the Office of the Australian Information Commissioner (OAIC) (see below);
- ensure the organisation has appropriate agreements in place with third parties who will access the organisation’s data that specify how the organisation’s data will be used, stored, and secured;
- ensure the organisation’s internal policies regarding the use of its internal and client data, including when using AI, are up-to-date; and
- consider having regard to the Voluntary AI Safety Standard (see below).
Other Guidance Relevant to Credit Providers and Financial Services Entities
Consultation on the Australian Consumer Law
The Treasury is conducting a consultation on whether the Australian Consumer Law¹ should be amended to better protect consumers who use AI and to support the safe and responsible use of AI by businesses. The consultation closes on 12 November 2024.
OAIC guidance on privacy and the use of AI products
The OAIC recently published guidance for organisations on the use of AI systems in their businesses. The guidance outlines the key privacy risks when using AI and the relevant considerations when selecting an appropriate AI product.
Voluntary AI Safety Standard
The Department of Industry, Science and Resources recently released the Voluntary AI Safety Standard. This voluntary standard gives practical guidance to all Australian organisations on how to safely and responsibly use and innovate with AI. The standard consists of the following 10 voluntary guardrails:
- Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.
- Establish and implement a risk management process to identify and mitigate risks.
- Protect AI systems, and implement data governance measures to manage data quality and provenance.
- Test AI models and systems to evaluate model performance and monitor the system once deployed.
- Enable human control or intervention in an AI system to achieve meaningful human oversight across the life cycle.
- Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content.
- Establish processes for people impacted by AI systems to challenge use or outcomes.
- Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks.
- Keep and maintain records to allow third parties to assess compliance with guardrails.
- Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.
Key Findings of ASIC Report 798
| Finding | Summary |
| --- | --- |
| Finding 1 | The extent to which licensees used AI varied significantly. Some licensees had been using forms of AI for several years and others were early in their journey. Overall, adoption of AI is accelerating rapidly. |
| Finding 2 | While most current use cases used long-established, well-understood techniques, there is a shift towards more complex and opaque techniques. The adoption of generative AI, in particular, is increasing exponentially. This can present new challenges for risk management. |
| Finding 3 | Existing AI deployment strategies were mostly cautious, including for generative AI. AI augmented human decisions or increased efficiency; generally, AI did not make autonomous decisions. Most use cases did not directly interact with consumers. |
| Finding 4 | Not all licensees had adequate arrangements in place for managing AI risks. |
| Finding 5 | Some licensees assessed risks through the lens of the business rather than the consumer. We found some gaps in how licensees assessed risks, particularly risks to consumers that are specific to the use of AI, such as algorithmic bias. |
| Finding 6 | AI governance arrangements varied widely. We saw weaknesses that create the potential for gaps as AI use accelerates. |
| Finding 7 | The maturity of governance and risk management did not always align with the nature and scale of licensees’ AI use – in some cases, governance and risk management lagged the adoption of AI, creating the greatest risk of consumer harm. |
| Finding 8 | Many licensees relied heavily on third parties for their AI models, but not all had appropriate governance arrangements in place to manage the associated risks. |
If you have any queries or need any assistance in relation to the use of AI, please contact a member of our team.
This article was written by Mizu Ardra, Partner, Iain McLaren, Special Counsel, Chenjie Ma, Senior Associate and Jordan Donaldson, Solicitor
¹Schedule 2 of the Competition and Consumer Act 2010 (Cth).