4 steps to manage risk and get ROI on your AI
Generative AI tools are rapidly reshaping the way organisations operate. Employees are already using platforms like ChatGPT, Copilot, Claude and other SaaS-based AI tools to speed up writing, research, analysis and customer interactions – often well before organisational policy, governance or procurement controls have caught up.
The opportunity is enormous, but so are the risks. Used well, AI can drive productivity, accuracy and process improvement. Used poorly, it creates regulatory risk, reputational exposure and operational inconsistency – particularly when it comes to Privacy Act 1988 (Cth) (Privacy Act) and Australian Privacy Principles (APP) compliance.
The organisations that will see genuine return on investment are not those experimenting at random. They are the ones putting structure around AI – understanding where it can help, creating guardrails for its use, and ensuring data and systems are sound.
A practical approach involves four key stages:
1. Map where AI will meaningfully improve business processes
The starting point is not the selection of tools. It is understanding where AI could create real efficiency or uplift. This involves mapping internal processes, identifying repetitive or resource-heavy tasks, and determining which pain points would benefit from automation or AI assistance. The CSIRO’s AI Investment Decision Checklist highlights this approach: strategy first, technology second.
This is principally an operational exercise rather than a legal one. It requires understanding workflows, data flows, user needs and organisational priorities. Without this stage, AI adoption risks becoming fragmented – a collection of uncoordinated experiments rather than a structured productivity driver.
2. Strengthen data quality before relying on AI
AI is only as good as the data behind it. If underlying information is duplicated, outdated or inaccurate, AI will amplify those issues. Before scaling AI across the organisation, there is often work to do on core data.
This can involve cleaning and de-duplicating records, reviewing retention practices, and ensuring data is up to date and reliable. It also aligns with legal obligations under APP 10 (data quality) and APP 11 (security and retention). If flawed or outdated data is used to train or feed AI tools, businesses risk both making decisions on incorrect information and breaching their compliance obligations.
Good data governance gives AI something solid to work with. Poor data simply produces faster mistakes.
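By way of illustration only, the short Python sketch below shows the kind of basic clean-up this step involves – de-duplicating records and flagging entries that have not been updated within a chosen retention window. The file name, column names and seven-year threshold are assumptions made for the example, not recommendations; any real clean-up should be designed around your own systems, data model and retention obligations.

```python
# Minimal sketch, assuming a hypothetical CSV export of contact records with
# 'email', 'name' and 'last_updated' columns; adjust to your own data model.
import pandas as pd

RETENTION_YEARS = 7  # assumed retention window, for illustration only

records = pd.read_csv("contacts_export.csv", parse_dates=["last_updated"])

# Normalise the key field so trivial variations don't hide duplicates.
records["email"] = records["email"].str.strip().str.lower()

# Keep only the most recently updated record for each email address.
deduplicated = (
    records.sort_values("last_updated")
           .drop_duplicates(subset="email", keep="last")
)

# Flag records not touched within the retention window, so they can be
# reviewed against retention and destruction practices (APP 11).
cutoff = pd.Timestamp.now() - pd.DateOffset(years=RETENTION_YEARS)
deduplicated["stale"] = deduplicated["last_updated"] < cutoff

print(deduplicated["stale"].value_counts())
```

The point of a script like this is not the code itself but the discipline it represents: knowing what records exist, removing duplicates before they feed an AI tool, and surfacing stale data for a deliberate retention decision rather than leaving it to accumulate.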
3. Establish an AI governance and fair use framework
Once there is clarity about how AI might be useful and data quality is confirmed, the next step is to ensure AI can be used safely. At present, many organisations are relying on generative AI tools without any policy in place, meaning employees may be inputting personal, commercial or confidential information into public platforms without realising the legal or business consequences. The unregulated use of AI in a business raises a number of risks, including:
- Misuse or loss of control of commercially sensitive or client information;
- Potential breaches of the Privacy Act – especially APP 3 (collection), APP 6 (use and disclosure), APP 8 (cross-border disclosure) and APP 11 (security);
- Inaccurate outputs being relied on in business decisions;
- Reputational harm if a client learns that their data was used in a public AI environment; and
- Shadow IT: unapproved tools being used without oversight or security assessment.
An AI governance and fair use policy should set out clear roles and responsibilities for AI decision-making and establish boundaries on how and when AI can be used, including prohibiting the input of personal, commercially sensitive and client information into public platforms. It should draw on the OAIC’s guidance on the use of commercially available AI products and automated decision-making, as well as the AI ethics guidance from the Department of Industry, Science and Resources, both of which emphasise transparency, accountability and human oversight.
A well-designed policy should also be practical: written in plain language, embedded into induction and training processes, and supported by reasonable monitoring and enforcement mechanisms.
Any monitoring of employee use of IT assets and software, including AI, must also sit within the parameters of applicable workplace surveillance laws, which vary across states and territories and impose specific notice requirements.
The purpose of this framework is not to restrict innovation, but to enable it confidently. Employees can adopt helpful tools without risking compliance breaches or data leakage, and leadership can authorise AI use with clear parameters.
4. Implement procurement and oversight of AI
The fourth step is ensuring AI tools themselves are deployed in a controlled, secure and cost-effective way. Many organisations are discovering that multiple teams or employees have independently purchased AI products, each with separate licences, limited oversight and inconsistent security practices. This increases spend, duplicates effort and creates unmanaged data, privacy and cyber security risks.
A structured procurement and oversight process can address this problem by:
- Requiring tools to undergo a security and privacy assessment prior to purchase;
- Ensuring contract terms address data storage, ownership, deletion, confidentiality and export of data overseas;
- Centralising licence management so that tools are used consistently and efficiently; and
- Considering more secure configurations, such as internal GPT environments or closed enterprise models that keep data within the organisation.
This work can be supported by involving legal and IT early in the procurement process and embedding good governance at the start of your AI journey.
What success looks like
The organisations that will see the strongest ROI from AI are those that treat it as both an innovation opportunity and a governance project. They do not block tools entirely or allow uncontrolled experimentation; instead, they build guardrails that allow AI to be used safely, effectively and at scale.
When these elements are in place, AI becomes genuinely valuable. Teams know how and when it can be used, risks are managed, procurement is streamlined and AI adoption aligns with real business goals rather than hype.
If your organisation is at the point where employees are experimenting with AI, or where leadership is considering a broader rollout of a specific tool, now is the time to put structure around it. A deliberate, risk-aware approach does not slow innovation – it makes it sustainable.
This article was written by Amber Cerny, Partner, and Lucy Hannah, Special Counsel.