Productivity Commission Releases Interim Report on Harnessing Data and Digital Technology – an AI and Privacy Perspective
Introduction
On 5 August 2025, the Productivity Commission released its Interim Report ‘Harnessing data and digital technology’. While other government agencies have put forward relatively cautious approaches, such as the Department of Industry, Science and Resources’ proposal for mandatory AI guardrails, the Commission has emphasised the importance of Australia proactively embracing data and digital technologies.
Among the key issues raised in the Interim Report, three of particular interest are:
- the harnessing of AI’s productivity potential;
- new pathways to expand data access; and
- outcomes-based privacy reform.
In this article, we will examine the regulatory approaches proposed by the Commission and explore how these recommendations fit into the ongoing discussion around regulation of AI in Australia.
Recommendations from the Interim Report
Harnessing AI’s productivity potential
The Interim Report projects that, over the next decade, AI could contribute approximately $116 billion to Australia’s gross domestic product, translating to a potential increase of 4.3% in labour productivity growth. In the longer term, the Report also speculates that the productivity gain from AI may be even greater if it can be used to accelerate the pace of scientific advancement.
To illustrate the practical impact, the Report highlights examples of AI already driving productivity gains. Notably, the Commonwealth Bank has reported a 30% reduction in customer-reported fraud following the introduction of AI-enabled measures such as transaction alerts.
Looking ahead, the Report advocates for a regulatory framework designed to ensure Australia can capture these economic opportunities while managing associated risks. Specifically, it is proposed that Australian regulatory authorities and government agencies should undertake a comprehensive risk review of existing regulation to identify gaps in relation to AI. Following this review, rather than establishing new AI-specific laws or regulatory bodies, it suggests these gaps should be addressed by updating existing laws. This approach reflects the view that “AI can exacerbate existing risks of harm but does not create wholly new risks where none existed before.”
The approach taken by the Commission in reaching this recommended regulatory methodology is commercially focussed but diverges from the approach taken in other jurisdictions. The European Union, Canada, Japan and California have all either enacted or proposed AI-specific laws, whereas the Interim Report recommends building on existing frameworks rather than creating a new, stand-alone regime.
The Commission cautions that technology-specific rules should be a last resort, warning that “burdensome regulation” risks “forfeiting AI’s benefits” and could “stifle innovation.” This stance contrasts sharply with the more prescriptive frameworks emerging in the European Union and other jurisdictions, raising questions about whether a permissive approach will provide sufficient safeguards against the risks posed by AI.
The Interim Report advocates a segmented, light touch approach to AI regulation, where changes would be incremental and tailored to individual sectors rather than imposed through broad, AI-specific legislation. By contrast, if Australia were to adopt a stricter regime, dedicated AI laws might be more effective in ensuring consistent rules across industries.
So far, no jurisdiction has implemented the kind of segmented model proposed by the Commission. However, Japan and the United States (at the federal level) have taken more facilitative approaches, either easing existing rules or relying on voluntary principles to guide AI development and use. The Commission’s proposal can therefore be viewed as a middle ground between the EU’s prescriptive framework and the more permissive models in the US and Japan – though how well it strikes that balance may depend on whether Australia’s current regulations are already less stringent than those overseas.
The Commission recommends that Australia does not adopt the mandatory AI guardrails proposed by the Department of Industry, Science and Resources. It considers that risks can be better managed through outcomes-based regulation, and that the proposed guardrails are too blunt, failing to differentiate between categories of high-risk AI uses.
Separately, the Commission recommends that a new fair dealing exception be introduced to permit text and data mining of copyright materials for AI development. This would align Australia more closely with comparable jurisdictions that are moving to facilitate AI innovation, though it also raises questions about the balance between innovation and the rights of copyright holders.
You can read more about these topics in our article on the mandatory guardrails (available here) and on recent case law concerning the application of fair use in the United States (available here).
The Commission’s call for submissions in response to the Interim Report closed on 15 September 2025. It remains to be seen how key Australian stakeholders will respond to the proposed regulatory approach, and whether their feedback will prompt the Commission to adjust its recommendations.
New pathways to expand data access
The Commission sees data as a key driver of growth, arguing that easier access will help people and businesses while also boosting competition and innovation across the economy. At present, it says, Australia is not making the most of its data, and stronger rules are needed to open it up, in the same manner as the Consumer Data Right (CDR) expanded access to banking data (you can read more on the CDR here). The Interim Report suggests that new pathways should be tailored to each industry and the type of data involved. For example, more sensitive data (such as health information or bank details) could be subject to greater access restrictions and specific security requirements, while basic non-sensitive data could be made widely available with minimal barriers.
The Interim Report proposes that access to basic data be standardised within each sector through specific codes. These codes would set out what data can be shared, the APIs to be used, and the reliability standards that must be met. The codes would be overseen by government agencies, with penalties for non-compliance. This sector-by-sector model, built around prescribed APIs, closely mirrors the principles underpinning the CDR.
The Interim Report explicitly contrasts its proposed model with the EU’s General Data Protection Regulation (GDPR), widely regarded as the global benchmark for privacy law but often criticised for imposing strict limits on how businesses can use and share personal information.
In principle, expanding data access is a reasonable goal. However, allowing open access through APIs may not be the right solution if it risks exposing personal information to misuse. A more balanced approach could focus on sharing de-identified data, though this raises its own challenge – commercial entities may have little incentive to provide such data without compensation.
Outcomes-based privacy regulation
The Australian Privacy Principles (APPs), set out in Schedule 1 of the Privacy Act 1988 (Cth) (Act), implement what the Interim Report describes as “a mix of principles and controls-based requirements.” Entities must handle personal information in line with overarching principles, but in certain circumstances are also required to follow specific procedures and actions. As noted above, these obligations are generally less stringent than those imposed under the GDPR.
The Interim Report notes that some organisations consider the Act to be overly burdensome, with obligations that are not always proportionate to an entity’s size or the volume and sensitivity of the personal information it handles.
In response, the Interim Report recommends introducing an “alternative compliance pathway,” which would allow entities to choose between complying with a prescriptive legal framework or a more flexible, outcomes-based model. This would be a globally unique approach and appears, at least in principle, to prioritise commercial flexibility over the protection of personal information and individuals’ privacy rights – though the impact would ultimately depend on how the pathway is designed and implemented.
Regardless of its design, any alternative compliance pathway would seemingly only be relevant where it allows activities which would otherwise be inconsistent with the current requirements of the Act. While, on its face, this would appear to weaken existing privacy protections, the Interim Report suggests that it may actually improve privacy outcomes for individuals by forcing businesses to consider the interests of individuals in ways not mandated by the current framework.
This proposal is even more unexpected in the context of the multi-year effort to reform the Act (discussed here). The Commission has suggested that these ongoing reforms “risk entrenching existing problems” and might “exacerbate the regulatory burden” on regulated entities. The Interim Report notes that, in particular, the proposed ‘right to erasure’ would “impose a high compliance burden on regulated entities, with uncertain privacy benefits for individuals”, and recommends that this not be implemented.
Final thoughts
Recent advances in AI highlight its potential to drive significant productivity gains, but also the safety risks that can arise if it is developed and deployed without proper safeguards. While many government proposals have so far focused on managing these risks, the Interim Report has placed greater emphasis on the economic opportunities, describing “data and digital technologies” as “the modern engines of economic growth.”
Striking the right balance between fostering innovation and protecting individuals will be critical to ensuring Australia can capture the benefits of AI without exposing the community to unnecessary risks.
Next steps
The Commission’s final report, incorporating stakeholder submissions made in September, is due to be released later this year.
HWLE’s Privacy, Data Protection and Cyber Security team has extensive experience advising clients on current and emerging privacy and AI obligations. If you have any questions regarding the recommendations and insights in the Interim Report or this article, or concerns regarding the impact they may have on you, please do not hesitate to contact us.
This article was written by Amber Cerny, Partner, Daniel Kiley, Partner, Lucy Hannah, Special Counsel, and Maximilian Soulsby, Associate.