A new AI-driven era for healthcare services and therapeutic goods
Artificial intelligence (AI) is transforming the healthcare industry.
The rise of AI-driven healthcare services and therapeutic goods is not only a product of technological advancement, but also marks a fundamental shift in how healthcare is delivered to patients.
For instance:
- medical devices with AI-based, predictive analytics can be used to analyse vast amounts of real-time patient data, identifying patterns and predicting potential health risks before these become critical;
- AI-based algorithms can analyse diagnostic images and pathology results, enabling healthcare practitioners to make timely and accurate diagnoses of health conditions; and
- AI algorithms can analyse genomic data to identify ‘personalised’ treatment options for specific patients, reducing the need for a ‘trial and error’ approach whilst also improving outcomes.
AI-based innovations might soon extend even further. For instance, it is not difficult to imagine that AI could also be used in the development of new medicines and to aid healthcare practitioners in carrying out complex surgeries on patients with greater precision.
Overview of the regulatory pathway and where things currently stand
Accompanying these benefits, regulatory challenges loom over both developers of, and healthcare practitioners who use, AI solutions.
The regulation of AI in healthcare is still in its infancy and regulators are largely playing catch up. This means that there are currently no clear, established guidelines as to how specific regulatory challenges posed by AI solutions might be managed. For now, the onus lies largely on developers and healthcare practitioners to justify why and how AI solutions can be used in the delivery of healthcare services.
The crux of healthcare regulation is to ensure the safety and efficacy of healthcare services and therapeutic goods. Guidance issued by the Therapeutic Goods Administration (TGA) makes clear that AI solutions are no exception and will often be regulated by therapeutic goods legislation if a solution is intended to be used for a medical purpose (eg diagnosis, treatment of a health condition).1 The guidance also states that any such AI-based devices must operate in a way that is both transparent and reliable.2
As noted in our earlier article, the TGA has already taken steps to improve the regulation of software-based medical devices. This included updated regulations clarifying the applicable classification rules and introducing new regulatory requirements for software-based medical devices. The TGA specifically required both new applicants and existing sponsors to review their devices and, if necessary, take steps to ensure compliance with the updated regulations. A similar approach is likely to be taken with regulatory reforms concerning AI-based medical devices, to ensure that both new and existing devices align with the new requirements.
However, this might be easier said than done.
Shining light on the black box: Developers and AI challenges
Traditional software follows a clear set of rules and algorithms, which can be readily inspected and validated. AI, on the other hand, often operates as a ‘black box’, making it difficult to identify exactly how specific decisions are made. This makes it difficult for developers to trace how particular decisions are reached and, in turn, to justify a solution’s efficacy and reliability for therapeutic purposes. The TGA already appears to be moving to restrict these kinds of AI systems, with new guidance specifically stating that ‘black box’ approaches will generally not be accepted in applications for registration of AI-based medical devices without further evidence as to the safety and efficacy of the relevant device.3
One way developers might manage this issue is by maintaining comprehensive documentation of the development process for the relevant AI model, including its architecture, the training data used and the rationale for decisions made during development of the model.4 Established techniques like LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) could also be used to interpret an AI model’s decision-making processes in a consistent way and to help explain the basis for relying on particular AI-based decisions.
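By way of illustration only, the following is a minimal sketch (in Python) of how an explainability technique such as SHAP might be used to attribute a single model output to its inputs for documentation purposes. The model, feature names and data below are hypothetical assumptions for the example; a real medical device would require far more rigorous development and validation.

```python
# A minimal sketch only, assuming a hypothetical risk-scoring model and feature
# names; it illustrates how SHAP can attribute a single prediction to individual
# inputs so that the rationale for an AI-based output can be documented.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.random((200, 4))                        # hypothetical patient features
y_train = 0.6 * X_train[:, 0] + 0.4 * X_train[:, 2]   # hypothetical risk score
feature_names = ["age_scaled", "blood_pressure", "glucose", "bmi"]  # assumed names

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_train, y_train)

# Attribute one prediction to each input feature and record the contributions
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X_train[:1])[0]

for name, value in zip(feature_names, contributions):
    print(f"{name}: contribution {value:+.3f}")
```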
Another challenge is that the accuracy and reliability of AI solutions often depend heavily on the quality of the training data used in initial development. Poor-quality or biased data can lead to unreliable and/or harmful outcomes for patients, particularly in novel or more complex cases. To manage this issue, developers should ensure that AI solutions are developed in accordance with stringent data quality protocols: training datasets should be representative, diverse and free from bias.5 Rigorous validation and testing procedures should also be implemented to ensure that AI models can generalise to new, unseen data.
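Again purely by way of illustration, some of these data quality checks could be codified in simple scripts. The sketch below assumes a hypothetical tabular training dataset with ‘sex’ and ‘age’ columns and is not a substitute for a formal data governance framework.

```python
# A minimal sketch, not a compliance framework: illustrative data quality checks a
# developer might run on a hypothetical tabular training dataset. The file name and
# the 'sex' and 'age' columns are assumptions for the example only.
import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical dataset

checks = {
    "worst_missing_rate": df.isna().mean().max(),                         # highest share of missing values in any column
    "sex_minority_share": df["sex"].value_counts(normalize=True).min(),   # check representativeness
    "age_range_years": (df["age"].min(), df["age"].max()),                # coverage of the intended population
    "duplicate_rows": int(df.duplicated().sum()),                         # avoid inflated or leaked records
}
print(checks)
```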
Whilst AI solutions are often heralded for their continuous learning capabilities, those same capabilities can make outcomes less predictable and less transparent. Robust change management processes should be put in place to continuously monitor, document and review significant updates or changes arising from continuous learning. Regular performance evaluations and re-certifications of AI solutions might also be necessary to assure the TGA and users that safety and efficacy remain a focus after the initial roll-out of AI-based medical devices.
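As a simple illustration of how such a change management process might be supported in code, the sketch below (with assumed baseline figures and version labels) flags a model update for re-validation if its performance on a fixed reference dataset falls below an agreed tolerance.

```python
# A minimal sketch, with assumed baseline figures: a simple gate that flags a model
# update for re-validation if its performance on a fixed reference dataset falls
# below an agreed tolerance, as part of a broader change management process.
BASELINE_SENSITIVITY = 0.92   # assumed figure accepted at initial registration
TOLERANCE = 0.02              # assumed acceptable drop before re-validation

def review_update(model_version: str, sensitivity_on_reference_set: float) -> None:
    """Record the evaluation result and flag the update if it breaches the baseline."""
    if sensitivity_on_reference_set < BASELINE_SENSITIVITY - TOLERANCE:
        print(f"{model_version}: flag for re-validation "
              f"(sensitivity {sensitivity_on_reference_set:.2f})")
    else:
        print(f"{model_version}: within tolerance "
              f"(sensitivity {sensitivity_on_reference_set:.2f})")

review_update("v1.3.0", 0.93)  # hypothetical update that passes
review_update("v1.4.0", 0.88)  # hypothetical update that requires review
```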
Human intelligence: Healthcare providers and AI challenges
Healthcare practitioners and other providers (eg hospitals) who use AI-based medical devices will likely face similar challenges in justifying the use of these devices in their roles.
AI-based ‘black box’ models, for example, will also make it difficult for healthcare practitioners to trace back how certain decisions are made and therefore establish why these should be relied on in the provision of patient care. Indeed, establishing a robust and acceptable rationale for clinical decisions will often be central to medical negligence claims instituted against a healthcare practitioner and/or provider and even professional disciplinary matters.
To this end, healthcare practitioners and providers will largely follow the lead of developers in interpreting the decision-making processes of an AI-based device. However, providers should maintain their own independent understanding of how an AI-based medical device might interpret patient data, deal with common variables and make decisions that might be used in clinical activities (eg decisions as to whether to screen a patient for a particular illness, or discharge a patient from hospital care). Healthcare practitioners and providers should therefore familiarise themselves with any user manuals and other operational guidance issued by the developer of a particular medical device. This will enable healthcare practitioners and providers to understand how AI-based medical devices operate (including any potential shortfalls), and also to verify the basis for that understanding, should they be required to do so. Established techniques like LIME and SHAP could also be used by healthcare practitioners and providers to aid their interpretation of, and further justify their reliance on, AI-based decisions.
The establishment of clear protocols for the regular evaluation of an AI-based medical device will also be necessary to enable a healthcare practitioner and/or provider to effectively manage the challenges arising out of continuous learning capabilities. Such protocols should set out specific timelines and procedures for evaluating the performance, safety, and efficacy of AI algorithms and related functionalities within the device. There should also be additional measures in place to facilitate seamless communication between developers and the healthcare practitioner/provider to ensure that any emerging concerns or updates can be dealt with promptly. This kind of approach will help to ensure that AI-based medical devices remain up-to-date, reliable, and aligned with evolving clinical standards and patient needs.
More generally, healthcare practitioners and providers should refrain from establishing protocols or procedures under which an AI solution undermines a healthcare practitioner’s clinical independence or obviates the need for the practitioner’s own exercise of clinical judgment.6
How can HWL Ebsworth help?
HWL Ebsworth’s Intellectual Property and Technology team has extensive experience in advising businesses on intellectual property, software issues and therapeutic goods. If you are concerned about your development or use of new AI technology in your healthcare business, please do not hesitate to contact us for further information on how we can assist you.
This article was written by Luke Dale, Partner, Nikki Macor Heath, Special Counsel, and Elham Bolbol, Solicitor.
1 See Artificial Intelligence (AI) and medical device software | Therapeutic Goods Administration (TGA).
2 See Artificial Intelligence (AI) and medical device software | Therapeutic Goods Administration (TGA).
3 See Artificial Intelligence (AI) and medical device software | Therapeutic Goods Administration (TGA).
4 See generally WHO outlines considerations for regulation of artificial intelligence for health.
5 See para 3.5, Artificial Intelligence in Healthcare – AMA.
6 See para 2.23, Artificial Intelligence in Healthcare – AMA.