How do I know if my AI algorithm idea is feasible in a healthcare setting?
Key aspects that must be considered for data-driven medicine tools
As a medical practitioner and entrepreneur, this is a question that I am quite frequently asked: How do I know if my artificial intelligence (AI) algorithm is feasible for healthcare?
With so few examples of successful deployments in current clinical practice, it’s not easy to pinpoint the most critical factors…but here’s my abridged answer: while an algorithm developer must consider many aspects, a handful must be addressed in order to assess the feasibility of a solution.
Critical factors to assess feasibility in AI solutions:
Data availability and quality
Resources
Deployment
Regulatory and clinician approval
Maintenance over time
This article offers an overview of these factors to help innovators start working through the potential development of an AI solution in healthcare. While the factors listed above are indeed the most important, keep in mind that other aspects could also be relevant depending on the use case and the specific healthcare setting.
DATA AVAILABILITY AND QUALITY
It’s been said…and said time and again. Data is the most pivotal factor because it is the foundation for ML/AI technologies and data-driven medicine, and when the stakes are literally a matter of life and death, both the availability and quality of data must be ensured.
Availability has to do with data and information being accessible when and where it is needed by those authorized to use it (i.e., doctors, nurses, staff).
Quality has to do with accuracy, validity, consistency, and completeness, as well as relevancy (i.e., is it up to date?).
As a simple example, a patient comes in for an appointment and the provider pulls up their electronic health record (EHR). When looking at the patient’s record, is their identifying information documented from their first visit? Is their birth date listed the same on all forms? Is a medical history included, as well as past visits with corresponding examination and lab results? Are standardized diagnosis codes used? Has the information been updated as necessary?
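To make these checks concrete, here is a minimal Python sketch of an automated record-quality audit. The field names (`patient_id`, `birth_date`, `forms`, `last_updated`) are hypothetical placeholders for illustration; a real EHR schema (e.g., HL7 FHIR) is far richer.

```python
from datetime import date

# Hypothetical required fields; a real EHR schema (e.g., FHIR) is far richer.
REQUIRED_FIELDS = ["patient_id", "birth_date", "medical_history", "diagnosis_codes"]

def quality_issues(record: dict) -> list[str]:
    """Return a list of data-quality problems found in one EHR record."""
    issues = []
    # Completeness: every required field must be present and non-empty
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            issues.append(f"missing or empty field: {field}")
    # Consistency: the birth date must match across all forms that list it
    birth_dates = {f["birth_date"] for f in record.get("forms", []) if "birth_date" in f}
    if len(birth_dates) > 1:
        issues.append(f"inconsistent birth dates across forms: {sorted(birth_dates)}")
    # Relevancy: flag records that have not been updated in over a year
    last_updated = record.get("last_updated")
    if last_updated and (date.today() - last_updated).days > 365:
        issues.append("record not updated in over a year")
    return issues
```

In practice such audits run continuously across the whole record store, and the issue counts become the quality baseline against which an algorithm's input data is judged.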
To learn more about utilizing real-world data, check out the insights piece called "Is your real-world evidence good enough?"
RESOURCES
This factor is critical for any digital healthcare innovation project. Do you have the resources and equipment needed for deployment and for improving or addressing data quality? This could involve money, time, staff, training, and retraining.
When scrutinizing all the possible AI outcomes, be sure to predict the cost of each one. One example of predicting time costs: how long does your solution take to act on an outcome? This aspect is especially important because time plays a critical role in many serious health conditions.
DEPLOYMENT
It’s no surprise that the biggest deployment challenge has to do with data. Due to traditional (pre-digital) methods of data collection and limited resources, healthcare facilities struggle to meet quality standards. When assessing feasibility, innovators must examine how deployment works (or doesn’t work) when their AI solution encounters weaker data. Additionally, data privacy and security measures must be taken due to the alarming and increasing trend of cyberattacks in healthcare.
Another deployment issue: many innovators neglect to consider the full implications of the work capacity (the algorithm’s ability to respond to a prediction) before deployment. The best way to understand these implications is to remain focused on the Outcome-Action Pairing framework, or how the solution acts on or mitigates an output. Inherent biases need to be identified during testing phases, interoperability within a network or hospital must be ensured, and solutions need the flexibility to adapt to shifting patient populations.
REGULATORY AND CLINICIAN APPROVAL
For a solution to work in clinical practice, the approval not only of regulatory bodies but also of the clinicians themselves is pivotal. From the regulatory point of view, the algorithm, or SaMD (software as a medical device), requires clinical evaluation, which involves the following (per the IMDRF):
Valid clinical association: Is there a valid association between your algorithm output and the targeted clinical condition?
Analytical validation: Does your solution correctly process input data to generate accurate, reliable, and precise output data?
Clinical validation: Does the use of the algorithm’s accurate, reliable, and precise output data achieve its intended purpose in the target population?
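Analytical validation ultimately comes down to measuring how faithfully the algorithm’s outputs track ground truth. As an illustration only, and not a substitute for a formal clinical evaluation, here is a minimal sketch of the standard binary-classification metrics (1 = condition present):

```python
def analytical_metrics(y_true: list[int], y_pred: list[int]) -> dict:
    """Compute basic accuracy metrics for a binary classifier
    against a labeled ground-truth set (1 = condition present)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn) if (tp + fn) else None,  # true positive rate
        "specificity": tn / (tn + fp) if (tn + fp) else None,  # true negative rate
        "ppv": tp / (tp + fp) if (tp + fp) else None,          # positive predictive value
        "accuracy": (tp + tn) / len(y_true),
    }
```

Which metric matters most depends on the clinical use case: a screening tool usually prioritizes sensitivity, while a confirmatory tool prioritizes specificity and positive predictive value.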
For full details on the regulatory guidance, review the entire IMDRF document.
Clinical investigations can be extensive, time-consuming, and expensive, so take the regulatory guidelines into account from the earliest stages of development.
From the clinician's perspective, in addition to the clinical evidence, education resources and AI learning tools need to be provided. The metrics of AI algorithms may not demonstrate their value for patient care in a straightforward way, so innovators may need to translate this information into understandable language so that clinicians can trust its application.
MAINTENANCE OVER TIME
Besides addressing weaknesses, biases, and usability after deployment, developers will need to pay attention to the eventual shifting of patient populations. An algorithm may be standardized, but the healthcare setting is an ever-changing environment. While it’s difficult to predict every future healthcare trend, AI adaptability, the evolving operational aspects of clinical practice, and the need for periodic staff retraining all require acknowledgment.
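One simple way to acknowledge shifting populations in practice is to monitor incoming data for drift. The sketch below is a deliberately crude illustration: it flags when the mean of a single feature (say, patient age) moves away from the deployment-time baseline. Production monitoring would use more robust drift tests across many features.

```python
import statistics

def mean_shift(baseline: list[float], current: list[float],
               threshold: float = 2.0) -> bool:
    """Flag drift when the current cohort's mean lies more than `threshold`
    standard errors from the baseline mean (a crude one-sample z-test)."""
    base_mean = statistics.mean(baseline)
    base_sd = statistics.stdev(baseline)
    # Standard error of the current sample's mean, using the baseline spread
    se = base_sd / len(current) ** 0.5
    z = abs(statistics.mean(current) - base_mean) / se
    return z > threshold
```

When such a check fires, it is a signal to revalidate (and possibly retrain) the algorithm on the new population before continuing to trust its outputs.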
Needless to say, successfully translating AI research into data-driven medicine for clinical practice is an enormous challenge. However, it’s also an exciting opportunity with the potential for enormous impact that improves care and saves lives. My advice: Evaluate these five feasibility factors realistically. The data needs to be sound, resources need to be present, deployment and maintenance must be well-planned, and solutions must jump through regulatory hoops and meet clinicians’ approval.