The role of artificial intelligence in tailored healthcare

Algorithms aiding decision-making connected to patients and healthcare processes offer a host of health, social and economic opportunities. But with opportunity comes risk.

Advancements in the field of artificial intelligence (AI), here used in the broadest sense of the term, have started laying the foundation for the application of AI in clinical decision making.

The next generation of data collection and analysis technologies will create new demands on clinicians: they must integrate increasing volumes of health data, from many new sources of input, to make accurate and appropriate decisions for the well-being of their patients. Apart from the patients’ clinical data, these sources range from local, national and international guidelines on standards of care for specific diseases and interventions, and the ever-growing body of medical research, to the numerous clinical trials underway globally at any given time. This challenge is compounded by the administrative demands made on clinicians, which have led to a steady decline in face-to-face time between clinicians and patients1. What role does AI have to play in addressing these challenges?

AI and streamlined healthcare

Patients are being represented in more complex, clinically relevant ways, from their genetic to their cardiovascular risk profiles. Some molecular profiling methods in particular produce data of such immense dimensionality that even the most gifted clinician cannot analyze them without the help of sophisticated algorithms. AI comes into its own in this setting, where algorithms are already being used to develop molecularly based prediction and prognostication profiles that can inform clinicians’ risk stratification, patient management and treatment-tailoring strategies.
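To make the kind of algorithm involved more concrete, the sketch below trains a penalized classifier on synthetic, high-dimensional "molecular" data. It is an illustration only: the sample sizes, feature counts and choice of model are assumptions for demonstration, not a description of any specific clinical tool.

```python
# Illustrative sketch only: a penalized classifier for high-dimensional molecular data.
# The data are synthetic; sample sizes, feature counts and the model choice are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic "molecular profiles": 500 patients x 10,000 features (e.g. gene expression),
# far more features than any clinician could weigh up unaided.
X = rng.normal(size=(500, 10_000))
# Synthetic outcome driven by a handful of features, mimicking a sparse molecular signal.
risk_score = X[:, :20] @ rng.normal(size=20)
y = (risk_score + rng.normal(scale=2.0, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# L1-penalized logistic regression keeps only a small subset of features,
# one common way to cope with such extreme dimensionality.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
n_selected = int(np.count_nonzero(model.coef_))
print(f"Held-out AUC: {auc:.2f}, features retained: {n_selected} of {X.shape[1]}")
```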

One current area of research aims to develop an algorithm that can suggest the optimal combination, release schedule and dosage of a set of drugs suited to a particular molecular profile. This could potentially be delivered through a polypill, which contains a combination of several medications.

In the field of surgery, as robot-assisted surgery enters clinical practice, the complex procedures performed by camera-aided mechanical arms could potentially be enhanced by AI assessing the problem space and guiding the robots through the optimal sequence for the intervention.

Health systems also stand to gain greatly from the application of AI. Big medical data in combination with AI will aid the development of risk-based outcome predictions, for example of readmission, sepsis and relapse in cohorts of patients at risk of these complications2. There are also large gains in efficiency and productivity to be made through AI-powered workflow improvements, including reporting and back-office administration such as coding, billing and scheduling. One example is iQueue, which analyzes data on operating room use and applies predictive analytics and machine learning to identify inefficient patterns of use and reallocate operating room time as needed.
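The idea behind such workflow tools can be sketched very simply. The example below uses toy booking data and an assumed utilisation threshold to flag chronically under-used operating-room blocks for reallocation; it is not iQueue’s actual method, only an illustration of the general approach.

```python
# Illustrative sketch only: flagging under-used operating-room blocks from historical
# booking data. Column names, threshold and toy data are assumptions, not a product's method.
import pandas as pd

# Toy booking log: one row per scheduled block, minutes booked vs. minutes available.
bookings = pd.DataFrame({
    "room":              ["OR-1", "OR-1", "OR-2", "OR-2", "OR-3", "OR-3"],
    "weekday":           ["Mon",  "Tue",  "Mon",  "Tue",  "Mon",  "Tue"],
    "minutes_booked":    [420,     390,    180,    150,    460,    200],
    "minutes_available": [480,     480,    480,    480,    480,    480],
})

# Average utilisation per room and weekday.
utilisation = (
    bookings
    .assign(utilisation=bookings["minutes_booked"] / bookings["minutes_available"])
    .groupby(["room", "weekday"], as_index=False)["utilisation"]
    .mean()
)

# Blocks persistently below a utilisation threshold are candidates for reallocation.
THRESHOLD = 0.6
candidates = utilisation[utilisation["utilisation"] < THRESHOLD]
print(candidates)
```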

Despite the potential of AI in healthcare, the ultimate aim is in fact synergy with humans in healthcare, rather than replacement. In the transitional phase, validation and verification of AI algorithms are of paramount importance, as is bridging the educational gap for clinicians who may not understand AI well or readily accept its implementation. By gradually offloading to machines the functions they do best and combining these with tasks best suited to humans, this process can create and enhance a hybrid workforce.

The risk of growing data, biased algorithms and unequal opportunity

Data from the next generation of medical sensors, together with the devices and machine learning algorithms that are being developed and applied to deliver value to both patients and healthcare systems, represent two sides of the same coin in the data-driven medical era. As a result, discussions on time of impact, uncertainties, sustainability and governance are inextricably linked for these two broad technology areas.  

The lowest-threshold use of AI in healthcare has been in the interpretation of medical images, such as CT scans for the detection of trauma or fractures, pathology slides for the classification of breast cancer subtypes, or retinal scans. A growing number of AI medical algorithms are receiving FDA approval as Class II medical devices3, meaning that these algorithms, which carry a moderate to high risk to the patient, are approved for use across the US.

Today, we are starting to see fledgling initiatives that harness data streams from the sources mentioned above, together with the computing ability to influence patient experiences and outcomes, in a range of pilots and proofs-of-concept across the globe. Even these initiatives are challenged by the fact that 80% of healthcare data today is unstructured, and therefore unreadable by machines, although some argue that AI will in time overcome this barrier too. Towards 2030, implementation of these schemes is expected to reach larger segments of the global population, although for wide-scale roll-out the challenge of unifying and integrating electronic patient records across platforms will be a limiting factor.

Beyond these administrative challenges, data quality is essential for successful synergy between AI and the human contribution to healthcare. Health data today remains largely unstructured, despite ongoing efforts to address this issue at numerous levels in every modern health system. The quality of health data encompasses multiple aspects, including architecture, modelling, integration and interoperability. All of these have knock-on effects when a health dataset is introduced into a decision-making setting, affecting the specificity and sensitivity of the algorithms trained on it.
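To illustrate that last point, the sketch below uses synthetic data to show how errors in training labels, one symptom of poor data quality, can erode the sensitivity and specificity of a model trained on them. The numbers and model are illustrative assumptions, not results from any real health dataset.

```python
# Minimal sketch, assuming synthetic data: how noisy training labels can erode the
# sensitivity and specificity of a model trained on them.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

def sensitivity_specificity(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tp / (tp + fn), tn / (tn + fp)

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = X[:1500], X[1500:], y[:1500], y[1500:]

rng = np.random.default_rng(0)
for noise in (0.0, 0.1, 0.3):          # fraction of training labels flipped
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise
    y_noisy[flip] = 1 - y_noisy[flip]

    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    sens, spec = sensitivity_specificity(y_test, model.predict(X_test))
    print(f"label noise {noise:.0%}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```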

Incomplete or biased datasets can similarly introduce unwanted bias into the algorithms and analyses. We know, for example, that genomic datasets and genetic knowledge bases today are heavily skewed towards certain populations. A recent study showed that current European population-based databases of breast cancer genetic variants do not adequately serve non-European populations in risk assessments4. The types of data collected must also be continuously and critically appraised to ensure that clinically relevant data are being gathered and applied correctly.
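One simple way such skew can be surfaced is sketched below: a model trained mostly on one synthetic population is evaluated separately on a well-represented and an under-represented group. The group labels, cohort sizes and data are assumptions for demonstration, not real genomic cohorts.

```python
# Minimal sketch, assuming synthetic data: checking whether a model trained mostly on one
# population performs worse on an under-represented one.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Two synthetic "populations" with slightly different feature distributions.
X_a, y_a = make_classification(n_samples=1800, n_features=15, random_state=1)
X_b, y_b = make_classification(n_samples=200, n_features=15, shift=0.5, random_state=2)

# Training data dominated by population A (the skew the text describes).
X_train = np.vstack([X_a[:1600], X_b[:50]])
y_train = np.concatenate([y_a[:1600], y_b[:50]])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluate separately on held-out data from each population.
for name, X_test, y_test in [("A (well represented)", X_a[1600:], y_a[1600:]),
                             ("B (under-represented)", X_b[50:], y_b[50:])]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"Population {name}: AUC {auc:.2f}")
```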

We may be entering into an era where you’re only as healthy as the quality of your health data. Data quality assurance, therefore, is critical on both social and economic levels.

AI has the potential to improve the quality of care through better workflows and reduced medical errors5; to enable the development of more sustainable value- or quality-based care; and to reverse the decline in face-to-face time between clinician and patient, as doctors’ and nurses’ time may be freed up for increased patient interaction, as noted in How next-gen sensors will shape our health. Yet the extent to which AI is scalable, and the extent to which it will be acceptable to use algorithms for patient care and decision making, remain to be seen. The opacity of today’s medical algorithms may limit their clinical uptake, owing to clinicians’ scepticism about how they function. Consequently, explainable or transparent AI is one emerging area of research working to counter these concerns6, and models for continuously validating and verifying these algorithms will be a necessity.
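As a simple illustration of what "explainable" can mean in practice, the sketch below uses permutation importance, one model-agnostic technique among many, to show which features a prediction model relies on. The feature names are hypothetical placeholders and the data are synthetic; this is not a description of any particular clinical system.

```python
# Minimal sketch, assuming synthetic data: permutation importance as one simple,
# model-agnostic way to make a prediction model more transparent to clinicians.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, n_informative=3, random_state=0)
feature_names = [f"marker_{i}" for i in range(X.shape[1])]   # hypothetical biomarkers

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does held-out accuracy drop when each feature is shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```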

Contributors

Main author: Sharmini Alagaratnam

Editor: Tiffany Hildre

  1. Electronic health record logs indicate that physicians split time evenly between seeing patients and desktop medicine. Tai-Seale et al., Health Affairs, 2017.
  2. Scalable and accurate deep learning with electronic health records. Rajkomar et al., npj Digital Medicine, 2018.
  3. https://medicalfuturist.com/fda-approvals-for-algorithms-in-medicine/
  4. Germline variation in BRCA1/2 is highly ethnic-specific. Bhaskaran et al., International Journal of Cancer, 2019.
  5. Artificial intelligence, bias and clinical safety. Challen et al., BMJ Quality and Safety, 2019.
  6. What do we need to build explainable AI systems for the medical domain? Holzinger et al., arXiv, 2017.