BY PRAVIK SOLANKI AND HANNAH LI
The following piece received 2nd Place in the Writing (Clinical) section of the Auricle Annual Writing Competition 2020.
Three years ago, something incredible happened.
Three years ago, a robot passed China’s medical licensing exam – composed mostly of patient cases – with flying colours, armed with information absorbed from dozens of textbooks, 400,000 journal articles and 2 million medical records.1 This seems to refute our conviction that the hard-earned fruit of clinical decision-making can only be attained through years of experience. If robots can learn to outperform us, should we be worried?
The emerging field of Artificial Intelligence (AI) is transforming every industry around the world.2 AI stems from the ‘information revolution’ of recent decades – ushering in a world where progress increasingly depends on the efficiency of information processing.3 Granted, this is the next broad chapter in human history after the industrial revolution,4 but what if we simply want to preserve our current way of living and working? Sadly, we can no longer ‘hit the brakes’ on our new direction – the global economy now depends on this growth, and besides, nobody knows where the brakes are anymore.5 My friend, we are in this for good.
Sci-fi movies notwithstanding, the promise of AI in healthcare seems to lie not in humanoid robots (which remain poorly developed), but rather in machine learning. Machine learning involves the use of automated pattern-seeking algorithms to ‘understand’ datasets through an iterative process.6 Much like a child acquiring human language, the algorithm adapts and improves with experience.7
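To make that analogy concrete, the sketch below is a minimal, hypothetical illustration in Python (a toy example I have constructed, not drawn from any of the projects cited in this piece): a simple classifier repeatedly adjusts its internal weights on each pass over a small dataset, becoming more accurate with ‘experience’.

```python
import numpy as np

# Toy data: 200 'patients', 2 measurements each, labelled 0 (healthy) or 1 (disease).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # the hidden rule the algorithm must discover

w = np.zeros(2)          # the model's adjustable 'knowledge'
b = 0.0
learning_rate = 0.1

for epoch in range(50):                      # repeated passes over the data
    p = 1 / (1 + np.exp(-(X @ w + b)))       # current predictions (logistic regression)
    grad_w = X.T @ (p - y) / len(y)          # how wrong we are, and in which direction
    grad_b = np.mean(p - y)
    w -= learning_rate * grad_w              # nudge the weights to reduce the error
    b -= learning_rate * grad_b
    if epoch % 10 == 0:
        accuracy = np.mean((p > 0.5) == y)
        print(f"epoch {epoch}: accuracy {accuracy:.2f}")
```

Real clinical models are vastly larger, but the principle is the same: iterative improvement driven by data rather than hand-written rules.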
Machine learning is already being used for the proactive surveillance of health in ways that usefully complement clinical medicine. This includes the OUTBREAK project, an Australian initiative employing machine learning on health, environmental and agricultural datasets to predict antimicrobial resistance before it hits our healthcare system.8 Another Australian initiative seeks to create the world’s first suicide monitoring system, using machine learning on national ambulance data to identify population patterns and hotspots.9 Projects like these attract millions of dollars from funding bodies and herald great promise in supplementing the role of clinicians, who clearly cannot be everywhere to prevent every health emergency imaginable.
In the clinical terrain, the increasing accuracy of machine learning in diagnostic tasks may make some clinicians nervous. One machine learning algorithm, trained on close to 130,000 images of skin lesions, could detect dermatological malignancies as well as 21 dermatologists.10 Another, trained on a dataset of over 34,000 chest x-rays, could detect malignant pulmonary nodules with an accuracy exceeding 17 out of 18 radiologists in the study.11 Machine learning models are even being developed for low-resource settings such as rural India, where a lack of ophthalmologists currently makes diabetic retinopathy notoriously difficult to screen for.12 The processing capacity of these algorithms is enormous – a sufficiently trained image-based machine learning algorithm could process a gargantuan 3000 images every second running off a $1000 graphics card.13 There’s just no way we could keep up, and we may even be outperformed.
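The throughput figure above refers to batched inference on a graphics card. As a rough, hypothetical illustration (the tiny network below and whatever number it prints are invented for demonstration, not taken from the cited studies), one could time a batch of dummy images through a small PyTorch model:

```python
import time
import torch
import torch.nn as nn

# A small, made-up convolutional classifier (a stand-in for a real dermatology model).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                         # e.g. benign vs malignant
)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()

images = torch.randn(256, 3, 224, 224, device=device)  # one batch of dummy images

with torch.no_grad():
    if device == "cuda":
        torch.cuda.synchronize()              # start timing from an idle GPU
    start = time.time()
    for _ in range(10):                       # ten batches = 2,560 images
        model(images)
    if device == "cuda":
        torch.cuda.synchronize()              # wait for the GPU to finish before stopping the clock
    elapsed = time.time() - start

print(f"~{10 * 256 / elapsed:.0f} images per second on {device}")
```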
Another CSIRO project aims to diagnose mental disorders through a series of decisions made by participants in a computer game. These algorithms find subtle differences – too subtle for humans to appreciate – that differentiate those with depression, those with bipolar disorder, and those with neither. Although in its early stages, this research ultimately aims to replace the ‘subjective’ diagnosis of mood disorders, motivated by the fact that most patients with bipolar disorder are initially misdiagnosed with depression in current clinical practice.14 Moreover, since a solution like this can be delivered digitally, it could even improve access to healthcare.15
Soon, in addition to diagnosis, machine learning could also transform the management of diseases. The AI supercomputer IBM Watson can already pore over decades of accumulated data on a patient to generate an accurate problem list (alongside relevant medical literature) in seconds.16 This aspect of AI could remove the “data clerk role” that every hospital intern would be familiar with – something we should certainly look forward to.17
However, if we are not careful, the same features of AI that invite progress can just as easily breathe new dangers into existence. The same IBM Watson once suggested treatment with a monoclonal antibody for an oncology patient, without considering the effect the drug’s anticoagulant action would have on the patient – who was already bleeding severely.18 If acted upon, this AI-driven decision could have been disastrous.
Even if we encode contraindications into algorithms, their internal decision-making process often remains unknown to us humans.19 This creates a ‘black box’ of complex statistical logic (potentially involving millions of variables) that cannot be explained in a transparent and accountable manner.20 Can we rely upon a decision we cannot explain – and could we defend a decision-gone-wrong on sound ethico-legal grounds?21 Uncomfortably, the ‘black box’ nature of algorithms continues to be a challenging hurdle in the safe implementation of AI.20
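The scale of that ‘black box’ is easy to underestimate. As a quick illustration, assuming PyTorch and torchvision are available, even a standard image-classification network contains tens of millions of adjustable parameters, none of which corresponds to a human-readable rule:

```python
from torchvision.models import resnet50  # a standard image-classification architecture

model = resnet50()  # untrained by default; we only want to count its parameters
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} trainable parameters")  # roughly 25 million for this architecture
```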
Moreover, the success of machine learning algorithms relies on two factors – the algorithm itself, and the data it is trained on.22 Where data is lacking, existing disparities in health could be exacerbated. To understand how this could happen, we can look to facial recognition systems. Because darker-skinned women are under-represented in training data, commercial algorithms have been shown to misclassify gender at a rate of up to 35% for darker-skinned women, compared with only 0.8% for lighter-skinned men.23 Since ethnic minorities are less represented in medical datasets than White men,24 similar disparities could emerge when machine learning is applied to healthcare, provoking questions around fairness and justice.
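One practical response is to audit a model’s error rate separately for each demographic group before deployment. The sketch below is purely illustrative (the groups, labels and error rates are simulated, not taken from reference 23):

```python
import numpy as np
import pandas as pd

# Simulated evaluation results: true labels, model predictions, and a protected attribute.
rng = np.random.default_rng(1)
n = 1000
group = rng.choice(["group_a", "group_b"], size=n, p=[0.85, 0.15])  # group_b under-represented
y_true = rng.integers(0, 2, size=n)
# Simulate a model that is accurate for the majority group and much weaker for the minority.
flip = np.where(group == "group_a", rng.random(n) < 0.05, rng.random(n) < 0.30)
y_pred = np.where(flip, 1 - y_true, y_true)

results = pd.DataFrame({"group": group, "correct": y_true == y_pred})
error_rates = 1 - results.groupby("group")["correct"].mean()
print(error_rates)   # the gap between the groups is the disparity we need to look for
```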
In reality, no amount of AI computation can ever replace the integrity of human values. Conversely, our mammalian brains cannot rival the exponential progression of AI computational power. Recognising the unique strengths of both parties, the World Medical Association in 2019 stated that AI in healthcare should refer instead to “augmented intelligence” to better reflect the unfolding reality.25 As prominent AI expert and cardiologist Eric Topol puts it, the ultimate goal is for “synergy” between humans and AI.18
And of course, the empathy and holistic social understanding clinicians bring to the table – that unique ‘human touch’ – is something no machine can ever emulate. As the American physician Francis Peabody once noted, the task of clinicians is to translate “that case of mitral stenosis in the second bed on the left” into “Henry Jones, lying awake [at] night while he worries about his wife and children.”26 Only we can understand Henry Jones as a human being rather than a disease – this ball remains well and truly in our court.
In the 21st century, clinicians and AI are fostering an evolving symbiotic relationship, with each contributing their unique talents. Our challenge as clinicians will be to maintain our clinical acumen, social values and human empathy as we open The Next Chapter with cautious optimism.
References
- Yan A. How a robot passed China’s medical licensing exam. China: South China Morning Post; 2017 [Available from: https://www.scmp.com/news/china/society/article/2120724/how-robot-passed-chinas-medical-licensing-exam].
- Hajkowicz SA, Karimi S, Wark T, Chen C, Evans M, Rens N, et al. Artificial Intelligence: Solving problems, growing the economy and improving our quality of life. Australia: CSIRO Data61; 2019.
- Gurría A. From the information revolution to a knowledge-based world. OECD Observer; 2012 [Available from: https://oecdobserver.org/news/fullstory.php/aid/3905/From_the_information_revolution_to_a_knowledge-based_world.html].
- Harari YN. Sapiens: A Brief History of Humankind. London, UK: Penguin Random House; 2011.
- Harari YN. Homo Deus: A Brief History of Tomorrow. London, UK: Penguin Random House; 2015.
- Benke K, Benke G. Artificial Intelligence and Big Data in Public Health. Int J Environ Res Public Health. 2018;15(12).
- Hamet P, Tremblay J. Artificial intelligence in medicine. Metabolism. 2017;69S:S36-S40.
- OUTBREAK. How can we prevent antimicrobial resistance? Combining AI and big data to track, trace and tackle AMR. Australia: OUTBREAK; 2020 [Available from: https://outbreakproject.com.au/antimicrobial-resistance-solution/].
- Monash University. Google grant to establish world-first suicide monitoring system. Melbourne, Australia: Monash University; 2019 [Available from: https://www.monash.edu/news/articles/google-grant-to-establish-world-first-suicide-surveillance-system].
- Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115-8.
- Nam JG, Park S, Hwang EJ, Lee JH, Jin KN, Lim KY, et al. Development and Validation of Deep Learning-based Automatic Detection Algorithm for Malignant Pulmonary Nodules on Chest Radiographs. Radiology. 2019;290(1):218-28.
- Gulshan V, Rajan RP, Widner K, Wu D, Wubbels P, Rhodes T, et al. Performance of a Deep-Learning Algorithm vs Manual Grading for Detecting Diabetic Retinopathy in India. JAMA Ophthalmol. 2019.
- Beam AL, Kohane IS. Translating Artificial Intelligence Into Clinical Care. JAMA. 2016;316(22):2368-9.
- Purtill J. Australian researchers design computer game to diagnose depression and bipolar. Australia: CSIROscope; 2019 [updated 10 Oct 2019. Available from: https://blog.csiro.au/computer-game-to-diagnose-depression-and-bipolar/].
- Blashki G. Would you trust AI with your mental health? Pursuit: University of Melbourne; 2019 [Available from: https://pursuit.unimelb.edu.au/articles/would-you-trust-ai-with-your-mental-health].
- Miller DD, Brown EW. Artificial Intelligence in Medical Practice: The Question to the Answer? Am J Med. 2018;131(2):129-33.
- Hsu J. Artificial Intelligence Could Improve Health Care for All—Unless it Doesn’t. TIME; 2019 [Available from: https://time.com/5650360/artificial-intelligence-health-care/].
- Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44-56.
- Mittelstadt BD, Allo P, Taddeo M, Wachter S, Floridi L. The ethics of algorithms: Mapping the debate. Big Data & Society. 2016;3(2).
- Australian Academy of Health and Medical Sciences. Artificial Intelligence in Health: Exploring the Opportunities and Challenges. Report from a Roundtable Meeting. Australian Academy of Health and Medical Sciences; 2020.
- London AJ. Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability. Hastings Cent Rep. 2019;49(1):15-21.
- Domingos P. A few useful things to know about machine learning. Communications of the ACM. 2012;55(10):78-87.
- Zou J, Schiebinger L. Design AI so that it’s fair. Nature. 2018;559:324-6.
- Nordling L. Mind the gap. Nature. 2019;573:S103-5.
- World Medical Association. WMA Statement on Augmented Intelligence in Medical Care. 70th WMA General Assembly, Georgia; 2019.
- Peabody FW. The Care of the Patient. JAMA. 1927;88(12):877-82.