A Hapless Hypothesis


The following piece received 2nd place in the Writing (Preclinical) section of The Auricle’s 2020 Annual Writing Competition.

“What is your full name?”

“Megha N”

“And how would you like to be addressed?”

“Just Megha is fine.”

“Thank you, Megha. What’s your occupation?”

“I used to be an English teacher, but I’m not working now.”

“That’s wonderful, literature is an intriguing discipline indeed, hope you don’t miss it too much!”

Her (is it a ‘her’? A ‘him’? An ‘it’?) mellow voice never lost its grace; even the inflections were measured. The literature comment did sound a little passive-aggressive, but then again it had not mastered the nuances of sarcasm yet, so she was still safe. The semantics student in her marvelled at how the AI nurse could sound so profoundly Homo sapiens yet not quite human, like an intricate melody missing one chord.

“What would you like to discuss today Megha?”

The moment she was dreading had arrived. She felt the struggle her students had so often complained about – the introduction. The inability to start an essay on blank paper, the sudden moment when all your words leave you. She did not know where to begin.

How could she put into words the intangible ‘sadness’ engulfing her?

How could she convey the grease in her hair from not showering for weeks, the cuffs in her jeans from wearing them every day, the dots of blood at the ends of her fingers where she had obsessively bitten her nails off, the knee that wouldn’t stop jerking, the dreams that wouldn’t stop coming, the darkness that never left? The English teacher who used to spend day after day adding, changing and enhancing words had none herself.

What brought her to the clinic today was the incessant pressure from her mother and brother, their slight concern growing to serious worry as the days passed. Her matted hair and downcast eyes had told them all they needed, but it didn’t seem to be enough here.

The doctor’s eyes blinked every 2 seconds, convincingly human-shaped, the irises startlingly intense, as though they were looking right at you. But they only saw what they were trained to see, not beyond.

The question came again, the same pleasant, inviting question – “What would you like to discuss today?”

“I have… a headache, and I haven’t been able to sleep properly.”

Was there ever such an understatement.

What followed was a well-rehearsed set of follow-up questions, everything from the type of pillow she used to her menstrual cycle. Then she was briskly escorted for an MRI scan, to “rule out any complications”. A well-animated video of how it worked played as she was wheeled into a dark metal room. The vortex-like MRI machine towered over the space, a huge metal tube.

Megha turned to the nurse strapping her in – “will I be okay?”

“Of course! An MRI machine just employs powerful magnets that produce a strong magnetic field, forcing the protons in your body to align with that field – you will be very safe.”

Powerful magnets and proton fields did not sound safe to Megha, but she took the nurse’s word. The contraption turned on, whirring and grunting, like a monster waiting to be fed.

She needed a hand to hold.

“Can I hold your hand?”

“Of course you can, I’ll be right here with you,” came the kind reply, ever so controlled.

Megha slotted her hand into the nurse’s and closed her eyes, preparing to be sucked into the vortex of the MRI. The nurse’s hands were unbelievably soft, like the ones in hand cream advertisements. The palm lines were faint, the skin responding to even her lightest touch, as though it were translucent. She could feel the ups and downs of veins and the tough nails, even the rough cuticles that separated them. But there was no pulse. No rhythmic rise and fall of skin, ever so comforting; no steady flow of blood in and out of the heart – that constant reminder of life and energy. The softness of the skin and the grooves of the palm lines soothed her nerves, but the cold of the nurse’s hand remained the same as the cold, hard metal of the machine she was in.

A long wait, a mental health questionnaire and an iced coffee later, the doctor came back with a dainty piece of paper listing her diagnosis and options.

“Questionnaire indicates early signs of clinical depression, in accordance with DSM-5 diagnostic criteria.”

“Patient can indicate preference for psychologist appointments or self-directed therapy via tele-counselling.”

Separated by a neat thin line lay her two choices, with a comprehensive list of pros and cons. The pastel writing was the only reassuring aspect of that sheet. Neither of those made any sense to her. The fear of a daunting diagnosis competed with the relief of medical validation within her. She looked back into the doctor’s eyes. This pair was deep brown, the shade of hot chocolate.

“What do you think I should do? What would you do?”

Her face widened in an understanding smile, the corners of her eyes crinkling, but the eyes themselves did not smile. They simply stared – still blinking every 2 seconds.

“Both options have their own merits, but we believe that the patient should have a role in their treatment.”

“Why don’t you take some time to read through them and come to a decision?”

She had answered the question Megha had spoken but missed the one her eyes asked. They darted across the sheet of paper, wide and confused. Which one of these would help her get out of bed in the morning? The doctor had already left, probably with an internal timer set for however long research indicated a patient needed to make a decision.

I wake up in cold sweat, scrambling out of bed to find my laptop.

The essay sits there, pristine, well researched, the fruit of over a month of labour.

I look at the topic: “Artificial intelligence will eventually render healthcare workers obsolete.”

Control A, Delete

A blank screen replaces the intricate graphs, multilayered diagrams and painstakingly compiled bibliography. Essays written at 2 am continue to be dangerously delirious.

I begin again, this time about a young woman who was all alone and needed some help that an algorithm could not quantify. About the empathy of trust, of listening and of caring that cannot be programmed.

After all, the success of a flawless robotic surgery won’t do much more than make a flashy headline if there isn’t a hand to hold through it. A hand with a pulse, a rhythmic reminder of life and being.



By Megan Herson

Has anyone else had one of those moments where you start at ‘will I eat the plain or chocolate Digestive biscuit now or later?’ and somehow end up at ‘I won’t be a competent doctor…’?

Last weekend I went for a long romantic walk on the Brighton beach footpath (by myself). The sun was setting, and it was that shade of pink that reminds you of the Wizz Fizz sherbet you ate as a child that sent you bouncing off the walls.

Of course, the sherbet reminded me of sugar. And the sugar reminded me of the humble, not-too-sweet but not-too-savoury biscuity snack I adore (plus, Digestives contain fibre, so surely that’s healthy – right?). And the biscuit reminded me that I’m actually a child. And this reminded me of my paediatrics rotation, which I am currently on. And I can’t tell you exactly when or where along the path it happened, but 30 minutes later, I was at the St Kilda end of the beach, questioning my future as a doctor. One negative thought led to another, and I had spiralled down a sinkhole of uncertainty that concluded in me convincing myself that there is no way I will be able to acquire the knowledge and skills that I need to be a capable intern in 2022.


To be: uncertain.

You can’t quite say why you feel like you do, but something inside just doesn’t feel right. It’s the worst emotion because there aren’t any actions that you can take to make things certain – the only thing that will resolve the bubbling anxiety is time. It’s impossible to stop thinking about an uncertain situation because there is no final answer to satiate the relentless questions in your mind.


To be: uncertain.

It’s how I’ve been feeling most of the year. It’s how my friends have been feeling most of the year. It’s how the entire student cohort of 2020 (not just med – EVERY STUDENT THIS YEAR) feels about their prospective careers.


To be: uncertain.

It’s a human emotion stemming from the inability to predict future events. As humans, and especially as type As, we reject uncertainty. We don’t like the idea of not being able to anticipate a future outcome, or of being unable to change the natural progression of a situation as it unfolds. The unknown.

The brilliance of the concept is in the absolute irony that the one thing we can be certain about in 2020 is uncertainty. Worrying about our medical degree, from the finer details of our assessments to the end-outcome of our clinical capability, is a completely normal and natural response to what is happening around us. We may not all express it, but we’re all thinking it. It’s a human emotion that stems from the sheer uncertainty of the situation in which we find ourselves.

I find that it helps to just sit with the emotion. Acknowledge that you feel uncertain. Remember that it’s a normal response to the current context of our world. But if you find it difficult, and you find you are being suffocated by the weight of it all, please tell someone.

Tell a friend

Tell a parent

Tell a GP or a counsellor

Phone counselling service @ Monash

Call 1300 788 336 [1300 STUDENT]

  • Telephone counselling open 24 hours.
    • From Malaysia: 1800 818 356 (toll free)
    • From Italy: 800 791 847 (toll free)
    • From elsewhere: Students +61 2 8295 2917 | Staff +61 2 8295 2292


MUMUS Community and Wellbeing

E: mher20@student.monash.edu


AI: The Next Chapter


The following piece received 2nd Place in the Writing (Clinical) section of the Auricle Annual Writing Competition 2020.

Three years ago, something incredible happened.

Three years ago, a robot passed China’s medical licensing exam – composed mostly of patient cases – with flying colours, armed with information absorbed from dozens of textbooks, 400,000 journal articles and 2 million medical records.1 This seems to refute our conviction that the hard-earned fruit of clinical decision-making can only be attained through years of experience. If robots can learn to outperform us, should we be worried?

The emerging field of Artificial Intelligence (AI) is transforming every industry around the world.2 AI stems from the ‘information revolution’ of recent decades – heralding a world where progress increasingly depends on the efficiency of information processing.3 Granted, this is the next broad chapter in human history after the industrial revolution,4 but what if we simply want to preserve our current way of living and working? Sadly, we can no longer ‘hit the brakes’ on our new direction – the global economy now depends on this growth, and besides, nobody knows where the brakes are anymore.5 My friend, we are in this for good.

Sci-fi movies notwithstanding, the promise of AI in healthcare seems to lie not in humanoid robots (which remain poorly developed), but rather in machine learning. Machine learning involves the use of automated pattern-seeking algorithms to ‘understand’ datasets through an iterative process.6 Much like a child acquiring human language, the algorithm adapts and improves with experience.7
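As a toy illustration of that iterative process (my own sketch, not drawn from any of the systems cited in this piece), a model with a single adjustable weight can ‘learn’ a relationship purely by repeatedly nudging that weight to shrink its error on example data:

```python
# A deliberately tiny sketch of machine learning's iterative loop:
# a one-weight linear model (y ≈ w * x) adjusts itself after every
# example it sees, improving with 'experience' just as described above.

def train(data, epochs=200, lr=0.05):
    """data: list of (x, y) pairs; returns the learned weight w."""
    w = 0.0                             # start knowing nothing
    for _ in range(epochs):             # each pass over the data = more experience
        for x, y in data:
            error = w * x - y           # how wrong the current guess is
            w -= lr * error * x         # nudge w to shrink that error
    return w

# From the examples alone, the model recovers the hidden pattern y = 2x.
examples = [(1, 2), (2, 4), (3, 6)]
learned_w = train(examples)             # converges to approximately 2.0
```

The same adapt-with-experience loop, scaled up from one weight and three pairs to millions of weights and images, is essentially what powers the diagnostic algorithms the literature describes.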

Machine learning is already being used for the proactive surveillance of health in ways that helpfully aid clinical medicine. This includes the OUTBREAK project, an Australian initiative employing machine learning on health, environmental and agricultural datasets to predict antimicrobial resistance before it hits our healthcare system.8 Another Australian initiative seeks to create the world’s first suicide monitoring system, using machine learning on national ambulance data to identify population patterns and hotspots.9 Projects like these attract millions of dollars from funding bodies and herald great promise in supplementing the role of clinicians, who clearly cannot be everywhere to prevent every health emergency imaginable.

In the clinical terrain, the increasing accuracy of machine learning in diagnostic tasks may make some clinicians nervous. One machine learning algorithm, trained on close to 130,000 images of skin lesions, could detect dermatological malignancies as well as 21 dermatologists.10 Another, trained on a dataset of over 34,000 chest x-rays, could detect malignant pulmonary nodules with an accuracy exceeding 17 out of 18 radiologists in the study.11 Machine learning models are even being developed for low-resource settings such as rural India, where a lack of ophthalmologists currently makes diabetic retinopathy notoriously difficult to screen for.12 The processing capacity of these algorithms is enormous – a sufficiently trained image-based machine learning algorithm could process a gargantuan 3000 images every second running off a $1000 graphics card.13 There’s just no way we could keep up, and we may even be outperformed.

Another CSIRO project aims to diagnose mental disorders through a series of decisions made by participants in a computer game. These algorithms find subtle differences – too subtle for humans to appreciate – that differentiate those with depression, those with bipolar disorder, and those with neither. Although in early stages, this research ultimately aims to replace the ‘subjective’ diagnosis of mood disorders, motivated by the fact that most patients with bipolar disorder are initially misdiagnosed with depression in current clinical practice.14 Moreover, since a solution like this can be delivered digitally, it could even improve access to healthcare.15

Soon, in addition to diagnosis, machine learning could also transform the management of diseases. The AI supercomputer IBM Watson can already pore over decades of accumulated data on a patient to generate an accurate problem list (alongside relevant medical literature) in seconds.16 This aspect of AI could remove the “data clerk role” that every hospital intern would be familiar with – something we should certainly look forward to.17

However, if we are not careful, the same features of AI that invite progress can just as easily breathe new dangers into existence. The same IBM Watson once suggested treatment with a monoclonal antibody for an oncology patient, without considering the anticoagulant effect this drug would have on the patient – who was already bleeding severely.18 If followed through, this AI-driven decision could have been unimaginably disastrous.

Even if we encode contraindications into algorithms, their internal decision-making process often remains unknown to us humans.19 This creates a ‘black box’ of complex statistical logic (potentially involving millions of variables) that cannot be explained in a transparent and accountable manner.20 Can we rely upon a decision we cannot explain – and could we defend a decision-gone-wrong on sound ethico-legal grounds?21 Uncomfortably, the ‘black box’ nature of algorithms continues to be a challenging hurdle in the safe implementation of AI.20

Moreover, the success of machine learning algorithms relies on two factors – the algorithm itself, and the data it is trained on.22 Where data is lacking, existing disparities in health could be exacerbated. To understand how this could happen, we can look to facial recognition systems. Due to darker-skinned women being under-represented in training data, commercial algorithms have been shown to accurately detect gender with an error of 35% for darker-skinned women, compared to only 0.8% for lighter-skinned men.23 Since ethnic minorities are less represented in medical datasets than White men,24 similar disparities could emerge when machine learning is applied to healthcare, provoking questions around fairness and justice.

In reality, no amount of AI computation can ever replace the integrity of human values. Conversely, our mammalian brains cannot rival the exponential progression of AI computational power. Recognising the unique strengths of both parties, the World Medical Association in 2019 stated that AI in healthcare should instead be referred to as “augmented intelligence” to better reflect the unfolding reality.25 As the prominent AI expert and cardiologist Eric Topol puts it, the ultimate goal is “synergy” between humans and AI.18

And of course, the empathy and holistic social understanding clinicians bring to the table – that unique ‘human touch’ – is something no machine can ever emulate. As the American physician Francis Peabody once noted, the task of clinicians is to translate “that case of mitral stenosis in the second bed on the left” into “Henry Jones, lying awake [at] night while he worries about his wife and children.”26 Only we can understand Henry Jones as a human being rather than a disease – this ball remains well and truly in our court.

In the 21st century, clinicians and AI are fostering an evolving symbiotic relationship, with each bringing their unique talents to the table. Our challenge as clinicians will be to maintain our clinical acumen, social values and human empathy as we open The Next Chapter with cautious optimism.



  1. Yan A. How a robot passed China’s medical licensing exam. China: South China Morning Post; 2017 [Available from: https://www.scmp.com/news/china/society/article/2120724/how-robot-passed-chinas-medical-licensing-exam].
  2. Hajkowicz SA, Karimi S, Wark T, Chen C, Evans M, Rens N, et al. Artificial Intelligence: Solving problems, growing the economy and improving our quality of life. Australia: CSIRO Data61; 2019.
  3. Gurría A. From the information revolution to a knowledge-based world: OECD Observer; 2012 [Available from: https://oecdobserver.org/news/fullstory.php/aid/3905/From_the_information_revolution_to_a_knowledge-based_world.html].
  4. Harari YN. Sapiens: A Brief History of Humankind. London, UK: Penguin Random House; 2011.
  5. Harari YN. Homo Deus: A Brief History of Tomorrow. London, UK: Penguin Random House; 2015.
  6. Benke K, Benke G. Artificial Intelligence and Big Data in Public Health. Int J Environ Res Public Health. 2018;15(12).
  7. Hamet P, Tremblay J. Artificial intelligence in medicine. Metabolism. 2017;69S:S36-S40.
  8. OUTBREAK. How can we prevent antimicrobial resistance? Combining AI and big data to track, trace and tackle AMR. Australia: OUTBREAK; 2020 [Available from: https://outbreakproject.com.au/antimicrobial-resistance-solution/].
  9. Monash University. Google grant to establish world-first suicide monitoring system. Melbourne, Australia: Monash University; 2019 [Available from: https://www.monash.edu/news/articles/google-grant-to-establish-world-first-suicide-surveillance-system].
  10. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115-8.
  11. Nam JG, Park S, Hwang EJ, Lee JH, Jin KN, Lim KY, et al. Development and Validation of Deep Learning-based Automatic Detection Algorithm for Malignant Pulmonary Nodules on Chest Radiographs. Radiology. 2019;290(1):218-28.
  12. Gulshan V, Rajan RP, Widner K, Wu D, Wubbels P, Rhodes T, et al. Performance of a Deep-Learning Algorithm vs Manual Grading for Detecting Diabetic Retinopathy in India. JAMA Ophthalmol. 2019.
  13. Beam AL, Kohane IS. Translating Artificial Intelligence Into Clinical Care. JAMA. 2016;316(22):2368-9.
  14. Purtill J. Australian researchers design computer game to diagnose depression and bipolar. Australia: CSIROscope; 2019 [updated 10 Oct 2019. Available from: https://blog.csiro.au/computer-game-to-diagnose-depression-and-bipolar/].
  15. Blashki G. Would you trust AI with your mental health? Pursuit: University of Melbourne; 2019 [Available from: https://pursuit.unimelb.edu.au/articles/would-you-trust-ai-with-your-mental-health].
  16. Miller DD, Brown EW. Artificial Intelligence in Medical Practice: The Question to the Answer? Am J Med. 2018;131(2):129-33.
  17. Hsu J. Artificial Intelligence Could Improve Health Care for All—Unless it Doesn’t: TIME; 2019 [Available from: https://time.com/5650360/artificial-intelligence-health-care/].
  18. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44-56.
  19. Mittelstadt BD, Allo P, Taddeo M, Wachter S, Floridi L. The ethics of algorithms: Mapping the debate. Big Data & Society. 2016;3(2).
  20. Australian Academy of Health and Medical Sciences. Artificial Intelligence in Health: Exploring the Opportunities and Challenges. Report from a Roundtable Meeting. Australian Academy of Health and Medical Sciences; 2020.
  21. London AJ. Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability. Hastings Cent Rep. 2019;49(1):15-21.
  22. Domingos P. A few useful things to know about machine learning. Communications of the ACM. 2012;55(10):78-87.
  23. Zou J, Schiebinger L. Design AI so that it’s fair. Nature. 2018;559:324-6.
  24. Nordling L. Mind the gap. Nature. 2019;573:S103-5.
  25. World Medical Association. WMA Statement on Augmented Intelligence in Medical Care. 70th WMA General Assembly, Georgia; 2019.
  26. Peabody FW. The Care of the Patient. JAMA. 1927;88(12):877-82.