Few scientific concepts have captured the public imagination as artificial intelligence has. The phrase often evokes images of a distant future in which humanoid robots are capable of working alongside—or against—humans. But the use of artificial intelligence (AI) in our day-to-day lives is not as far off as it may seem: Particularly in medical practice, new research programmes aiming to improve the diagnosis of diseases using AI are already in place.
One such programme, headed by Prof. Gleeson at the University of Oxford, focusses on accelerating the diagnosis of lung cancer. Lung cancer is currently the leading cause of cancer deaths worldwide, and proper screening by X-ray or CT scans is key to improving patient survival. However, the careful evaluation of scans is laborious and error-prone. Working together with NHS clinicians and three industrial partners, the Oxford programme aims to combine clinical, imaging, and molecular data from blood tests to diagnose lung cancer more accurately and rapidly. The programme will also evaluate clinical risk factors to determine which are the most pertinent, and will thus hopefully contribute to a new set of standards for selecting at-risk populations for screening.
So far, standards have been based on statistical methods, which characterise patterns within data as mathematical equations and use these to gain insights from past data. In contrast, AI, or “machine learning”, can identify patterns in data that cannot be reduced to an equation. Just like doctors first need to learn how to recognise patterns, analyse images, and make diagnoses by weighing the evidence, AI algorithms must also learn how to perform these tasks.
To this end, they are fed labelled data, where each example carries a label (e.g. “cancer” or “not cancer”) that the algorithm can recognise. Once the algorithm has been exposed to sufficient data, its accuracy in labelling new, previously unseen data can be tested. After they are properly trained, such systems are extremely accurate, with one AI application able to outperform dermatologists in correctly classifying skin cancers. Similar accuracy has been observed in other AI systems, such as Google’s LYNA algorithm, which classifies cancerous and noncancerous lymph node biopsies correctly 99% of the time.
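The train-then-test workflow described above can be sketched in a few lines of code. The example below is a deliberately simplified toy, not any of the systems mentioned in this article: each “scan” is reduced to a single made-up feature value, and the “learning” is just placing a decision threshold between the two classes. Real diagnostic systems learn from thousands of image features, but the principle—fit to labelled examples, then measure accuracy on examples the model has never seen—is the same.

```python
def train(labelled_data):
    """Learn a decision threshold from labelled examples.

    labelled_data: list of (feature_value, label) pairs,
    where label is "cancer" or "not cancer".
    """
    cancer = [x for x, y in labelled_data if y == "cancer"]
    healthy = [x for x, y in labelled_data if y == "not cancer"]
    # Place the threshold midway between the two class means.
    mean_c = sum(cancer) / len(cancer)
    mean_h = sum(healthy) / len(healthy)
    return (mean_c + mean_h) / 2

def predict(threshold, feature_value):
    return "cancer" if feature_value > threshold else "not cancer"

def accuracy(threshold, test_data):
    """Fraction of held-out labelled examples classified correctly."""
    hits = sum(predict(threshold, x) == y for x, y in test_data)
    return hits / len(test_data)

# Training phase: the algorithm is fed labelled examples (invented values).
training = [(8.1, "cancer"), (7.4, "cancer"), (9.0, "cancer"),
            (2.3, "not cancer"), (3.1, "not cancer"), (1.8, "not cancer")]
threshold = train(training)

# Testing phase: accuracy is measured on data the model has never seen.
held_out = [(7.9, "cancer"), (2.6, "not cancer")]
print(accuracy(threshold, held_out))  # → 1.0
```

A toy like this can be reduced to an equation (the threshold), which is exactly why real medical AI uses far more flexible models—such as deep neural networks—whose learned patterns cannot be written down so simply.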
However, in addition to demonstrating high accuracy and efficacy, AI used for medical outcomes must also be suitable for use in the clinic. Several hurdles need to be overcome for an AI algorithm to obtain the regulatory approval and broad acceptance among both patients and doctors needed for successful implementation. Current acceptance criteria for clinical trials and medical devices include that the scientific methods used are well-explored and transparent. This is difficult for AI algorithms because the “thought process” an AI uses to classify objects after learning is often unknown even to the scientists who created it. The invisibility of the operations an AI performs may be an obstacle to regulatory approval, and increasing transparency would help ensure that patient data is being handled correctly and accurately.
More transparency in how the algorithms work may also be a route to gaining the acceptance among patients and doctors that new medical devices need. Patients rightfully tend to distrust new medical devices, especially ones that claim to replace a doctor’s consultation. The human quality of empathy, and the feeling that one’s concerns are taken seriously, should not be underestimated in patients’ perceptions of how trustworthy a diagnosis is, and therefore in their cooperation with treatment. AI may thus be better suited to handling essential tasks in the background, such as classifying X-ray or CT scans as cancerous or healthy, while patient management and treatment coordination are left to human doctors.
Although the use of AI in medicine is touted as one of the most promising areas of innovation in healthcare services, challenges remain. These include regulatory complications, scepticism among doctors and patients alike, and the fact that the people creating the algorithms often aren’t doctors themselves, sparking doubts about whether computer scientists are equipped to create algorithms for use in the clinic.
However, the fact that AI excels at well-defined tasks makes it very promising for certain medical applications. With AI quickly reaching the forefront of medicine, we can only wonder whether human doctors will remain as relevant in the future as they are today. Yet it is becoming evident that doctors likely don’t need to fear being replaced by a machine: Medicine is not limited to routine diagnostic tasks, but includes communication of diagnoses, consideration of the patient’s needs and wishes, education, rapid decision-making, and surgical procedures. Tapping into AI’s potential may just make the whole process a little easier—for doctors and patients alike.
Artwork by Tian Chen, paying homage to Malika Favre.