With the advent of computers and Artificial Intelligence (AI) in the 1940s and 1950s, the medical community swiftly recognised their potential benefits. In 1959, Keeve Brodman and his colleagues predicted that machines could perform diagnostic interpretation of symptoms. In 1970, William B. Schwartz wrote in a medical journal that computers would become a powerful extension of physicians’ intellect by the year 2000.
During the 1990s and early 2000s, despite slow computers with limited memory, considerable progress was made on medical tasks that involved repetitive processes prone to human error. Computer reading of electrocardiograms (ECGs), white cell differential counts, retinal photographs, skin lesions, and other image processing techniques became a reality.
Today, AI has already gained acceptance in the medical field, particularly in the interpretation of ECGs, plain radiographs, CT and MRI scans, skin images, and retinal images. Numerous research studies suggest that AI can perform on par with or even better than humans in key healthcare tasks, including disease diagnosis. Machine learning, a process of fitting models to data and learning through training, is widely employed in healthcare, particularly in precision medicine. It enables the prediction of treatment protocols likely to succeed based on patient attributes and treatment context.
One advanced form of machine learning, neural networks, has been available in healthcare since the 1960s and has been utilised to determine patient disease acquisition probabilities. Deep learning, the most complex variant of machine learning, utilises multiple levels of features to predict outcomes. In the field of healthcare, deep learning has proven valuable in recognising potentially cancerous lesions in radiology images, leading to more accurate cancer diagnoses.
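The "multiple levels of features" idea can be illustrated with a minimal sketch: each layer of a neural network transforms the previous layer's output into a more abstract representation, from raw pixels up to a risk-style score. The weights below are random placeholders, not a trained medical model, and the layer interpretations in the comments are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Standard activation: passes positives through, zeroes out negatives
    return np.maximum(0.0, x)

def sigmoid(x):
    # Squashes any real number into (0, 1), a probability-like score
    return 1.0 / (1.0 + np.exp(-x))

def forward(pixels, w1, w2, w3):
    """Pass raw inputs through three feature levels to a single score."""
    low_level = relu(pixels @ w1)      # e.g. edges and spots
    mid_level = relu(low_level @ w2)   # e.g. shapes of possible lesions
    return sigmoid(mid_level @ w3)     # overall score in (0, 1)

pixels = rng.random(16)                # stand-in for image pixel values
w1 = rng.normal(size=(16, 8))
w2 = rng.normal(size=(8, 4))
w3 = rng.normal(size=(4,))
print(forward(pixels, w1, w2, w3))
```

A real system such as a radiology model works on the same principle, but with millions of weights learned from labelled scans rather than random numbers.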
One area where AI has made significant strides is the interpretation of medical images. In 2016, Geoffrey Hinton, an AI pioneer, predicted that AI would make radiologists obsolete within five years. However, widespread implementation of AI for image interpretation has not yet occurred, even in many high-income countries.

Accurate interpretation of chest X-rays is vital for diagnosing tuberculosis, but limited access to physicians and diagnostics has hindered mass TB screening efforts. AI has the potential to address this diagnostic gap in countries like India by providing expertise comparable to that of radiologists. Beyond tuberculosis, the technology could be applied to other areas of rural medicine, offering automated, instantaneous provisional interpretations as part of a comprehensive toolkit. Dr Saurabh Jha from the Department of Radiology at the University of Pennsylvania observes that India’s emphasis on digital health presents fertile ground for AI development. The field of radiologic AI has garnered global interest, with commercial AI algorithms developed by companies in over 20 countries.

Smartphone imaging, using a simple ultrasound probe attached to the base of a phone, can provide high-quality images of various body parts except the brain. AI has the potential to enhance low-quality images, enabling accurate interpretation. For instance, echocardiograms with automatic interpretation have shown promise in diagnosing conditions like heart failure.
In 2015, Google initiated the development of a deep-learning-based AI system to detect diabetes-related changes in the retina, signs of a sight-threatening condition known as diabetic retinopathy. This AI-based Automated Retinal Disease Assessment (ARDA) has screened over 2,50,000 individuals for diabetic retinopathy.
With sixty million diabetics in India, 12-18% of whom develop diabetic retinopathy, regular retinal screening is crucial for detecting changes early. Retinal images can be captured at diabetology clinics or by primary care providers. Deep learning algorithms grade retinal images according to severity, enabling appropriate referrals to ophthalmologists for confirmation and treatment. To develop these models, Google collaborated with three super-speciality eye hospitals in India, training an AI model on their extensive data sets of retinal images. The models were later validated by Aravind Eye Hospital and Sankara Eye Hospital, both based in Tamil Nadu. The model’s effectiveness was also observed across the diverse population of Thailand. Given India’s significant ethnic diversity, it is essential to revalidate AI algorithms across ethnically diverse populations in various parts of the country.
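The grade-and-refer step described above can be sketched schematically. The five grades follow the widely used international clinical diabetic retinopathy severity scale; the score vector and the referral threshold below are hypothetical placeholders, not the output or policy of any real screening system.

```python
# Five grades of the international diabetic retinopathy severity scale
GRADES = ["no retinopathy", "mild", "moderate", "severe", "proliferative"]

def grade_and_refer(scores, referable_from=2):
    """Pick the most likely grade from per-grade scores.

    Refer the patient to an ophthalmologist when the grade is
    'moderate' or worse (a hypothetical threshold for this sketch).
    """
    grade_index = max(range(len(scores)), key=lambda i: scores[i])
    return GRADES[grade_index], grade_index >= referable_from

# Example: a hypothetical score vector a deep-learning model might emit
grade, refer = grade_and_refer([0.05, 0.10, 0.70, 0.10, 0.05])
print(grade, refer)  # moderate True
```

In practice the threshold for referral is a clinical decision, and screening programmes typically route borderline or ungradable images to a human grader rather than relying on the model alone.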
Kasumi Widner, senior programme manager at Google Health in California, USA, and her team are actively collaborating with various partners to reach more patients. They believe that the most promising path for scaling AI is to integrate it into existing screening programmes, thus augmenting the workflow of healthcare professionals. The ultimate challenge and opportunity lie in ensuring that AI truly adds value to patients and doctors, helping to refer positive cases appropriately without burdening the healthcare system.
(The author is a consultant haemato-oncologist with a special interest in stem cell transplantation at Royal Wolverhampton NHS Trust, UK. He can be reached at praveen.kaudlay1@nhs.net)