Transforming Radiology and Diagnostic Imaging with AI

AI is transforming medical applications in radiology and diagnostic imaging by, in essence, harnessing the power of millions of second opinions.

Trained on the expertise of millions of highly trained and experienced physicians, a new generation of machine learning models is increasingly outperforming any one doctor at many medical imaging tasks.

To someone who doesn’t know a lot about what’s going on in medicine, it might seem that new AI tools “just help” a doctor to look at an image, or listen to a breathing pattern, in that diagnostic moment. What clinicians understand, though, is that real clinical work exists in layers - signals collected over periods of time, scans that slice the human head, like an onion, from top to bottom in thin segments, and other sophisticated types of radiology and clinical testing that deliver quite a lot of “big data.”

Physicians aren’t just looking at “a picture” - more likely, a team is racing to sort through the layers of an MRI, or studying real-time heart rate and blood pressure data to try to work backward through a patient’s medical history.

In these types of situations, when the chips are down and the patient is on the table, ER teams, surgeons, and other skilled medical professionals know that quick diagnosis and action are often the difference between life and death - or between a full recovery and paralysis or other lasting deficits.

A detailed piece in the New Yorker last year shows how much of this technology is sought after and driven by a desire to save patients - as in the case of Sebastian Thrun, known for his tenure at Udacity and his work on driverless cars, who also went to work on medical AI to promote earlier diagnosis of cancers.

Reading articles like these that show clinicians at work, we understand that machine learning and artificial intelligence are building new models for interventional care that will change how we see medicine in the near future. AI will never “replace” the radiologist - instead, it will inform doctors, speed up their work, and enhance what they do for patients.

What is Diagnostic Imaging?

The category of services known as ‘diagnostic imaging’ encompasses many different methods, tools and uses.

Diagnostic imaging includes X-rays, magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET), and computed tomography (CT), along with more specialized fields of diagnostic imaging, such as nuclear medicine imaging and endoscopy-related imaging procedures.

What all of these types of procedures have in common is that they look inside the body and provide a great opportunity for gaining insights from pattern recognition.

Radiology and diagnostic imaging enable us to peer inside layers of bone and tissue to spot conditions or changes happening nearly anywhere in the body.

Scientists of past centuries would have marveled at the many ways we effectively explore the inside of the human body to gain deep insights on treatment options.

Many Scenarios, Many Diseases

There’s a reason that every emergency department in the U.S. has some type of radiology testing installed on premises: the applications are diverse, and radiology enables so many different kinds of interventional work.

Doctors use different types of diagnostic imaging to look at bone breaks and figure out how to fix fractures or other tricky problems related to muscles, ligaments and tendons. They use diagnostic imaging in oncology to identify or track tumors. In traumatic injury situations, doctors can quickly scan parts of the body to learn more about the extent of the injury itself, and likely outcomes. And they use diagnostic imaging in evaluating patients in all stages of life, from the developing fetus to the geriatric patient.

The Life Cycle

Just as diagnostic imaging is used for many types of diseases and conditions that exist all over the body, it also gets used throughout a complex “life cycle” of evaluation.

This begins when a patient is initially seen by a healthcare provider and the doctor orders a first diagnostic test. Medical experts will either find actionable results or they won’t. If they do, the life cycle of tracking a particular condition begins - whether it’s a growth (benign, malignant, or unknown), a fracture, or some other kind of condition - and observations of its development help in forming a positive or a negative prognosis.

Throughout this diagnostic care cycle, physicians are observing and interpreting patterns. Pattern recognition is the key task in understanding the results of clinical scans.

For example, in assessing bone structures, the radiologist is looking carefully not only for evidence of a break or fracture, but for evidence of a specific kind of break, with accurate locational details. Radiologists look to identify a complete, transverse, oblique, or spiral fracture, along with other kinds of complex breaks like a comminuted or greenstick fracture. They also try to locate the affected area of the bone, assessing regions like the diaphysis, metaphysis, and epiphysis for damage.

All of this requires detailed visual assessment of a complex pattern to map and structure what the radiologist sees.

Likewise, the radiologist in oncology will be looking at tissues in the body at a very detailed level to spot the delineations of a mass, predict its type (benign, malignant, etc.), and examine adjacent tissue that may be affected.

So how do AI and machine learning apply?

Supervised machine learning models work by learning a highly complex mathematical mapping from a large number of training examples - for example, lots of pictures of something like a growth, each with a classification such as whether it is benign or not. In this way, a machine learning model learns to “interpret” visual data accurately.
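To make this concrete, here is a minimal sketch of that training process in Python with PyTorch. The random stand-in images, the benign/malignant labels, and the tiny network are all illustrative assumptions, not a real clinical model.

```python
# A minimal sketch of supervised learning for binary image classification
# (benign vs. malignant), using PyTorch and synthetic stand-in data.
import torch
import torch.nn as nn

# Synthetic stand-ins for labeled training scans: 64x64 grayscale images,
# each paired with a 0 (benign) or 1 (malignant) label.
images = torch.randn(256, 1, 64, 64)
labels = torch.randint(0, 2, (256,)).float()

# A small convolutional network that maps an image to a malignancy score.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Training: repeatedly adjust the mapping to better fit the labeled examples.
for epoch in range(10):
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()

# Inference: the learned mapping "interprets" a new, unseen image.
with torch.no_grad():
    prob_malignant = torch.sigmoid(model(torch.randn(1, 1, 64, 64))).item()
print(f"Predicted probability of malignancy: {prob_malignant:.2f}")
```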

A helpful paper, “The Building Blocks of Interpretability,” provides a rich explanation of feature visualization (what the models are using to form this mapping), dimensionality reduction (how to prevent confusion from too many variables), and other techniques that ML uses to facilitate interpretation. Unlike many other sources, this paper visually shows how layers of mathematical neurons interpret many iterations of a picture to come up with detailed labeling that supports pattern recognition. Like the radiologist’s observation, the machine learning interpretation process is intricate and sophisticated - a gentle dance of identification and meaning.
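As one small illustration of the dimensionality reduction idea, the sketch below projects a set of high-dimensional feature activations down to two dimensions with PCA so they can be inspected visually. The synthetic activations are a stand-in assumption; the paper itself demonstrates much richer techniques.

```python
# A minimal sketch of dimensionality reduction for inspecting model features:
# project synthetic high-dimensional "activations" down to 2D with PCA.
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in: 200 examples, each with a 512-dimensional feature
# vector (as might come from a late layer of an image model).
rng = np.random.default_rng(0)
activations = rng.normal(size=(200, 512))

# Reduce 512 dimensions to 2 for plotting or visual inspection.
pca = PCA(n_components=2)
projected = pca.fit_transform(activations)

print(projected.shape)                # (200, 2)
print(pca.explained_variance_ratio_)  # variance captured by each axis
```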

AI Helping in Multiple Practice Areas

In oncology, AI tools are helping to improve the diagnosis and treatment of many types of cancers – computer scientists are increasingly using neural networks to stage cancers and to explore applications related to gene expression and other treatment options.

They’re also using artificial intelligence for tumor segmentation, in which doctors seek to precisely delineate the different types of matter in the tissue, including solid or active tumor and necrotic tissue. They also identify any normal or healthy tissue, and any other substances, such as blood or cerebrospinal fluid, that may be adjacent.

Neural networks show strong promise in predicting cancer and segmenting specific tumors for breast cancers, rectal cancers, and other categories of cancer that affect many Americans each year.

[Neural Network Techniques for Cancer Prediction, Procedia Computer Science]

Again, this is essentially a pattern recognition process. Tumor segmentation starts with detailed, labor-intensive manual work: many experts use an annotation tool like Vannot to identify the precise contours of individual tumors and their cancer state. These annotations then enable a deep learning network to be trained so that it can act like the experts, outlining tumors and determining whether they are benign or cancerous. It’s something AI excels at and automates to a high degree, ultimately giving doctors powerful new tools to assist in diagnosis.
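As a rough illustration of that training step, here is a minimal per-pixel segmentation sketch in PyTorch. The random stand-in scans, the expert-style binary masks, and the tiny fully convolutional network are assumptions for illustration; production systems typically use far larger architectures (such as U-Net) trained on real annotated data.

```python
# A minimal sketch of training a segmentation model from expert annotations:
# the network predicts, for every pixel, whether it belongs to a tumor.
import torch
import torch.nn as nn

# Synthetic stand-ins: 32 grayscale scans with matching expert-drawn binary
# masks (1 = tumor pixel, 0 = background), as an annotation tool might export.
scans = torch.randn(32, 1, 64, 64)
masks = torch.randint(0, 2, (32, 1, 64, 64)).float()

# A tiny fully convolutional network: one tumor/background logit per pixel.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Training: fit the per-pixel predictions to the expert masks.
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(scans), masks)
    loss.backward()
    optimizer.step()

# Inference: outline tumor pixels in a new scan by thresholding probabilities.
with torch.no_grad():
    predicted_mask = torch.sigmoid(model(torch.randn(1, 1, 64, 64))) > 0.5
```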

Doctors can also use artificial intelligence tools to detect pneumonia, as in this Cornell University library paper, where a technology called CheXNet outperformed a team of radiologists on some aspects of pneumonia diagnosis. The paper shows how visual patterns in the lobes of the lungs are indicative of pneumonia.

Machine learning technologies can also assess brain imaging to predict risk for neurological conditions, as in this 2015 paper by Adrien Payan and Giovanni Montana that explores using neuroimaging data for Alzheimer’s diagnosis.

The Eye as the Window to the Body

AI can also help with “derived diagnosis” – where data from one part of the body tells us about an entirely different part of the body. Consider, for example, this report on Google AI: Google’s new software can look at eye imaging and spot signs of heart disease. This type of “cross-domain” analysis adds critical tools to the clinician’s arsenal.

And heart disease is not the only health problem that eye scans can predict. Doctors are using retinal scans, iris scans, and a newer modality called optical coherence tomography (OCT) to evaluate patients for all sorts of reasons - diagnosing glaucoma or retinal issues, for instance, or spotting secondary conditions like diabetes that can trigger changes in the eye.

Other uses of optical scans are meant to assess patients for mental health issues. Schizophrenia is one condition that scientists suggest can be partially diagnosed, predicted, or indicated through eye movement. The emergence of “eye tools” that capture eye movement or other optical data and feed it into ML/AI platforms is one of the best examples of this technology at work.

All Medicine is Data-Driven

Even in some of the radiology applications that you wouldn't necessarily think of as pattern-driven, artificial intelligence can play a big role.

One of the most common examples of this is the ultrasound used by OB/GYN doctors. Over the course of a pregnancy, doctors typically order a number of scans that show fetal development – and you might think of the results as something that’s neat to show the family, or as general assurance that the fetus is developing properly. But to doctors, this isn’t just a binary evaluation. They’re not simply checking whether the fetus is okay – they’re looking at very detailed things, like the amount of amniotic fluid in the scan and the exact positioning of the fetus and its constituent parts.

With all of this in mind, artificial intelligence enhances what clinicians can do and enables new processes and methods that can save lives.

Humans and Machines in Collaboration

With these technologies, and with very important human oversight, we are increasingly leveraging the enormous power of human and machine collaboration. There’s a wealth of potential when humans and machines work together efficiently - you’re putting the brains of smart doctors together with the knowledge base and cognitive ability of smart technologies, and what comes out is the sum total of human and machine effort.

We humans have adapted symbiotically in the past. Consider the human driver seated on a plow, reins in hand, managing and leveraging the muscle power of horses to till.

This can be a very instructive metaphor for the collaboration AI technologies provide. Developing this synergy requires the creation of tight feedback loops, where expert clinicians’ natural activities provide the data and instruction to machines, which in turn tirelessly and rapidly reduce the burden of repetition and open the doors to higher efficiency and efficacy.

It’s essential to get all of the right data in play. Companies like Xyonix, working on the cutting edge of medical AI, tap into these data sources - for instance, medical sensors like a digital otoscope, or clinical IT systems like EHR/EMR vendor systems. When all of this comes together seamlessly, it opens the door to a whole host of powerful innovations. That’s something exciting, and at Xyonix, it’s something we are proud to be a part of.

AI is re-inventing radiology in terms of the quality and speed of diagnosis and the quality and speed of care. These are potentially life-saving and life-enhancing technologies. The goal of any health system is to improve outcomes, and with the addition of new tools and resources, the medical world is taking great strides in the business of healing.