10 Ways AI is Doing Good & Improving the World

If you're paying attention to the tech media, you've probably heard a lot of the doomsday prophecies around artificial intelligence. A lot of it is scary, but despite some valid concerns, AI is doing a lot of good.

Better medical treatment, reduced traffic jams, faster disaster recovery, and safer communities – it’s all coming your way, thanks to the tremendous power of neural networks.

Check out this list of pioneering technologies for good.

1. Fighting Deforestation – AI for the Environment

For years we've talked about preserving these precious habitats, yet real progress always seemed just out of reach. Unsustainable logging and widespread deforestation have devastated pristine natural spaces, and too often, we feel there's little we can do about it.


New artificial intelligence tools are helping. They are increasingly used to identify vulnerable landscapes so that environmental programs can direct attention toward preservation.

The San Francisco-based nonprofit Rainforest Connection configures old smartphones as sound monitors and installs them in rainforests. The sound data is used to train machine learning algorithms to recognize the telltale sound of a chainsaw. Park rangers are alerted to suspicious activity in real time, helping to stop illegal deforestation.
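
To make this concrete, here's a minimal sketch of the general approach – summarizing audio clips with standard acoustic features and training an off-the-shelf classifier. This is purely illustrative, not Rainforest Connection's actual pipeline, and the file names and labels are hypothetical:

```python
# A minimal sketch of chainsaw-sound detection (not Rainforest Connection's
# actual pipeline): summarize each audio clip with MFCC features, then train
# an off-the-shelf classifier. File paths and labels here are hypothetical.
import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def clip_features(path):
    # Load the clip and compute mel-frequency cepstral coefficients,
    # a standard compact representation of audio timbre.
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)  # average over time -> one vector per clip

# Hypothetical labeled clips: 1 = chainsaw present, 0 = ordinary forest sound.
paths = ["chainsaw_01.wav", "forest_01.wav", "chainsaw_02.wav", "forest_02.wav"]
labels = [1, 0, 1, 0]

X = np.stack([clip_features(p) for p in paths])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)

# In deployment, a phone would score incoming audio continuously and
# alert rangers when the chainsaw probability crosses a threshold.
print(clf.predict_proba(clip_features("new_clip.wav").reshape(1, -1)))
```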

By quickly identifying signs of deforestation, these tools keep government and environmental agencies informed about forest locations at immediate risk; agencies can then react, often by adjusting enforcement, regulations and penalties.

2. Accessing Our Past

New document surveying technologies leverage artificial intelligence to help us make sense of enormous amounts of data in both historical and government documents. Machine learning and artificial intelligence are revolutionizing the process of curation.

AI document tools allow historians and curators to take a more ‘hands-off’ approach to assessing large volumes of information, and make artifacts easily accessible to the public.

Countless historical handwritten documents sit on library shelves around the globe today, not readily accessible to researchers and academics in all countries. Digitizing these documents is the first step to making these records available. But it is impractical for a person to search through the immense catalog of information. That is, without machine learning.

The Berlin startup omni:us is training neural networks to transcribe word images in documents drawn from a collection of over a billion documents digitized from libraries all over the world!

Traditional techniques like individual researchers reading documents are failing to keep up with the exponential increase in new documents. AI systems that read like humans help give us a big-picture view of what's in a large corpus, or body of documents. Neural networks are increasingly used to extract high-level information (such as subject matter) and temporal changes, like how people, organizations and places interact over time. This extracted information enables effective search, organization and understanding of often billions of records.
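
To give a flavor of this kind of extraction, here's a small illustrative example using the open-source spaCy library to pull people, organizations, places and dates out of text – a hedged sketch of the general technique, not the omni:us system (the sample sentence is invented):

```python
# A minimal sketch of high-level entity extraction using spaCy.
# Requires: python -m spacy download en_core_web_sm
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")  # small general-purpose English model

text = ("In 1887 the Berlin Royal Library acquired letters in which "
        "Alexander von Humboldt described his travels through Ecuador.")

doc = nlp(text)
entities = [(ent.text, ent.label_) for ent in doc.ents]
print(entities)  # e.g. [('1887', 'DATE'), ('Alexander von Humboldt', 'PERSON'), ...]

# Aggregated over millions of documents, simple counts like these begin to
# reveal which people, organizations, and places interact, and when.
mentions = Counter(label for _, label in entities)
print(mentions)
```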

3. Easing the Strain of Mental Health

Studies have found that around one fifth of all Americans have some form of mental health problem or need mental health services in any given year. So how do we attack this epidemic and develop meaningful solutions through technology?


Can you get better therapy from a smart robot? Understanding the value of artificial intelligence here involves looking at how these systems function.

Back in the 1960s, the first chatbot, ELIZA, was developed at MIT. It was designed to act as a “Rogerian psychologist,” taking in conversation and mirroring some of what patients say back to them.

The code for ELIZA was not sophisticated. This artifact from Vintage Computer shows the program written in BASIC - it’s the classic simple chatbot: read user input, apply some simple rules, and reply to keep the conversation going. Despite this simplicity, ELIZA proved mesmerizing to users.
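
To see just how simple, here's a toy ELIZA-style loop in Python – a sketch in the spirit of the original, not Weizenbaum's actual program:

```python
# A minimal ELIZA-style loop: read input, apply simple pattern rules,
# and mirror the user's words back. A toy sketch, not the original code.
import re

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)\?", "What do you think?"),
]
DEFAULT = "Please, go on."

def reply(line):
    for pattern, template in RULES:
        match = re.match(pattern, line.lower().strip())
        if match:
            return template.format(*match.groups())
    return DEFAULT

while True:
    user = input("> ")
    if user.lower() in ("quit", "bye"):
        break
    print(reply(user))
```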

So if something as simple as ELIZA could engage a user in conversation, how far can new machine learning tools take the conversation?

Now, state-of-the-art chatbots like Andrew Ng’s ‘Woebot’ are offering cognitive behavioral therapy through improved conversational understanding. The chatbot uses natural language processing technology to process what the patient says and prompts them to talk through their feelings and apply coping skills, such as rephrasing a negative statement in a more positive light.


This type of technology may be used in conjunction with seeing mental health professionals; perhaps people too reluctant to see a human therapist will be more open to “seeing” a virtual therapist.

4. Hacking Crop Yields

Artificial intelligence is helping us adapt to our changing world by examining crop yields around the globe. Algorithmic crop yield tools can pinpoint projections with noteworthy accuracy. In this study from Stanford, remote-sensing data is run through a convolutional neural network to produce a crop yield map that performs remarkably well in testing.
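
For intuition, here's a hedged sketch of that general setup – a small convolutional network regressing a yield estimate from a remote-sensing image tile. The band count, tile size and data below are placeholders, not the Stanford study's actual model:

```python
# A hedged sketch of CNN-based yield regression from remote-sensing tiles.
# Shapes and training data are stand-ins for illustration only.
import numpy as np
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 9)),  # 9 spectral bands (assumed)
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),  # predicted yield, e.g. tons per hectare
])
model.compile(optimizer="adam", loss="mse")

# Stand-in data: real tiles and yields would come from satellite archives
# and ground-truth harvest surveys.
tiles = np.random.rand(100, 64, 64, 9).astype("float32")
yields = np.random.rand(100).astype("float32")
model.fit(tiles, yields, epochs=3, batch_size=16)
```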

AI is being used to show us where land is most fertile and where dangerous conditions might exist, to forecast crop yields, and ultimately to tell us where to plant crops. And it’s all contributing to feeding our planet.

5. Automated Harvests

Agriculture is vitally important for our world, and natural foods are important for our health. Maintaining a low-cost, abundant food supply is essential for feeding humanity as the population continues to grow.

Farmers are now using machine learning tools and robotics to help reduce the amount of fruit and vegetables that go to waste in the fields.

We've all heard sad stories of fruit rotting on the vine: grapes, apples and other crops remaining unpicked, often due to labor shortages. We rely on foreign labor for much of our harvesting, and this may prove unsustainable in the long run.


Agricultural robots like those from Harvest CROO use intelligent computer vision algorithms to automate the picking of fruits and vegetables. Also consider Blue River’s “See and Spray” technology, which uses computer vision to provide individualized plant care, doing away with broadcast spraying of chemicals across crop fields. The technology avoids spraying the crops themselves and reduces the volume of herbicides used by 90%, optimizing herbicide application while tackling the growing problem of herbicide-resistant weeds.

Feeding the world's increasing population is a challenge -- a challenge that AI technologies are helping address.

6. AI in Transportation

We all know about autonomous vehicles, but what about traffic management? Artificial intelligence is contributing to many lesser-known advances in the transportation field.

Let's start with smart traffic lights. If you're a municipal planner, you know that traffic lights cost a lot of money to put in place – and you know how important they are for public health and safety, as well as keeping traffic moving.

Public planners see traffic as a kind of “biological” process – much like blood circulation in our bodies, traffic needs to move smoothly for a healthy road network.

Smart traffic lights go a long way toward delivering that overall health and productivity in our daily lives by reducing traffic congestion, waiting time at intersections, and the resulting pollution. Companies like Surtrac produce AI-driven adaptive traffic lights that respond to changing traffic conditions second by second. Sitting in traffic jams might someday be a thing of the past.
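
To illustrate the core idea in miniature – far simpler than what Surtrac actually deploys – here's a toy signal-timing function that allocates green time in proportion to how many vehicles are queued on each approach:

```python
# A toy sketch of adaptive signal timing: give each approach green time
# in proportion to its queue. Real adaptive systems are far more complex.
def green_splits(queues, cycle_seconds=60, min_green=5):
    """queues: vehicles waiting per approach -> seconds of green each."""
    total = sum(queues) or 1
    flexible = cycle_seconds - min_green * len(queues)
    return [min_green + flexible * q / total for q in queues]

# e.g. heavy northbound traffic gets most of the cycle
print(green_splits([12, 2, 5, 1]))  # N, S, E, W queue lengths (hypothetical)
```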

7. Training and Therapy for the Disabled

AI is also showing promise in enhancing the lives of patients with disabilities. For example, robotic technology is helping children overcome some of the traditional limitations that go along with cerebral palsy. MIT News illustrates some of the ground-breaking robotics at work.

Therapy for CP is typically a slow process. A lot of cerebral palsy patients need more therapy than they are getting; they need more hours of training to improve particular muscle movements and range of motion. Therapy is expensive, and a shortage of therapists exacerbates this problem.

The “Darwin” bot, made by scientists at the Georgia Institute of Technology, explores an alternative: the robot interacts with patients to help them improve their mobility over time.

Like the modern mental therapy chatbots discussed above, Darwin takes in inputs and doles out praise for positive work. The difference is that here, Darwin’s not looking through a text lexicon to interpret what someone is thinking – the cerebral palsy therapy robot is looking for specific body movements that are indicative of patient progress.

AI holds potential for training and healing our minds and bodies.

Perhaps this is why much AI research is devoted to advancing healthcare, and why so many healthcare professionals are excited.

8. Fighting Crime

We've already talked about some of the aspects of “smart cities”. Here’s another that’s on the rise: smart policing.

AI tools can be used to serve as extra eyes and brains for police departments. Law enforcement officers around the country are readily accepting all the technology help they can get.

If you've seen the television show APB, where billionaire Gideon Reeves astounds the local police department with his crime prevention app, you already know a little bit about how this might work.

Companies like Predpol, the “Predictive Policing Company,” offer predictive policing tools with similar goals. Predpol decreases response times, relieves police officers of overtime shifts, and has been shown to actually reduce crime totals in municipalities.

Technologies like Predpol use ‘event data sets’ to train algorithms to predict what areas may need more police coverage in the future. The company stresses that no personal information is used in the process – and, Predpol doesn't use demographic information either.

Scrubbing these systems of demographic input helps prevent the kind of discrimination and bias that makes people wary of using AI to “judge” people. In fact, these systems typically work from just the location, type and time of past crimes.
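
Here's a deliberately simplified sketch of that idea – hotspot prediction from nothing but the location, type and time of past events. Real systems like Predpol use far more sophisticated statistical models; the events below are invented:

```python
# A simplified sketch of hotspot prediction from event data alone --
# just location, type, and time; no personal or demographic information.
from collections import Counter

# Hypothetical past events: (grid_cell, crime_type, hour_of_day)
events = [
    ((3, 7), "burglary", 22), ((3, 7), "burglary", 23),
    ((3, 7), "theft", 21), ((1, 2), "theft", 14), ((5, 5), "assault", 2),
]

# Count events per grid cell and flag the heaviest cells for extra patrols.
counts = Counter(cell for cell, _, _ in events)
hotspots = [cell for cell, n in counts.most_common(2)]
print("Suggested patrol cells:", hotspots)
```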

Predictive policing is just one aspect of public administration, among many others, that benefits from an AI approach.

9. Improving Education

It’s evident that education has changed significantly over the past 30 years. Education has moved from lecture-focused teaching to interactive, hands-on learning experiences, and from physical documents to digital materials and interactive software. There’s been a shift from a few monolithic teaching modalities to a vast world of innovative learning opportunities.


Artificial intelligence is helping drive this change.

Consider Brainly, a platform billed as the world's largest social learning community, which connects millions of students across 35 countries through peer-to-peer learning. The on-demand educational value of the platform is driven by algorithms that sort through a mass of data, filter content, and present it where it’s most useful.

Also, check out how Thinkster Math is using groundbreaking AI to personalize math education. Tools like these are available for use in the classroom and at home.

10. Disaster Recovery

ML/AI advances provide insight into resource needs and predict where the next disaster might strike and what form it might take, ultimately enabling more effective damage control.

Consider this article from Becoming Human where you can see “disaster recovery robots working overtime,” a combination of surveillance drones and mobile rescue robots helping with the aftermath of forest fires, earthquakes and other natural disasters.

In addition, companies like Unitrends are pioneering systems that can help tell whether an event is really a disaster or not. New AI technology can evaluate something like a power outage or downtime event to see whether it “looks like” a crisis or is just a fluke.
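
One illustrative way to frame that “crisis or fluke” question is as anomaly detection. The sketch below is hypothetical, not Unitrends' actual system – it scores a new event against the pattern of routine operational blips:

```python
# A hedged sketch of "is this outage a real disaster?" as anomaly detection.
# Features and data are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per event: [duration_minutes, systems_affected]
routine_events = np.array([[2, 1], [5, 1], [3, 2], [4, 1], [6, 2], [2, 1]])
detector = IsolationForest(contamination=0.1, random_state=0).fit(routine_events)

new_event = np.array([[240, 35]])  # long outage hitting many systems
print(detector.predict(new_event))  # -1 = looks like a crisis, 1 = a fluke
```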

All of this can prove critically important when it comes to saving lives and minimizing the tragic damage that storms and other natural disasters cause.

More to Come

All of this shows that AI is indeed benefiting our world. The downsides of Terminator-like nightmares remain mostly hypothetical; the upsides are already a reality.

Need help with an AI project that makes the world better? CONTACT us -- we might be able to help.

Transforming Radiology and Diagnostic Imaging with AI

AI is transforming medical applications in radiology and diagnostic imaging by, in essence, harnessing the power of millions of second opinions.


By training a new generation of machine learning models using the expertise of millions of highly trained and experienced physicians, AI models are increasingly outperforming any one doctor at many medical imaging tasks.

Those who don't know much about what’s going on in medicine might think that new AI tools “just help” a doctor to look at an image, or listen to a breathing pattern, in that diagnostic moment. What clinicians understand, though, is that real clinical work exists in the context of “layers” - signals over periods of time, scans that slice the onion of the human head from top to bottom in thin segments, and other sophisticated types of new radiology and clinical testing that deliver quite a lot of “big data.”

Physicians aren’t just looking at “a picture” - more likely, a team is racing to sort through the layers of an MRI, or studying real-time heart rate and blood pressure data to try to work backward through a patient’s medical history.

In these types of situations, when the chips are down and the patient is on the table, ER teams, surgeons and other skilled medical professionals know that quick diagnosis and action is often the difference between life and death - or between a full recovery and paralysis or other deficits.

A detailed piece in the New Yorker last year shows how much of this technology is driven by a desire to save patients. It profiles Sebastian Thrun, known for his tenure at Udacity and his work on driverless cars, who turned to medical AI to promote early diagnosis of cancers.

Reading articles like these that show clinicians at work, we understand that machine learning and artificial intelligence are building new models for interventionary care that will change how we see medicine in the near future. AI will never “replace” the radiologist - instead, it will inform doctors, speed up their work, and enhance what they do for patients.

What is Diagnostic Imaging?

The category of services known as ‘diagnostic imaging’ encompasses many different methods, tools and uses.

Diagnostic imaging includes x-rays, magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET) and computed tomography (CT), along with more specialized or smaller fields of diagnostic imaging, such as nuclear medicine imaging and endoscopy-related imaging procedures.

What all of these types of procedures have in common is that they look inside the body and provide a great opportunity for gaining insights from pattern recognition.

Radiology and diagnostic imaging enables us to peer inside layers of bone and tissue to spot conditions or changes happening nearly anywhere in the body.

Scientists of past centuries would have marveled at the many ways we effectively explore the inside of the human body to gain deep insights on treatment options.

Many Scenarios, Many Diseases

There’s a reason that every emergency department in the U.S. has some type of radiology testing installed on premises - it’s because the applications are diverse and radiology enables so many different kinds of interventionary work.

Doctors use different types of diagnostic imaging to look at bone breaks and figure out how to fix fractures or other tricky problems related to muscles, ligaments and tendons. They use diagnostic imaging in oncology to identify or track tumors. In traumatic injury situations, doctors can quickly scan parts of the body to learn more about the extent of the injury itself, and likely outcomes. And they use diagnostic imaging in evaluating patients in all stages of life, from the developing fetus to the geriatric patient.

The Life Cycle

Just as diagnostic imaging is used for many types of diseases and conditions that exist all over the body, it also gets used throughout a complex “life cycle” of evaluation.

 

This begins when a patient is initially seen by a healthcare provider, and the doctor orders a first diagnostic test. The test either reveals actionable results or it doesn't. If it does, the life cycle of tracking a particular condition begins - whether it's a growth (benign, malignant or unknown), a fracture, or some other kind of condition - and observations of its development help in forming a positive or a negative prognosis.

Throughout the diagnostic related care cycle, physicians are observing and understanding patterns. Pattern recognition is the key task for understanding the results of clinical scans.

For example, in assessing bone structures, the radiologist is looking carefully for not only evidence of a break or fracture, but evidence of a specific kind of break, and accurate locational details. Radiologists look to identify a complete, transverse, oblique, or spiral fracture, along with other kinds of complex breaks like a comminuted break or greenstick fracture. They also try to locate the affected area of the bone, assessing things like diaphysis, metaphysis and epiphysis for damage.

All of this requires detailed visual assessment of a complex pattern to map and structure what the radiologist sees.

Likewise, the radiologist in oncology will be looking at tissues in the body at a very detailed level, to spot the delineations of a biomass and try to predict its type (benign, malignant, etc.) and look at adjacent types of tissue that may be affected.

So how do AI and machine learning apply?

Supervised machine learning models work by learning a highly complex mathematical mapping from a large number of training examples (for example, lots of pictures of something like a growth, each with a classification such as whether it is benign or not). In this way, a machine learning model learns to “interpret” visual data accurately.
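
Here's a minimal sketch of that supervised setup – images in, benign/malignant probability out. The data shapes are stand-ins; a real clinical model requires vastly more data, validation and regulatory care:

```python
# A toy sketch of supervised image classification: scan in, label out.
# Random stand-in data; illustrative only, not a clinical model.
import numpy as np
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),  # probability of malignancy
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder training examples: grayscale scans plus expert labels.
scans = np.random.rand(200, 128, 128, 1).astype("float32")
labels = np.random.randint(0, 2, size=200)
model.fit(scans, labels, epochs=3, batch_size=32)
```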

A helpful paper on “The Building Blocks of Interpretability” provides a rich explanation of feature visualization (what the models are using to form this mapping), dimensionality reduction (how to prevent confusion from too many variables) and other techniques that ML uses to facilitate interpretation. Unlike many other sources, this paper visually shows how layers of mathematical neurons interpret many iterations of a picture to come up with detailed labeling that supports pattern recognition. Like the radiologist’s observation, the machine learning interpretation process is intricate and sophisticated - a gentle dance of identification and meaning.

AI Helping in Multiple Practice Areas


In oncology, AI tools are helping to improve the diagnosis and treatment of many types of cancers – computer scientists are increasingly using neural networks to stage cancers, and to understand applications related to gene expression and other treatment options.

They’re also using artificial intelligence for tumor segmentation – in tumor segmentation, doctors seek to specifically delineate the different types of matter in the tissue, including solid or active tumor and necrotic tissue. They also identify any normal or healthy tissue and any other substances such as blood or cerebrospinal fluid that may be adjacent.

Neural networks show strong promise in predicting cancer and segmenting specific tumors for breast cancers, rectal cancers, and other categories of cancer that affect many Americans each year.

[Neural Network Techniques for Cancer Prediction, Procedia Computer Science]

Again, this is essentially a pattern recognition process. Tumor segmentation has traditionally required significant labor-intensive manual work: experts use a tool like Vannot to trace the precise contours of individual tumors and record their cancer state. These annotations then enable a deep learning network to be trained to act like the experts - outlining tumors and determining whether they are benign or cancerous. It's the kind of task AI excels at and automates to a high degree, ultimately giving doctors powerful new tools to assist in diagnosis.
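
As a rough illustration of how those annotations get used, here's a toy segmentation setup where a small network learns to predict a per-pixel tumor mask from expert-drawn examples. Real systems use deeper architectures (such as U-Net); the data below is random stand-in:

```python
# A toy segmentation sketch: learn a per-pixel tumor mask from annotations.
# Illustrative only -- real segmentation models are much deeper.
import numpy as np
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(16, 3, padding="same", activation="relu",
                  input_shape=(64, 64, 1)),
    layers.Conv2D(16, 3, padding="same", activation="relu"),
    layers.Conv2D(1, 1, activation="sigmoid"),  # per-pixel tumor probability
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Placeholder scans and expert-drawn masks (e.g. traced in a tool like Vannot).
scans = np.random.rand(50, 64, 64, 1).astype("float32")
masks = np.random.randint(0, 2, size=(50, 64, 64, 1)).astype("float32")
model.fit(scans, masks, epochs=2, batch_size=8)
```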

Doctors can also use artificial intelligence tools to detect pneumonia as in this Cornell University library paper where a technology called CheXNet outperformed a team of radiologists in some aspects of pneumonia diagnosis. The paper shows how visual patterns in various lobes of the lung area are indicative of pneumonia.

Machine learning technologies can also assess the actual physical brain to predict risk for neurological conditions, as in this 2015 paper by Adrien Payan and Giovanni Montana that explores using neuroimaging data for Alzheimer’s diagnosis.

The Eye as the Window to the Body

AI can also help with “derived diagnosis” – where data from one part of the body tells us about an entirely different part of the body. Consider, for example, this report on Google AI: Google’s new software can look at eye imaging and spot signs of heart disease. This type of “cross-platform” analysis adds critical tools to the clinician’s arsenal.


Heart disease isn't the only health problem that eye scans can predict, either. Doctors are using retinal scans, iris scans and a newer technique called optical coherence tomography (OCT) to evaluate patients for all sorts of reasons, such as diagnosing glaucoma or retinal issues, or secondary conditions like diabetes that can trigger changes in the eye.

Some other uses of optical scans are meant to assess patients for mental health issues. Schizophrenia is one such malady that scientists are suggesting can be partially diagnosed, predicted or indicated through eye movement. The emergence of “eye tools” to capture eye movement or other optical data and feed it into ML/AI platforms constitutes one of the best examples of this technology at work.

All Medicine is Data-Driven

Even in some of the radiology applications that you wouldn't necessarily think of as pattern-driven, artificial intelligence can play a big role.

One of the most common examples is the ultrasound used by OB/GYN doctors. In the course of a pregnancy, doctors typically order a number of scans that show fetal development – and you might think of the results as something that’s neat to show the family, or as general assurance that the fetus is developing properly. But to doctors, this isn't just a binary evaluation. They're not just looking at whether the fetus is okay or isn't okay – they're looking at very detailed things, like the amount of amniotic fluid in the scan and the exact positioning of the fetus and its constituent parts.

With all of this in mind, artificial intelligence enhances what clinicians can do and enables new processes and methods that can save lives.

Humans and Machines in Collaboration

With these technologies, and with that very important human oversight, we are increasingly leveraging the enormous power of human and machine collaboration. There’s a wealth of potential when humans and machines work together efficiently -- you're putting the brains of smart doctors together with the knowledge base and cognitive ability of smart technologies, and what comes out is the sum total of human and machine effort.

We humans have adapted symbiotically in the past. Consider the human driver seated on a plow, reins in hands, managing and leveraging the manual power of horses to till.

This can be a very instructive metaphor for the collaboration AI technologies provide. Developing this synergy requires tight feedback loops, where expert clinicians' natural activities provide the data and instruction to machines, which in turn tirelessly and rapidly reduce the burden of repetition and open the doors to higher efficiency and efficacy.

It’s essential to get all of the right data in play. Companies like Xyonix, working on the cutting edge of medical AI, tap into these data sources - for instance, medical sensors like a digital otoscope, or clinical IT systems like EHR/EMR vendor platforms. When all of this comes together seamlessly, it opens the doors to a whole host of powerful innovations. That’s something exciting, and at Xyonix, it’s something we are proud to be a part of. AI is reinventing radiology in terms of the quality and speed of diagnosis and the quality and speed of care. These are potentially life-saving and life-enhancing technologies. The goal of any health system is to improve outcomes, and with the addition of new tools and resources, the medical world is taking great strides in the business of healing.

Need Help with Your AI Needs? CONTACT us -- we might be able to help.

 

Helping Physicians with AI: The Data Science of Health

Promising AI-powered physician assistance tools are exciting because they change the work models clinicians use to treat patients, improve medical outcomes, and save lives.


New medical technologies can seem like science fiction. For instance, if you've ever watched Star Trek, you likely saw characters use a “tricorder,” a device that can ‘scan’ individuals for signs of disease or conditions, interpret diagnostic information, and sometimes take corrective action -- all more or less in real time.

The average Star Trek fan might not realize that many capabilities of the tricorder actually exist now.

New physician assistance tools use a similar model, based on the ability to combine various functions to streamline or automate medical work.

One way to think of this is in terms of three broad functions – the first is the collection and aggregation of information, often through sensors. Sensor-based technologies have been around for a while, but they're quickly taking off in healthcare as they're paired with other tools. They’re also evolving in how they collect health data. One example is the abundance of current tools that record physiological functions like heart rate in real time. Just a few years ago, these technologies were not widely available. Their recent emergence has brought vast change to healthcare – in the treatment and diagnosis of conditions like atrial fibrillation, and in general efforts to figure out whether a patient is experiencing tachycardia or bradycardia, whether acute or chronic.
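
As a tiny illustration of that first function, here's what the most basic flagging of a heart rate stream might look like, using the textbook resting-adult cutoffs (real devices weigh far more context, like age, activity and rhythm morphology):

```python
# A minimal sketch of turning a raw sensor stream into a flagged signal.
# Thresholds are the standard resting-adult cutoffs; purely illustrative.
def classify_heart_rate(bpm):
    if bpm > 100:
        return "tachycardia"
    if bpm < 60:
        return "bradycardia"
    return "normal"

# Hypothetical minute-by-minute readings from a wearable sensor
stream = [72, 68, 114, 118, 55, 80]
for minute, bpm in enumerate(stream):
    print(f"min {minute}: {bpm} bpm -> {classify_heart_rate(bpm)}")
```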

The second broad function takes this data and transforms it into insights. In the medical world, this is often focused on diagnosis. Data by itself is not inherently meaningful unless it feeds a pattern of comprehension. Artificial intelligence models excel at pattern recognition and are built to understand patterns – and often to present them in forms humans can more easily recognize.

The third function transforms these insights into actions – by orienting or focusing clinical decisions and clinical work. This can mean training machines to provide relevant results – and training doctors to harness these results.

When both humans and machines are trained effectively, the collaborative results can be impressive.

Today's technologies don't look exactly like what's on Star Trek, but they have some of the same functionality built in, and there's always the potential to enhance and improve on these capabilities over time.

The advent of machine learning and artificial intelligence, and the progress made over the last few years, has the potential to contribute to better medical outcomes for millions of patients. These new tools are based on a very different fundamental philosophy of care -- the idea that capable decision support tools can help doctors to improve their accuracy, and enhance what they can do in the exam room and in the operating room.

One of the best ways to judge how important new AI-driven medical systems are is to look at the numbers in terms of dollars spent. A study from Accenture shows current spending estimates of $40 billion for robot-assisted surgery, $20 billion for “virtual nursing assistants,” and $18 billion for “administrative workflow assistance.”

As the presentation of the study points out, these segments are generating this kind of capital for a reason; they’re bringing in revenue for adopters. That’s because they’re driving superior outcomes, advancing what clinicians are able to do in their fields.

Teamwork in Medicine

In some ways, the new use of machine learning in physician assistance programs echoes other types of progress that practice administrators are making in the medical world.

Today, when you visit a specialist, you're more likely to meet with a physician assistant than you would have been ten or twenty years ago (as seen in this resource from Barton Associates). These PAs are credentialed and qualified for specific kinds of clinical work, to assist the primary medical doctor.

Using this care model frees up valuable resources -- it enables the specialist office to see more patients, and to treat and consult with patients in more specific ways. For example, a skilled surgeon may spend more time in surgery. Meanwhile, the practice is typically able to provide a comparable level of care to patients – or in many cases, an elevated standard of care. Machine learning tools like those offered by Xyonix further enhance this process.

AI systems often serve as additional ‘team members’ of the practice structure -- this team member just happens to be extremely good at assimilating vast stores of knowledge and delivering insights extremely fast without ever tiring.

If human PAs are part of the doctor's team, so are the physician assistance software AI models that are doing more in the clinical world. The AI systems may be checking x-rays or scans to look for key indicators of a particular diagnosis. They may inspect skin for signs of cancer. They may listen for symptoms and signs of disease in audio streams. Whatever the AI systems are doing, they are contributing to the specific way a practice has set up its services to triage patient care -- to make sure that each particular patient gets exactly what he or she needs at a particular moment.

In addition, ML/AI tools like those we make at Xyonix are made to enhance human teamwork processes as well. Think of a surgeon who can get reviews from experts and others beyond the hospital walls, or a specialist who can converse with other specialists to figure out a tough diagnosis.

If you watch a doctor making the rounds and observe their interactions with patients and the patient's extended care group, you see that a physician operates in a team. Physicians consult one another and other care staff in myriad ways -- these team interactions contribute heavily to a physician's clinical decisions. Typically, however, a doctor is limited to the team that's physically in the hospital at the time. Physicians can refer to notes from other teams of doctors, but they often can't conveniently converse with absent doctors. In traditional medicine, consulting other teams introduces delays -- and clinical decisions can move quite slowly.

Many physician assistance tools help crowdsource input in sophisticated ways – they not only provide broad medical opinions based on large-scale data analysis and statistics, but often incorporate the feedback a doctor gives while examining a patient or a medical record, or during the course of a clinical interaction. This crowdsourcing increasingly provides the instructional training examples that power AI systems.

EMRs/EHRs and Beyond – Not Just a Template

Not too many years ago, the healthcare world was abuzz over electronic health record and electronic medical record technologies. There was a lot of excitement about how these digital platforms could help improve clinical care and treatment.

These technologies essentially provide digital interfaces for documenting patient information. In some small ways, they started to assist doctors, but often not on an insight-driven basis. Some of the smartest features of electronic health records were templates that would help doctors to input a common diagnosis -- or auto chart fillers that could help doctors choose the language and dictation content that they needed to fill out a patient chart.

The key is that none of this was driven by anything particularly intelligent. The templates and automation tools were all geared toward rote data entry. They did help doctors to streamline care documentation -- but that's mostly where their utility ended.

Nonetheless, through the HITECH Act and related initiatives, the government promoted the use of these new digital tools as one of the first steps toward fully modern and futuristic medicine.

AI powered physician assistance software is transcending EHR tools -- artificial intelligence increasingly helps doctors better understand an individual patient's condition and treatment options.

In addition to improving individual patient care, machines also help physicians effectively treat a broader community of patients. There are different ways to effect AI-driven progress, and some rest on particular approaches that match a given task. One common approach involves the technology of natural language processing.

You might call this the “physician talk” model – and it applies to both voice and text, although mining natural language for information is easier with text than with voice. In voice, there’s the extra step of transcribing the audio to determine meaning and intent -- though recent deep learning models, trained on increasingly large volumes of data, have made remarkable progress in accuracy.

NLP models, or parsers, can learn to understand what physicians are saying in dictation, as well as what they write into a chart. Machines are increasingly able to listen – and make use of what they hear. This passive data aggregation is a very important part of what’s behind some of these technologies – for instance, physician assistance tools can report insights to doctors based on what they've said in the past. That might sound like a simple task, but it’s actually a powerful way to authenticate clinical work. Doctors are only human; they work from their own perceptions, in linear time, typically seeing many patients in a given day. These technologies, on the other hand, can present broad aggregated data that condenses whole fields of observation into a unified perceptive model.

For example, physicians might state that a patient is exhibiting particular symptoms in many different ways. Normalizing these permutations into a single representation enables a higher-level aggregation that is fundamental to understanding symptom rates across a wide population of patients and physicians.
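
Here's a simple sketch of that normalization step – several phrasings of the same observation collapsing to one canonical symptom code. The phrase lists are hypothetical; production systems learn these mappings rather than hand-writing them:

```python
# A toy sketch of symptom normalization: many phrasings of the same clinical
# observation map to one canonical code. Patterns are invented examples.
import re

SYMPTOM_PATTERNS = {
    "shortness_of_breath": [r"short(ness)? of breath", r"\bsob\b", r"dyspnea",
                            r"trouble breathing"],
    "chest_pain": [r"chest pain", r"pain in (the )?chest", r"chest discomfort"],
}

def normalize(note):
    found = set()
    for code, patterns in SYMPTOM_PATTERNS.items():
        if any(re.search(p, note.lower()) for p in patterns):
            found.add(code)
    return found

print(normalize("Pt reports SOB and some chest discomfort on exertion."))
# {'shortness_of_breath', 'chest_pain'} -- now countable across populations
```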

Another model could be described as a “records-based” model – think about electronic medical records and the types of information they contain. How do you mine that information effectively?

Machine learning programs can tag bits of natural language for classification – by building highly complex classifications, they can see, for example, how prescription drugs are prescribed to patients, what doctors find in examinations and consultations, and other key bits of information that can be extracted across an enormous number of charts.
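
As a small, hedged example of what that extraction target looks like – not how production record-mining models actually work, since those are learned rather than hand-written – here's a toy pattern that tags drug-and-dose mentions in free-text notes:

```python
# A toy sketch of tagging one kind of information -- drug and dose --
# in free-text chart notes. The note and pattern are invented examples.
import re

RX_PATTERN = re.compile(r"(?P<drug>[A-Za-z]+)\s+(?P<dose>\d+\s?mg)", re.I)

note = "Started lisinopril 10 mg daily; continue metformin 500 mg BID."
for match in RX_PATTERN.finditer(note):
    print(match.group("drug"), "->", match.group("dose"))
# lisinopril -> 10 mg
# metformin -> 500 mg
```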

Any discussion of physician assistance tools wouldn’t be complete without image analysis and computer vision – diagnostic radiology is an enormous and growing field within the medical industry. Doctors rely on different types of images and scans for all sorts of clinical work, and AI methodologies that can help are going to be vitally important to new healthcare workflow models.


When machines are applied to these scans and images, new technologies can be immensely effective in reading them in detail. A convolutional neural network, or CNN, can often provide excellent results that can, again, be extended across an enormous number of cases – this is the type of technology that's often in use when assisting physicians in assessing visual findings in a scan: a cancerous lesion, an outcome from invasive surgery, or anything else that can show up in CT scans, MRIs, x-rays or other types of diagnostic imagery.

Yet another model is a memory model that can be used to track clinical care. When nurses perform important interventions on patients, from tests to IVs and central lines, these actions can be recorded accurately in a comprehensive care narrative. Machine learning systems with memory, such as long short-term memory (LSTM) recurrent neural networks, can be taught to “know” what has happened in a patient’s room and deliver it on a timeline to clinicians and other stakeholders.
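
Here's a toy sketch of such a memory model – an LSTM reading a sequence of coded interventions and predicting what comes next. The event vocabulary and data are invented for illustration:

```python
# A toy memory model over care events: an LSTM reads a sequence of coded
# interventions and predicts the next one. Vocabulary and data are invented.
import numpy as np
from tensorflow.keras import layers, models

EVENTS = ["admit", "blood_draw", "iv_start", "central_line", "vitals_check"]
VOCAB = len(EVENTS)

model = models.Sequential([
    layers.Embedding(input_dim=VOCAB, output_dim=8),
    layers.LSTM(16),                            # carries memory across the sequence
    layers.Dense(VOCAB, activation="softmax"),  # distribution over next event
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Placeholder training data: sequences of 4 events and the event that followed.
sequences = np.random.randint(0, VOCAB, size=(100, 4))
next_events = np.random.randint(0, VOCAB, size=100)
model.fit(sequences, next_events, epochs=2, batch_size=16)
```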

The Bedside Manner

Used correctly, new machine learning healthcare tools can provide a source of assurance for patients.

Patients need to trust their physicians, and many have an emotional connection with their doctor. Patients also trust their doctors to harness modern technologies and evidence-driven care practices. People don't want to hear a diagnosis from a machine, but many of them might like to know their doctor has consulted an AI model trained on the expertise of millions of top-rate physicians and proven to markedly outperform the average physician.

So when the medical provider can show that they have this kind of resource, it gives the patients and their families more peace of mind. It impresses on them that the medical business has the skill and ability to help their loved one through whatever condition he or she is facing.

Effective Use Cases

One important component of creating the best artificial intelligence PA applications is understanding when these tools can be the most helpful.

Think of it this way: when are doctors most like machines? When are physicians engaged in machine-like activities or behaviors? These are areas ripe for AI health innovation.

Clinical work is highly variable. Think of everything a doctor might do in an average patient visit. Some of the core work is inherently “social” – doctors are explaining complex medical information to patients. That’s really not something AI technologies should dominate -- at least not yet, and perhaps never.

On the other hand, when a doctor makes her way to the patient’s side and takes out a stethoscope – she gets quiet – and listens. At that particular moment, the work model switches from social to analytical. The doctor is then acting in a way that is “machine-like” – quantifying noise and substance in a signal pattern.

Those are the kinds of tasks artificial intelligence medical tools are well suited to assist with. That’s a big part of what Xyonix is doing in the medical space – looking at these tasks and automating them with a knowledge base and increasingly evolved AI.

Into the Future

Our new machine learning technologies are, by today's standards, pretty amazing -- but in many ways, they're really just the start.

There are all sorts of additional ways we can build on these ideas to give doctors new valuable insights -- we just haven't built them yet. We at Xyonix are contributing daily to this rapid progress -- this large leap into the future that will cause us to look back on the care of prior decades and marvel at what we've achieved and how far we’ve come.

These new care models will improve quality of life – they’ll increase longevity. They'll bring loved ones back to their families. They’ll do this, in general, by leveraging the power of distributed networks, the power of the medical community in general, and the resources that exist to fight disease, and bring them all of the way down to the individual point of care, the “front lines” where medical outcomes are created.

They’ll also help doctors to do more in a shorter period of time, which will help with the pressures and burdens put on top clinicians in the medical community. It's a win-win for the world, and we feel good about the work we’re doing to carry medicine into the future.

Need Help with Your AI Needs? CONTACT us -- we might be able to help.