Drones to Robot Farm Hands, AI Transforms Agriculture


Swarms of drones buzz overhead, while robotic vehicles plod across the landscape. Orbiting satellites capture high-resolution multi-spectral images of the vast scene below. Not a single human can be seen across the sprawling acres. Today’s agriculture is rapidly revamping into a high-tech enterprise that most 20th-century farmers would hardly recognize. It was only 100 years ago that farming transitioned from animal power to combustion engines. In the last 20 years, the global positioning system (GPS), electronic sensors, and other new tools have moved farming even further into a technological wonderland. And now, robots empowered with artificial intelligence can zap weeds with extraordinary precision, while other autonomous machines move with industrious efficiency across farms.

It is no secret that the global population is expected to rise to 9.7 billion by 2050. To meet expected food demand, global agricultural output needs to increase by 70%, and AI is helping make that goal attainable (1). A change is clearly coming: the U.S. farm labor force has fallen 86%, even as the number of farms continues to rise (2). While today’s agricultural technologies and AI capabilities are evolving at a rapid rate, this evolution is just beginning. Factors such as climate change, a growing population and food security concerns have pushed the industry to seek more innovative approaches to ensure improving crop yields.

From detecting pests to predicting which crops will deliver the best returns, artificial intelligence can help humanity confront one of its biggest challenges: feeding an additional 2 billion people by 2050 without harming the planet.

AI is steadily emerging as an essential part of the agricultural industry’s technological evolution, including self-driving machinery and flying robots that can automatically survey and treat crops. AI helps these machines interact with one another, framing the future of fully automated agriculture. The purpose of all this high-tech gadgetry is optimization, from both economic and environmental standpoints: apply the optimal amount of any input (water, fertilizer, pesticide, fuel, labor) only when and where it’s needed to efficiently produce high crop yields (3).


With AI bringing all components of agriculture together, we can look at how autonomous machines and drones are driving the future of agriculture: a future where precision robots and drones work in concert to manage entire farms.

Autonomous machines can replace people performing laborious and endless tasks, such as hand-harvesting vegetables. These robots use sensor technologies, including machine vision that detects things like the location and size of stalks and leaves, to inform their mechanical processes.

In addition, the development of flying robots (drones) opens the possibility that most field-crop scouting currently done by humans could be replaced. Many scouting tasks, such as scouting for crop pests, require someone to walk long distances in a field and turn over plant leaves to check for the presence or absence of insects. Researchers are developing technologies to enable such flying robots to scout without human involvement. An example is PEAT, a Berlin-based agricultural tech startup that has developed a deep learning application called Plantix, which identifies potential defects and nutrient deficiencies in plants and soil. Analysis is conducted using machine learning algorithms that correlate particular foliage patterns with certain soil defects, plant pests and diseases (4). The image recognition app identifies possible defects in images captured by the user’s smartphone, then provides users with soil restoration techniques, tips and other potential solutions with 95% accuracy.
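As a rough illustration of how this kind of classifier is built, here is a minimal sketch of training a small convolutional network to label foliage photos; the class names, directory layout, and model size are our own illustrative assumptions, not PEAT’s actual system:

```python
# Minimal sketch: train a small CNN to label foliage photos with a
# plant-health class (e.g. "healthy", "nitrogen_deficiency", "leaf_blight").
# Directory layout, class names and model size are illustrative assumptions.
import tensorflow as tf

IMG_SIZE = (160, 160)

# One subdirectory per class, e.g. data/foliage/leaf_blight/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/foliage", image_size=IMG_SIZE, batch_size=32)

num_classes = len(train_ds.class_names)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),             # normalize pixel values
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_classes),               # one logit per class
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```

A production system would add data augmentation, a held-out validation set, and far more training imagery, but the core pattern of learning a mapping from foliage images to defect labels is the same.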

Another company bringing AI machinery to agriculture is Trace Genomics, which applies machine learning to diagnosing soil defects. The California-based company provides soil analysis services to farmers, using machine learning to give clients a sense of their soil’s strengths and weaknesses, with the aim of preventing defective crops and maximizing healthy crop production. According to the company’s website, after submitting a sample of their soil to Trace Genomics, users receive a summary of their soil’s contents. Services provided in their packages range from a pathogen screening focused on bacteria and fungi to a comprehensive microbial evaluation (5).

These autonomous robots, combined with drones, will define the future of AI in agriculture, while AI and machine learning models help ensure the future of crops from the root up.


It will take more than an army of robotic tractors to grow and harvest a successful crop. In the next 10 years, the agricultural drone industry will generate 100,000 jobs in the U.S. and $82 billion in economic activity, according to Bank of America Merrill Lynch Global Research (6).

From spotting leaks to patrolling for pathogens, drones are taking up chores on the farm. While the presence of drones in agriculture dates back to crop dusting in Japan in the 1980s, the farms of the future will rely on machine learning models that guide the drones, satellites, and other airborne devices providing data about the crops on the ground.

As farmers try to adapt to climate change and other factors, drones promise to help make the entire farming enterprise more efficient. For instance, Descartes Labs is employing machine learning to analyze satellite imagery to forecast soy and corn yields. The New Mexico startup collects 5 terabytes of data every day from multiple satellite constellations, including those of NASA and the European Space Agency (7). Combined with weather readings and other real-time inputs, Descartes Labs reports it can predict cornfield yields with high accuracy. Its AI platform can even assess crop health from infrared readings.

With the market for drones in agriculture projected to reach $480 million by 2027 (8), companies are also looking to bring drone technology to specific vertical areas of agriculture. VineView, for example, is bringing drones to vineyards, aiming to help farmers improve crop yield and reduce costs (9). A farmer pre-programs a drone’s route, and once deployed, the drone leverages computer vision to record images for later analysis.

VineView analyzes captured imagery to provide a detailed report on the health of the vineyard, specifically the condition of grapevine leaves. Since grapevine leaves are often telltales for grapevine diseases (such as molds and bacteria), reading the “health” of the leaves is often a good indicator for understanding the health of the plants and their fruit as a whole.

The company claims its technology can scan 50 acres in 24 minutes and provide data analysis with high accuracy (10). This aerial imaging, combined with AI techniques and machine learning platforms, marks the start of what is being referred to as “precision agriculture.”

Precision agriculture (PA) is an approach to farm management that uses information technology to ensure that crops and soil receive exactly what they need for optimum health and productivity. The goal of PA is to ensure profitability, sustainability and environmental protection. Since insecticide, for example, is applied only exactly where it is needed, environmental runoff is markedly reduced.

Precision agriculture requires three things to be successful: physical tools such as tractors and drones, site-specific information acquired by these machines, and the ability to understand and make decisions based on that site-specific information.

Decision-making is often aided by AI-based computer models that mathematically and statistically analyze relationships between variables like soil fertility and crop yield. Self-driving machinery and flying robots able to automatically survey and treat crops will become commonplace on farms that practice precision agriculture. Other examples of PA involve varying the seeding rate across a field according to soil type, and using AI analysis and sensors to identify the presence of weeds, diseases, or insects so that pesticides can be applied only where needed. The Food and Agriculture Organization of the United Nations estimates that 20 to 40 percent of global crop yields are lost each year to pests and diseases, despite the application of millions of tons of pesticides, so finding more productive and sustainable farming methods will benefit billions of people (11).
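As a toy illustration of this kind of statistical modeling, the sketch below fits a regression model relating hypothetical per-site field measurements to observed yield; the column names and data file are assumptions, and real agronomic models are considerably richer:

```python
# Minimal sketch: relate per-site field measurements to crop yield with a
# random forest, then inspect which variables matter most. Column names and
# the CSV file are illustrative assumptions, not a real dataset.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("field_samples.csv")  # one row per sampled grid cell
features = ["soil_nitrogen_ppm", "soil_ph", "moisture_pct", "rainfall_mm"]
X, y = df[features], df["yield_bu_per_acre"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("held-out R^2:", r2_score(y_test, model.predict(X_test)))
# Per-feature importances hint at which variables drive yield on this farm.
print(dict(zip(features, model.feature_importances_)))
```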


Deere & Company recently announced it would acquire a startup called Blue River Technology for a reported $305 million. Blue River has developed a “see-and-spray” system that leverages computer vision, a technology we here at Xyonix deploy regularly, to discriminate between crops and weeds. It hits the former with fertilizer and blasts the latter with herbicides with such precision that it is able to eliminate 90 percent of the chemicals used in conventional agriculture.

It’s not just farmland that’s getting a helping hand from robots and artificial intelligence. A California company called Abundant Robotics, spun out of the nonprofit research institute SRI International, is developing robots capable of picking apples with vacuum-like arms that suck the fruit straight off the trees in the orchards (12).

Iron Ox, out of San Francisco, is developing one-acre urban greenhouses operated by robots and reportedly capable of producing the equivalent of 30 acres of farmland. Powered by artificial intelligence, a team of three robots will run the entire operation of planting, nurturing, and harvesting the crops (13).

Vertical farming startup Plenty, also based in San Francisco, uses AI to automate its operations, and got a $200 million vote of confidence from the SoftBank Vision Fund earlier this year. The company claims its system uses only 1 percent of the water consumed in conventional agriculture while producing 350 times as much produce (14). Plenty is part of a new crop of urban-oriented farms, including Bowery Farming and AeroFarms.

Agricultural production has come so far in even the past couple of decades that it’s hard to imagine what it will look like in a few more. But the pace of high-tech innovation in agriculture is only accelerating.

Don’t be surprised if, 10 years from now, you drive down a rural highway and see a small helicopter flying over a field, descending into the crop, using robotic grippers to manipulate leaves while cameras and machine vision look for insects, then rising back above the crop canopy and heading to its next location, all without a human being in sight.

So what is in store for the future? Farmers can expect that, in the near future, their drones and robots will have the AI capabilities to communicate and collaborate on everything from crop assessment and cattle counting to crop disease monitoring, water watching and mechanical pollination.

Have agriculture data? Multi-spectral aerial imagery? Operational farm data? Need help mining your data with AI to glean insights? CONTACT us -- we might be able to help.

REFERENCES

  1. johndeerejournal.com/2016/03/agricultures-past-present-and-future/

  2. croplife.org/news/agriculture-then-and-now/

  3. theconversation.com/farmers-of-the-future-will-utilize-drones-robots-and-gps-37739

  4. peat.technology

  5. tracegenomics.com

  6. www.idtechex.com/research/reports/agricultural-robots-and-drones-2018-2038-technologies-markets-and-players

  7. www.crunchbase.com/organization/descartes-labs

  8. prnewswire.com/news-releases/agricultural-robots-and-drones-2017-2027-technologies-markets--players---agricultural-drone-market-to-be-worth-480-million---research-and-markets

  9. www.vineview.ca

  10. www.techemergence.com/ai-agriculture-present-applications-impact/

  11. www.technologyreview.com/s/610549/exclusive-alphabet-x-is-exploring-new-ways-to-use-ai-in-food-production

  12. singularityhub.com/2017/10/30/the-farms-of-the-future-will-run-on-ai-and-robots/#sm.00001idn7rzx17d24xg1212od7vm5

  13. ironox.com

  14. plenty.ag

Helping At Home Healthcare Patients with Artificial Intelligence


Until very recently, a caregiving parent possessed few at-home health tools beyond a simple thermometer. Then, as the internet developed, so too did online healthcare sites such as WebMD, offering another very powerful tool: information. At-home health tools continue to undergo rapid, massive changes, and now it’s AI leading the way. Today a parent can look inside their child’s ear and receive help treating an ear infection, or an elderly person can conduct their own hearing test without ever leaving the house, often with intelligent machines operating behind the scenes. Increasingly smart at-home health devices are emerging from the rapid proliferation of AI and the growing embrace of digital medicine. These new tools include devices like smart stethoscopes that automatically detect heartbeat abnormalities, or AI-powered otoscopes that can look in a person’s ear and detect an infection.

Imagine a world where at-home AI healthcare tools get smarter and better able to heal you every day. These tools are incredibly data driven: they continuously collect data about your body, your environment, your nutrition and your activity, and their algorithms continuously learn from this data, not just from you, but from millions of other patients and the doctors who know how to make sense of this information.

These AI tools will then deliver personalized healthcare tips and remediation throughout your whole life, perhaps one day without you ever having to set foot in a brick-and-mortar hospital.

AI can help wherever the care provider is identifying patterns: for example, when a physician identifies the acoustic pattern of a heart murmur, the visual pattern of an ear infection image, or the contours and shapes of a carcinogenic skin lesion.

What if AI could help you or a doctor predict a deteriorating heart condition? “If you can go to the hospital and say, ‘I’m about to have a heart attack,’ and you have proof from an FDA-approved product, it is less costly to treat you,” said ABI Principal Analyst Pierce Owen (1). Other at-home healthcare tools are getting smarter every day: EEG headbands that monitor your workouts and vitals; smart beds and devices such as EarlySense that detect movement in your sleep and deliver detailed, data-driven reports on a variety of vitals, including how much sleep, and deep sleep, you are actually getting; and smart baby monitors that let parents track newborn vitals (2)(3)(4).

One significant way AI at-home healthcare is taking off is by helping parents with young children.

Parents can never get answers quickly enough when something is wrong with their child. So what if they never even had to drive to the doctor’s office?

According to the National Institute on Deafness and Other Communication Disorders (NIDCD), 5 out of 6 children experience ear infections by the time they are 3 years old. That’s nearly 30 million trips to the doctor’s office a year just for ear infections in the U.S. alone. Additionally, ear infections cost the U.S. health system $3 billion per year.


This is where companies like Cellscope step in. A pioneer in the otoscope industry, Cellscope has had success launching its otoscope, Oto Home, a small peripheral that slides onto the user’s iPhone, accompanied by an app. Once the scope is inside the child’s or patient’s ear, the app’s software recognition feature, called the Eardrum Finder, directs the user to move and tilt the scope to capture the visuals a physician needs to attempt a diagnosis. After the session, the user enters some basic information about the patient, and both the recording and the information are sent to a remote physician, who reviews the data and, if necessary, can prescribe medication (5). The same imagery can also be used by an artificial intelligence system to assist the physician with a diagnosis, decreasing the need for more expensive tests while suggesting more refined possible diagnoses.

AI in healthcare can now also detect heartbeat abnormalities that the human ear cannot always initially detect. Steth IO’s tagline captures the premise of the company’s goal: “see what you cannot hear.” One study found that doctors across three countries could detect abnormal heart sounds only about 20% of the time (6).

Using thousands of varied heartbeat sounds, our Xyonix data scientists trained the Steth IO AI tool to “learn” which sounds are out of the norm. After the system takes in encrypted and anonymized heartbeat recordings, it sends back a classification like “normal” or “murmur” to help assist the physician in their diagnosis.
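To make the general approach concrete, here is a minimal, hypothetical sketch of heart-sound classification using MFCC audio features and an off-the-shelf classifier; the file names and model choice are illustrative assumptions, not Steth IO’s production pipeline:

```python
# Minimal sketch of heart-sound classification: summarize each recording
# with MFCC features, then train a classifier to separate "normal" from
# "murmur". File paths and labels are illustrative; this is a generic
# approach, not Steth IO's actual production pipeline.
import numpy as np
import librosa
from sklearn.ensemble import GradientBoostingClassifier

def features(path):
    y, sr = librosa.load(path, sr=4000)  # heart sounds live at low frequencies
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    # Summarize the time axis with means and standard deviations.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

recordings = [("beat_001.wav", "normal"), ("beat_002.wav", "murmur")]  # ... many more
X = np.array([features(path) for path, _ in recordings])
labels = [label for _, label in recordings]

clf = GradientBoostingClassifier().fit(X, labels)
print(clf.predict([features("new_patient.wav")]))  # -> ["normal"] or ["murmur"]
```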

Since patients can see and hear their heart and lung sounds, patient engagement is also a bonus for physicians. Steth IO further differentiates itself from other emerging AI healthcare tools by integrating the bell of the stethoscope directly into the iPhone, so there is no need for Bluetooth or pairing, and it displays all results in real time (8).

While this is currently operated only by physicians, as the at-home healthcare space rapidly grows, we expect to see similar heartbeat-abnormality detection tailored for home use, so that you can check on the health of you and your loved ones.


Virtual, AI-driven health care systems are also quickly making their way into people’s homes. Take, for example, HealthTap, which brings quality medical service to people around the world who lack the ability to pay. How it works: patients receive a free consultation via video, voice, or text. Then “Dr. A.I.”, the company’s new artificial-intelligence-powered “physician”, converses with the patient to identify key issues and worries. Dr. A.I. then uses general information about the patient and applies deep learning algorithms to assess their symptoms, applying clinical expertise to direct the user to an appropriate type and scale of care (9).

Dr. A.I. isn’t the only new AI that can give you healthcare advice from the comfort of your home. CareAngel has launched its AI virtual nurse assistant, Angel. The goal is to reduce hospital readmissions by continuously giving medical advice and reminders between discharge and doctor’s visits. Healthcare providers can also use Angel to check in on patients, support medication adherence and monitor their patients’ vitals (10). Ultimately this AI technology strives to significantly reduce the administrative and operational costs of nurse and call-center outreach.

In a world where healthcare faces resistance from rising costs, the emergence of innovations in AI and digital health is expected to redefine how people seek care and how physicians operate. The goals and visions of most emerging health companies are simple: allow new suppliers and providers into the healthcare ecosystem, empower the patient and provider with real-time data and connection, and take on lowering general and long-term healthcare costs.

While healthcare has always been patient centered, AI is taking patients from a world of episodic, in-clinic interactions to more regular, on-demand, in-home care provider / patient interaction.

Trying to make your medical device smarter? Need Help with Your AI Needs? CONTACT us -- we might be able to help.

References

  1. https://homehealthcarenews.com/2018/06/explosion-in-artificial-intelligence-coming-for-home-care-and-hospitals/

  2. https://brainbit.com

  3. https://www.bloomberg.com/research/stocks/private/snapshot.asp?privcapId=22743939

  4. https://www.engadget.com/2017/08/10/nanit-ai-baby-monitor-impressions/

  5. https://www.mobihealthnews.com/38969/cellscopes-iphone-enabled-otoscope-remote-consultation-service-launches-for-ca-parents

  6. https://www.geekwire.com/2018/smartphone-stethoscope-maker-steth-io-launches-ai-assistant-help-doctors-detect-heart-problems/

  7. https://exponential.singularityu.org/medicine/wp-content/uploads/sites/5/2018/11/Steth-IO-uses-AI-to-improve-physicians’-confidence-in-their-diagnoses.pdf

  8. https://www.dr-hempel-network.com/digital-health-technolgy/smartphone-based-digital-stethoscope/

  9. https://hitconsultant.net/2017/01/10/healthtap-launches-doctor-a-i/

  10. https://www.crunchbase.com/organization/care-angel#section-overview

Vannot - Video Annotation Tool for Object Segmentation

A significant challenge in teaching machines to automatically analyze, understand and glean object-related insights from video is how to efficiently and accurately prepare the large numbers of examples used to train and evaluate models. With frame rates around 30 to 60 fps, accurately labelling objects in even small spans of video can be extremely time consuming and expensive.


Today, we have the pleasure of introducing you to Vannot: an open-source, web-based, easy-to-integrate video annotation tool we created to help efficiently annotate objects for use in building machine learning video segmentation models. Vannot takes advantage of the relative similarity of nearby frames to enable efficient object annotation in a web context with geographically distributed labelers.

We took inspiration from some of the industry’s most venerable drawing and illustration applications, and reframed them in close consideration of the workflow processes involved with annotating a large amount of video data. It is easy, for example, to advance a few frames or seconds and carry over the most recent shapes and annotations, so that all you have to do each time is make a few small adjustments. More advanced features are available, as well: it's possible to group adjacent or disjoint shapes into the same instance if, for example, an object is composed of many parts or is obscured behind some interloper.


We're interested and excited for you to use Vannot in your own efforts, and hopefully to contribute back to this free and open source project. We've strived to make it very easy to integrate: Vannot is just a webpage, built with HTML, CSS, and JavaScript, and you configure it with the URL you use to load the page. More information on using, integrating and developing Vannot can be found on GitHub at github.com/xyonix/vannot.


Have a look at the video below to see Vannot in action. In this video, Vannot designer and sailor Clint Tseng walks through the preparation of sailing-related training data, such as hull, jib and main sail object segmentation.

10 Ways AI is Doing Good & Improving the World

If you're paying attention to the tech media, you've probably heard a lot of the doomsday prophecies around artificial intelligence. A lot of it is scary, but despite some valid concerns, AI is doing a lot of good.

Medical treatment, reduced traffic jams, faster disaster recovery, and safer communities –  it’s all coming your way, thanks to the tremendous power of neural networks.

Check out this list of pioneering technologies for good.

1. Fighting Deforestation – AI for the Environment

Unsustainable logging and widespread deforestation have devastated pristine natural spaces. For years we've talked about preserving these precious habitats, yet real progress always seemed just out of reach, and too often, we feel like there's little we can do about it.


New artificial intelligence tools are helping. They are increasingly used to identify vulnerable landscapes, so that environmental programs can direct attention toward preservation.

The San Francisco-based nonprofit Rainforest Connection configures old smartphones to monitor sounds and installs them in rainforests. The sound data is used to train machine learning algorithms to identify the threatening sound of a chainsaw. Park rangers are then alerted to suspicious activity in real time, helping to stop illegal deforestation.
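A minimal sketch of this kind of acoustic monitoring might look like the following; the clip files, features, and alert threshold are all illustrative assumptions, and Rainforest Connection's real pipeline is certainly more sophisticated:

```python
# Minimal sketch: flag chainsaw-like audio in a stream of short clips using
# mel-spectrogram energy fingerprints and a simple classifier. Entirely
# illustrative; not Rainforest Connection's actual system.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(y, sr):
    # Mel-band energies averaged over time make a compact audio fingerprint.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=32)
    return librosa.power_to_db(mel).mean(axis=1)

# Training data: labeled short clips (1 = chainsaw, 0 = ambient forest).
clips = [("chainsaw_01.wav", 1), ("forest_01.wav", 0)]  # ... many more
X = np.array([clip_features(*librosa.load(path, sr=16000)) for path, _ in clips])
clf = LogisticRegression(max_iter=1000).fit(X, [label for _, label in clips])

# Monitoring: score each incoming clip and alert rangers above a threshold.
y, sr = librosa.load("incoming_clip.wav", sr=16000)
if clf.predict_proba([clip_features(y, sr)])[0, 1] > 0.9:
    print("ALERT: possible chainsaw activity")
```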

By quickly identifying signs of deforestation, government and environmental agencies are better informed on forest locations at immediate risk; they can then react, often by adjusting enforcement, regulations and penalties.

2. Accessing our Past

New document surveying technologies leverage artificial intelligence to help us make sense of enormous amounts of data in both historical and government documents. Machine learning and artificial intelligence are revolutionizing the process of curation.

AI document tools allow historians and curators to take a more ‘hands-off’ approach to assessing large volumes of information, and make artifacts easily accessible to people.

Countless historical handwritten documents sit on library shelves around the globe today, not readily accessible to researchers and academics in all countries. Digitizing these documents is the first step to making these records available. But it is impractical for a person to search through the immense catalog of information. That is, without machine learning.

The Berlin startup omni:us is training neural networks to generate transcriptions of word images in documents, drawing on a collection of over a billion documents digitized from libraries all over the world.

Traditional techniques, like individual researchers reading documents, are failing to keep up with the exponential increase in new documents. AI systems that read like humans help give us a big picture of what's in a large corpus, or body of documents. Neural networks are increasingly used to extract high-level information (such as subject matter) and temporal changes, like how people, organizations and places interact over time. This extracted information enables effective search, organization and understanding of what are often billions of records.
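As a small, hedged example of this kind of extraction, the sketch below runs an off-the-shelf named-entity model over digitized text and tallies the people, organizations and places mentioned per year; the sample documents are hypothetical:

```python
# Minimal sketch: pull person/organization/place mentions out of digitized
# documents with an off-the-shelf NER model, and tally them by year so
# interactions can be tracked over time. The documents list is illustrative.
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")  # python -m spacy download en_core_web_sm

documents = [
    {"year": 1912, "text": "Captain Smith sailed from Southampton for the White Star Line."},
    # ... millions more transcribed records
]

mentions_by_year = {}
for record in documents:
    doc = nlp(record["text"])
    ents = [(ent.label_, ent.text) for ent in doc.ents
            if ent.label_ in ("PERSON", "ORG", "GPE")]
    mentions_by_year.setdefault(record["year"], Counter()).update(ents)

print(mentions_by_year[1912].most_common(5))
```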

3. Easing the Strain of Mental Health

Studies have found that around one fifth of all Americans have some form of mental health problem or need mental health services in any given year. So how do we attack this epidemic and develop meaningful solutions through technology?


Can you get better therapy from a smart robot? Understanding the value of artificial intelligence here involves looking at how these systems function.

Back in the 1960s, the first chatbot, named ELIZA, was developed at MIT. ELIZA was built to act as a “Rogerian psychologist,” taking in conversation and mirroring some of what patients say back to them.

The code for ELIZA was not sophisticated. This artifact from Vintage Computer shows the program written in BASIC: it’s the classic simple chatbot, reading user input, applying some simple rules, and continuing the conversation with a reply to the user. Despite this simplicity, ELIZA proved mesmerizing to users.
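To show just how simple this rule-mirroring pattern is, here is a minimal ELIZA-style loop in Python; the handful of rules is an illustrative subset, not the original program’s script:

```python
# Minimal ELIZA-style loop: match the user's input against simple patterns
# and mirror fragments back as questions. These few rules are a tiny
# illustrative subset; the original worked from a much larger script.
import re

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
    (r"(.*)",        "Please, go on."),           # fallback keeps talk flowing
]

def reply(text):
    text = text.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())

while True:
    print(reply(input("> ")))  # "I feel tired" -> "Why do you feel tired?"
```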

So if something as simple as ELIZA could engage a user in conversation, how far can new machine learning tools take the conversation?

Now, state-of-the-art chatbots like Woebot, backed by AI pioneer Andrew Ng, are offering cognitive behavioral therapy through improved conversational understanding. The chatbot uses natural language processing to interpret what the patient says and prompts them to talk through their feelings and apply coping skills, such as rephrasing a negative statement in a more positive light.


This type of technology may be used in conjunction with seeing mental health professionals; perhaps people too reluctant to see a human therapist will be more open to “seeing” a virtual therapist.

4. Hacking Crop Yields

Artificial intelligence is helping us adapt to our changing world by examining crop yields around the globe. Algorithmic crop yield tools can pinpoint crop projections with noteworthy accuracy. In this study from Stanford, remote-sensing data is run through a convolutional neural network to produce a crop yield map that performs well when tested.
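A bare-bones version of that idea, regressing yield from multispectral image tiles with a small convolutional network, might look like the sketch below; the tile size, band count, and data are illustrative assumptions rather than the Stanford study’s actual pipeline:

```python
# Minimal sketch: regress regional crop yield from multispectral satellite
# tiles with a small CNN. Tile shape, band count and dataset are illustrative
# assumptions, not the Stanford study's actual pipeline.
import tensorflow as tf

BANDS, SIZE = 9, 64  # e.g. 9 spectral bands, 64x64-pixel tiles

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu",
                           input_shape=(SIZE, SIZE, BANDS)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted yield, e.g. bushels per acre
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# x_train: (n, 64, 64, 9) satellite tiles; y_train: (n,) observed yields.
# model.fit(x_train, y_train, validation_split=0.2, epochs=20)
```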

AI is used to show us where the land is most fertile, where dangerous conditions might exist, to forecast crop yields, and ultimately tell us where to plant crops. And it’s all contributing to feeding our planet.

5. Automated Harvests

Agriculture is vitally important for our world, and natural foods are important for our health. Maintaining a low cost abundant food supply is essential for feeding humanity during consistent population increases.

Farmers are now using machine learning tools and robotics to help reduce the amount of fruit and vegetables that go to waste in the fields.

We've all heard sad stories of fruit rotting on the vine: grapes, apples and other crops remaining unpicked, often due to labor shortages. We rely on foreign labor for much of our harvesting, and this may prove unsustainable in the long run.


Agricultural robots like those from HarvestCroo use intelligent computer vision algorithms to automate the picking of fruits and vegetables. Also consider Blue River’s “See and Spray” technology, which uses computer vision to provide individualized plant care, doing away with broadcast spraying of chemicals across crop fields. The technology avoids spraying the crops themselves and reduces the volume of herbicides used by 90%, optimizing herbicide application while tackling the growing problem of weed resistance to herbicides.

Feeding the world's increasing population is a challenge -- a challenge that AI technologies are helping address.

6. AI in Transportation

We all know about autonomous vehicles, but what about traffic management? Artificial intelligence is contributing to many lesser-known advances in the transportation field.

Let's start with smart traffic lights. If you're a municipal planner, you know that traffic lights cost a lot of money to put in place – and you know how important they are for public health and safety, as well as keeping traffic moving.

Public planners see traffic as a kind of “biological” process – much like blood circulation in our bodies, traffic needs to move smoothly for a healthy road network.

Smart traffic lights go a long way toward delivering that overall health and productivity in our daily lives by reducing traffic congestion, waiting time at intersections, and the resulting pollution. Companies like Surtrac produce artificial-intelligence-driven adaptive traffic lights that respond to changing traffic conditions by the second. Sitting in traffic jams might someday be a thing of the past.

7. Training and Therapy for the Disabled

AI is also showing promise in enhancing the lives of patients with disabilities. For example, robotic technology is helping children overcome some of the traditional limitations that go along with cerebral palsy. MIT News illustrates some of the ground-breaking robotics at work.

Therapy for CP is typically a slow process. A lot of cerebral palsy patients need more therapy than they are getting; they need more hours of training to improve particular muscle movements and range of motion. Therapy is expensive, and a shortage of therapists exacerbates this problem.

The “Darwin” bot, made by scientists at the Georgia Institute of Technology, explores an alternative: the robot interacts with patients to help them improve their mobility over time.

Like the modern mental health chatbots discussed above, Darwin takes in inputs and doles out praise for positive work. The difference is that here, Darwin isn’t looking through a text lexicon to interpret what someone is thinking; the cerebral palsy therapy robot is looking for specific body movements that indicate patient progress.

AI holds potential for training and healing our minds and bodies.

Perhaps this is why much AI research is devoted to advancing healthcare, and why so many healthcare professionals are excited.

8. Fighting crime

We've already talked about some of the aspects of “smart cities”. Here’s another that’s on the rise: smart policing.

AI tools can be used to serve as extra eyes and brains for police departments. Law enforcement officers around the country are readily accepting all the technology help they can get.

If you've seen the television show APB, where billionaire Gideon Reeves astounds the local police department with his crime prevention app, you already know a little bit about how this might work.

Companies like Predpol, the “Predictive Policing Company,” offer predictive policing tools with similar goals. Predpol decreases response times, relieves police officers of overtime shifts, and has been shown to actually reduce crime totals in municipalities.

Technologies like Predpol use ‘event data sets’ to train algorithms to predict what areas may need more police coverage in the future. The company stresses that no personal information is used in the process, and Predpol doesn’t use demographic information either.

Scrubbing these systems of demographic input helps to prevent the kind of discrimination and bias that makes people wary of using AI to “judge” people. In fact, typically they work knowing just the location, type and time of past crimes.

Predictive policing is just one aspect of public administration, among many others, that benefits from an AI approach.

9. Improving education

It’s evident that education has changed significantly over the past 30 years. Education has moved from lecture-focused to interactive hands-on learning experiences, and from the use of physical to digital documents and interactive software programs. There’s been a change from a few monolithic teaching modalities to a vast world of innovative learning opportunities.


Artificial intelligence is helping drive this change.

Consider Brainly, a platform billed as the world’s largest ‘social learning’ community, connecting millions of students from 35 countries by facilitating peer-to-peer learning. The on-demand educational value of the platform is driven by algorithms that sort through masses of data, filter content, and present it where it’s most useful.

Also, check out how Thinkster Math is using groundbreaking AI to personalize math education. Tools like these are available for use in the classroom and at home.

10. Disaster Recovery

ML/AI advances provide insights into resource needs and predict where and what the next disaster might be, ultimately providing more effective damage control.

Consider this article from Becoming Human where you can see “disaster recovery robots working overtime,” a combination of surveillance drones and mobile rescue robots helping with the aftermath of forest fires, earthquakes and other natural disasters.

In addition, companies like Unitrends are pioneering systems that can help tell whether an event is really a disaster. New AI technology can evaluate something like a power outage or downtime event to see whether it “looks like” a crisis or is just a fluke.

All of this can prove critically important when it comes to saving lives and minimizing the tragic damage that storms and other natural disasters cause.

More to Come

All of this shows that AI is indeed benefiting our world. The downsides of Terminator-like nightmares are mostly hypothetical, but the upsides are already a reality.

Need help with an AI project that makes the world better? CONTACT us, we might be able to help.

Transforming Radiology and Diagnostic Imaging with AI

AI is transforming medical applications in radiology and diagnostic imaging by, in essence, harnessing the power of millions of second opinions.


By training a new generation of machine learning models using the expertise of millions of highly trained and experienced physicians, AI models are increasingly outperforming any one doctor at many medical imaging tasks.

Without knowing a lot about what’s going on in medicine, some might think that new AI tools “just help” a doctor to look at an image, or listen to a breathing pattern, in that diagnostic moment. What clinicians understand, though, is that real clinical work exists in the context of “layers” - signals over periods of time, scans that slice the onion of the human head from top to bottom in thin segments, and other sophisticated types of new radiology and clinical testing that actually deliver quite a lot of “big data.”

Physicians aren’t just looking at “a picture” - more likely, a team is racing to sort through the layers of an MRI, or studying real-time heart rate and blood pressure data to try to work backward through a patient’s medical history.

In these types of situations, when the chips are down and the patient is on the table, ER teams, surgeons and other skilled medical professionals know that quick diagnosis and action is often the difference between life and death - or between a full recovery and paralysis or other deficits.

A detailed piece in the New Yorker last year shows how much of this technology is sought after and driven by a desire to save patients, as in the case of Sebastian Thrun, known for his tenure at Udacity and his work on driverless cars, who went to work on medical AI to promote the diagnosis of early-stage cancers.

Reading articles like these that show clinicians at work, we understand that machine learning and artificial intelligence are building new models for interventionary care that will change how we see medicine in the near future. AI will never “replace” the radiologist - instead, it will inform doctors, speed up their work, and enhance what they do for patients.

What is Diagnostic Imaging?

The category of services known as ‘diagnostic imaging’ encompasses many different methods, tools and uses.

Diagnostic imaging includes x-rays, magnetic resonance imaging, ultrasound, positron emission tomography or PET, and computed tomography or CT scan, along with more specialized or smaller fields of diagnostic imaging, such as nuclear medicine imaging and endoscopy-related imaging procedures.

What all of these types of procedures have in common is that they look inside the body and provide a great opportunity for gaining insights from pattern recognition.

Radiology and diagnostic imaging enables us to peer inside layers of bone and tissue to spot conditions or changes happening nearly anywhere in the body.

Scientists of past centuries would have marveled at the many ways we effectively explore the inside of the human body to gain deep insights on treatment options.

Many Scenarios, Many Diseases

There’s a reason that every emergency department in the U.S. has some type of radiology testing installed on premises - it’s because the applications are diverse and radiology enables so many different kinds of interventionary work.

Doctors use different types of diagnostic imaging to look at bone breaks and figure out how to fix fractures or other tricky problems related to muscles, ligaments and tendons. They use diagnostic imaging in oncology to identify or track tumors. In traumatic injury situations, doctors can quickly scan parts of the body to learn more about the extent of the injury itself, and likely outcomes. And they use diagnostic imaging in evaluating patients in all stages of life, from the developing fetus to the geriatric patient.

The Life Cycle

Just as diagnostic imaging is used for many types of diseases and conditions that exist all over the body, it also gets used throughout a complex “life cycle” of evaluation.

 

This begins when a patient is initially seen by a healthcare provider, and the doctor orders a first diagnostic test. Medical experts will either find actionable results or not. If they do, the life cycle of tracking a particular condition begins, whether it's a growth (benign, malignant or unknown), a fracture, or some other kind of condition, and observations of its development help in forming a positive or a negative prognosis.

Throughout the diagnostic related care cycle, physicians are observing and understanding patterns. Pattern recognition is the key task for understanding the results of clinical scans.

For example, in assessing bone structures, the radiologist is looking carefully for not only evidence of a break or fracture, but evidence of a specific kind of break, and accurate locational details. Radiologists look to identify a complete, transverse, oblique, or spiral fracture, along with other kinds of complex breaks like a comminuted break or greenstick fracture. They also try to locate the affected area of the bone, assessing things like diaphysis, metaphysis and epiphysis for damage.

All of this requires detailed visual assessment of a complex pattern to map and structure what the radiologist sees.

Likewise, the radiologist in oncology will be looking at tissues in the body at a very detailed level, to spot the delineations of a biomass and try to predict its type (benign, malignant, etc.) and look at adjacent types of tissue that may be affected.

So how do AI and machine learning apply?

Supervised machine learning models work by learning a highly complex mathematical mapping from a large number of training examples (for example, lots of pictures of something like a growth, each with a classification such as whether it is benign or not). In this way, a machine learning model learns to “interpret” visual data accurately.
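As a hedged illustration of learning such a mapping, the sketch below fine-tunes a pretrained image network to score lesion images as benign or malignant; the dataset layout is an assumption, and a real clinical model would demand far more data, validation, and regulatory review:

```python
# Minimal sketch: fine-tune a pretrained network to map lesion images to a
# benign/malignant score. Directory layout is illustrative; a real clinical
# model would need vastly more data, validation and regulatory review.
import tensorflow as tf

# Expects scans/train/benign/*.jpg and scans/train/malignant/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "scans/train", image_size=(224, 224), batch_size=16, label_mode="binary")

base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3))
base.trainable = False  # reuse generic visual features learned on ImageNet

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # P(malignant)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```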

A helpful paper on “The Building Blocks of Interpretability” provides a rich explanation of feature visualization (what the models use to form this mapping), dimensionality reduction (how to prevent confusion from too many variables) and other techniques that ML uses to facilitate interpretation. Unlike many other sources, this paper visually shows how layers of mathematical neurons interpret many iterations of a picture to come up with detailed labeling that supports pattern recognition. Like the radiologist’s observation, the machine learning interpretation process is intricate and sophisticated, a gentle dance of identification and meaning.

AI Helping in Multiple Practice Areas


In oncology, AI tools are helping to improve the diagnosis and treatment of many types of cancers – computer scientists are increasingly using neural networks to stage cancers, and to understand applications related to gene expression and other treatment options.

They’re also using artificial intelligence for tumor segmentation – in tumor segmentation, doctors seek to specifically delineate the different types of matter in the tissue, including solid or active tumor and necrotic tissue. They also identify any normal or healthy tissue and any other substances such as blood or cerebrospinal fluid that may be adjacent.

Neural networks show strong promise in predicting cancer and segmenting specific tumors for breast cancers, rectal cancers, and other categories of cancer that affect many Americans each year.

[Neural Network Techniques for Cancer Prediction, Procedia Computer Science]

Again, this is essentially a pattern recognition process. Tumor segmentation has traditionally required detailed, labor-intensive manual work: experts use a tool like Vannot to trace the precise contours of individual tumors and record their cancer state. These annotations then allow a deep learning network to be trained to act like the experts, outlining tumors and determining whether they are benign or cancerous. This is something AI excels at and automates to a high degree, ultimately giving doctors powerful new tools to assist in diagnosis.
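A minimal sketch of the segmentation side, mapping a scan to a per-pixel tumor mask learned from expert annotations, might look like this; the scan size and architecture are illustrative, and production medical models (typically U-Net variants) are much deeper and carefully validated:

```python
# Minimal sketch: a tiny encoder-decoder that maps a scan to a per-pixel
# tumor mask, trained on expert-drawn annotations. Real medical segmentation
# models (e.g. U-Net variants) are far deeper and rigorously validated.
import tensorflow as tf

inputs = tf.keras.Input(shape=(128, 128, 1))           # single-channel scan
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.MaxPooling2D()(x)                  # downsample
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
x = tf.keras.layers.UpSampling2D()(x)                  # back to full resolution
mask = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)  # P(tumor) per pixel
model = tf.keras.Model(inputs, mask)

model.compile(optimizer="adam", loss="binary_crossentropy")
# x_train: (n, 128, 128, 1) scans; y_train: (n, 128, 128, 1) expert masks.
# model.fit(x_train, y_train, epochs=20)
```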

Doctors can also use artificial intelligence tools to detect pneumonia, as in this paper hosted in Cornell University’s arXiv library, where a technology called CheXNet outperformed a team of radiologists in some aspects of pneumonia diagnosis. The paper shows how visual patterns in various lobes of the lung area are indicative of pneumonia.

Machine learning technologies can also assess the actual physical brain to predict risk for neurological conditions, as in this 2015 paper by Adrien Payan and Giovanni Montana that explores using neuroimaging data for Alzheimer’s diagnosis.

The Eye as the Window to the Body

AI can also help with “derived diagnosis” – where data from one part of the body tells us about an entirely different part of the body. Consider, for example, this report on Google AI. Google’s new software can look at eye imaging, and spot signs of heart disease. This type of “cross-platform” analysis adds critical tools to the clinician’s arsenal.


Also, heart disease is not the only health problem that eye scans can predict. Doctors are using scans like retinal scans, iris scans and something new called Optical Coherence Tomography or OCT to review patients for all sorts of reasons, such as diagnosing glaucoma or retinal issues, or secondary conditions like diabetes that can trigger changes in the eye.

Some other uses of optical scans are meant to assess patients for mental health issues. Schizophrenia is one such malady that scientists are suggesting can be partially diagnosed, predicted or indicated through eye movement. The emergence of “eye tools” to capture eye movement or other optical data and feed it into ML/AI platforms constitutes one of the best examples of this technology at work.

All Medicine is Data-Driven

Even in some of the radiology applications that you wouldn't necessarily think of as pattern-driven, artificial intelligence can play a big role.

One of the most common examples of this is an ultrasound for OB/GYN doctors. In the course of a pregnancy, doctors typically have a number of scans that show fetal development – and you think of the results as something that’s neat to show the family, or something that’s used as a general assurance that the fetus is developing properly. But to doctors, this isn't just a binary evaluation. They're not just looking for whether the fetus is okay or isn't okay – they're looking at very detailed things, like the amount of amniotic fluid in the scan, and the exact positioning of the fetus as well as its constituent parts.

With all of this in mind, artificial intelligence enhances what clinicians can do and enables new processes and methods that can save lives.

Human and Machines in Collaboration

With these technologies, and with that very important human oversight, we are increasingly leveraging the enormous power of human and machine collaboration. There’s a wealth of potential when humans and machines work together efficiently: you’re putting the brains of smart doctors together with the knowledge base and cognitive ability of smart technologies, and what comes out is the sum total of human and machine effort.

We humans have adapted symbiotically in the past. Consider the human driver seated on a plow, reins in hand, managing and leveraging the manual power of horses to till.

This can be a very instructive metaphor when it comes to the collaboration AI technologies provide. Developing this synergy requires the creation of tight feedback loops, where expert clinicians' natural activities provide the data and instruction to machines, who in turn, tirelessly, and rapidly, reduce the burden of repetition and open the doors to higher efficiency and efficacy.

It’s essential to get all of the right data in play. Companies like Xyonix, working on the cutting edge of medical AI, tap into data sources such as medical sensors like a digital otoscope, or clinical IT systems like EHR/EMR vendor systems. When all of this comes together seamlessly, it opens the door to a whole host of powerful innovations. That’s something exciting, and at Xyonix, it’s something we are proud to be a part of. AI is reinventing radiology in terms of the quality and speed of diagnosis and the quality and speed of care. These are potentially life-saving and life-enhancing technologies. The goal of any health system is to improve outcomes, and with the addition of new tools and resources, the medical world is taking great strides in the business of healing.

Need Help with Your AI Needs? CONTACT us -- we might be able to help.