Understanding Conversations in Depth through Synergistic Human/Machine Interaction


Every day, billions of people communicate via email, chat, text, social media, and more. Every day, people are communicating their desires, concerns, challenges, and victories. And every day, organizations struggle to understand this conversation so they can better serve their customers.

Consider a few examples:

  • A communication system enables a famous politician or star to communicate with thousands or millions of constituents or fans

  • A product or service review system like Yelp gathers free form reviews from millions of people

  • An email system automatically conducts conversations with people after they fill out a form, stop by a booth, or otherwise indicate interest

  • An insurance company records millions of audio transcripts of conversations regarding a claim

  • A trend prediction system scans social media conversations to predict the next flavor food companies should plan for — in the past it was pomegranate, what will it be in 6 months?

In each of these cases, there is a need to automatically understand what is being said. Understanding a direct message rapidly can allow a system to elevate priority, compose a suggested reply, or even automatically reply on someone’s behalf. Understanding a large number of messages can allow a politician to make sense of their massive inbox so they can better understand their constituency’s perspective on a given day or topic.

Understanding a large number of reviews can enable a surgeon to easily understand exactly what they are doing right, and where they should improve, or help a product manager understand what aspects of their products are well received and which are problematic.

Understanding the conversation begins with understanding one document. Once we can teach a machine to understand everything in a single document, we can project this understanding up to a collection, thread or larger corpus of documents to understand the broader conversation.

The anatomy of a single document is shown below. In it, we see a template for a document. A given document could be an email, a text or social media message, a blog post, a product review, etc. Typically a title or subject of some sort is present. Next, some document level descriptive information is often present, like the author, date of the document, or perhaps a case # if it is a legal document. Next we have the body of the document, usually paragraphs composed of multiple sentences. In addition to the document content shown below, documents usually exist in a context — an email can be in reply to another email or a social media message can belong to a discussion thread. For simplicity, however, we’ll focus on a single document, and leave the inter-document discussion for later.

[Figure: template showing the anatomy of a single document, with a title, document-level metadata, and body paragraphs]

Typically, much of this information is accessible in machine readable form, but the unstructured text is not easily understood without NLP (natural language processing) tailored AI, accelerated by tooling like that in our Mayetrix SDK. From an AI vantage, there are multiple types of information we can train a machine to extract.

Sentences are usually split by a model trained to do just that. We might use a sentence splitter trained on reasonably well composed text, like news, or we might train a custom sentence splitter for more informal discourse styles like those present in social media or SMS. Next, individual key phrases or entities, like specific people, places or things, might be present inside a sentence. We have multiple options for automatically extracting phrases and entities, typically a combination of knowledge-based and example-trained machine learning models.

We also often extract sentence level insights. These might come in the form of categories a given sentence can be placed into, or in the form of grammatical clause level information (think back to seventh grade grammar class), such as a source > action > target structure like LeBron James [nba_player] > score > final shot. Finally, there are document level insights we might extract, often assisted by the more granular information extraction described above. Document level information might include, for example, the overall sentiment or a summarization of the document.
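
To make the granular extraction steps concrete, here is a minimal sketch using the open source spaCy library (not our Mayetrix SDK); the model name, example text, and the naive clause extraction are illustrative assumptions only.

```python
# Minimal sketch of sentence splitting, entity extraction, and a naive
# source > action > target pull, using the open source spaCy library.
# Illustrative only; this is not the Mayetrix SDK.
import spacy

nlp = spacy.load("en_core_web_sm")  # a general-purpose English pipeline

text = "LeBron James scored the final shot. The crowd erupted in celebration."
doc = nlp(text)

# 1. Sentence splitting
for sent in doc.sents:
    print("SENTENCE:", sent.text)

# 2. Key phrases / entities (people, places, organizations, ...)
for ent in doc.ents:
    print("ENTITY:", ent.text, ent.label_)

# 3. A crude clause-level source > action > target extraction based on
#    the dependency parse: subject > verb > object
for token in doc:
    if token.pos_ == "VERB":
        subjects = [c.text for c in token.children if c.dep_ == "nsubj"]
        objects = [c.text for c in token.children if c.dep_ in ("dobj", "obj")]
        if subjects and objects:
            print("CLAUSE:", subjects[0], ">", token.lemma_, ">", objects[0])
```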

So how do we build AI or machine learning models for each of these types of information to extract?

Much like a toddler learns the word furniture through examples they are shown, like a chair, sofa, or table, AI text analysis systems require examples.

Before we can gather examples, however, we need to decide exactly what we are going to label. This might be easy in some cases, like building a spam detector: all email messages are either spam or not spam. But in some cases, we have a significantly more complex task.


Consider for example the case where millions of constituents email thoughts and opinions to their congressional representatives. We can further presume the busy congressperson receiving thousands of emails a day wishes to understand the key perspectives worthy of their response. We might find through an early analysis that constituents are often expressing an emotion of some sort, offering an opinion on a topic or piece of legislation, requesting some specific type of action, or asking questions.

An initial task is to simply understand what the broader conversation consists of. In the chart below, we see that much of this conversation might consist of feedback, a question, or an emotional expression.

[Chart: example distribution of constituent messages across high-level categories like feedback, questions, and emotional expressions]

These broader high-level categories might prove insufficient. We might ask what types of questions are being asked, and for some very important questions, we might want to know exactly which question is being asked, for example, "Where can I buy it?" One approach we regularly use at Xyonix is to employ hierarchical label structures, or a label taxonomy. For example, for the referenced political corpus above, we might have a few entries like this:

  • suggestion/legislation_related_suggestion/healthcare_suggestion/include_public_option

  • question/legislation_related_question/healthcare_question/can_i_keep_my_doctor

  • feedback/performance_feedback/positive_feedback/you_are_doing_great

These hierarchical labels provide a few key advantages:

  • it can be easier to teach human annotators to label very granular categories

  • granular categories can be easily included under other taxonomical parents after labeling has commenced, thus preventing costly relabelling.

  • more granularity can result in very specific corresponding actions, like a bot replying to a question

Generating labels is often best done in conjunction with AI model construction. If AI models perform very badly at recognizing a select label, for example, it can often be a sign that the category is too broad or fuzzy. In this case, we may choose to tease out more granular and easily defined sub-categories.
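
To illustrate how the hierarchy pays off, here is a minimal sketch of rolling a granular label up to coarser taxonomical parents; the label strings are the hypothetical examples above, and the roll-up helper is our own illustrative assumption, not a Mayetrix API.

```python
# Minimal sketch: rolling granular hierarchical labels up to coarser parents.
# The label strings mirror the hypothetical political corpus above.
from collections import Counter

labels = [
    "suggestion/legislation_related_suggestion/healthcare_suggestion/include_public_option",
    "question/legislation_related_question/healthcare_question/can_i_keep_my_doctor",
    "feedback/performance_feedback/positive_feedback/you_are_doing_great",
    "question/legislation_related_question/healthcare_question/can_i_keep_my_doctor",
]

def rollup(label: str, depth: int) -> str:
    """Truncate a hierarchical label to its first `depth` levels."""
    return "/".join(label.split("/")[:depth])

# Count at the coarsest level (e.g. for a high-level dashboard) ...
print(Counter(rollup(l, 1) for l in labels))
# ... or at the most granular level (e.g. to drive a specific bot reply).
print(Counter(rollup(l, 4) for l in labels))
```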

In addition to defining labels, we of course also need actual examples that our AI models can learn from. The next question is how we select our examples, since human labeling is costly. Should we choose randomly, based on product or business priorities, or something more efficient? The reality is that not all examples are created equal. If a toddler is presented with hundreds of different types of chairs and told they are furniture, but never sees a table, then they’ll likely fail to identify a table as furniture. A similar thing happens with our models.

We need to present training examples that are illustrative of the target category but different from those the model has seen before.

This is why setting arbitrary numerical targets like "label 1 million randomly selected documents" is rarely optimal. One very powerful technique we use regularly at Xyonix, accelerated by our Mayetrix platform, is to create a tight feedback loop where mistakes the current model makes are identified by our human annotators and labeled correctly. The next model then learns from its prior mistakes and improves faster than if it were trained only on random examples. The models tell the humans what they “think”, and the humans tell the models when they are wrong. When our human annotators notice the machines making many of the same mistakes, they provide more examples in that area, much the way a teacher might tailor problem sets for a student. The overall result is a nice human/machine synergy. You can read about our data annotation platform or our annotation service if you wish to see how we label data at Xyonix.
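
One common form this feedback loop takes is uncertainty sampling: the current model scores an unlabeled pool, and the documents it is least confident about are routed to annotators first. The sketch below is a minimal illustration with placeholder data and a generic scikit-learn model, not the actual Mayetrix pipeline.

```python
# Minimal sketch of an uncertainty-sampling feedback loop: route the
# documents the current model is least sure about to human annotators.
# Placeholder data and models; not the actual Mayetrix pipeline.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_texts = ["you are doing great", "where can I buy it", "please vote no"]
labels = ["feedback", "question", "suggestion"]
unlabeled_texts = ["can I keep my doctor", "nice work last week", "thoughts on the bill?"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(labeled_texts)
model = LogisticRegression(max_iter=1000).fit(X, labels)

# Score the unlabeled pool; a lower max probability means less confidence.
probs = model.predict_proba(vectorizer.transform(unlabeled_texts))
confidence = probs.max(axis=1)
for idx in np.argsort(confidence):  # least confident documents first
    print(f"label me next ({confidence[idx]:.2f} confident): {unlabeled_texts[idx]}")
```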


Once we have sufficient training data, we can begin optimizing our AI models so they are more accurate. This requires a number of steps (a brief code sketch follows the list), like:

  • efficacy assessment: comparing how well each of the tasks above performs on a held-out test set (a set of examples the trained model has never seen)

  • model selection: selecting a model architecture, like a classical machine learning SVM or a more powerful, but more challenging to train, deep learning based model

  • model optimization: optimizing model types, parameters and hyper-parameters, in essence, teaching the AI to build the best AI system.

  • transfer learning: bootstrapping the AI from other, larger training example sets beyond what you are gathering for just your problem. For example, learning word and phrase meanings from Wikipedia or large collections of Twitter tweets.
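
Below is the promised sketch of the first two steps, efficacy assessment on a held-out test set and a simple model comparison, using generic scikit-learn components; the dataset and candidate models are placeholder assumptions.

```python
# Minimal sketch: hold out a test set the models never see, train a couple
# of candidate architectures, and compare their efficacy on that test set.
# Placeholder data and models, for illustration only.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

data = fetch_20newsgroups(subset="train", categories=["sci.med", "rec.autos"])
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=42)

vectorizer = TfidfVectorizer()
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

for name, model in [("svm", LinearSVC()), ("logreg", LogisticRegression(max_iter=1000))]:
    model.fit(X_train_vec, y_train)
    print(name)
    print(classification_report(y_test, model.predict(X_test_vec)))
```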

Finally, once models are built and deployed, there is the next step of aggregating insights from individual documents into a broader understanding of the overall conversation. At Xyonix, we typically employ a number of techniques, like aggregating and tracking mentions across time, different users, or various slices of the corpus. For example, in one project of ours, we built a system that measures the overall sentiment of peer reviewers toward a surgeon who has submitted a recent surgery for review. Telling the surgeon that 44% of their reviews expressed negative sentiment is one thing, but telling them that their score is 15% below the mean of peer surgeons is another, more valuable insight. Surgeons didn’t get where they are by being average, let alone below average, so they are more likely to move to correct the specific issues mentioned.
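
As a tiny illustration of that kind of aggregation, the sketch below computes a surgeon's negative-sentiment rate and its difference from a peer mean; all of the numbers are made up for illustration.

```python
# Tiny sketch: aggregate per-review sentiment into a surgeon-level score
# and compare it to the peer mean. All numbers are made up.
peer_negative_rates = {"surgeon_a": 0.29, "surgeon_b": 0.31, "surgeon_c": 0.27}

# Per-review sentiment for the surgeon under review (1 = negative, 0 = not).
reviews = [1, 0, 0, 1, 1, 0, 0, 1, 0]
my_rate = sum(reviews) / len(reviews)
peer_mean = sum(peer_negative_rates.values()) / len(peer_negative_rates)

print(f"negative sentiment rate: {my_rate:.0%}")          # e.g. 44%
print(f"difference from peer mean: {(my_rate - peer_mean):+.0%}")  # e.g. +15%
```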

Understanding conversations in depth automatically is a significant endeavor. One key to success we’ve found is looking well beyond just AI model development: considering what the labels are, how they are structured, how they will be used, how you will improve them, how you will get training examples for the models, how the models’ weaknesses can be addressed, and, perhaps most importantly, how you will do all of these things over a timeline, with the AI models and the products using them always improving.

Have a corpus of people communicating with you, each other, or someone else? Having trouble automatically understanding the conversation? Contact us, we’ve taught machines to effectively read all kinds of content for all kinds of customers — we might be able to help.

Drones to Robot Farm Hands, AI Transforms Agriculture


Swarms of drones buzz overhead, while robotic vehicles plod across the landscape. Orbiting satellites capture high-resolution multi-spectral images of the vast scene below. Not a single human can be seen in the sprawling acres. Today’s agriculture is rapidly transforming into a high-tech enterprise that most 20th-century farmers would hardly recognize. It was only 100 years ago that farming transitioned from animal power to combustion engines. In the last 20 years, the global positioning system (GPS), electronic sensors, and other new tools have moved farming even further into a technological wonderland. And now, robots empowered with artificial intelligence can zap weeds with extraordinary precision, while other autonomous machines move with industrious efficiency across farms.

It is no secret that the global population is expected to rise to 9.7 billion by 2050. To meet expected food demand, global agricultural output needs to increase 70%; AI is helping make that goal possible (1). It is clear a change is coming: farms in the U.S. have seen an 86% decrease in labor force, even as the number of farms continues to rise (2). While today’s agricultural technologies and AI capabilities are evolving at a rapid rate, this evolution is just beginning. Factors such as climate change, an increasing population, and food security concerns have propelled the industry to seek more innovative approaches to ensure improving crop yields.

From detecting pests to predicting which crops will deliver the best returns, artificial intelligence can help humanity confront one of its biggest challenges: feeding an additional 2 billion people by 2050 without harming the planet.

AI is steadily emerging as an essential part of the agricultural industry’s technological evolution, including self-driving machinery and flying robots able to automatically survey and treat crops. AI is helping these machines work together, framing the future of fully automated agriculture. The purpose of all this high-tech gadgetry is optimization, from both economic and environmental standpoints. The goal is to apply the optimal amount of any input (water, fertilizer, pesticide, fuel, labor) only when and where it’s needed to efficiently produce high crop yields (3).


With AI bringing all components of agriculture together, we can discuss how autonomous machines and drones are driving the future of agriculture: a future where precision robots and drones work in concert to manage entire farms.

Autonomous machines can replace people performing laborious and endless tasks, such as hand-harvesting vegetables. These robots use sensor technologies, including machine vision that can detect things like the location and size of stalks/leaves to inform their mechanical processes.

In addition, the development of flying robots (drones) opens up the possibility that most field-crop scouting currently done by humans could be replaced. Many scouting tasks, such as scouting for crop pests, require someone to walk long distances in a field and turn over plant leaves to check for the presence or absence of insects. Researchers are developing technologies to enable such flying robots to scout without human involvement. An example of this is PEAT, a Berlin-based agricultural tech startup that has developed a deep learning application called Plantix to identify potential defects and nutrient deficiencies in plants and soil. Analysis is conducted using machine learning and software algorithms that correlate particular foliage patterns with certain soil defects, plant pests, and diseases (4). The image recognition app identifies possible defects in images captured by the user’s smartphone. Users are then provided with soil restoration techniques, tips, and other potential solutions, with a reported 95% accuracy.

Another company bringing AI to agriculture is Trace Genomics, which focuses on machine learning for diagnosing soil defects. The California-based company provides soil analysis services to farmers. The system uses machine learning to provide clients with a sense of their soil’s strengths and weaknesses. The system attempts to prevent defective crops and maximize healthy crop production. According to the company’s website,

after submitting a sample of their soil to Trace Genomics, users receive a summary of their soil’s contents. Services provided in their packages range from a pathogen screening focused on bacteria and fungi to a comprehensive microbial evaluation (5).

These autonomous robots combined with drones will define the future of AI in agriculture, while AI and machine learning models help ensure the future of crops from the root up.


It will take more than an army of robotic tractors to grow and harvest a successful crop. In the next 10 years, the agricultural drone industry will generate 100,000 jobs in the U.S. and $82 billion in economic activity, according to Bank of America Merrill Lynch Global Research (6).

From spotting leaks to patrolling for pathogens, drones are taking up chores on the farm. While the presence of drones in agriculture dates back to the 1980s for crop dusting in Japan, the farms of the future will rely on machine learning models that guide the drones, satellites, and other airborne devices providing data about their crops on the ground.

As farmers try to adapt to climate change and other factors, drones promise to help make the entire farming enterprise more efficient. For instance, Descartes Labs is employing machine learning to analyze satellite imagery and forecast soy and corn yields. The New Mexico startup collects 5 terabytes of data every day from multiple satellite constellations, including those of NASA and the European Space Agency (7). Combined with weather readings and other real-time inputs, Descartes Labs reports it can predict cornfield yields with high accuracy. Its AI platform can even assess crop health from infrared readings.

With the market for drones in agriculture projected to reach $480 million by 2027 (8), companies are also looking to bring drone technology to specific vertical areas of agriculture. VineView, for example, is bringing drones to vineyards. The company aims to help farmers improve crop yield and reduce costs (9). A farmer pre-programs a drone’s route, and once deployed, the drone leverages computer vision to record images that are used for later analysis.

VineView analyzes captured imagery to provide a detailed report on the health of the vineyard, specifically the condition of grapevine leaves. Since grapevine leaves are often telltales for grapevine diseases (such as molds and bacteria), reading the “health” of the leaves is often a good indicator for understanding the health of the plants and their fruit as a whole.

The company declares that its technology can scan 50 acres in 24 minutes and provide data analysis with high accuracy (10). This aerial imaging, combined with AI techniques and machine learning platforms, is the start of something being referred to as “precision agriculture”.

Precision agriculture (PA) is an approach to farm management that uses information technology to ensure that crops and soil receive exactly what they need for optimum health and productivity. The goal of PA is to ensure profitability, sustainability and environmental protection. Since insecticide, for example, only goes exactly where it is needed, environmental runoff is markedly reduced.

Precision agriculture requires three things to be successful: physical tools such as tractors and drones, site-specific information acquired by these machines, and the ability to understand and make decisions based on that site-specific information.

Decision-making is often aided by AI-based computer models that mathematically and statistically analyze relationships between variables like soil fertility and crop yield. Self-driving machinery and flying robots able to automatically survey and treat crops will become commonplace on farms that practice precision agriculture. Other examples of PA involve varying the rate of planting seeds in the field according to soil type and using AI analysis and sensors to identify the presence of weeds, diseases, or insects so that pesticides can be applied only where needed. The Food and Agriculture Organization of the United Nations estimates that 20 to 40 percent of global crop yields are lost each year to pests and diseases, despite the application of millions of tons of pesticides, so finding more productive and sustainable farming methods will benefit billions of people (11).
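
As a toy illustration of the kind of relationship such models learn, the sketch below fits a simple regression relating soil variables to yield; the data is fabricated purely for illustration and does not come from any real farm.

```python
# Toy sketch: relate soil variables (fertility index, moisture) to crop yield
# with a simple regression. All numbers are fabricated for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

# columns: [soil fertility index, soil moisture %]
soil = np.array([[0.61, 22], [0.74, 31], [0.55, 18], [0.82, 35], [0.69, 27]])
yield_tons_per_ha = np.array([5.1, 6.8, 4.3, 7.6, 6.0])

model = LinearRegression().fit(soil, yield_tons_per_ha)
new_field = np.array([[0.70, 29]])
print("predicted yield (t/ha):", model.predict(new_field)[0])
```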


Deere & Company recently announced it would acquire a startup called Blue River Technology for a reported $305 million. Blue River has developed a “see-and-spray” system that leverages computer vision, a technology we here at Xyonix deploy regularly, to discriminate between crops and weeds. It hits the former with fertilizer and blasts the latter with herbicides with such precision that it is able to eliminate 90 percent of the chemicals used in conventional agriculture.

It’s not just farmland that’s getting a helping hand from robots and artificial intelligence. A California company called Abundant Robotics, spun out of the nonprofit research institute SRI International, is developing robots capable of picking apples with vacuum-like arms that suck the fruit straight off the trees in the orchards (12). Iron Ox, out of San Francisco, is developing one-acre urban greenhouses that will be operated by robots and reportedly capable of producing the equivalent of 30 acres of farmland. Powered by artificial intelligence, a team of three robots will run the entire operation of planting, nurturing, and harvesting the crops (13).

Vertical farming startup Plenty, also based in San Francisco, uses AI to automate its operations, and got a $200 million vote of confidence from the SoftBank Vision Fund earlier this year. The company claims its system uses only 1 percent of the water consumed in conventional agriculture while producing 350 times as much produce (14). Plenty is part of a new crop of urban-oriented farms, including Bowery Farming and AeroFarms.

Agricultural production has come so far in even the past couple decades that it’s hard to imagine what it will look like in a few more. But the pace of high-tech innovations in agriculture is only expanding.

Don’t be surprised if, 10 years from now, you drive down a rural highway and see small helicopters flying over a field, descending into the crop, using robotic grippers to manipulate leaves and cameras with machine vision to look for insects, then rising back above the crop canopy and heading toward their next location. All without a human being in sight.

So what is in store for the future? Farmers can expect that in the near future their drones and robots will have the AI capabilities to handle everything from crop assessment and cattle counting to crop disease monitoring, water monitoring, and mechanical pollination.

Have agriculture data? Multi-spectral aerial imagery? Operational farm data? Need help mining your data with AI to glean insights? CONTACT us -- we might be able to help.

REFERENCES

  1. johndeerejournal.com/2016/03/agricultures-past-present-and-future/

  2. croplife.org/news/agriculture-then-and-now/

  3. theconversation.com/farmers-of-the-future-will-utilize-drones-robots-and-gps-37739

  4. peat.technology

  5. tracegenomics.com

  6. www.idtechex.com/research/reports/agricultural-robots-and-drones-2018-2038-technologies-markets-and-players

  7. www.crunchbase.com/organization/descartes-labs

  8. prnewswire.com/news-releases/agricultural-robots-and-drones-2017-2027-technologies-markets--players---agricultural-drone-market-to-be-worth-480-million---research-and-markets

  9. www.vineview.ca

  10. www.techemergence.com/ai-agriculture-present-applications-impact/

  11. www.technologyreview.com/s/610549/exclusive-alphabet-x-is-exploring-new-ways-to-use-ai-in-food-production

  12. singularityhub.com/2017/10/30/the-farms-of-the-future-will-run-on-ai-and-robots/#sm.00001idn7rzx17d24xg1212od7vm5

  13. ironox.com

  14. plenty.ag

Helping At Home Healthcare Patients with Artificial Intelligence


Until very recently, a caregiving parent possessed few at home health tools beyond a simple thermometer. Then, as the internet developed, so too did online healthcare sites such as WebMD, offering another very powerful tool — information. At home health tools continue to rapidly undergo massive changes, and now it’s AI leading the way. Today a parent can look inside their child’s ear and receive help treating an ear infection, or an elderly person can conduct their own hearing test without ever leaving the house, and often, with intelligent machines operating behind the scenes. Increasingly smart at home health devices are evolving through the rapid proliferation of AI and the increased embrace of digital medicine. These new tools include devices like smart stethoscopes that automatically detect heartbeat abnormalities or AI powered otoscopes that can look in a person’s ear and detect an infection.

Imagine a world where at-home AI healthcare tools get smarter and more able to heal you every day. These tools are incredibly data driven: they continuously collect data from your body, your environment, your nutrition, and your activity, and their algorithms continuously learn from this data, not just from you, but from millions of other patients and from the doctors who know how to make sense of this information.

These AI tools will then deliver personalized healthcare tips and remediation throughout your whole life. Perhaps one day without you having to set foot in a brick and mortar hospital.

AI can help wherever the care provider is identifying patterns, for example whenever a physician identifies the acoustic pattern of a heart murmur, the visual pattern of an ear infection, or the contours and shapes of a cancerous skin lesion.

What if AI could help you or a doctor predict a deteriorating heart condition? “If you can go to the hospital and say, ‘I’m about to have a heart attack,’ and you have proof from an FDA-approved product, it is less costly to treat you,” said author and ABI Principal Analyst Pierce Owen (1). Other at-home healthcare tools are becoming smarter every day: EEG headbands that monitor your workouts and vitals; smart beds and devices such as EarlySense that detect movement in your sleep and give you detailed, data-driven reports on a variety of vitals, including how much sleep (and deep sleep) you are actually getting; and smart baby monitors that allow parents to monitor newborn vitals (2)(3)(4).

One significant way AI at-home healthcare is taking off is by helping parents with young children.

Parents can never get answers quickly enough when something is wrong with their child. So what if they never even had to drive to the doctor’s office?

According to the National Institute on Deafness and Other Communication Disorders (NIDCD), 5 out of 6 children experience ear infections by the time they are 3 years old. That’s nearly 30 million trips to the doctor’s office a year just for ear infections in the U.S. alone. Additionally, ear infections cost the U.S. health system $3 billion per year.


This is where companies like Cellscope step in. A pioneer in the otoscope industry, Cellscope has had success launching its otoscope, Oto Home. Oto Home is a small smartphone peripheral that slides onto the user’s iPhone, accompanied by an app. Once the scope is inside the child’s or patient’s ear, the app’s software recognition feature, called the Eardrum Finder, directs the user to move and tilt the scope to capture the visuals a physician will need to attempt a diagnosis. After the session, the user enters some basic information about the patient, and both the recording and the information are sent to a remote physician who reviews the data and, if necessary, can prescribe medication (5). This same image used by the remote physician can be used by an artificial intelligence system to assist the physician with a diagnosis. The use of the AI system can decrease the cost of more expensive tests, in addition to identifying more refined possible diagnoses.

AI in healthcare can now also detect heartbeat abnormalities that the human ear cannot always detect. Steth IO’s tagline captures the company’s premise exactly: “see what you cannot hear”. One study found that doctors across three countries could only detect abnormal heart sounds about 20% of the time (6).

By using thousands of various heartbeat sounds, our Xyonix data scientists trained the Steth IO AI tool to “learn” how to tell which sounds are out of the norm. After the system takes in the encrypted and anonymized heartbeat recordings, it sends back a classification like “normal” or “murmur” to assist the physician in their diagnosis.
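
For readers curious what such a pipeline can look like, below is a generic sketch of a heartbeat-sound classifier (MFCC features plus a simple classifier). It is an illustration only, not the actual Steth IO system, and the file paths and labels are placeholders.

```python
# Generic illustration of a heartbeat-sound classifier: extract MFCC features
# from audio recordings and train a simple classifier to separate "normal"
# from "murmur". Not the actual Steth IO system; paths and labels are placeholders.
import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(path: str) -> np.ndarray:
    """Summarize a recording as its mean MFCC vector across time."""
    audio, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical labeled recordings.
training = [("normal_01.wav", "normal"), ("murmur_01.wav", "murmur"),
            ("normal_02.wav", "normal"), ("murmur_02.wav", "murmur")]

X = np.array([features(path) for path, _ in training])
y = [label for _, label in training]
clf = RandomForestClassifier(n_estimators=100).fit(X, y)

print(clf.predict([features("new_patient.wav")]))  # e.g. ["murmur"]
```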

Since patients can see and hear their own heart and lung sounds, increased patient engagement is another bonus for physicians. Steth IO also differentiates itself from other emerging AI healthcare tools by integrating the bell of the stethoscope directly into the iPhone, so there is no need for Bluetooth or pairing, and it displays all results in real time (8).

While this is currently only operated by physicians, as the at-home healthcare space rapidly grows, we expect to see similar heartbeat abnormality detection capabilities tailored for at-home use, so that you can check the health of yourself and your loved ones.


Virtual, AI-driven healthcare systems are also quickly making their way into people’s homes. Take for example HealthTap, which brings quality medical service to people around the world who lack the ability to pay. How it works: patients receive a free consultation via video, voice, or text. Then,

“Dr. A.I.”, their new artificial intelligence powered “physician”, converses with the patient to identify the key issues and worries the patient is having. Dr. A.I. then uses general information about the patient and applies deep learning algorithms to assess their symptoms and apply clinical expertise

that attempts to direct the user to an appropriate type and scale of care. (9)

Dr. A.I. isn’t the only new AI that can give you healthcare advice from the comfort of your home. CareAngel launched its AI virtual nurse assistant, Angel, with the goal of reducing hospital readmissions by continuously giving medical advice and reminders between discharge and doctor visits. Healthcare providers can also use Angel to check in on patients, support medication adherence, and check their patients’ vitals (10). Ultimately this AI technology strives to significantly reduce the administrative and operational costs of nurse and call center outreach.

In a world where healthcare is straining under rising costs, the emergence of innovations in AI and digital health is expected to redefine how people seek care and how physicians operate. The goals and visions of most emerging health companies are currently simple:

allow new suppliers and providers into the healthcare ecosystem, empower the patient and provider with real-time data and connection, and take on lowering general and long-term healthcare costs.

While healthcare has always been patient centered, AI is taking patients from a world of episodic, in-clinic interactions to more regular, on-demand, in-home care provider/patient interaction.

Trying to make your medical device smarter? Need Help with Your AI Needs? CONTACT us -- we might be able to help.

References

  1. https://homehealthcarenews.com/2018/06/explosion-in-artificial-intelligence-coming-for-home-care-and-hospitals/

  2. https://brainbit.com

  3. https://www.bloomberg.com/research/stocks/private/snapshot.asp?privcapId=22743939

  4. https://www.engadget.com/2017/08/10/nanit-ai-baby-monitor-impressions/

  5. https://www.mobihealthnews.com/38969/cellscopes-iphone-enabled-otoscope-remote-consultation-service-launches-for-ca-parents

  6. https://www.geekwire.com/2018/smartphone-stethoscope-maker-steth-io-launches-ai-assistant-help-doctors-detect-heart-problems/

  7. https://exponential.singularityu.org/medicine/wp-content/uploads/sites/5/2018/11/Steth-IO-uses-AI-to-improve-physicians’-confidence-in-their-diagnoses.pdf

  8. https://www.dr-hempel-network.com/digital-health-technolgy/smartphone-based-digital-stethoscope/

  9. https://hitconsultant.net/2017/01/10/healthtap-launches-doctor-a-i/

  10. https://www.crunchbase.com/organization/care-angel#section-overview

Vannot - Video Annotation Tool for Object Segmentation

A significant challenge in teaching machines to automatically analyze, understand and glean object related insights from video is how to efficiently and accurately prepare large amounts of examples used to train and evaluate models. With frame rates around 30 to 60 fps, accurately labelling objects in even small time spans of video can be extremely time consuming and expensive.


Today, we have the pleasure of introducing you to Vannot—an open source, web based, and easy to integrate video annotation tool we created to help efficiently annotate objects for use in machine learning tasks like video segmentation and imagery quantification. Vannot takes advantage of the relative similarity of nearby frames to enable efficient object annotation in a web context with geographically distributed labelers.

We took inspiration from some of the industry’s most venerable drawing and illustration applications, and reframed them in close consideration of the workflow processes involved with annotating a large amount of video data. It is easy, for example, to advance a few frames or seconds and carry over the most recent shapes and annotations, so that all you have to do each time is make a few small adjustments. More advanced features are available, as well: it's possible to group adjacent or disjoint shapes into the same instance if, for example, an object is composed of many parts or is obscured behind some interloper.


We're interested and excited for you to use Vannot in your own efforts, and hopefully to contribute back to this free and open source project. We've strived to make it very easy to integrate — Vannot is just a webpage: HTML, CSS, and Javascript. You configure it with the URL you use to load the page. More information on using, integrating and developing Vannot can be found on GitHub at github.com/xyonix/vannot.


Have a look at the video below to see Vannot in action. In this video, Vannot designer and sailor Clint Tseng walks through the preparation of sailing-related training data, like hull, jib, and mainsail object segmentation.

10 Ways AI is Doing Good & Improving the World

If you're paying attention to the tech media, you've probably heard a lot of the doomsday prophecies around artificial intelligence. A lot of it is scary, but despite some valid concerns, AI is doing a lot of good.

Medical treatment, reduced traffic jams, faster disaster recovery, and safer communities –  it’s all coming your way, thanks to the tremendous power of neural networks.

Check out this list of pioneering technologies for good.

1. Fighting Deforestation – AI for the Environment

For years we've talked about preserving precious natural habitats, yet real progress always seemed to be just out of reach. Unsustainable logging and widespread deforestation have devastated pristine natural spaces, and too often, we feel like there's little we can do about it.


New artificial intelligence tools are helping. They are increasingly used to identify vulnerable landscapes, so that environmental programs can direct attention toward preservation.

Rainforest Connection, a San Francisco-based nonprofit, configures old smartphones to monitor sounds and installs them in rainforests. The sound data is used to train machine learning algorithms to identify the threatening sound of a chainsaw. Park rangers are alerted to suspicious activity in real time, helping to stop illegal deforestation.

By quickly identifying signs of deforestation, government and environmental agencies are better informed on forest locations at immediate risk; they can then react, often by adjusting enforcement, regulations and penalties.

2. Accessing our Past

New document surveying technologies leverage artificial intelligence to help us make sense of enormous amounts of data in both historical and government documents. Machine learning and artificial intelligence are revolutionizing the process of curation.

AI document tools allow historians and curators to take a more ‘hands-off’ approach to assessing large volumes of information, and make artifacts easily accessible to people.

Countless historical handwritten documents sit on library shelves around the globe today, not readily accessible to researchers and academics in all countries. Digitizing these documents is the first step to making these records available. But it is impractical for a person to search through the immense catalog of information. That is, without machine learning.

A Berlin startup, omni:us, is training neural networks to generate transcriptions of word images in documents from a collection of over a billion documents digitized from libraries all over the world!

Traditional techniques, like individual researchers reading documents, are failing to keep up with the exponential increase in new documents. AI systems that read like humans help give us a big picture of what's in a large corpus, or body of documents. Neural networks are increasingly used to extract high-level information (such as subject matter) and temporal changes, like how people, organizations and places interact over time. This extracted information enables effective search, organization and understanding of what are often billions of records.

3. Easing the Strain of Mental Health

Studies have found that around one fifth of all Americans have some form of mental health problem or need mental health services in any given year. So how do we attack this epidemic and develop meaningful solutions through technology?


Can you get better therapy from a smart robot? Understanding the value of artificial intelligence here involves looking at how these systems function.

Back in the 1960s, the first chatbot, named Eliza, was developed at MIT. Eliza was designed to act as a “Rogerian psychologist,” taking in conversation and mirroring some of what patients say back to them.

The code for ELIZA was not sophisticated. This artifact from Vintage Computer shows the program written in BASIC: it’s the classic simple chatbot, reading user input, applying some simple rules, and continuing the conversation with a reply to the user. Despite this simplicity, Eliza proved mesmerizing to users.
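
To show just how little machinery is involved, here is a tiny ELIZA-style loop rendered in Python rather than the original BASIC; the handful of rules are placeholders, not Weizenbaum's actual script.

```python
# A tiny ELIZA-style chatbot: read user input, apply a few pattern rules,
# and reflect a reply back. A toy rendition of the idea, not Weizenbaum's
# original program; the rules below are placeholders.
import re

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def reply(user_input: str) -> str:
    text = user_input.lower().strip()
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # default reflection when no rule matches

while True:
    line = input("> ")
    if line in ("quit", "exit"):
        break
    print(reply(line))
```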

So if something as simple as ELIZA could engage a user in conversation, how far can new machine learning tools take the conversation?

Now, state-of-the-art chatbots like Andrew Ng’s ‘Woebot’ are offering cognitive behavioral therapy through improved conversational understanding. The chatbot uses natural language processing technology to process what the patient says and prompts them to talk through their feelings and apply coping skills, such as rephrasing a negative statement in a more positive light.


This type of technology may be used in conjunction with seeing mental health professionals; perhaps people too reluctant to see a human therapist will be more open to “seeing” a virtual therapist.

4. Hacking Crop Yields

Artificial intelligence is helping us adapt to our changing world by examining crop yields around the world. Algorithmic crop yield tools can pinpoint crop projections with noteworthy accuracy. In this study from Stanford, we see remote-sensing data run through a convolutional neural network to provide a crop yield map that, when tested, produces excellent results.

AI is used to show us where the land is most fertile, where dangerous conditions might exist, to forecast crop yields, and ultimately tell us where to plant crops. And it’s all contributing to feeding our planet.

5. Automated Harvests

Agriculture is vitally important for our world, and natural foods are important for our health. Maintaining a low cost abundant food supply is essential for feeding humanity during consistent population increases.

Farmers are now using machine learning tools and robotics to help reduce the amount of fruit and vegetables that go to waste in the fields.

We've all heard sad stories of fruit rotting on the vine: grapes, apples, and other crops remaining unpicked, often due to labor shortages. We rely on foreign labor for much of our harvesting; this may prove unsustainable in the long run.


Agricultural robots like those from HarvestCroo use intelligent computer vision algorithms to automate the picking of fruits and vegetables. Also, consider Blue River’s “See and Spray” technology. See and Spray uses computer vision to provide individualized plant care, doing away with the technique of broadcast spraying chemicals in the crop fields. The new technology avoids spraying the actual crops and reduces the volume of herbicides used by 90%. It is optimizing the application of herbicides, and at the same time tackling the growing problem of weed resistance to herbicides.

Feeding the world's increasing population is a challenge -- a challenge that AI technologies are helping address.

6. AI in Transportation

We all know about autonomous vehicles, but what about traffic management? Artificial intelligence is contributing to many lesser-known advances in the transportation field.

Let's start with smart traffic lights. If you're a municipal planner, you know that traffic lights cost a lot of money to put in place – and you know how important they are for public health and safety, as well as keeping traffic moving.

Public planners see traffic as a kind of “biological” process – much like blood circulation in our bodies, traffic needs to move smoothly for a healthy road network.

Smart traffic lights go a long way toward delivering that overall health and productivity in our daily lives by reducing traffic congestion, waiting time at intersections, and the resulting pollution. Companies like Surtrac produce artificial-intelligence-driven adaptive traffic lights that respond to changing traffic conditions by the second. Sitting in traffic jams might someday be a thing of the past.

7. Training and Therapy for the Disabled

AI is also showing promise in enhancing the lives of patients with disabilities. For example, robotic technology is helping children overcome some of the traditional limitations that go along with cerebral palsy. MIT News illustrates some of the ground-breaking robotics at work.

Therapy for CP is typically a slow process. A lot of cerebral palsy patients need more therapy than they are getting; they need more hours of training to improve particular muscle movements and range of motion. Therapy is expensive, and a shortage of therapists exacerbates this problem.

The “Darwin” robot, made by scientists at the Georgia Institute of Technology, explores an alternative. The robot interacts with patients to help them improve their mobility over time.

Like the modern mental therapy chatbots discussed above, Darwin takes in inputs and doles out praise for positive work. The difference is that here, Darwin’s not looking through a text lexicon to interpret what someone is thinking – the cerebral palsy therapy robot is looking for specific body movements that are indicative of patient progress.

AI holds potential for training and healing our minds and bodies.

Perhaps this is why much AI research is devoted to advancing healthcare, and why so many healthcare professionals are excited.

8. Fighting Crime

We've already talked about some of the aspects of “smart cities”. Here’s another that’s on the rise: smart policing.

AI tools can be used to serve as extra eyes and brains for police departments. Law enforcement officers around the country are readily accepting all the technology help they can get.

If you've seen the television show APB, where billionaire Gideon Reeves astounds the local police department with his crime prevention app, you already know a little bit about how this might work.

Companies like Predpol, the “Predictive Policing Company,” offer predictive policing tools with similar goals. Predpol decreases response times, relieves police officers of overtime shifts, and has been shown to actually reduce crime totals in municipalities.

Technologies like Predpol use ‘event data sets’ to train algorithms to predict what areas may need more police coverage in the future. The company stresses that no personal information is used in the process – and, Predpol doesn't use demographic information either.

Scrubbing these systems of demographic input helps to prevent the kind of discrimination and bias that makes people wary of using AI to “judge” people. In fact, typically they work knowing just the location, type and time of past crimes.

Predictive policing is just one aspect of public administration, among many others, that benefits from an AI approach.

9. Improving Education

It’s evident that education has changed significantly over the past 30 years. Education has moved from lecture-focused teaching to interactive, hands-on learning experiences, and from physical documents to digital documents and interactive software programs. There’s been a change from a few monolithic teaching modalities to a vast world of innovative learning opportunities.


Artificial intelligence is helping drive this change.

Consider Brainly, a platform billed as the world's largest social learning community, which connects millions of students from 35 countries by facilitating peer-to-peer learning. The on-demand educational value of the platform is driven by algorithms that sort through a mass of data, filter content, and present it where it’s most useful.

Also, check out how Thinkster Math is using groundbreaking AI to personalize math education. Tools like these are available for use in the classroom and at home.

10. Disaster Recovery

ML/AI advances provide insights into resource needs and predict where and what the next disaster might be, ultimately enabling more effective damage control.

Consider this article from Becoming Human where you can see “disaster recovery robots working overtime,” a combination of surveillance drones and mobile rescue robots helping with the aftermath of forest fires, earthquakes and other natural disasters.

In addition, companies like Unitrends are pioneering systems that can help tell whether an event is really a disaster, or not. New AI technology can evaluate something like a power outage or downtime event to see whether it “looks like” a crisis or is just a fluke.

All of this can prove critically important when it comes to saving lives and minimizing the tragic damage that storms and other natural disasters cause.

More to Come

All of this shows that AI is indeed benefiting our world. The downsides of Terminator-like nightmares are mostly hypothetical, but the upsides are already a reality.

Need help with an AI project that makes the world better? CONTACT us, we might be able to help.