9 Revolutionary applications of AI that will make you re-think Machine Learning

In the past few years, the popularity of all things AI has really taken off. This past December, the Conference on Neural Information Processing Systems – one of the most popular AI conferences – sold out within 11 minutes. Quite a difference from two years prior, when the conference took 2 weeks to sell out. On Twitter, people started comparing AI conferences to a Beyoncé concert!

This points to a general trend: the field is growing not only in the number of researchers, but also in the number of industry practitioners and members of the general public interested in what is going on in the world of AI and machine learning. As another point of reference, the AAAI Conference (Association for the Advancement of Artificial Intelligence), one of the oldest and most influential AI conferences, received a record number of submissions this year. With 7,095 submissions, the numbers nearly doubled from last year. Overall, 1,150 papers were accepted, which is not a great increase from previous years, meaning that only about 16% of submissions made it in.

This year at AAAI, which also happened to be in beautiful Hawaii, BotXO had a chance to present some of the research we are doing in the field of dialogue systems, and to have great discussions with other researchers about how to tackle problems that are of scientific interest to many researchers but also of great interest to industry practitioners. AAAI – co-located with IAAI (Innovative Applications of AI) and EAAI (Educational Advances in AI) – was a great place to exchange ideas and learn about cutting-edge research, and not only about dialogue. Many applications dealt with robotics, dialogue systems, and assistive technologies, but there was also a great deal of research applying machine learning to the social sciences, medicine, and many other areas. Below we describe some of the work that stood out to us! As a disclaimer, we must say that with 1,150 accepted papers there were many interesting topics, and we had a hard time picking just a few!

PhoneMD: Learning to Diagnose Parkinson’s Disease from Smartphone Data 

Patrick Schwab, Walter Karlen 

Parkinson’s disease is the second most common age-related neurodegenerative disease, after Alzheimer’s disease. According to the Parkinson’s Foundation, healthcare costs related to the disease in the United States alone total about 25 billion dollars per year. Quality of life for Parkinson’s patients relies heavily on how early treatment starts, which in turn requires a correct diagnosis. Motivated by this, the authors present a machine learning approach to tackling the misdiagnosis of Parkinson’s disease. They collect smartphone data from 1,853 participants with and without Parkinson’s disease – specifically, signals from walking, voice, tapping and memory tests gathered through a smartphone application. Since Parkinson’s is a progressive neurodegenerative disease, they exploit the temporal nature of the data in order to detect it. In addition, once someone is flagged by this method, the authors use a hierarchical neural attention model to identify which of the four tests contributed most to the prediction, helping clinicians assess whether the assessment is valid.
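To make the attention part a bit more concrete, here is a minimal sketch of the general idea – attending over embeddings of four smartphone tests so that the attention weights hint at which test drove the prediction. This is our own toy illustration, not the authors' PhoneMD model; every layer size and feature dimension below is an assumption.

```python
# Toy illustration (not the PhoneMD architecture): attend over per-test embeddings
# so the attention weights indicate which test drove the prediction.
import torch
import torch.nn as nn

class TestLevelAttention(nn.Module):
    def __init__(self, embed_dim: int = 32, num_tests: int = 4):
        super().__init__()
        # One tiny encoder per test (walking, voice, tapping, memory), assumed to
        # receive a fixed-size summary of that test's sensor signal.
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(16, embed_dim), nn.ReLU()) for _ in range(num_tests)]
        )
        self.attn = nn.Linear(embed_dim, 1)        # scores each test embedding
        self.classifier = nn.Linear(embed_dim, 1)  # Parkinson's vs. not

    def forward(self, test_features):  # list of 4 tensors, each (batch, 16)
        embeds = torch.stack(
            [enc(x) for enc, x in zip(self.encoders, test_features)], dim=1
        )
        weights = torch.softmax(self.attn(embeds).squeeze(-1), dim=1)  # (batch, 4)
        pooled = (weights.unsqueeze(-1) * embeds).sum(dim=1)
        return torch.sigmoid(self.classifier(pooled)), weights

model = TestLevelAttention()
fake_tests = [torch.randn(8, 16) for _ in range(4)]
prob, attn = model(fake_tests)
print(prob.shape, attn.shape)  # attention weights hint at which test mattered most
```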

Ensemble Machine Learning for Estimating Fetal Weight at Varying Gestational Age 

Yu Lu, Xi Zhang, Xianghua Fu, Fangxiong Chen, Kelvin K. L. Wong

Intrauterine growth restriction, which means that an unborn baby is not growing at a normal rate, can put the baby at risk throughout pregnancy, during delivery and after birth. Not growing at a normal rate can lead to low resistance to infections, trouble maintaining body temperature, or even death during delivery. The authors are motivated by the impact machine learning can have in helping obstetricians better estimate the weight of a fetus. This is not meant to replace traditional clinical practices but to work alongside them for an improved prediction, which can reduce prenatal morbidity and mortality. The approach takes data from intrapartum recordings and combines simple machine learning algorithms into an ensemble model. Overall, they show that the accuracy of their predictions improves by a significant 12%.
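For readers curious about what an ensemble of simple regressors looks like in practice, here is a rough, hypothetical sketch using scikit-learn with synthetic data; the features and base models are placeholders, not the pipeline from the paper.

```python
# Generic ensemble-regression sketch with synthetic data (not the paper's pipeline).
import numpy as np
from sklearn.ensemble import VotingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR

# Hypothetical features: gestational age plus a few ultrasound measurements.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 3000 + 200 * X[:, 0] + rng.normal(scale=150, size=200)  # synthetic weights in grams

ensemble = VotingRegressor([
    ("ridge", Ridge()),
    ("svr", SVR()),
    ("forest", RandomForestRegressor(n_estimators=50, random_state=0)),
])
ensemble.fit(X[:150], y[:150])
print("Predicted fetal weights (g):", ensemble.predict(X[150:155]).round())
```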

Feature Isolation for Hypothesis Testing in Retinal Imaging: An Ischemic Stroke Prediction Case Study 

Gilbert Lim, Zhan Wei Lim, Dejiang Xu, Daniel S.W. Ting, Tien Yin Wong, Mong Li Lee, Wynne Hsu 

An ischemic stroke happens when the arteries that supply blood to the brain are blocked. This can be caused by the formation of a blood clot within the brain, commonly known as a thrombotic stroke. According to the Stroke Center organization, about 88 percent of all strokes are ischemic strokes. It is also a disease that tends to affect more women than men, and it does not discriminate by age. The research presented here aims to exploit the knowledge of the medical community when it comes to predicting an individual's risk of ischemic stroke. The authors focus on retinal blood vessels and the features that correlate with cerebral blood circulation. They explore ways of isolating important features from retinal images and attempt to use deep neural networks to predict ischemic strokes. In general, this proved to be a difficult task, as the models struggled to generalize to new images; still, it sheds light on a task worth exploring further.
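As a rough illustration only: a tiny convolutional network that takes a (vessel-segmented) retinal image and outputs a stroke-risk probability. It says nothing about the paper's actual feature-isolation procedure; the input format and architecture are assumptions.

```python
# Toy CNN sketch: image in, stroke-risk probability out (not the paper's model).
import torch
import torch.nn as nn

class RetinalRiskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, x):  # x: (batch, 1, H, W), e.g. a vessel-segmentation mask
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

net = RetinalRiskNet()
print(net(torch.randn(4, 1, 64, 64)).shape)  # one risk probability per image
```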

Crash to Not Crash: Learn to Identify Dangerous Vehicles Using a Simulator 

Hoon Kim, Kangwook Lee, Gyeongjo Hwang, Changho Suh

The Insurance Institute for Highway Safety reports that car crashes claimed 37,133 lives in the U.S. in 2017; globally, the number reaches into the millions. One way of preventing many of these deaths is through warning systems that tell the driver when there is a danger of collision. However, developing such a system requires collecting a great amount of data, which is very difficult to do in the real world, and annotating the data accurately is a challenge in itself. The authors take this as motivation and tackle the challenge of collecting a large amount of collision data by building a synthetic data generator on top of a driving simulator. Specifically, they manipulate internal functions of the game Grand Theft Auto (which many of us have played!) in order to create accident versus non-accident scenes. They find that labels created through simulation are quite noisy; even so, their models reduce missed detections by 18% compared to models trained with real-world data.
 

Cooperative Multimodal Approach to Depression Detection in Twitter  

Tao Gui, Liang Zhu, Qi Zhang, Minlong Peng, Xu Zhou, Keyu Ding, Zhigang Chen

Although detecting depression with deep learning and Twitter data is not a new idea, it remains a complex task that still needs solving. According to the World Health Organization, about 20% of children and teenagers experience mental illnesses. In addition, statistics reported by the American Academy of Child and Adolescent Psychiatry show that suicide is the second leading cause of death for individuals aged 5-24 in the United States. With the rise of social media, we have often heard that there is a causal link between social media use and feelings of depression and loneliness. These feelings can be expressed in complex ways through posts, images, comments, etc. The authors point out that words alone can be deceiving, and that combining textual and visual cues might be the best way to understand what someone is really saying. In this study, they use text and images from users' Twitter posts to detect depression. More specifically, they employ a reinforcement learning method in which the system learns to select the text and images that serve as indicator posts, i.e. those that lead to a greater reward, or greater detection accuracy. The model, of course, assumes that the user's diagnosis is already known, which is not typically the case; still, through their method they are able to identify the features most indicative of depression, which can be useful for future research in this direction.
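To give a flavor of the "select indicator posts, reward correct detection" idea, here is a heavily simplified REINFORCE-style toy in PyTorch. It is not the authors' cooperative multimodal model: the post embeddings, reward and selection scheme below are all assumptions, and the data is random.

```python
# Toy post-selection loop: a selector scores each post embedding, a subset is
# sampled, and the selector is nudged toward subsets that classify the user correctly.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
selector = nn.Linear(32, 1)      # scores a fused text+image embedding per post
classifier = nn.Linear(32, 1)    # user-level depression classifier
opt = torch.optim.Adam(list(selector.parameters()) + list(classifier.parameters()), lr=1e-2)

posts = torch.randn(20, 32)      # one user's posts (placeholder embeddings)
label = torch.tensor([1.0])      # diagnosis assumed known, as in the paper

for step in range(200):
    probs = torch.sigmoid(selector(posts)).squeeze(-1)       # selection probabilities
    picks = torch.bernoulli(probs).detach()                  # sample a subset of posts
    pooled = (picks.unsqueeze(-1) * posts).sum(0) / picks.sum().clamp(min=1.0)
    pred = torch.sigmoid(classifier(pooled))                 # depression probability
    reward = float(1.0 - (pred.detach() - label).abs())      # closer to the label -> higher reward
    log_prob = (picks * (probs + 1e-8).log() + (1 - picks) * (1 - probs + 1e-8).log()).sum()
    loss = -reward * log_prob + F.binary_cross_entropy(pred, label)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("selection probabilities after training:", torch.sigmoid(selector(posts)).squeeze(-1))
```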

 

Predicting Hurricane Trajectories using a Recurrent Neural Network 

Sheila Alemany, Jonathan Beltran, Adrian Perez, Sam Ganzfried

According to the National Hurricane Center in the United States, Hurricane Katrina, which occurred in 2005, has been the costliest hurricane in history in terms of economic damage, at about 125 billion USD, and it was also extremely deadly, with close to 2,000 lives lost. Fresher in our minds is Hurricane Maria, which hit Puerto Rico and the U.S. Virgin Islands in September 2017 and cost nearly 5,000 lives. Although methods for estimating the trajectory of hurricanes and cyclones exist and are commonly employed to mitigate their impact, they leave a lot of room for improvement. This study is motivated by the disastrous effects hurricanes can have on human lives and on the economic stability of the affected region, and by the tremendously positive impact that the rise of deep learning can have here. Although weather forecasting with deep neural networks is not new, the proposed method is novel in that it teaches a model to learn the trajectory of hurricanes from one grid cell to the next. The authors are able to predict the next location of a hurricane 6 hours in advance, and their model handles hurricanes of any type, as opposed to other methods that only work under the assumption that storms cannot turn back on themselves. Overall, this investigation introduces an interesting approach to a problem that, when tackled, can save many lives and reduce economic damage.
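A bare-bones sketch of the general setup – predicting a storm's next position from its recent track with a recurrent network – might look like the following. The coordinates, time step and architecture are illustrative assumptions, not the paper's grid-based model.

```python
# Generic track-forecasting sketch: recent positions in, next position out.
import torch
import torch.nn as nn

class TrackRNN(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # predicted (lat, lon) at the next 6-hour step

    def forward(self, track):  # track: (batch, timesteps, 2) past positions
        out, _ = self.rnn(track)
        return self.head(out[:, -1])

model = TrackRNN()
past_track = torch.randn(3, 8, 2)   # 3 storms, 8 past 6-hour observations each
print(model(past_track).shape)      # torch.Size([3, 2])
```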

Multi-GCN: Graph Convolutional Networks for Multi-View Networks, with Applications to Global Poverty 

Muhammad Raza Khan, Joshua E. Blumenstock

According to Our World in Data, which bases its estimates on household surveys, 44% of the world lived in extreme poverty in 1981; by 2015 that number had dropped below 10%. Although global poverty rates have decreased and are at an all-time low, extreme poverty is still a problem worth tackling, as it leaves those affected at an incredible disadvantage. In this study, the authors explore the complicated nature of human behavior – specifically the extensive use of mobile phones – in order to better pinpoint extreme poverty in developing countries. They focus on three tasks: 1) predicting poverty, 2) predicting product adoption (the rate at which individuals in developing countries adopt different mobile financial services), and 3) predicting gender from mobile phone data, as an indicator of inequality. The proposed method uses multi-view graph convolutional neural networks, which exploit the fact that individuals can be connected in more than one way. Overall, they find that by modelling the multi-view nature of social networks and interactions, poverty prediction can be improved.
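To illustrate just the multi-view idea, here is a small sketch that runs a simple graph convolution once per "view" (say, calls, texts and mobile-money transfers) and merges the results. The adjacency matrices, features and the averaging step are placeholders, not the authors' Multi-GCN.

```python
# Toy multi-view graph convolution: one propagation per view, then merge.
import torch
import torch.nn as nn

class MultiViewGCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, num_views: int):
        super().__init__()
        self.weights = nn.ModuleList([nn.Linear(in_dim, out_dim) for _ in range(num_views)])

    def forward(self, x, adjs):  # x: (nodes, in_dim); adjs: list of normalized (nodes, nodes) matrices
        views = [torch.relu(adj @ w(x)) for adj, w in zip(adjs, self.weights)]
        return torch.stack(views).mean(dim=0)  # merge views (here: a plain average)

nodes, feats = 100, 8
x = torch.randn(nodes, feats)                            # placeholder node features
adjs = [torch.rand(nodes, nodes) for _ in range(3)]      # three interaction views
adjs = [a / a.sum(dim=1, keepdim=True) for a in adjs]    # crude row-normalization
layer = MultiViewGCNLayer(feats, 16, num_views=3)
poverty_logits = nn.Linear(16, 1)(layer(x, adjs))        # per-person prediction head
print(poverty_logits.shape)
```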

Allocating Interventions Based on Predicted Outcomes: A Case Study on Homelessness Services 

Amanda Kube, Sanmay Das, Patrick J. Fowler

According to the National Alliance to End Homelessness, 553,742 people in the United States were identified as homeless in January 2017. Across the U.S. there are community programs that try to tackle homelessness by providing emergency shelters, transitional housing, food services and permanent housing solutions, and these programs depend on the resources available or collected. This investigation is motivated by the lack of resources available for social services in many of these communities. Specifically, the paper focuses on optimal ways of allocating different types of social services in the context of homelessness, ranging from time-limited, nonresidential support to rental assistance, among others. The authors use existing records from homeless services across the U.S. – which include different service allocation mechanisms and information about the families who requested assistance – and build a counterfactual model to predict whether a household would re-enter homelessness if it received a given service. The aim is to find the best way to allocate services so as to reduce the rate of homelessness in the long run.
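A toy version of the counterfactual-allocation idea could look like this: fit a model of "did the household re-enter homelessness?" from household features plus the service received, then score every candidate service for a new household and pick the one with the lowest predicted risk. Everything below (features, model, data) is synthetic and only meant to convey the concept, not the paper's methodology.

```python
# Toy counterfactual allocation: predict re-entry risk under each candidate service.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n, n_services = 500, 4
household = rng.normal(size=(n, 5))              # placeholder household features
service = rng.integers(0, n_services, size=n)    # which service was actually given
reentry = (rng.random(n) < 0.3).astype(int)      # synthetic outcome labels

X = np.column_stack([household, service])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, reentry)

new_household = rng.normal(size=(1, 5))
risks = [model.predict_proba(np.column_stack([new_household, [[s]]]))[0, 1]
         for s in range(n_services)]
print("predicted re-entry risk per service:", np.round(risks, 3),
      "-> allocate service", int(np.argmin(risks)))
```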

Generating Live Soccer-Match Commentary from Play Data 

Yasufumi Taniguchi, Yukun Feng, Hiroya Takamura, Manabu Okumura 

The authors propose a method for generating live football-match commentaries. They generate templates with placeholders instead of player and team names and implement a gate mechanism to select the important plays. The investigation assumes that play data is always available, an assumption the authors clearly state, and that the players to be mentioned in the commentary are given beforehand. Given these somewhat unrealistic assumptions, the paper proposes incorporating visual cues as future work; still, it is a fun and entertaining experiment showcasing the use of natural language processing techniques and neural networks for more creative purposes.
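As a playful illustration of templates with placeholders plus a gate that keeps only the important plays, here is a tiny hand-written example; the template strings and the hard-coded importance scores merely stand in for what the paper actually learns.

```python
# Hand-written stand-in for learned templates and a learned gate mechanism.
TEMPLATES = {
    "goal": "{player} scores for {team}!",
    "shot": "{player} takes a shot for {team}.",
    "pass": "{player} passes the ball.",
}

def importance(play):  # stand-in for the learned gate that scores each play
    return {"goal": 1.0, "shot": 0.6, "pass": 0.1}[play["type"]]

def commentate(plays, threshold=0.5):
    return [TEMPLATES[p["type"]].format(**p)
            for p in plays if importance(p) >= threshold]

plays = [
    {"type": "pass", "player": "Player A", "team": "Team X"},
    {"type": "shot", "player": "Player B", "team": "Team X"},
    {"type": "goal", "player": "Player B", "team": "Team X"},
]
print("\n".join(commentate(plays)))
```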

Overall, this year's AAAI saw a great variety of research topics. As one can see from the collection presented here, there are many applications of machine learning that we often neglect to consider on a day-to-day basis. Despite the common debate about how Artificial Intelligence will negatively affect our lives, it has become clear that AI actually has the potential to have a tremendously positive impact. These are just a few examples – if you would like to explore other research topics presented, check out the AAAI 2019 website!

If you are interested in hearing more about how to get started, please contact us at hello@botxo.co – or give us a ring at +45 26 71 58 45 if you would like more customer cases.

Want to know more about Chatbots – Book a Demo!
