Machine Learning grew at an incredible pace in 2017. We witnessed many breakthroughs, such as AlphaGo beating the world's best Go players, the comeback of evolutionary algorithms in meta-learning, and WaveNet generating speech that mimics any human voice. Deep Learning research has been skyrocketing since 2012, and it's not going to stop. Carlos Perez, co-founder of Intuition Machine, predicts that the number of research papers related to Deep Learning will triple or even quadruple in 2018!

The beginning of the third quarter of 2018 seems like a good time to take a closer look at this year's top Machine Learning trends.

Generative Models

One of the core goals for AI is to understand the world as a human does. To understand the world, one needs to build meaningful representations of abstract concepts, and generative models are particularly good at this task. There's a nice intuition behind this, borrowed from Richard Feynman, who said:

"What I cannot create, I do not understand."

Since their introduction in 2014, Generative Adversarial Networks (GANs) have probably been the hottest topic in unsupervised learning research, along with Variational Autoencoders (VAEs). A GAN is a generative machine learning model that consists of two neural networks: a generator and a discriminator. During training, the generator tries to produce realistic samples, while the discriminator has to determine whether they're fake or real. As training proceeds, both networks get better at their tasks, and in the end the generator is capable of producing data that looks like the real thing. GANs have a lot of interesting applications, such as image-to-image translation or improving the quality of low-resolution CT scans.
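To make this two-player setup concrete, here's a minimal sketch of a GAN training loop in PyTorch on toy 2-D data. The architectures, learning rates, and the Gaussian stand-in for "real" data are illustrative choices, not taken from any particular paper:

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2

# Generator: maps random noise to a fake data point.
generator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, data_dim),
)
# Discriminator: outputs the probability that a point is real.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def real_batch(n=64):
    # Stand-in for real data: points from a Gaussian blob.
    return torch.randn(n, data_dim) * 0.5 + 2.0

for step in range(2000):
    # 1) Train the discriminator to tell real from fake.
    real = real_batch()
    fake = generator(torch.randn(real.size(0), latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(real.size(0), 1))
              + loss_fn(discriminator(fake), torch.zeros(fake.size(0), 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, latent_dim))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```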

 

The classic problem with GANs is that while they're really good at generating synthetic images in a bunch of different domains, they struggle to model aspects of an image that require an understanding of the whole, like getting the number of an animal's legs right.

Recently Ian Goodfellow, the father of GANs, co-authored a paper on Self-Attention Generative Adversarial Networks (SAGAN). Adding a self-attention module to the generator results in generated images in which fine details at every location are carefully coordinated with fine details in distant parts of the image.
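To give a feel for what such a module looks like, here's a simplified PyTorch sketch of SAGAN-style self-attention over a convolutional feature map. It follows the overall structure described in the paper (1x1-convolution projections and a zero-initialized residual weight), but the details are stripped down:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    """Simplified SAGAN-style self-attention for a B x C x H x W feature map."""
    def __init__(self, channels):
        super().__init__()
        # 1x1 convolutions project the map into query/key/value spaces
        # (channels // 8 is the reduction from the paper; assumes channels >= 8).
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        # Residual weight, initialized to zero so the block starts as identity.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).view(b, -1, h * w)   # B x C/8 x N, with N = H*W
        k = self.key(x).view(b, -1, h * w)     # B x C/8 x N
        v = self.value(x).view(b, -1, h * w)   # B x C x N
        # Every location attends to every other location in the feature map.
        attn = F.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)  # B x N x N
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x
```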

 

Google Scholar already lists almost 4,000 GAN-related papers this year, and there's much more to come!

Interpretability

Many machine learning models, for example Deep Neural Networks, are considered black boxes. However, the typical end user of a machine learning-based system prefers solutions that are interpretable and understandable. Ask yourself: would you trust a neural network that diagnoses cancer without explaining its reasoning? Would you choose an AI doctor with 85% accuracy that can explain its reasoning, or an AI doctor with 90% accuracy that cannot?

Interpretability of machine learning models seems crucial for applications in healthcare and law. Last but not least, machine learning engineers benefit a lot from interpretable models, as they are much easier to validate and improve.

Again, we can borrow an intuition from Richard Feynman, who said:

"If you can't explain something in simple terms, you don't understand it."

We can see an increasing amount of research in the area of explaining machine learning models. One may distinguish between model-agnostic and model-specific explanation methods. For example, LIME is a simple model-agnostic technique: it explains an individual prediction by fitting a local linear approximation around it (via K-LASSO). You can find a Python implementation of LIME on GitHub. There are also many techniques for producing visual explanations for Convolutional Neural Networks, such as saliency maps. If you're interested, I strongly recommend this compiled list of resources on interpretability by Michał Łopuszyński.
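As a taste of the model-agnostic workflow, here's a minimal sketch using the lime package (pip install lime) to explain a single prediction. The scikit-learn dataset and classifier are illustrative choices:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(n_estimators=100).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction by fitting a local sparse linear model around it.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=2
)
print(explanation.as_list())  # [(feature condition, local weight), ...]
```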

 

Edge Computing

The Internet of Things is going to revolutionize our world. Devices such as mobile phones, beacons, and Raspberry Pis can provide tons of data, but IoT alone is not smart technology. Machine learning, on the other hand, is fueled by data. These complementary needs are exactly why machine learning is meeting edge computing.

There are several advantages to running machine learning applications on edge devices. Firstly, we can greatly reduce latency, since communication with the cloud is no longer required. This is possible for several reasons: hardware improvements, software improvements, and, last but not least, compact deep learning architectures that give up little accuracy, such as MobileNet v2. For example, a MobileNet forward pass takes only about 20 milliseconds on the newest iPhones.

One cannot omit the importance of frameworks such as CoreML and TensorFlow Lite. Currently, it's very easy to implement, train, and evaluate a model in one of Python's Deep Learning libraries and later deploy it on an Android device or an iPhone.
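For example, exporting a Keras model to TensorFlow Lite takes only a few lines. Here's a sketch using the TF 2.x converter API; the choice of MobileNetV2 and of the default optimizations is illustrative:

```python
import tensorflow as tf

# A compact architecture well suited to on-device inference.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # e.g. weight quantization
tflite_model = converter.convert()

# Ship this file inside the Android or iOS app bundle.
with open("mobilenet_v2.tflite", "wb") as f:
    f.write(tflite_model)
```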

 

Privacy has always been an important concern on the Internet, and this year that seems truer than ever. Machine learning algorithms cannot be trained without data, but if you train a model directly on an edge device, your data never needs to leave it.

And last but not least, since Deep Gradient Compression was introduced at ICLR 2018, distributed training across many edge devices seems feasible even over commodity 1 Gbps Ethernet.
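The key idea behind that technique is to transmit only the largest gradient values and accumulate the rest locally for later steps. Below is a toy sketch of the top-k sparsification step (my simplified illustration of the idea; the actual method adds momentum correction and other refinements):

```python
import torch

def sparsify_gradient(grad, residual, ratio=0.001):
    """Keep only the largest ~0.1% of gradient values for transmission;
    fold the rest into a local residual that is added back next step."""
    grad = grad + residual                     # re-inject previously skipped values
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, idx = torch.topk(flat.abs(), k)
    sparse = torch.zeros_like(flat)
    sparse[idx] = flat[idx]                    # the few values actually sent
    new_residual = (flat - sparse).view_as(grad)
    return sparse.view_as(grad), new_residual

# Usage: compress a parameter's gradient before exchanging it between devices.
grad = torch.randn(1000, 1000)
sparse_grad, residual = sparsify_gradient(grad, torch.zeros_like(grad))
print((sparse_grad != 0).sum().item(), "of", grad.numel(), "entries transmitted")
```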

Guest post by Mateusz Opala, Machine Learning Tech Lead at Netguru

