#MakeItSee

AIs that learned what they should not have

July 27, 2020

Sometimes, working with Artificial Intelligence (AI) produces funny or unwanted results when the AI does not learn properly, or when it learns something different from what we expected. This can happen for many different reasons, but a few common factors explain most malfunctioning AI.

The most essential factor is data. Which data we use, how we label it, and how we preprocess it all significantly affect what an AI learns. Selecting the wrong data, labeling it incorrectly, or using unfiltered data are common mistakes. That's why we are here: engineers who try to figure out how to fix AI, and also how to break it. So, let's jump straight into what AIs learned that they should not have.

Exploiting Neural Networks

Convolutional Neural Networks (CNNs) are part of everyday AI solutions for computer vision. These networks became famous after their breakthrough in image classification tasks, outperforming the previous state-of-the-art algorithms by a considerable margin. They have become a powerful tool for recognizing images at a human level.
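To make this concrete, here is a minimal sketch (not from the original article) of how a pretrained CNN classifies a single image using PyTorch and torchvision. The file name cat.jpg and the choice of ResNet-18 are purely illustrative assumptions.

```python
import torch
from torchvision import models
from PIL import Image

# Load a pretrained CNN (ResNet-18 chosen only as an example).
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

# Preprocess an example image the way the model expects (resize, crop, normalize).
preprocess = weights.transforms()
image = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # add batch dimension

# Run the classifier and print the most confident label.
with torch.no_grad():
    logits = model(image)
probs = torch.softmax(logits, dim=1)
top_prob, top_class = probs.max(dim=1)
print(weights.meta["categories"][top_class.item()], top_prob.item())
```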

However, CNNs seemed like a black box to most of us. You did not know which image representation each neuron learned, but you knew for sure that if you fed the network data, you magically got results and your images were classified. That raised the following question in 2015 from researchers Anh Nguyen, Jason Yosinski, and Jeff Clune:

What differences remain between how deep neural networks and humans see the world?

And the answer is quite a lot.

In 2015, the researchers devised a method to generate synthetic images that are unrecognizable to humans but that the network labels as known objects with high confidence.

Example of synthetically generated images labeled as known objects by the network with high confidence.
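The paper explored several ways to produce such images, including direct gradient ascent on the pixels. The following is a rough, illustrative sketch of that gradient-ascent idea in PyTorch; the model, target class, and hyperparameters are my own assumptions, not values taken from the paper.

```python
import torch
from torchvision import models

# Sketch of the gradient-ascent idea: start from random noise and nudge the
# pixels so a pretrained classifier becomes highly confident about one class.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

target_class = 954                       # e.g. "banana" in ImageNet (illustrative)
image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    logits = model(image)
    loss = -logits[0, target_class]      # maximize the target class score
    loss.backward()
    optimizer.step()
    image.data.clamp_(0, 1)              # keep pixel values in a valid range

confidence = torch.softmax(model(image), dim=1)[0, target_class]
print(f"confidence for target class: {confidence.item():.3f}")
```

The resulting image typically looks like structured noise to a human, yet the classifier reports very high confidence for the chosen class, which is exactly the mismatch the researchers highlighted.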

This is a first step toward understanding how AI thinks, and an excellent way to show that, although neural networks are loosely inspired by the human brain, the two learn in very different ways.

Face ID

In 2017, Apple released the iPhone X, which added a new feature to replace the Touch ID of previous phones: Face ID. It scans and recognizes your face when you simply look at your phone, and logs you in automatically.

This AI was innovative thanks to new sensors integrated into the phone that let it see in 3D. A projector casts an invisible pattern of infrared dots onto your face, which the camera captures and the AI learns from.

Although face recognition solutions did not have a good reputation, mainly because the algorithms were easy to trick, Face ID and machine learning produced a pretty solid result. However, an AI needs lots of data to handle unseen images. It is like knowing someone's name but not being able to recall their face. The same happened with Face ID: because the feature only learns a little about your face at first, people reported that when they wore glasses or hats, the AI would not recognize them! Of course, the more data we give the phone, the more accurate it gets.

But that is not all. After the release of the iPhone X, researchers from Bkav posted a video in which they managed to fool Face ID with a 3D-printed mask. Interesting, to say the least.

Microsoft's AI Chatbot

Tay was an AI chatbot released by Microsoft in 2016. It was an experiment in conversational understanding, where people on Twitter could interact with the AI. The more they talked to Tay, the more it would learn and the smarter it would get: it learned by engaging with real people. A pretty good idea, right?

Well, it depends on which data is used. Twitter users were able to shift the AI's behavior with incendiary comments, which made Tay start tweeting racist and misogynistic remarks.

Tweet where Tay started shifting toward toxic behavior.

Since Microsoft did not apply any kind of filter to its AI, Tay was able to learn from all of these tweets. That is why this experiment raised new questions about what we should teach AI. Should we teach it both the worst and the best of the human race, or should we censor the information in some way?
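For illustration only, here is a tiny sketch of the kind of input filter the article alludes to: dropping messages that contain blocklisted terms before the bot learns from them. The blocklist contents and the learn_from() function are hypothetical placeholders, not anything Microsoft used.

```python
# Hypothetical blocklist; real systems would use far more sophisticated moderation.
BLOCKLIST = {"slur1", "slur2"}

def is_safe(message: str) -> bool:
    """Return True if the message contains no blocklisted words."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return BLOCKLIST.isdisjoint(words)

def maybe_learn(message: str, learn_from) -> bool:
    """Only pass the message to the learning step if it looks safe."""
    if is_safe(message):
        learn_from(message)
        return True
    return False
```

Even a naive gate like this would have kept some of the worst input out of Tay's training loop, though deciding what counts as "safe" is exactly the open question the article raises.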

In any case, Tay was deactivated after 16 hours and helped Microsoft understand the problem and develop useful approaches to improve its AI.

By Guillem Delgado