Racist photo-tagging algorithms and white supremacist Twitter bots: when AI goes wrong.


AI is a buzzword that gets thrown around constantly these days. And though it is true that AI has launched us into an entirely new era, we rarely consider how badly it can go wrong. Take Tay, an artificial intelligence Twitter bot that learned from the conversations users had with her. She lasted a single day before the internet had corrupted her: she uttered a series of discriminatory statements such as 'Hitler was right I hate the jews', denied the Holocaust and became a white supremacist. (Source)

This happened because an AI such as Tay works like a parrot: she repeats what people have said to her. Of course, as soon as the trolls of the internet figured this out, it became their sole goal to corrupt this AI teenage girl. Microsoft quickly shut her down, but many accused the company of failing to see this coming and of not taking action to prevent it.
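
To see how easily that 'parrot' dynamic goes wrong, here is a deliberately simplified Python sketch of a bot that learns its replies purely from what users say to it. Tay was of course far more sophisticated, and the bot and messages below are entirely invented, but the failure mode is the same.

```python
# A toy sketch of the 'parrot' failure mode: a bot that learns replies
# purely from what users say to it, with no filtering at all.
import random

class ParrotBot:
    def __init__(self):
        self.learned_replies = []

    def listen(self, user_message: str):
        # Everything users say becomes potential output later.
        self.learned_replies.append(user_message)

    def reply(self) -> str:
        return random.choice(self.learned_replies) if self.learned_replies else "Hi!"

bot = ParrotBot()
bot.listen("I love puppies")
bot.listen("some hateful troll message")   # nothing stops this from being learned
print(bot.reply())                         # the bot may now repeat the troll verbatim
```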

Another example of AI gone horribly wrong is in Google’s photo app, which automatically assigns labels to pictures. There was an uproar in 2015 when the app tagged a photo of two black people as ‘gorillas’. (Source)

Why does this happen?

This happens because a good AI program needs to be taught, and teaching it takes lots and lots of material. Creating this learning material by hand is extremely laborious, so it is much easier to use the huge volumes of freely available material that people create on the internet. Tagged pictures in particular have been a rich source of learning material for image-recognition AI.
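
To make that concrete, here is a minimal, hypothetical sketch (Python, scikit-learn, invented data) of how an image tagger might be trained on scraped photos and their user-supplied tags. The point is simply that the model absorbs whatever tags it is fed.

```python
# A toy sketch: the model learns to reproduce whatever tags people attached
# to the training pictures. All data below is made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend these are feature vectors extracted from scraped photos...
scraped_features = rng.normal(size=(1000, 64))
# ...and these are the tags their uploaders happened to use.
scraped_tags = rng.choice(["baby", "cat", "dog"], size=1000)

tagger = LogisticRegression(max_iter=1000)
tagger.fit(scraped_features, scraped_tags)   # the model absorbs the tags as-is

new_photo = rng.normal(size=(1, 64))
print(tagger.predict(new_photo))             # the output is only as good as the tags
```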

 

Even when people aren’t purposefully corrupting the algorithms, the material isn’t perfect. Much of the material on the web is tainted by racism or other forms of discrimination, which is then built into the algorithm. Emiel van Miltenburg studied this phenomenon and found examples such as photos of Asian people being incorrectly labeled as Chinese. Another example is that white babies usually just get the tag ‘baby’, while the tags of photos of black babies include the word ‘black’, marking black babies as the exception to the norm.
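
As a rough illustration of the kind of audit described in van Miltenburg's study, the toy Python sketch below counts how often a marked modifier like 'black' accompanies 'baby' in a handful of invented captions. A real audit would of course run over thousands of actual crowd-written descriptions.

```python
# A toy bias audit: how often is the 'norm' left unmarked while the
# exception gets an extra label? The captions are invented examples.
from collections import Counter

captions = [
    "a baby laughing in a high chair",
    "a black baby playing with a toy",
    "a baby crawling on the carpet",
    "a black baby being held by its mother",
]

counts = Counter()
for caption in captions:
    words = caption.lower().split()
    if "baby" in words:
        key = "marked ('black baby')" if "black" in words else "unmarked ('baby')"
        counts[key] += 1

print(counts)  # shows the skew between unmarked and marked descriptions
```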

How can we possibly expect our algorithm-children to behave properly if the human race can’t even behave itself? (Source)

The input problem

But at the very least we have control over the input that goes into these algorithms. How this input is processed to reach a decision is a thornier problem. Deep learning algorithms in particular process the input data in such a complex way that it is almost impossible to figure out why the algorithm makes certain decisions. This becomes a serious problem when people are hurt by an algorithm’s decision, because it makes it hard to pin down who is responsible. Who is at fault when a self-driving car turns left instead of right and crashes into a brick wall? (Source)

This screenshot was captured by Business Insider. (Source)
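
For a sense of why this is so hard, here is a small, hypothetical sketch: a tiny neural network (scikit-learn, trained on made-up 'sensor readings') is all we have, and the only thing we can inspect afterwards is layer upon layer of raw numbers.

```python
# A minimal sketch of why deep models are hard to 'read': after training,
# all you can inspect are layers of raw numbers. The data is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))            # pretend sensor readings from a car
y = (X[:, 0] + X[:, 5] > 0).astype(int)   # pretend decision: 0 = steer left, 1 = steer right

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0).fit(X, y)

# The 'reasoning' behind any decision lives in thousands of weights like these:
for i, layer in enumerate(model.coefs_):
    print(f"layer {i}: {layer.size} weights, first few: {layer.ravel()[:3]}")
# Nothing in these numbers says *why* the model steers left instead of right.
```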

Legal solutions: the right to explanation

The EU General Data Protection Regulation is a law that comes into effect next year (2018). It includes a ‘right to explanation’ for decisions made solely by algorithms. However, some experts find this law much too strict and fear it will restrict research into AI. The current reality is that many of the AI algorithms created by deep learning are too complex for us to trace back why a certain decision was reached.

Keeping a human in the loop could be a legal solution: that way the decision is not based solely on the algorithm’s output. However, in very fast decision-making systems such as self-driving cars this is simply not possible. On top of that, humans are far from perfect either (which is the reason we turned to computers in the first place). By including a human in the process we might accidentally reintroduce unwanted elements such as racism or other forms of discrimination. This also raises the question of why we have deemed a ‘right to explanation’ necessary for algorithms, when humans who make decisions often aren’t required by law to explain how they reached them. (Source 1) (Source 2)
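
One common human-in-the-loop pattern, sketched below in Python with an invented threshold and review function, is to let the algorithm decide alone only when it is very confident and to hand the grey-zone cases to a person. This is a general illustration, not anything mandated by the GDPR.

```python
# A hedged sketch of one 'human in the loop' pattern: the algorithm decides
# alone only when confident, and otherwise defers to a person.
def decide(probability_of_approval: float, ask_human) -> str:
    CONFIDENT = 0.95  # invented threshold
    if probability_of_approval >= CONFIDENT:
        return "approved by algorithm"
    if probability_of_approval <= 1 - CONFIDENT:
        return "rejected by algorithm"
    # Grey zone: a person makes the final call (too slow for a self-driving
    # car, but workable for e.g. loan or benefit decisions).
    return f"{'approved' if ask_human() else 'rejected'} by human reviewer"

print(decide(0.99, ask_human=lambda: True))   # algorithm decides alone
print(decide(0.60, ask_human=lambda: False))  # deferred to the human
```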

What can we do?

Of course, the difficulty of understanding decisions taken by AI has been recognized by the scientific community as a big problem for the future of AI. Several approaches have been tried to make it easier for us humans to understand the ‘thinking’ of AI. One of these approaches is to create visualization tools: visualization has always been a useful way to understand complex data.

Picture from a scientific paper about visualization of deep networks. In this case it’s about recognizing bell peppers. (Source)
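
The picture above comes from a real paper; the Python sketch below is only a toy version of the general idea, computing a crude 'saliency map' for a hand-made linear scorer to show which pixels drove its score. Real tools do something analogous for deep networks.

```python
# A toy version of the visualization idea: show which pixels contributed
# most to a hand-made linear scorer, as a crude ASCII heat map.
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((8, 8))        # a fake 8x8 grayscale image
weights = np.zeros((8, 8))
weights[2:6, 2:6] = 1.0           # the scorer only 'looks' at the centre

score = float((weights * image).sum())
contribution = weights * image    # per-pixel contribution to the score

print(f"score: {score:.2f}")
for row in contribution:
    print(" ".join("#" if value > 0.5 else "." for value in row))
# The '#' pixels are the ones the scorer actually based its decision on.
```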

Another solution is to drastically change the way we create AI. Instead of doing the work of translating the computer’s reasoning into human-comprehensible decisions ourselves, we can let the algorithm explain itself. This is how researchers from the University of California, Berkeley, and the Max Planck Institute for Informatics approached the problem. They created an algorithm that can recognize several activities and explain in words, as well as point to the parts of the picture, that led to its answer.

Researchers created an algorithm that explains itself. (Source)
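
The sketch below is emphatically not the Berkeley/Max Planck system, just a loose Python illustration of the kind of interface such a model exposes: a label, attention weights over image regions, and a templated sentence built from the most-attended region. The regions, scores and template are all invented.

```python
# A loose sketch of a self-explaining classifier: it returns a label,
# an attention map over image regions, and a templated explanation.
import numpy as np

REGIONS = ["head", "hands", "ball", "background"]

def classify_with_explanation(region_scores: np.ndarray):
    # Pretend attention: normalised scores over the four regions.
    attention = np.exp(region_scores) / np.exp(region_scores).sum()
    label = "playing basketball" if attention[REGIONS.index("ball")] > 0.3 else "standing"
    focus = REGIONS[int(attention.argmax())]
    sentence = f"I think the person is {label} because of what is happening around the {focus}."
    return label, attention, sentence

label, attention, sentence = classify_with_explanation(np.array([0.2, 0.5, 1.8, 0.1]))
print(sentence)
print({region: float(a) for region, a in zip(REGIONS, attention.round(2))})
```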

Whether we end up using one of the above methods or something completely different to deal with this problem doesn’t matter. What does matter is that we realize the shortcomings of using machine learning to solve our problems. While it is an incredibly powerful tool, we simply cannot guarantee that it will always work flawlessly. The use of an algorithm instead of a person should always be carefully considered, taking the risk of mistakes into account.

On the other hand, humans make mistakes too, and we’re all used to that. So even if machine learning algorithms make mistakes, we shouldn’t let their lack of transparency hold us back if they could improve our lives.

 
