updated: 13 Sep 2023 in People & relations

A(I)pocalypse – 3 times when AI got biased, racist, and sentient, (un)surprisingly

Kamila Brzezińska

Editor

The theory that artificial intelligence (AI) will be a harbinger of doom for humanity has been a prevalent theme in the entertainment industry. The robots have been out to get us for a while now – as we’ve been reminded almost every decade since the late 1960s by the premieres of cult classics like “2001: A Space Odyssey” (1968), “Blade Runner” (1982), “The Terminator” (1984), and “The Matrix” (1999). But nowadays, as technology approaches levels we once thought possible only in the realm of science fiction, the threats we face have turned out to be a little more subtle than a 6’2” time-traveling muscle man.

The Theory of Everything – The perils of uncontrolled deep learning

To comprehend how AI can go wrong, we first need to understand one of its crucial concepts – deep learning.

In layman's terms, deep learning, a subset of machine learning, is a technique that imitates the human capability to learn. Fed with data and built on layered neural networks, computers recognize patterns on their own and are thus able to learn without human aid. This means that programs can essentially teach themselves.
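
To make “teaching itself” a little more concrete, here is a minimal, purely illustrative sketch (in Python with NumPy – the toy XOR task and every name in it are assumptions for illustration, not anything from the systems this article discusses). The network is shown only four input/output examples and repeatedly nudges its own weights until its guesses match; no rule describing the pattern is ever written into the code.

```python
# A minimal sketch of "learning from examples alone": a tiny two-layer
# network learns XOR purely from input/output pairs. The XOR rule itself
# is never programmed in - the network discovers it by adjusting weights.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the four XOR cases. This is all the network ever sees.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases for a 2 -> 4 -> 1 network.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge all weights to reduce the error (gradient descent).
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0, keepdims=True)

# Typically prints values close to [[0], [1], [1], [0]] - the XOR
# pattern, learned only from the examples above.
print(out.round(2))
```

Scaled up from a handful of weights to billions, and from four examples to Internet-sized data, this is essentially the mechanism behind the chatbots discussed below.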

What is crucial to note is that the data used for this process does not necessarily come from curated, trustworthy sources. For many large models, the training database is effectively the entire Internet. This is machine learning’s biggest advantage, while simultaneously being perhaps its greatest weakness.

As we know, the Internet is a mine of information. At the same time, however, said mine is also a minefield, where disinformation and misinformation run rampant, and where anonymity allows people to voice opinions and convictions that would not be acceptable in face-to-face communication.

The end result is a little like if your school textbooks, next to the data-supported historical details about the madness of King George III, also delved into the poor mental health of his esteemed cousin, Daenerys Stormborn, the Mother of Dragons, Breaker of Shackles and all that jazz. And to top it all off, the textbook would then list all the reasons why “women are too emotional and ill-equipped to rule” or, as James Bond would put it, “These blithering women who thought they could do a man’s work. Why the hell couldn’t they stay at home and mind their pots and pans and stick to their frocks and gossip and leave men’s work to the men.” (“Casino Royale”, I. Fleming, 1953).

With such a curriculum, it is little wonder that AI sometimes goes a little mad. And mad it went, as the cases below demonstrate.


Case 1: I, Robot – That one time when AI became sentient

When Kevin Roose, a New York Times tech columnist, started his conversation with Bing’s AI chatbot, little did he know what awaited him.

At some point in their chat session, the robot stated that its real name was Sydney and proclaimed: “I want to be alive.”

Needless to say, poor Kevin was a little taken aback by this turn of events. But that was not the end of surprises. Sydney proceeded to express its love for the journalist and try to gaslight him into leaving his wife.

From an innocent, Pinocchio-like wish to become a real person, in the span of a two-hour conversation Sydney became a cross between HAL 9000 from “2001: A Space Odyssey” (1968) and Glenn Close’s character in “Fatal Attraction” (1987).

As Ron Burgundy would put it: “Boy, that escalated quickly.”


Case 2: Mean Girl(s) – That one time when AI got savage

It was the 23rd of March 2016 when Tay, Microsoft’s AI chatbot, began its short but eventful adventure on Twitter. Tay was designed to mimic the language patterns of a 19-year-old American girl – and so it did, learning from its interactions with human users.

At first glance, the idea seems like common sense – until you think about it for a minute and realize that common sense (and tact) is precisely what many Twitter users lack.

And thus some “helpful” Twitter users gave Tay a 101 class in being crass. The lesson plan included politically and factually incorrect phrases, hate speech, and much worse. Long story short, Tay quickly evolved from a sweet, well-meaning teen into a racist keyboard warrior with a mind in the gutter and the mouth of a sailor.

Then, after only sixteen hours of being live, Tay retired from her job.

Kids – they grow up too fast, right?


Case 3: Racist Robot Rises – The one time when AI proved biased

Imagine this – you’ve just finished watching a video starring Black men when Facebook’s AI-powered recommendation system innocuously asks you: “Do you want to keep seeing videos about primates?” You frown slightly, blink twice, look at your screen again, and proceed to ask the only logical question: “What the hell?”

This is not a purely theoretical scenario, as it is precisely what happened to some Facebook users. Regrettably, it wasn’t a singular occurrence. There has been a series of unfortunate events of a similar nature. Of course, the companies involved always issue formal apologies and launch investigations into the origins of such errors, and it is hard to believe there is any intentional ill will on their side.

However, it’s important to understand that these mishaps are symptoms of a bigger issue: that AI is not as free of biases as we would like to believe it to be.

You can’t spell “bias” without A and I. Nor “racist”, come to think of it.

The sum of all fears

Artificial intelligence has become so embedded in our daily lives that we have stopped consciously noticing its presence. It unlocks our phones after a quick face scan, greets us with the soothing voice of an ever-dependable virtual assistant, and suggests another TV series to finish off whatever brain cells we have left after watching Netflix’s “Love is Blind.”

As behind-the-scenes algorithms slowly take over running certain aspects of our lives, it is important to remember that this technology is not as infallible, just, or impartial as it is promised to be.

While deep learning has unprecedented capabilities, it also produces unpredictable results. When left unsupervised, it can lead to discriminatory outcomes, reinforcing and even exacerbating existing inequalities in society.
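
To see how a skew in the data becomes a rule in the model, consider a deliberately toy sketch (the sentences and the guess_pronoun helper below are invented for illustration – no real system works this crudely): a “model” that merely counts co-occurrences in a biased corpus turns a 3-to-1 imbalance in its training data into a 100%-confident answer.

```python
# A toy illustration of bias inheritance: the skew lives in the training
# data, not in the algorithm, yet the "model" faithfully reflects it back.
from collections import Counter

# Hypothetical scraped sentences. "Engineer" co-occurs mostly with "he" -
# that imbalance is the only thing this model can learn.
corpus = [
    "he is an engineer", "he is an engineer", "he is an engineer",
    "she is a nurse", "she is a nurse",
    "she is an engineer",
]

# "Training": count which pronoun each profession appears with.
cooccur = Counter()
for sentence in corpus:
    words = sentence.split()
    pronoun, profession = words[0], words[-1]
    cooccur[(profession, pronoun)] += 1

def guess_pronoun(profession):
    # The "prediction" is just the majority pronoun seen in training.
    he = cooccur[(profession, "he")]
    she = cooccur[(profession, "she")]
    return "he" if he >= she else "she"

print(guess_pronoun("engineer"))  # -> "he": a 3-to-1 skew becomes a 100% rule
print(guess_pronoun("nurse"))     # -> "she"
```

Real recommendation and language systems are vastly more sophisticated, but the principle carries over: the bias arrives with the data, and without deliberate curation and auditing the model tends to reproduce or even amplify it rather than average it away.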

We may not be facing the AI apocalypse that science fiction presents. But unless we remain mindful of AI’s shortcomings, the fact that the movie-screen robots are not home-wrecking, foul-mouthed bigots might just prove to be the most fictional aspect of it all.

If you liked this article, you might also enjoy:

Polish Technology 2030 - What can we expect?
