Artificial Intelligence: it's dumb and dumber.

Artificial intelligence (AI for short) is hot thanks to recent advancements in game-playing algorithms. The success of Google's AlphaGo sent the media into a tizzy, leading to debates on the future of AI and the role it will play in our lives as it becomes more ubiquitous. A lot of these debates revolve around the dangers of AI: how it could go HAL 9000 on us, committing crimes against humanity like the robots in I, Robot. Or worse, it could become Skynet and lead to the destruction of human civilization. These fears are literally (and literarily) based in fiction. The reality is equally troubling, not because machines could become sentient and harm us, but because machines are still just so darn stupid.

A most perfect computer... Or is it? “2001: A Space Odyssey” (1968) Metro-Goldwyn-Mayer

The analytical tools we overenthusiastically refer to as AI are known in computer science as machine learning algorithms. They have a strong basis in mathematics, in particular linear algebra, statistics, and a fascinating field known as information theory. Much of this rests on mathematical concepts that humans developed decades, or even centuries, ago.

Machine learning algorithms also have a basis in artificial neural networks, which are models loosely patterned on the way human neurons work. Figuratively, you can think of a neural network working like a spider's web in the rain: droplets collect on the interconnected points and slide across the many slick pathways that the spider carefully engineered to catch its prey. The intersections in the spider's web (the neural network) are called nodes, and the strands between them are the connections along which information flows.

During training, the algorithm decides which connections are more important than others by assigning each one a weight; those weights determine how the data (the droplets in our analogy) is combined and interpreted as it passes through the network. Neural networks explore a training dataset, and once they've figured out its patterns, they can look at fresh data and make interesting predictions. Commonly, neural networks are used in speech recognition, language translation, and image classification. When Facebook identifies your friends in your pictures for you, that's the result of machine learning. Modern machine learning techniques, like deep learning, often stack many layers of neural networks.
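
To make the analogy concrete, here's a minimal sketch of what training looks like: a tiny two-layer network, written in plain NumPy, learning the XOR function. The layer sizes, learning rate, and toy task are my illustrative choices, not anyone's production system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR function (inputs -> labels).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases: the strands of the web.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20_000):
    # Forward pass: the droplets slide across the web.
    h = sigmoid(X @ W1 + b1)   # hidden-layer activations
    p = sigmoid(h @ W2 + b2)   # predictions

    # Backward pass: nudge every weight to shrink the error --
    # this is the algorithm "deciding which connections matter".
    grad_p = (p - y) * p * (1 - p)
    grad_h = (grad_p @ W2.T) * h * (1 - h)
    W2 -= h.T @ grad_p
    b2 -= grad_p.sum(axis=0)
    W1 -= X.T @ grad_h
    b1 -= grad_h.sum(axis=0)

print(p.round(2))  # typically converges toward [[0], [1], [1], [0]]
```

Note what's missing: nothing in those few dozen lines understands what XOR means. The network just adjusts numbers until the error shrinks, which is exactly why everything that follows can go wrong.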

There are a few problems with how this works in practice. First, the data has to be cleaned so that the computer can easily process it, no easy task on its own. Second, machine learning (or AI) requires humongous datasets, on the order of tens of millions of files, to make sound inferences. Google's deep learning algorithms require supercomputers and petabytes of data to train their most advanced technology. Third, developers and companies can introduce bias into the results. This final point is what makes artificial intelligence potentially harmful.
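
To see how that third problem can arise without any bad intent, here's a hedged toy example, with every number and group name invented: an off-the-shelf classifier trained on a dataset where one group is badly underrepresented quietly learns to get that group wrong.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Two overlapping groups, but the training set is 95% group A.
group_a = rng.normal(loc=0.0, scale=1.0, size=(950, 2))
group_b = rng.normal(loc=1.0, scale=1.0, size=(50, 2))
X_train = np.vstack([group_a, group_b])
y_train = np.array([0] * 950 + [1] * 50)

clf = LogisticRegression().fit(X_train, y_train)

# Evaluate on a balanced sample: group B is routinely mislabeled,
# not out of malice, but because the model barely ever saw it.
X_test = np.vstack([rng.normal(0.0, 1.0, size=(500, 2)),
                    rng.normal(1.0, 1.0, size=(500, 2))])
pred = clf.predict(X_test)
print("group A labeled correctly:", (pred[:500] == 0).mean())
print("group B labeled correctly:", (pred[500:] == 1).mean())
```

Swap "group B" for an underrepresented set of faces and you have the outline of the story below.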

In 2015, a developer discovered that Google's image classifier was labeling people of African descent as gorillas. It's unclear exactly how this happened. We know that humans decided what should be included in the training set, and it's possible that the data didn't have a diversity of faces. It's also possible that there was a problem with the algorithm. Google's statement regarding the issue doesn't make this clear, but what is truly illuminating about this AI fiasco is the company's admission that machine learning has its limits:

"Google's caution around images of gorillas illustrates a shortcoming of existing machine-learning technology. With enough data and computing power, software can be trained to categorize images or transcribe speech to a high level of accuracy. But it can't easily go beyond the experience of that training. And even the very best algorithms lack the ability to use common sense, or abstract concepts, to refine their interpretation of the world as humans do."

The solution, by the way, was to completely censor the use of "gorilla", "chimpanzee", and "chimp" as labels in the classifier. It's a dumb algorithm that, intentionally or not, promoted racist beliefs about a group of people. This was a dangerous algorithm, and it caused harm. Not because it was an intelligent machine that became sentient and started killing people, but because it's such an easy-to-influence computer with no ability to distinguish right from wrong.
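
To appreciate how blunt that fix is, here's a minimal sketch of a blocklist patch of that kind. The three blocked words are the ones reported; the function and data shapes are hypothetical.

```python
# The reported fix, in essence: hide the offending labels rather than
# teach the model anything. Function and data shapes are hypothetical.
BLOCKED_LABELS = {"gorilla", "chimpanzee", "chimp"}

def filter_labels(predictions):
    """Drop any blocklisted label before it reaches the user."""
    return [(label, score) for label, score in predictions
            if label.lower() not in BLOCKED_LABELS]

print(filter_labels([("gorilla", 0.91), ("primate", 0.74)]))
# -> [('primate', 0.74)]  -- the model is unchanged; its worst
# output is simply suppressed.
```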

Here's another example, and not surprisingly, it's another Google product (AI is their "thing", after all). A former YouTube AI developer created a program that tracks the trail of autoplays that the website's recommendation algorithm generates. Anyone who has used YouTube is at least passively familiar with this form of AI. Unless you disable the setting, the machine learning algorithm will automatically play videos it recommends based on the video you just finished watching.

Machine learning algorithms can be optimized to look for specific features in the data they're analyzing. In other words, the company is training the algorithm to select for certain features over others. In this case, the YouTube recommender seemed to be optimized to select for clickbait videos, though the company denies this. This isn't surprising, given that clickbait maximizes revenue: it's a type of content that preys on the impulsivity of the human mind. A lot of the videos that the algorithm drives to viewers are fake news stories, disturbing videos involving children, conspiracy theories, and other bizarre and discomfiting videos that make one question whether we're doing OK as a civilization.
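
Here's a hedged sketch of what "optimizing to select for certain features" means in practice. The scoring function and video data are invented (YouTube's actual objective isn't public): rank candidates by a single engagement score and watch clickbait float to the top.

```python
# Invented candidate videos with invented engagement statistics.
candidates = [
    {"title": "Calm explainer on tax policy", "watch_time": 120, "ctr": 0.02},
    {"title": "SHOCKING truth THEY hide!!!",  "watch_time": 240, "ctr": 0.11},
    {"title": "Local news segment",           "watch_time": 90,  "ctr": 0.03},
]

def engagement_score(video):
    # If the objective is raw engagement, clickbait wins on both terms.
    return video["watch_time"] * video["ctr"]

ranked = sorted(candidates, key=engagement_score, reverse=True)
print([v["title"] for v in ranked])
# The clickbait video ranks first. The algorithm is doing exactly
# what it was told, with no notion of truth or harm.
```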

Alex Jones reached peak fake news when he promoted the totally false pizza pedophile story during the 2016 election.
In particular, the aforementioned developer examined the trail of autoplay videos collected during the 2016 election. Of the 1,000 videos collected, one-third were irrelevant or neutral representations of Donald Trump or Hillary Clinton. Of the other two-thirds, 86% were pro-Trump and 14% were pro-Clinton. Are we to conclude that the superior and highly intelligent AI prefers Trump? This seems unlikely. Instead, it suggests that pro-Trump videos had more shock value, making them more tantalizing and harder to resist. Remember, Donald Trump rode the alt-right wave to the White House, a subculture that quickly figured out that spreading outright lies on the Internet is a great way to make money (see Alex Jones for a prime example). The AI algorithm was not smart enough to realize that humans were abusing it to make more money, or possibly to maliciously sway an election. While YouTube has stated that it is working to improve these machine learning algorithms, the potential negative effects they're having on society are fairly obvious.
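
The arithmetic behind those figures is simple enough to check. Here's a sketch of the tally, with counts reconstructed from the reported proportions and an invented data structure:

```python
from collections import Counter

# 1,000 videos: roughly one-third neutral, the rest split 86/14.
trail = ["pro_trump"] * 574 + ["pro_clinton"] * 93 + ["neutral"] * 333

counts = Counter(trail)
partisan = counts["pro_trump"] + counts["pro_clinton"]
print(f"pro-Trump share of partisan autoplays: {counts['pro_trump'] / partisan:.0%}")
# -> pro-Trump share of partisan autoplays: 86%
```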

The bottom line here is that while artificial intelligence can do things faster than humans, in particular when it comes to processing and analyzing large amounts of data, the current state of the art cannot reason, nor does it actually comprehend what it's analyzing. AI is not smart; it's really, really dumb. It's dangerous not because it's smart, but because it's not, and therefore can be easily manipulated. This isn't an argument for smarter AI; in fact, I believe that we should keep AI as dumb as possible. Artificial intelligence should only be a tool to help us; it should never become us. It should, however, be carefully monitored and protected from abuse, and developers and companies should be held responsible for designing algorithms that meet ethical standards.

Instead of worrying about when AI will take over the world and kill off our species, we should worry about how humans—corporations are people, my friends—are using AI to manipulate its users. Humans should rise up against the humans, not the machines.

Margot Paez is a PhD candidate in the civil engineering department at Georgia Institute of Technology. She has an MS in physics from GT. Her research interests are climate modeling and bitcoin's intersection with climate change and the energy sector. She is available for commentary and interviews.