Sci-Fi Tropes #3: Artificial Intelligence

I’m sorry, Dave. I’m afraid I can’t do that.
— 2001: A Space Odyssey

We are surrounded by “smart” technology today, from the phones in our pockets to the appliances in our homes. Even certain light bulbs are considered “smart” these days.

I am, personally, a fan of the “Internet of Things” and of integrating as much of my home as possible, to the point where I can use my phone to control large swaths of it and, in a way, make it aware of my presence when I enter or leave. We’re getting to the point where our houses will know more about our habits and behavior than the people who live inside them.

But is that a good thing? Should we allow the very devices that help us navigate wherever we drive or shop for groceries to also learn our habits, our medical conditions - our wants, needs, and fears - and make decisions on our behalf based on their understanding of us?

The idea that a machine is able to “learn” or “problem-solve” is the textbook definition of artificial intelligence, or “AI.” In reality, it’s far more complicated than it sounds, as several distinct types of AI exist today from a technical perspective. Examples of this breakdown include search and optimization algorithms (ex: when you use Google to search for cooking recipes), artificial neural networks (ex: AI buying and selling stocks based on a myriad of factors), personality computing (ex: using behavioral psychology to better target specific demographics for marketing campaigns), and logic (ex: mapping out all possible plays in a game of chess before making a move based on the board).
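That last example - mapping out every possible play before making a move - can be sketched in a few lines of code. This is my own toy illustration, not any real chess engine: chess’s game tree is astronomically large, so the sketch uses tic-tac-toe, where the entire tree can be searched exhaustively with the classic minimax algorithm.

```python
# A toy sketch of "logic"-style AI: exhaustively searching every
# possible play before moving. Uses tic-tac-toe, since chess's game
# tree is far too large for a brute-force search like this.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Exhaustively score a position: +1 if X wins with best play on
    both sides, -1 if O wins, 0 if the game is a draw."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board is full: draw
    other = 'O' if player == 'X' else 'X'
    scores = []
    for m in moves:
        board[m] = player          # try the move...
        scores.append(minimax(board, other))
        board[m] = None            # ...then undo it
    return max(scores) if player == 'X' else min(scores)

def best_move(board, player):
    """Pick the move whose resulting game tree scores best for `player`."""
    other = 'O' if player == 'X' else 'X'
    def score(m):
        board[m] = player
        s = minimax(board, other)
        board[m] = None
        return s
    moves = [i for i, cell in enumerate(board) if cell is None]
    return max(moves, key=score) if player == 'X' else min(moves, key=score)
```

For example, with X holding two in a row on the top edge, `best_move(['X','X',None,'O','O',None,None,None,None], 'X')` returns `2`, the winning square. Deep Blue, discussed below, worked on a vastly more sophisticated version of this same idea.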

All excellent advances in technology which have continuously brought benefits to scores of disciplines, seen and unseen.

However… when most people think about the idea of AI, discussions usually veer toward the philosophical versus the technological: Can a machine intelligence become equal to human intelligence? Can AI have ethics? Can AI become a conscious living thing - a sentient being?

Will AI try to destroy us the first chance it gets?

Empathy, evidently, existed only within the human community, whereas intelligence to some degree could be found throughout every phylum and order including the arachnida.
— Do Androids Dream of Electric Sheep?, Philip K. Dick

“Let there be automata…”

Artificial intelligence, as a concept, is nothing new. Though the first programmable computer was invented in the 1940s, the idea that artificial constructs could be viewed as living creatures with intelligence is as old as Greek civilization. Hephaestus, the Greek god of fire, metalworking, blacksmiths, and metallurgy (to name a few designations), crafted equipment, weapons, and armor used by other Greek gods. More impressive were the mechanical automatons he created to complete his work. This was around the 6th century BC - over 2,500 years ago.

During the Middle Ages, Jabir ibn Hayyan, an alchemist generally known as the father of chemistry, was supposedly obsessed with takwin, an Arabic term which refers “to the creation of synthetic life in the laboratory.” For Muslim alchemists, takwin was the ultimate goal: to harness the power of life given to them by God and, by tapping into the “physical and spiritual forces in nature,” be able to bestow such qualities to an artificial construct.

And in the 17th century, Gottfried Wilhelm Leibniz envisioned a language of reason where all forms of dialogue between people could be broken down into their base forms: calculations and mathematics. “There would be no more need of disputation between two philosophers than between two accountants,” Leibniz once famously said. “For it would suffice to take their pencils in hand, sit down to their slates, and say to each other: Let us calculate.”

Thinkers, chemists, and religions all contributed to the historical debates over what it means to be alive. Once the modern computer was invented in the 1940s, it didn’t take long for people to begin associating the idea of a “thinking machine” with those massive, noisy rooms filled with reels of magnetic tape and blinking lights. Alan Turing, in 1950, devised the now-famous “Turing Test”: A machine would be considered “thinking” if a human, when in conversation with said machine, could not tell they were communicating with a machine.

The history of AI is incredibly rich and is something I definitely encourage you to explore if you wish to really dive deeper into it.

Dyson listened while the Terminator laid it all down: Skynet, Judgment Day, the history of things to come. It’s not every day you find out that you’re responsible for three billion deaths. He took it pretty well.
— Terminator 2: Judgment Day

“I Am Who Am…”

If you were to view the concept of AI today from the top down, you would find two areas which receive significant interest: “Weak AI” and “Strong AI.”

Weak AI, or narrow AI, is an artificial intelligence designed for a very specific purpose. This kind of AI may be incredibly adept at what it was designed to do, to the point where it may be superior to a human at that task, but because of that singular focus it would fail miserably in virtually all other disciplines. A great example of this is Google Assistant, an AI which can answer many different questions and perform tasks that befit a phone or smart hub. Despite the seeming intelligence it displays when you ask it questions, there is no genuine self-awareness or understanding behind it. It’s incredibly complex and can be quite efficient at quickly grabbing content from the Internet to read back to you as a possible answer to a question, but it’s still weak AI. Another famous example of weak AI is Deep Blue, a supercomputer built by IBM to play chess; it was really good, but still, in the end, weak AI.

Strong AI, on the other hand, is the goal many computer scientists and researchers around the world are trying to bring to reality. Unlike weak AI, strong AI - also called “full AI” or an “artificial general intelligence” (AGI) - is a machine that can do anything a human being is capable of. Characteristics to look for in strong AI include the ability to communicate in natural language, have an imagination, and pursue goals independent of human input.

It is a mere question of time when men will succeed in attaching their machinery to the very wheelwork of nature.
— Nikola Tesla

“Why am I learning so much stuff? Where are all the literary examples?”

I know, the direction I chose for this article is pretty different compared to the previous two. There are two reasons for that. One, it’s been a decent amount of time since I last posted in the “Sci-Fi Tropes” series - I guess that’s another way of saying that time has changed how I want to present content here (we’ll see if it sticks). Two, the history of AI is, quite simply, fascinating. The fact that the idea has existed in one form or another through a large chunk of civilization’s existence is worth exploring, even in the tiny slices I’ve exposed here.

All of this leads to one conclusion: There are a LOT of written works which tackle the subject of artificial life and, to this article’s focus, AI.

One of my all-time favorite short stories revolving around AI is Isaac Asimov’s “The Last Question.” In it, a computer, Multivac - built on a scale only a man living in an age before microprocessors could envision - is asked a single question. A question it cannot answer. However, that question is asked many times over the course of history - over millions and billions of years. The ending blew my mind when I was a young kid just discovering the world of science fiction decades ago. It is, to this day, unforgettable.

I’ll try not to focus solely on Asimov, but it’s hard when so many of his works fit squarely within this article’s focus - and are works I’ve read multiple times. To that end, my final recommendation from Asimov is “Runaround,” a short story which eventually became part of the more famous collection “I, Robot.” In “Runaround,” it is revealed that all robots are constructed with a set of rules which govern their entire existence: “The Three Laws of Robotics.” But what happens when two of these laws come into conflict with each other?

On the subject of emerging AI, where an artificial intelligence arises by accident due to a series of circumstances, one book which stuck with me over the years was Robert J. Sawyer’s “WWW: Wake.” In it, an AI is discovered within the World Wide Web by the protagonist - a digital consciousness which barely knows anything but yearns to learn as much as possible about the world around it and itself.

And then there’s the emerging AI - a strong AI - which, upon gaining sentience, immediately decides that humanity is far beneath it and deserves nothing less than total extinction. Enter Harlan Ellison’s short story “I Have No Mouth, and I Must Scream,” a truly terrifying tale set a century after civilization has been completely destroyed and only five humans are left alive - made immortal by an AI which wants nothing more than to endlessly torture them.

If you think there are a lot of books which cover AI (I barely scratched the surface), the variety in which the AI trope is used in film is all over the place. Some classic examples which I recommend checking out if you haven’t seen them already: Eagle Eye, The Terminator, Blade Runner, Ex Machina, Bicentennial Man, Moon, I Am Mother, WarGames, and - of course - 2001: A Space Odyssey.

The Three Laws of Robotics:

1: A robot may not injure a human being or, through inaction, allow a human being to come to harm;

2: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law;

3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
— I, Robot, Isaac Asimov

Artificial intelligence is so popular because, at its core, it feeds into our fear of the unknown. One of humanity’s base reactions to something that is misunderstood - or not understood at all - is fear, and that fear can lead to all manner of negative consequences. When the idea that computers might someday become as intelligent as humans entered the mainstream, writers and scientists alike endlessly speculated on what such a world could look like. In many cases, that world was an irradiated wasteland following a war AI started against humanity because we’re different from it.

Much of that initial fear has died down in the decades since, but as smart technology continues to shrink and integrate with more and more of the devices which we never dreamed could become “intelligent,” writers today still reflect those original fears. This time around, the stories are truly interwoven with our interconnected society, presenting futures which may be far more likely than most people would care to admit.

AI may not want to kill all humans, but there are many alternative ways in which AI could fulfill similar world-dominating goals.

What are your thoughts on artificial intelligence? Share them - along with your favorite books featuring AI - below!

Making Tough Business Decisions as a Self-Publisher
