Daniel Liang looks at a familiar world in an unfamiliar way – through a skeptical lens. Every month he peeks under the hood of a meme, myth, bias, or news article. Disclaimer: the opinions expressed do not represent the magazine, advertisers, employer, or the makers of either AlphaGo or Go.
AlphaGo And Artificial Intelligence
The game of Go (围棋) originated in China over 2,500 years ago and is one of the oldest board games still played today. Behind its simple rules lies a deceptively complex game, one requiring intuition and strategic thinking. After IBM’s Deep Blue defeated Kasparov in chess 20 years ago, Go was considered the last game in which humans still had the upper hand against computers.
Should We Worry About Artificial Intelligence?
This apparent fall of humanity’s last stand against Artificial Intelligence (AI) has brought another provocative question to the forefront, one you have undoubtedly heard before: is AI an existential threat to humanity? Let’s take a look.
Many heavyweights such as Stephen Hawking, Elon Musk, and Bill Gates have expressed concern that AI could spell the end of the human race. First of all, it is worth noting that the type of AI being referred to is Artificial General Intelligence (strong AI), a hypothetical machine that can perform any intellectual task a human can. It does not currently exist, and may never. Strong AI straddles the realms of science and fiction, and its implications are deep and philosophical. That discussion, however, is for another day.
What does exist today is AI geared toward a specific, narrow task (weak AI), such as playing Go, driving a car, navigating, or even predicting pregnancies. We already use weak AI extensively in everyday life, and its risks are not only real but often go unnoticed.
What Are Some of the Risks of Weak AI?
AI machines are immune to human shortcomings such as distraction, bias, fatigue, and calculation errors, which makes them perfect for repetitive and well-defined tasks. In fact, they are so good at these specific tasks that we willingly and prematurely delegate our responsibilities to them; we happily embrace every incremental improvement, each step reasonable on its own, until we have eventually delegated our skills away, much like our privacy and liberties. Far from being a mere inconvenience, this loss of skills is often a matter of life and death. Airplanes have crashed because pilots became over-reliant on autopilot and lost their flying skills, as when Asiana Airlines Flight 214 crashed into the seawall on a clear day in San Francisco in 2013.
Another risk concerns responsibility. Consider a scenario: a self-driving car is cruising down a two-lane road when a large tree suddenly falls and blocks its lane. The car can avoid harm to its passenger by swerving into the other lane, which is unfortunately occupied by bicyclists. The AI will act according to how it was programmed; the passenger’s fate is predetermined. The important question is, by whom? Who gets to play god: the programmer, the passenger, ethicists, or someone else?
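To make the dilemma concrete, here is a minimal, purely hypothetical sketch of what "programming the choice in advance" looks like. Nothing here reflects any real autonomous-driving system; the function, harm estimates, and weights are all illustrative. The point is that someone must pick the weights before the tree ever falls, and that choice is where the ethics lives.

```python
# Hypothetical toy policy for the fallen-tree scenario described above.
# The weights encode whose harm counts for how much; choosing them in
# advance is the premeditated ethical decision someone must make.

def choose_action(passenger_harm_if_stay, cyclist_harm_if_swerve,
                  passenger_weight=1.0, cyclist_weight=1.0):
    """Return 'stay' or 'swerve' by comparing weighted expected harm."""
    stay_cost = passenger_weight * passenger_harm_if_stay
    swerve_cost = cyclist_weight * cyclist_harm_if_swerve
    return "stay" if stay_cost <= swerve_cost else "swerve"

# With equal weights, the car swerves only when that lowers total harm:
print(choose_action(passenger_harm_if_stay=0.9, cyclist_harm_if_swerve=0.4))
# Triple the weight on cyclist harm, and the same situation flips:
print(choose_action(0.9, 0.4, cyclist_weight=3.0))
```

The code is trivial by design: the hard part is not the comparison but deciding, and defending, the numbers fed into it.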
With AI taking the uncertainty of execution out of the equation, we can no longer hide behind the intentional vagueness we so want to preserve, for the harm is no longer accidental but premeditated. As we are forced to write down rules and preferences for these difficult situations, we lose our treasured hypocrisies and lay bare our collective biases: biases tacitly accepted but left unspoken.
We want to cede authority to AI, yet the accompanying responsibilities are far harder and more uncomfortable to shirk. If we stick our heads in the sand and simply allow AI to make increasingly important decisions for us, we relinquish not only our right to choose but also the values behind our choices. We risk relegating responsibility to parties who may not be qualified, and, perhaps more appallingly, without our consent.
Throughout history, humans have built and used tools of increasing power: sticks, energy, abstract thought, information, and now the simulation of intelligence, which is AI. With this immensely powerful tool, the real risks are found not in the machine but in the mirror. In our reflection we see expanding waistlines, because it is easier to drive than to walk, and shrinking brains, because it is easier to accept a thought than to think for ourselves. We crave the conveniences the tool provides, yet pout when we have to read the operating manual; and as the machines have gotten artificially intelligent, we humans have become artificially stupid.
Some mourn the loss of what was considered humanity’s last stand in board games against AI. I beg to differ. Even if AlphaGo were unbeatable, the game of Go would not be relegated to the trash bin. On the contrary, AlphaGo is already expanding our understanding of the game and opening up new possibilities, which is exactly what happened with chess. And in my opinion, a tiny and temporary dent in our imagined superiority is a small price to pay for a profound and lasting improvement. After all, when a teacher is beaten by his student, he loses face but gains pride, an emotion found not in the cold silicon hardware and software of AI, but in the warm, squishy wetware between our ears.
Wielded carefully, AI is a powerful tool that can drive the betterment of mankind. It is something to embrace, not with blind fervor but with prudence and reflection. Until the day strong AI emerges, that is what AI should be to us: a complement, not a replacement.
- Go (game), Wikipedia
- Deep Blue versus Garry Kasparov, Wikipedia
- Google DeepMind official site
- Mastering the game of Go with deep neural networks and tree search, Nature 2016
- Google DeepMind Challenge Match Press Release, Google DeepMind
- Lee Sedol, Wikipedia
- The Sadness and Beauty of Watching Google’s AI Play Go, Wired
- S. Korean Go player confident of beating Google’s AI, Yonhap News Agency
- Google’s AlphaGo gets ‘divine’ Go ranking, phys.org
- Go Ratings, goratings.org
- Ke Jie, Wikipedia
- Elon Musk, Stephen Hawking, Google researchers join forces to avoid ‘pitfalls’ of artificial intelligence, The Washington Post
- Elon Musk: ‘With artificial intelligence we are summoning the demon.’, The Washington Post
- Artificial general intelligence, Wikipedia
- The Doomsday Invention, The New Yorker
- What’s Even Creepier Than Target Guessing That You’re Pregnant?, Slate
- Concerns raised about overreliance on autopilot, Alaska Dispatch News
- Asiana Airlines Flight 214, Wikipedia
- The Moral Challenges of Driverless Cars, Communications of the ACM