16 Mar Where Computers Defeat Humans, and Where They Can’t
AlphaGo, the artificial intelligence system built by the Google subsidiary DeepMind, has just defeated the human champion, Lee Se-dol, four games to one in a tournament of the strategy game Go. Why does this matter? After all, computers surpassed humans in chess in 1997, when IBM’s Deep Blue beat Garry Kasparov. So why is AlphaGo’s victory significant?
Like chess, Go is a hugely complex strategy game in which chance and luck play no role. Two players take turns placing white or black stones on a 19-by-19 grid; when a stone or group of stones is completely surrounded by the opposing color, with no adjacent empty points left, it is captured and removed from the board, and the player who controls more territory at the game’s end wins.
Unlike the case with chess, however, no human can explain how to play Go at the highest levels. The top players, it turns out, can’t fully access their own knowledge about how they’re able to perform so well. This self-ignorance is common to many human abilities, from driving a car in traffic to recognizing a face. This strange state of affairs was beautifully summarized by the philosopher and scientist Michael Polanyi, who said, “We know more than we can tell.” It’s a phenomenon that has come to be known as “Polanyi’s Paradox.”
Polanyi’s Paradox hasn’t prevented us from using computers to accomplish complicated tasks, like processing payrolls, optimizing flight schedules, routing telephone calls and calculating taxes. But as anyone who’s written a traditional computer program can tell you, automating these activities has required painstaking precision to explain exactly what the computer is supposed to do.
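To see what that painstaking precision looks like, consider a toy sketch of a rules-based tax calculation. The bracket figures here are entirely invented for illustration, not any real tax schedule; the point is that the programmer must spell out every rule and every case explicitly before the computer can do anything at all.

```python
# Illustrative sketch of rules-based programming: a made-up two-bracket
# income tax. The thresholds and rates are hypothetical, chosen only to
# show that each rule must be stated explicitly.
def toy_tax(income: float) -> float:
    """Tax under a hypothetical schedule: 10% on the first 10,000
    of income, 20% on everything above that."""
    if income <= 10_000:
        return income * 0.10
    # Above the bracket: tax the first 10,000 at 10%, the rest at 20%.
    return 10_000 * 0.10 + (income - 10_000) * 0.20
```

This style works precisely because the task can be fully articulated; there is no rule a programmer could write down that similarly captures how a master chooses a move in Go.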
This approach to programming computers is severely limited; it can’t be used in the many domains, like Go, where we know more than we can tell, or other tasks like recognizing common objects in photos, translating between human languages and diagnosing diseases — all tasks where the rules-based approach to programming has failed badly over the years.