SEOUL, SOUTH KOREA — Google’s artificially intelligent Go-playing computer system has claimed victory in its historic match with Korean grandmaster Lee Sedol after winning a third straight game in this best-of-five series. This Google creation is known as AlphaGo, and with its three-games-to-none triumph, the machine has earned Google a million dollars in prize money, which the company will donate to charity. But the money is merely an afterthought.
Machines have conquered the last games. Now comes the real world.
Over the last twenty-five years, machines have beaten the top humans at checkers and chess and Othello and Scrabble and Jeopardy!. But this is the first time an artificially intelligent system has topped one of the very best at Go, which is exponentially more complex than chess and requires an added level of intuition—at least among humans. This makes the win a major milestone for AI—a moment whose meaning extends well beyond a single game. Considering that many of the machine learning technologies at the heart of AlphaGo are already running services inside some of the world’s largest Internet companies, the victory shows how quickly AI will progress in the years to come.
Just two years ago, most experts believed that another decade would pass before a machine could claim this prize. But then researchers at DeepMind—a London AI lab acquired by Google—changed the equation using two increasingly powerful forms of machine learning, technologies that allow machines to learn largely on their own. Lee Sedol is widely regarded as the best Go player of the past decade. But he was beaten by a machine that taught itself to play the ancient game.
Sergey in the House
Though AlphaGo had won the first two games of the match—and Lee Sedol almost conceded the match after his loss in Game Two—the outcome of Game Three was by no means a certainty, and it was surrounded by the same heightened level of excitement. Unlike in Game Two, Lee Sedol was set to play the black stones in Game Three—a notable advantage, since black moves first. And he could draw on the experience of two complete contests—another advantage, since the Google team doesn’t have the power to tweak AlphaGo in the middle of a match.
In one sense, this is a game. But the match also represents the future of Google.
Among Go aficionados, the rumor was that after Game Two, Lee Sedol and several other top Go players stayed up most of the night analyzing the first two games and looking for weaknesses in AlphaGo’s play. But one of the match’s English language commentators, Michael Redmond, wasn’t convinced this was the best approach. “Lee Sedol has more to gain by playing his own game or playing an opening he likes to play, rather than fooling around and trying to find some weakness in AlphaGo,” he said. But the rumor gave the game an added spice.
Google chairman and former CEO Eric Schmidt was at the Four Seasons for Game One, as was Jeff Dean, one of the company’s most important engineers. Game Three spectators included Google founder Sergey Brin, who quietly flew into Seoul the day before. In one sense, this is a game—a sporting event that draws spectators for all the same reasons that other sporting events do. But the match also represents the future of Google. The machine learning techniques at the heart of AlphaGo already drive so many services inside the Internet giant—helping to identify faces in photos, recognize commands spoken into smartphones, choose Internet search results, and much more. They could also potentially reinvent everything from scientific research to robotics.
For far different reasons, the match is just as important to Lee Sedol. When Game Three began, he seemed to betray the pressure he was feeling to win at least one contest in this five-game match. As the referee kicked things off, he leaned forward in his chair, and as if to calm himself, he closed his eyes—keeping them closed for several seconds.
His opening was decisive. From the beginning, Lee Sedol played quickly, and he by no means played it safe. According to Redmond, the Korean’s opening was rather unusual, perhaps an indication that he aimed to push AlphaGo in a new direction. Indeed, within a mere 45 minutes, Redmond felt that the game had entered entirely new territory. “It’s already a position we probably haven’t ever seen in a professional game,” he said.
That’s a product not only of Lee Sedol’s opening, but of AlphaGo’s unique approach to the game. The machine plays like no human ever would—quite literally. Using what are called deep neural networks—vast networks of hardware and software that mimic the web of neurons in the human brain—AlphaGo initially learned the game by analyzing millions of moves from real live Go grandmasters. But then, using a sister technology called reinforcement learning, it reached a new level by playing game after game against itself, coming to recognize moves that give it the highest probability of winning. The result is a machine that often makes the most inhuman of moves.
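The two-stage recipe can be sketched in miniature. The toy program below illustrates only the second stage, self-play reinforcement, and it uses Nim rather than Go (a pile of stones, take one to three per turn, whoever takes the last stone wins). The game, the weight table, and the update numbers are all invented for illustration; they bear no resemblance to AlphaGo’s actual neural networks, only to the idea of a policy improving by playing itself.

```python
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)  # a player may take 1, 2, or 3 stones per turn

def choose(weights, pile, rng):
    """Sample a legal move in proportion to its learned weight."""
    legal = [a for a in ACTIONS if a <= pile]
    return rng.choices(legal, weights=[weights[(pile, a)] for a in legal])[0]

def self_play(weights, pile, rng):
    """One game of the policy against itself; whoever takes the last
    stone wins. Returns the winner and each side's (state, move) list."""
    history, player = {0: [], 1: []}, 0
    while True:
        move = choose(weights, pile, rng)
        history[player].append((pile, move))
        pile -= move
        if pile == 0:
            return player, history
        player = 1 - player

def train(games=20000, start=21, seed=0):
    """Reinforcement by self-play: after each game, moves played by the
    winner become more likely, moves played by the loser less likely."""
    rng = random.Random(seed)
    weights = defaultdict(lambda: 1.0)
    for _ in range(games):
        winner, history = self_play(weights, start, rng)
        for state_move in history[winner]:
            weights[state_move] *= 1.05
        for state_move in history[1 - winner]:
            weights[state_move] = max(weights[state_move] * 0.95, 1e-6)
    return weights

weights = train()
```

With enough games, the table converges toward sound endgame play (take all the stones when three or fewer remain) without ever consulting a human example during this stage, which is the essence of the technique.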
This happened in Game Two—in a very big way. With its 19th move, AlphaGo made a play that shocked just about everyone, including both the commentators and Lee Sedol, who needed nearly fifteen minutes to choose a response. The commentators couldn’t even begin to evaluate AlphaGo’s move, but it proved effective. Three hours later, AlphaGo had won the game.
Very Lee Sedol-Like
Game Three was different. As it approached the one-hour-and-twenty-minute mark, Redmond called the contest a “very Lee Sedol-like game,” meaning that the Korean was able to play in his characteristic style—a fast and aggressive approach. But AlphaGo was playing just as aggressively—“fighting,” as Redmond described it. He couldn’t judge who was ahead and who was behind.
Such is the nature of Go—a game that’s won by the tiniest of increments. This week’s match is so meaningful because this ancient pastime is so complex. As Google likes to say of Go: there are more possible positions on the board than atoms in the universe. Even for commentator Michael Redmond—a very talented Go player in his own right—judging the progress of a Go match is a difficult thing. That said, there’s one Go player that has made a science of this: AlphaGo. One of the machine’s advantages is that it’s constantly calculating its chances of winning. Every move it makes is an effort to maximize these chances.
There’s one Go player that has made a science of this: AlphaGo.
This was clearly on Redmond’s mind as the game progressed. The play had concentrated in the top left-hand corner of the board, and Redmond said that Lee Sedol had found his way into a “scary” situation. “I would be worried if I was black,” he said, referring to the Korean grandmaster. In other words, the pressure was on Lee Sedol to break out of the top left corner and extend the play into the middle of the board. AlphaGo was simply connecting its lines of white stones—as opposed to playing more strategic moves—and Redmond started to wonder if, even this early in the match, AlphaGo “thinks it’s ahead.”
As Google researcher Thore Graepel explained earlier in the week, because AlphaGo tries to maximize its probability of winning, it doesn’t necessarily maximize its margin of victory. Graepel even went so far as to say that inconsequential or “slack” moves can indicate that the machine believes its probability of winning is quite high. As Redmond saw it, AlphaGo’s latest string of moves was slack.
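Graepel’s distinction can be made concrete with a toy comparison. In the hypothetical numbers below (both moves and their statistics are invented for illustration), a chooser that maximizes expected margin and one that maximizes win probability pick different moves, and the “slack”-looking move is exactly what the probability maximizer prefers.

```python
# Hypothetical candidate moves, each with an assumed (win probability,
# expected winning margin in points). All numbers are made up.
candidates = {
    "quiet connection": (0.90, 1.5),   # nearly certain, wins by a sliver
    "big invasion":     (0.60, 20.0),  # risky, wins by a landslide
}

# A margin maximizer weighs how much it stands to win by...
by_margin = max(candidates, key=lambda m: candidates[m][0] * candidates[m][1])
# ...while an AlphaGo-style chooser cares only about the chance of winning.
by_win_probability = max(candidates, key=lambda m: candidates[m][0])

print(by_margin)           # → big invasion
print(by_win_probability)  # → quiet connection
```

To a human watching the margin, the “quiet connection” looks inconsequential; to a win-probability maximizer it is the safest path to victory.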
That said, the game was still very young. “It’s a bit early,” Redmond said. “Generally speaking, it’s dangerous to think you’re ahead at this point in the game. There is so much area on the board that still needs to be filled.”
The Ko Theory
A few minutes later, Lee Sedol had an opportunity to invite what’s called a “ko.” Basically, this is a situation where the game could enter a loop where the two players capture each other’s pieces—and then recapture them over and over again. There’s a rule that prevents this infinite loop. But prior to the game, the theory among Go aficionados was that AlphaGo—like past computer Go programs—was ill-equipped to handle a ko.
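The rule that breaks the loop can be stated in a single line of code. Below is a minimal sketch of the positional-superko formulation, under which a move may not recreate any earlier whole-board position (the simple ko rule is the special case of the position two moves back). The 2x2 board snapshots are toy stand-ins, not real Go positions.

```python
def legal_under_superko(resulting_position, past_positions):
    """Positional superko: a move is illegal if the whole-board position
    it produces has already occurred earlier in the game."""
    return resulting_position not in past_positions

# Toy 2x2 snapshots one ko cycle apart (illustrative only).
before = (("B", "."), (".", "W"))   # position before the ko capture
after  = (("B", "W"), (".", "."))   # position after the ko capture
seen = {before, after}

# Recapturing immediately would restore `before`, so the rule forbids it.
print(legal_under_superko(before, seen))  # → False
```

Playing elsewhere first (a “ko threat”) produces a fresh whole-board position, which is why the fight can resume a move later without violating the rule.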
During Games One and Two, AlphaGo seemed to avoid the ko scenario. But Redmond played down the theory. He pointed out that even back in October, when a much less proficient version of AlphaGo topped three-time European Go champion Fan Hui during a closed-door match at DeepMind headquarters in London, the machine successfully embraced the ko. “I doubt that would be a major problem,” Redmond said.
In any event, the ko did not play out. Instead, Lee Sedol shored up his position on the left-hand side of the board as AlphaGo continued to play with moves that Redmond and his co-commentator Chris Garlock described as slack.
Two hours and forty minutes in, the two players approached the end game. And it was still too early to tell who was ahead. But Lee Sedol was beginning to run into time trouble, just as he had in Game Two. Because he found himself in that tight position on the left-hand side of the board, the Korean had used a significant amount of time very early in the game, and his play clock had dropped to under 20 minutes. AlphaGo still had close to an hour. Once their play clocks run out, the players must make each move in under 60 seconds.