For hundreds, perhaps thousands, of years, people have played games to challenge each other, to while away idle evenings and, they hoped, to keep their minds sharp.
Now we have a growing body of research that suggests that, before long, no human will be able to beat a well-trained computer at any board game, anywhere, ever.
In 1989, Jonathan Schaeffer, a professor at the University of Alberta, designed a program called Chinook to play checkers, a board game with 64 squares, two sets of 12 round pieces and a simple goal: wipe out your opponent's pieces before yours are wiped out.
I played a lot of checkers in my childhood and got better with practice and age. Chinook also played a lot of checkers -- its developers worked through some 500 billion billion possible positions, in fact -- and it never forgot a move it learned in practice.
By the summer of 2007, Chinook could beat any human player who made even a single misplay. The best a person could do was play a perfect game and settle for a draw.
(By now, Chinook may have played many billions more games and refined its strategies further. But checkers is a solved game: against perfect play, even Chinook can manage no better than a draw.)
Chess, of course, is devilishly more complicated than checkers. Though played on the same 64-square board, it uses two sets of 16 pieces, not 12, and each type of piece moves by its own rules, giving every turn far more options.
In 1985, graduate students at Carnegie Mellon University began grooming a machine and program called ChipTest to play chess. Four years later, IBM hired the team and took over the project, which eventually became Deep Blue. Progress came in fits and starts, but Deep Blue marched forward.
IBM's pockets were no doubt deeper than those of the academics who had begun the chess challenge, and by 1996, Deep Blue was judged ready to take on the world chess champion, Garry Kasparov.
Deep Blue lost.
But Deep Blue was "game." It learned more and tried again. By 1997, Deep Blue's programmers knew that it could evaluate as many as 200 million chess positions in a single second.
"The grand chessmaster (Garry Kasparov) won the first game, Deep Blue took the next one, and the two players drew the three following games. Game 6 ended the match with a crushing defeat of the champion by Deep Blue."
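That raw speed served a brute-force game-tree search. Deep Blue's evaluation ran on custom hardware, but the underlying technique, alpha-beta minimax search, can be sketched in a few lines of Python. This is a toy illustration on a hand-made tree, not IBM's code; the tree, values and function names are mine.

```python
# Minimal alpha-beta minimax sketch (illustrative only, not Deep Blue's code).
# The "game" is a fixed tree: inner nodes are lists of children, leaves are
# integers scoring the position from the maximizing player's point of view.

def alphabeta(node, alpha, beta, maximizing):
    """Return the minimax value of `node` with alpha-beta pruning."""
    if isinstance(node, int):              # leaf: a static evaluation score
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:              # opponent would never allow this line
                break                      # so stop searching it (the pruning)
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value

# A tiny two-ply game: the maximizer picks a branch, the minimizer replies.
tree = [[3, 5], [2, 9], [0, 1]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 3
```

Deep Blue's advantage was doing exactly this kind of search, with a vastly better evaluation function, across hundreds of millions of positions per second.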
After its conquest of chess, IBM's next wizard computer, Watson, managed to defeat the two highest-scoring champions of the television game show Jeopardy.
As in each previous challenge, the computer had been judged beforehand unable to process so many kinds of information -- in this case, subtle verbal clues -- as well as the human winners could.
Whiners complained that Watson may simply have been faster on the buzzer, a common complaint among Jeopardy also-rans. But this factor seems to have been ruled out.
In that year, 2011, we learned that computers, with enough practice, could beat us at Jeopardy too.
Still, this did not prepare me for the latest reports.
Heads-Up Limit Texas Hold'em
The fellows at the University of Alberta seem, once again, to have broken the code. Their explanation:
"The solutions for imperfect information games require computers to handle the additional complication of not knowing exactly what the game's status is, such as not knowing an opponent's hand. Such techniques require more computer memory and computing power."
Research into this challenge apparently consisted of pitting two computers against each other in Texas Hold'em games many millions of times, each adjusting its strategy as the hands piled up. A winning strategy emerged.
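The core idea behind that self-play can be shown on a toy game. The Alberta group's solver used counterfactual regret minimization, a far more elaborate method; the sketch below uses its simpler ancestor, regret matching, and all names in it are mine, not the researchers' code. Two strategies play rock-paper-scissors against each other, each shifting probability toward the moves it regrets not having played, and the average strategy drifts toward the game's equilibrium (one-third on each move):

```python
import random

ACTIONS = 3  # rock, paper, scissors
# PAYOFF[a][b] = payoff to a player choosing action a against action b
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def strategy(regrets):
    """Turn accumulated regrets into a mixed strategy (regret matching)."""
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    if total == 0:
        return [1.0 / ACTIONS] * ACTIONS   # no regrets yet: play uniformly
    return [p / total for p in positives]

def sample(probs):
    """Draw one action index according to the given probabilities."""
    r, cum = random.random(), 0.0
    for a, p in enumerate(probs):
        cum += p
        if r < cum:
            return a
    return ACTIONS - 1

def train(iterations):
    """Self-play: both players update regrets; return player 0's average strategy."""
    regrets = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    strat_sum = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    for _ in range(iterations):
        strats = [strategy(regrets[0]), strategy(regrets[1])]
        acts = [sample(strats[0]), sample(strats[1])]
        for me, opp in ((0, 1), (1, 0)):
            got = PAYOFF[acts[me]][acts[opp]]
            for a in range(ACTIONS):
                # regret: how much better action a would have done than the
                # action actually played
                regrets[me][a] += PAYOFF[a][acts[opp]] - got
                strat_sum[me][a] += strats[me][a]
    total = sum(strat_sum[0])
    return [s / total for s in strat_sum[0]]

print(train(100_000))  # each probability should hover near 1/3
```

Scaling this idea from a three-move game to the astronomically larger space of poker betting sequences and hidden cards is what demanded the memory and computing power the researchers describe.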
Surely there are many human players -- in Las Vegas, Atlantic City and Macau -- with even more millions of games of poker experience than computers have had time to play. Surely they have had the experience to learn enough to prevail in virtually any situation. But no.
Unfortunately, humans are not computers. We do the same thing many times and only sometimes learn from the experience.
Just not often enough.