Google learns how to beat you at Atari video games

Google's deep Q-network algorithm learned to play Atari video games as well as a human.

How smart is Google? It has figured out how to beat Atari.

Google has developed a computer program that can play -- and beat -- the 1980s-era Atari video games.

With an algorithm that Google (GOOGL) calls "deep Q-network," a computer was able to achieve human-level proficiency at more than two dozen Atari games, ranging from scrolling shooters like River Raid to racing games like Enduro.

Researchers for Google described the achievement in a paper published in the journal Nature this week.

Google provided the computer with just the basic level of understanding about how to play the game: The computer was able to "see" the pixels on the screen; it was told what actions the virtual buttons performed; and it was told the score.
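In reinforcement-learning terms, that input-output contract — raw pixels in, a fixed set of actions, and a score signal back — can be sketched like this. This is a toy stand-in, not Google's code; the class and action names are hypothetical:

```python
import random

class AtariLikeEnvironment:
    """Toy stand-in for the interface the article describes:
    the agent sees raw pixels, knows which virtual buttons exist,
    and is told the score after each action."""

    ACTIONS = ["NOOP", "LEFT", "RIGHT", "FIRE"]  # hypothetical button set

    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def observe(self):
        # An 84x84 grid of pixel intensities (the frame size the
        # Nature paper's network actually consumed).
        return [[self.rng.randint(0, 255) for _ in range(84)]
                for _ in range(84)]

    def step(self, action):
        # Apply one button press; the only feedback is the score change.
        assert action in self.ACTIONS
        return self.rng.choice([0, 0, 1])

env = AtariLikeEnvironment()
frame = env.observe()      # what the agent "sees"
reward = env.step("FIRE")  # what the agent is "told"
```

The point of the sketch is what is missing: nothing tells the agent what the pixels mean or which actions are good — it must discover that from the score alone.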

What makes the program remarkable is that video games are not a natural fit for computers. Humans can draw on real-life experience when performing game tasks like driving a car or shooting a gun; a computer starts with nothing but bits and bytes.

But Google's new program played at least as well as, if not better than, a professional human player in 29 of the 49 games it tried. In 43 of the 49 games, Google said deep Q-network outperformed existing machine learning algorithms.

In some games, the Google computer was able to learn strategies that would help it maximize its score. For example, after playing the brick-breaking game Breakout 600 times, deep Q-network learned to dig a tunnel through the bricks and send the ball behind the wall, where it bounced around and knocked out bricks from behind.

Google says its algorithm was designed to mimic a kind of learning that takes place in a part of the brain called the hippocampus, which helps us learn from recent experience. Deep Q-network was designed to replay its past rounds of a video game, work out why it lost, and improve its play based on that past performance.
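The trial-and-error loop the article describes is, at its core, Q-learning over a replay memory of past experience. A minimal tabular sketch of that idea — with made-up states, actions, and learning constants, nothing like the scale of DeepMind's neural network:

```python
import random
from collections import defaultdict

random.seed(0)

# Replay memory: (state, action, reward, next_state) tuples from past
# play, echoing the hippocampus-inspired replay of recent experience.
replay_memory = []

# Q-values: the estimated future score for each (state, action) pair.
Q = defaultdict(float)
ALPHA, GAMMA = 0.5, 0.9  # learning rate and discount factor (assumed)

def remember(state, action, reward, next_state):
    replay_memory.append((state, action, reward, next_state))

def learn_from_replay(actions, batch_size=4):
    # Re-sample old transitions and nudge Q toward observed outcomes.
    batch = random.sample(replay_memory, min(batch_size, len(replay_memory)))
    for s, a, r, s2 in batch:
        best_next = max(Q[(s2, a2)] for a2 in actions)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

# Two remembered experiences: moving LEFT once scored a point, RIGHT did not.
actions = ["LEFT", "RIGHT"]
remember("ball_left", "LEFT", 1, "ball_center")
remember("ball_left", "RIGHT", 0, "ball_center")
for _ in range(50):
    learn_from_replay(actions)
```

After replaying those experiences, the table prefers LEFT in the "ball_left" state — learning from past performance exactly as the article describes, just on a toy scale.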

The stunning feat recalls IBM's (IBM) Deep Blue chess-playing computer and Watson, the computer that beat the world's best Jeopardy! players. But unlike those two examples, which were designed to beat a specific game, Google's deep Q-network was built to learn how to play any kind of game.

That's why Google has bigger ambitions for deep Q-network's machine learning capabilities. If we want robots to anticipate our needs and cars to drive themselves, computers will have to get better at learning on their own.
