
Life experience comes in really handy when learning a new game. Image: Tatiana Maksimova via Getty Images

Ask someone to chart the progression of artificial intelligence (AI) models over the past few decades and you'll likely hear some reference to how good they are at playing games. IBM shocked the world in 1997 when its Deep Blue model vanquished chess grandmaster Garry Kasparov at his own game. Nearly two decades later, Google's AlphaGo model trounced a human champion of the game Go, a feat some thought impossible at the time. Since then, increasingly data-rich AI models have graduated from board games to video games. Various models have used a training method called reinforcement learning—a technique that also plays a key role in training AI chatbots like ChatGPT—to teach machines how to learn and outperform humans at a range of Atari games. More recently, reinforcement learning has taught machines how to master incredibly complex strategy games, including Dota 2 and StarCraft II.

But there's one area of gaming remaining—at least for now—where computers still can't hold a candle to flesh-and-blood humans: they are still not great at quickly learning different kinds of more open-ended games. When it comes to picking up a random title from a game store that they haven't seen before and getting the gist, human gamers still learn the ropes much quicker than even the most advanced AI models. That's the key argument made in a recent paper authored by New York University computer science professor Julian Togelius and his colleagues. They note this distinction isn't just a pat on the back for Homo sapiens. It may also shed light on a key element of what makes human intelligence so unique, and why AI still has a long way to go before it can truly claim human-level intelligence—let alone surpass it.
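To get a feel for what "reinforcement learning" means in practice, here is a deliberately tiny sketch in Python: tabular Q-learning, a simplified classroom cousin of the deep Q-learning DeepMind applied to Atari. The six-cell "game" and every name in it are invented purely for illustration, not taken from any lab's actual system.

```python
import random

# Toy "game": a corridor of 6 cells. The agent starts at cell 0; reaching
# cell 5 wins (+1 reward). The game and all names are invented for illustration.
N_STATES = 6
GOAL = N_STATES - 1
ACTIONS = [-1, +1]  # step left, step right

def step(state, action):
    """Apply a move, clamp to the board, return (next_state, reward, done)."""
    nxt = max(0, min(GOAL, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning: play the game over and over, improving a value
    estimate for each (state, action) pair by trial and error."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # q[state][action_index]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Explore at random sometimes (or when estimates are tied);
            # otherwise exploit the current best guess.
            if rng.random() < epsilon or q[state][0] == q[state][1]:
                a = rng.randrange(2)
            else:
                a = 0 if q[state][0] > q[state][1] else 1
            nxt, reward, done = step(state, ACTIONS[a])
            # Nudge the estimate toward reward + discounted future value.
            target = reward + (0.0 if done else gamma * max(q[nxt]))
            q[state][a] += alpha * (target - q[state][a])
            state = nxt
    return q

q = train()
# The learned policy: which direction looks better from each non-goal cell.
policy = ["right" if q[s][1] > q[s][0] else "left" for s in range(GOAL)]
```

The agent is never told the rules; it simply plays hundreds of episodes and gradually learns that stepping right pays off. That same trial-and-error loop, scaled up to neural networks and millions of simulated games, is the engine behind the Atari, Dota 2, and StarCraft II results.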
"If you pit an LLM [large language model] against a game it has not seen before, the result is almost certain failure," the authors write.

AI has been hooked on games from the beginning

Games have been useful testbeds for AI models for decades because they typically have predictable rules, defined goals, and varying mechanics. Those qualities suit reinforcement learning particularly well: a model plays a game in simulation over and over again—sometimes millions of times—using trial and error to gradually improve until it reaches proficiency. This, in a basic sense, was how DeepMind was able to master Atari games in 2015. That same logic influences today's popular large language models, albeit with the entire internet serving as training data.

And yet, that method runs into problems when asked to generalize. AI models crush humans at board games and certain video games because the constraints are clear and the goals are relatively straightforward. At the end of the day, Togelius and his colleagues argue that