Humans can still beat AI at video games
Life experience comes in really handy when learning a new game. Tatiana Maksimova via Getty Images

Ask someone to chart the progression of artificial intelligence (AI) models over the past few decades and you'll likely hear some reference to how good they are at playing games. IBM shocked the world in 1997 when its Deep Blue model vanquished chess grandmaster Garry Kasparov in his own domain. Nearly two decades later, Google's AlphaGo model trounced a human champion of the game Go, a feat some thought impossible at the time.

Since then, increasingly data-rich AI models have graduated from board games to video games. Various models have used a training method called reinforcement learning, a technique that also plays a key role in training AI chatbots like ChatGPT, to teach machines how to learn and outperform humans at a range of Atari games. More recently, reinforcement learning has taught machines how to master incredibly complex strategy games, including Dota 2 and StarCraft II.

But there's one area of gaming remaining, at least for now, where computers still can't hold a candle to flesh-and-bone humans: they are still not great at quickly learning different kinds of more open-ended games. When it comes to picking up a random title from a game store that they haven't seen before and getting the gist of it, human gamers still learn the ropes much quicker than even the most advanced AI models.

That's the key argument made in a recent paper authored by New York University computer science professor Julian Togelius and his colleagues. They note this distinction isn't just a pat on the back for Homo sapiens. It may also shed light on a key element of what makes human intelligence so unique, and why AI still has a long way to go before it can truly claim human-level intelligence, let alone surpass it.
"If you pit an LLM [large language model] against a game it has not seen before, the result is almost certain failure," the authors write.

AI has been hooked on games from the beginning

Games have been useful testbeds for AI models for decades because they typically have predictable rules, defined goals, and varying mechanics. Those basic tenets track particularly well with reinforcement learning, where a model plays a game in simulation over and over again, sometimes millions of times, using trial and error to gradually improve until it reaches proficiency. This, in a basic sense, was how DeepMind was able to master Atari games in 2015. That same logic influences today's popular large language models, albeit with the entire internet serving as training data.

And yet, that method runs into problems when asked to generalize. AI models crush humans at board games and certain video games because the constraints are clear and the goals are relatively straightforward. At the end of the day, Togelius and his colleagues argue that
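For readers curious what that trial-and-error loop actually looks like, here is a minimal, illustrative sketch: tabular Q-learning on a made-up one-dimensional "game" where the player starts at cell 0 and wins by reaching cell 4. Every name and number here is invented for the example; it is a toy stand-in for, not a reproduction of, the far larger deep reinforcement learning systems used on Atari games.

```python
import random

# Toy "game": cells 0..4, start at cell 0, reaching cell 4 wins (reward 1).
N_STATES = 5
ACTIONS = (-1, +1)      # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.5  # learning rate, discount, exploration

# Q-table: the learned estimate of future reward for each (state, action)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(500):              # play the game many times...
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: usually exploit what we know, sometimes explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # ...nudging each estimate toward reward plus discounted future value
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
        s = s_next

# After training, acting greedily on the Q-table yields the learned policy
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)])
          for s in range(N_STATES - 1)}
print(policy)
```

In the real systems, the Q-table is replaced by a deep neural network and the toy game by a full game emulator, but the loop is the same: play, observe the reward, update the estimates, repeat.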