Humans can still beat AI at video games
Life experience comes in really handy when learning a new game. Tatiana Maksimova via Getty Images

Ask someone to chart the progression of artificial intelligence (AI) models over the past few decades and you'll likely hear some reference to how good they are at playing games. IBM shocked the world in 1997 when its Deep Blue model vanquished chess grandmaster Garry Kasparov in his own domain. Nearly two decades later, Google's AlphaGo model trounced a human champion of the game Go, a feat some thought impossible at the time.

Since then, increasingly data-rich AI models have graduated from board games to video games. Various models have used a training method called reinforcement learning, a technique that also plays a key role in training AI chatbots like ChatGPT, to teach machines how to learn and outperform humans at a range of Atari games. More recently, reinforcement learning has taught machines how to master incredibly complex strategy games, including Dota 2 and StarCraft II.

But there's one area of gaming remaining, at least for now, where computers still can't hold a candle to flesh-and-bone humans: they are still not great at quickly learning different kinds of more open-ended games. When it comes to picking up a random title from a game store that they haven't seen before and getting the gist, human gamers still learn the ropes much quicker than even the most advanced AI models. That's the key argument made in a recent paper authored by New York University computer science professor Julian Togelius and his colleagues. They note this distinction isn't just a pat on the back for Homo sapiens. It may also shed light on a key element of what makes human intelligence so unique, and on why AI still has a long way to go before it can truly claim human-level intelligence, let alone surpass it.
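To give a flavor of the reinforcement learning approach mentioned above, here is a toy, self-contained sketch of learning by trial and error: tabular Q-learning on a made-up six-square "corridor" game, where an agent starting on square 0 wins by reaching square 5. The game, reward scheme, and hyperparameters are all invented for illustration; real game-playing systems such as DeepMind's Atari agents use far larger neural-network versions of the same basic idea.

```python
# Toy reinforcement learning sketch: tabular Q-learning on an invented
# six-square corridor game. Everything here is illustrative, not any
# lab's actual training setup.
import random

N_STATES = 6          # squares 0..5; square 5 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: the agent's running estimate of how good each action is in
# each square, learned purely from repeated play.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose_action(state):
    qs = [Q[(state, a)] for a in ACTIONS]
    # Explore randomly sometimes (and whenever it has no preference yet).
    if random.random() < EPSILON or qs[0] == qs[1]:
        return random.choice(ACTIONS)
    return ACTIONS[qs.index(max(qs))]

def play_episode():
    state = 0
    for _ in range(100):                     # cap the episode length
        action = choose_action(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Nudge the estimate toward what this single trial suggested.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - Q[(state, action)])
        state = next_state
        if state == N_STATES - 1:
            break

random.seed(0)
for _ in range(500):      # "over and over again": many trial games
    play_episode()

# After training, follow the learned policy greedily.
state, greedy_steps = 0, 0
while state != N_STATES - 1 and greedy_steps < 100:
    best = max(ACTIONS, key=lambda a: Q[(state, a)])
    state = min(max(state + best, 0), N_STATES - 1)
    greedy_steps += 1
print(greedy_steps)   # the agent has learned the shortest route
```

Note what the loop does not do: it takes hundreds of throwaway games before the agent finds the shortest five-step route, and the resulting Q-table is useless for any other game, which is exactly the gap with fast-learning humans that the article describes.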
"If you pit an LLM [large language model] against a game it has not seen before, the result is almost certain failure," the authors write.

AI has been hooked on games from the beginning

Games have been useful testbeds for AI models for decades because they typically have predictable rules, defined goals, and varying mechanics. Those basic tenets track particularly well with reinforcement learning, in which a model plays a game in simulation over and over again, sometimes millions of times, using trial and error to gradually improve until it reaches proficiency. This, in a basic sense, was how DeepMind was able to master Atari games in 2015. That same logic influences today's popular large language models, albeit with the entire internet serving as training data.

And yet, that method runs into problems when asked to generalize. AI models crush humans at board games and certain video games because the constraints are clear and the goals are relatively straightforward. At the end of the day, Togelius and his colleagues argue that