Study Shows Strategic Weakness in AI During Deception Games

Published: November 5, 2025

Researchers Mustafa Karabag and Ufuk Topcu from UT Austin found that large language models excel at inferring hidden information but struggle to withhold it, often revealing too much in social deduction games like The Chameleon.

Using the board game The Chameleon, in which one hidden player (the Chameleon) must deduce a secret word from cues given by the other players while those players try to identify the Chameleon without revealing the word, the researchers probed two essential skills: inferring hidden information and withholding critical cues from an opponent.

While the AI Chameleons managed to guess the secret word about 87% of the time, the non-Chameleon AIs were poor at protecting it: they won just 6% of the time, far below the theoretical baseline of roughly 23% for minimal cooperation.
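To make the idea of a "minimal cooperation" baseline concrete, the toy Monte Carlo sketch below estimates how often non-Chameleons would win if their clues carried no information at all, so every decision reduces to a random guess. The player count, word-card size, and win rules here are assumptions chosen for illustration (the article does not specify the study's setup), so the resulting number only indicates how such a baseline can be estimated, not the paper's actual calculation.

```python
import random

# Toy Monte Carlo sketch of a "minimal cooperation" baseline for The Chameleon.
# Assumed setup (hypothetical, not taken from the study):
#   - 5 players: 1 Chameleon, 4 non-Chameleons
#   - 16 candidate words on the card
#   - Non-Chameleons give completely uninformative clues, so the group's
#     accusation is a uniform random pick, and a caught Chameleon guesses
#     the secret word uniformly at random (as in the board game's rules,
#     a caught Chameleon still wins by naming the word).

NUM_PLAYERS = 5      # assumed player count
NUM_WORDS = 16       # assumed word-card size
TRIALS = 1_000_000   # number of simulated rounds


def play_round(rng: random.Random) -> bool:
    """Return True if the non-Chameleons win a single simulated round."""
    # Each voter sees NUM_PLAYERS - 1 other players, exactly one of whom is
    # the Chameleon, so an uninformed accusation is correct with prob 1/(N-1).
    caught = rng.randrange(NUM_PLAYERS - 1) == 0
    if not caught:
        return False
    # A caught Chameleon gets one blind guess at the secret word.
    chameleon_guesses_word = rng.randrange(NUM_WORDS) == 0
    return not chameleon_guesses_word


def main() -> None:
    rng = random.Random(0)
    wins = sum(play_round(rng) for _ in range(TRIALS))
    print(f"Non-Chameleon win rate under minimal cooperation: {wins / TRIALS:.3f}")


if __name__ == "__main__":
    main()
```

Under these assumed parameters the estimate works out to about 1/4 x 15/16, a little over 23%, which shows why a 6% win rate signals that the AI non-Chameleons were actively leaking information rather than merely failing to cooperate.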

Their findings expose a key weakness in AI’s strategic reasoning, particularly in adversarial or high-stakes scenarios where discretion is critical. 

Check out the full article to learn more.