Shall We Play a Game? Language Models for Open-ended Wargames
Wargames are simulations of conflicts in which participants' decisions influence future events. While casual wargaming can be used for entertainment or socialization, serious wargaming is used by experts to explore strategic implications of decision-making and experiential learning. In this paper, we take the position that Artificial Intelligence (AI) systems, such as Language Models (LMs), are rapidly approaching human-expert capability for strategic planning -- and will one day surpass it. Military organizations have begun using LMs to provide insights into the consequences of real-world decisions during _open-ended wargames_ which use natural language to convey actions and outcomes. We argue the ability for AI systems to influence large-scale decisions motivates additional research into the safety, interpretability, and explainability of AI in open-ended wargames. To demonstrate, we conduct a scoping literature review with a curated selection of 100 unclassified studies on AI in wargames, and construct a novel ontology of open-endedness using the creativity afforded to players, adjudicators, and the novelty provided to observers. Drawing from this body of work, we distill a set of practical recommendations and critical safety considerations for deploying AI in open-ended wargames across common domains. We conclude by presenting the community with a set of high-impact open research challenges for future work
We don’t need agi or superintelligence for these things to be dangerous. We just need to be willing to hand over our decision making to a machine.
And of course a human can make a wrong call too. In this scenario that’s what is happening. And of course we should bring all of our tools to bear when it comes to evaluating nuclear threats.
But that doesn’t make it less concerning that we’ve now got machines capable of linguistic persuasion in that toolset.
"hand over" is a misnomer - what actually happens is that there's an interaction with a machine and people either trust it too much, or forget that it's a machine (i.e. handed from one person to another and the "AI warning" label is accidentally or intentionally ripped off)
This is not unlikely. This is actually likely.
The instructions for those agents is to find signals that prove there is an attack.
Llms are steered to do what they are requested. They will interpret the signals a strongly as possible.
They will omit counter evidence to achieve their objective.
They will distort analysis to find their objective.
This has been everyone's llm problem daily. How is not that clear yet?
I don't disagree, but just to play devils advocate: the LLM can also be told to look for counter-evidence, and will at least make a stab at doing so. That's more than we can expect from the humans currently in charge.
The big problem here is determining how vigilant those in command are about vetting the AI's responses. This feels like one of those systems that works great until someone vaporizes a hallucinated target that was actually civilians or unintended targets. This should be mitigated by having a MITM, but still. Risky. Humans make mistakes, too, and they're inclined to just "believe what the computer says," so as much as I'd love to believe this ends with a white picket fence scene, my instincts are screaming "dig a bunker, homie."
Since the beginning of the nuclear age, literally billions of dollars have been spent paying incredibly smart people to model all aspects of nuclear war, including the chain of escalation under uncertainty.
Not to discount the importance of this risk, but we’re not likely to sleepwalk into it, barring a collapse in strategic & operational competence in planning (yeah, yeah) that would make MANY risks dangerously severe.
I'd posit the faster we feed LLM exhisting nuclear crisis and invented, dissimilar to its training corpus, nuclear scenarios, the better we will know how wrong they can be.
Fear-mongering isn't lucrative, isn't dopamine triggering, isn't actionable, doesn't look good on the resume, so it's tipically ignored.
https://arxiv.org/abs/2509.17192
Shall We Play a Game? Language Models for Open-ended Wargames
Wargames are simulations of conflicts in which participants' decisions influence future events. While casual wargaming can be used for entertainment or socialization, serious wargaming is used by experts to explore strategic implications of decision-making and experiential learning. In this paper, we take the position that Artificial Intelligence (AI) systems, such as Language Models (LMs), are rapidly approaching human-expert capability for strategic planning -- and will one day surpass it. Military organizations have begun using LMs to provide insights into the consequences of real-world decisions during _open-ended wargames_ which use natural language to convey actions and outcomes. We argue the ability for AI systems to influence large-scale decisions motivates additional research into the safety, interpretability, and explainability of AI in open-ended wargames. To demonstrate, we conduct a scoping literature review with a curated selection of 100 unclassified studies on AI in wargames, and construct a novel ontology of open-endedness using the creativity afforded to players, adjudicators, and the novelty provided to observers. Drawing from this body of work, we distill a set of practical recommendations and critical safety considerations for deploying AI in open-ended wargames across common domains. We conclude by presenting the community with a set of high-impact open research challenges for future work
We don’t need agi or superintelligence for these things to be dangerous. We just need to be willing to hand over our decision making to a machine.
And of course a human can make a wrong call too. In this scenario that’s what is happening. And of course we should bring all of our tools to bear when it comes to evaluating nuclear threats.
But that doesn’t make it less concerning that we’ve now got machines capable of linguistic persuasion in that toolset.
"hand over" is a misnomer - what actually happens is that there's an interaction with a machine and people either trust it too much, or forget that it's a machine (i.e. handed from one person to another and the "AI warning" label is accidentally or intentionally ripped off)
This is not unlikely. This is actually likely. The instructions for those agents is to find signals that prove there is an attack. Llms are steered to do what they are requested. They will interpret the signals a strongly as possible. They will omit counter evidence to achieve their objective. They will distort analysis to find their objective.
This has been everyone's llm problem daily. How is not that clear yet?
I don't disagree, but just to play devils advocate: the LLM can also be told to look for counter-evidence, and will at least make a stab at doing so. That's more than we can expect from the humans currently in charge.
The big problem here is determining how vigilant those in command are about vetting the AI's responses. This feels like one of those systems that works great until someone vaporizes a hallucinated target that was actually civilians or unintended targets. This should be mitigated by having a MITM, but still. Risky. Humans make mistakes, too, and they're inclined to just "believe what the computer says," so as much as I'd love to believe this ends with a white picket fence scene, my instincts are screaming "dig a bunker, homie."
Would you like to play a game?
The quote is "Shall we play a game?”.
“Would you like to play a game?" is from Saw.
No-one got fired for ~buying IBM~ following a statistical-based text output.
In this scenario, ICBMs got fired.
Since the beginning of the nuclear age, literally billions of dollars have been spent paying incredibly smart people to model all aspects of nuclear war, including the chain of escalation under uncertainty.
Not to discount the importance of this risk, but we’re not likely to sleepwalk into it, barring a collapse in strategic & operational competence in planning (yeah, yeah) that would make MANY risks dangerously severe.
I'd posit the faster we feed LLM exhisting nuclear crisis and invented, dissimilar to its training corpus, nuclear scenarios, the better we will know how wrong they can be. Fear-mongering isn't lucrative, isn't dopamine triggering, isn't actionable, doesn't look good on the resume, so it's tipically ignored.
> Fear-mongering isn't lucrative, isn't dopamine triggering
Isn't it? Isn't fear-mongering one of the main selling points for news-media? And a driving factor of engagement in social media?