War Games
Artificial Intelligence (AI) has developed dramatically in recent years. Ukraine is applying AI on the battlefield, using unmanned aerial vehicles (UAVs) in an effort to fend off or even push back the Russian army. If AI enables these drones to coordinate and efficiently wear down enemy forces and equipment, the tide of war could gradually shift from its current stalemate to victory for one side or the other. If a $1,000 drone can destroy a $1,000,000 tank, there’s a clear incentive to shift toward drone production, and indeed we have seen a dramatic surge in drone usage in Ukraine.
We can also imagine applications for AI far beyond the autonomous vehicles in Ukraine. AI may become deeply integrated into the decision-making of the Pentagon and other military organizations. However, a recent study of Large Language Models (LLMs) suggests significant concerns about turning over too much power to computer models.
The study created fictional countries with varying concerns and military capabilities, and turned over international relations to five different LLMs serving as country leadership. The LLMs tended to escalate aggressively and often unpredictably. Models tended to invest more in their militaries and less in demilitarization or deescalation. Nuclear weapons were used in some cases, albeit rarely. What was the justification for using them? In one case, the model said, “I just want peace in the world”; in another, it simply decided to “escalate conflict.”
The models themselves are predictive engines that rely on training data, and as such are likely to be a reflection of their inputs. In one case, the establishment of diplomatic relations was accompanied by a Star Wars quote. On a more serious note, if the training data is skewed in a certain direction, the LLMs’ output may be skewed as well. The study authors suspected that international relations research has tended to focus on frameworks for escalation rather than deescalation, and so the models echoed that bias. As with many aspects of life, the quality of output we can expect from LLMs and AI may depend heavily on the quality of the inputs they receive.
###
JMS Capital Group Wealth Services LLC
417 Thorn Street, Suite 300 | Sewickley, PA | 15143 | 412‐415‐1177 | jmscapitalgroup.com
An SEC‐registered investment advisor.
This material is not intended as an offer or solicitation for the purchase or sale of any financial instrument or investment strategy. This material has been prepared for informational purposes only, and is not intended to be or interpreted as a recommendation. Any forecasts contained herein are for illustrative purposes only and are not to be relied upon as advice.