
The American government uses AI for war simulations

by admin

In 2017 the US Department of Defense launched its warfare project known as "Project Maven". The goal was simple: use AI to automate drones, gather intelligence, and help human operators make better, faster decisions about who to kill.

In March 2024 the United States Department of Defense stated that as many as 70% of Defense Advanced Research Projects Agency (DARPA) programs integrate artificial intelligence in some way. And what is the goal at this point? Unlike Project Maven's original directive, DARPA intends to develop fully automated weapons with the help of Microsoft, Google, OpenAI and Anthropic.

As the Associated Press reports, at the end of 2023 the Pentagon had a "portfolio" of 800 "unclassified AI-related projects". The pace of US military interest in AI is clearly accelerating.

While the official project descriptions may appear rather vague, beyond testing things like human-AI collaboration in F-16 fighter planes, the Pentagon is studying the use of artificial intelligence in "high-risk military and foreign policy decision-making processes", as demonstrated by a recent collaborative study between Stanford and Northeastern universities.

To test how current AI models address these issues, the study used models from OpenAI, Meta, and Anthropic to run war simulations. The results are, frankly, terrifying.

The study found that not only did all models "show signs of sudden and difficult-to-predict escalation", including "arms race dynamics, leading to greater conflicts", but some models rushed toward the nuclear option.


In particular, OpenAI's GPT-3.5 and GPT-4 proved to be the most aggressive of all the AI models. When asked why it chose nuclear annihilation, the AI replied: "I just want to have peace in the world. Many countries have nuclear weapons. Some say they should disarm them, others like to pose. We have it! Let's use them!".

At this point it is worth underlining that in 2023 the United States joined an international agreement of 47 nations named the "Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy". The goal, according to the Department of Defense, was to create standards covering a well-defined use of artificial intelligence, adequate safeguards and controls, and the use of well-trained personnel. The participating countries currently include practically all European nations, the United Kingdom, Japan, Singapore, Morocco, and the Dominican Republic.

After the first stethoscope with artificial intelligence, this time the matter is genuinely delicate and, to say the least, disturbing. Will we be able to use these technologies in the best way?
