
The Dawn of the AI-Military Complex


In my newsletter I recently wrote a piece about the rise of the new AI-military complex; I translated it for Piqd using ChatGPT and then edited the result.

The Dawn of the AI-Military Complex

In December 2022, the Ukrainian army released a video with instructions for Russian soldiers on “How to surrender to a drone.” In the video we see “three men in uniform and with white armbands in a trench in a snowy landscape” who are later “led into Ukrainian captivity by a small red quadcopter.” These drones are most likely (still) operated by humans, but it’s easy to imagine that automated versions of them, linked to AI-controlled targeting systems, will soon follow. Surrender or die.

Last summer, the Financial Times ran the headline “How Silicon Valley is helping the Pentagon in the AI arms race” over a piece about how the US military is approaching defense and weapons startups about the use of AI technology. It describes, for example, the startup Saildrone, which builds autonomous sailboats that collect data for an “unparalleled database of ocean maps,” which is then analyzed using machine learning and used for climate research. In 2021, Saildrone was also a key contractor for the US Navy, helping it develop a fleet of AI systems to conduct surveillance in international waters, including the Arctic Ocean around Russia and the South China Sea.

Saildrone is far from the only company leveraging advances in civilian applications of AI technology to land lucrative government contracts with the military.

In its annual Unitas exercise, the US Navy deployed “swarms of air and sea drones [that] collected and shared reconnaissance data with each other, helping a multinational fleet more quickly detect, identify and eliminate enemy vehicles.” Under its so-called Replicator program, the US military is also designing “a thousand ‘robotic wingmen’ to support manned aircraft” and “thousands of ‘smart satellites’ that use AI to navigate units and track enemies.” Elon Musk’s SpaceX and Jeff Bezos’ Blue Origin are certainly serious candidates to put these satellites into orbit, while Eric Schmidt is hiding his new military AI startup White Stork, which builds suicide drones, behind a “matryoshka of LLCs.” “White Stork” is “a nod to the national bird and sacred totem of Ukraine, where Schmidt has taken on the role of defense technology consultant and financier.”


What is emerging here is an entire pipeline for AI-driven, semi-autonomous decision-making on the battlefield: air and sea drone swarms to gather information, generative AI to analyze that information for target selection, and finally autonomous air drone systems for strikes. All a human has to do here is confirm the system’s output and convert it into a command.

Autonomous weapon systems are on the agenda of the United Nations, and a November 2023 resolution on autonomous weapon systems stated that “an algorithm shall not have complete control over decisions affecting the killing or harm of human beings” and that “the principle of human responsibility and accountability must be maintained for all use of lethal force, regardless of the type of weapon system involved.” It would not be the first UN resolution to be violated by bad actors.

The US has its own regulation of autonomous weapons in DoD Directive 3000.09, which was just updated in 2023. Human Rights Watch reviewed the updated directive and found an interesting detail: it “preserves the previous definition of an autonomous weapon system from the 2012 directive, namely ‘a weapon system that, once activated, can select and attack targets without further intervention by a human operator,’” but “the updated 2023 directive removes the word ‘human’ from ‘operator’” and “defines an ‘operator’ as ‘a person who operates a platform or weapons system.’”

The pipeline described above, in which autonomous drone swarms collect data that is then analyzed to generate targets for autonomous drones to attack, can easily be viewed as a single platform. A human then no longer has to specify a target for an autonomous weapon and press a button, as was previously the case. Instead, the operator only needs to specify an approximate geographic area, which computer vision systems in the drone swarms search for combatants and enemy units, and the entire pipeline from target selection to firing can be fully automated. The operator only has to trust the system and its automatically generated targets and have the execution confirmed by a commander.

The word trust is crucial here: a 2022 study found that people place excessive trust in AI systems in ethical decision-making processes, while a preprint from October 2023 found a “strong tendency to over-trust unreliable AI for life-critical decisions under uncertain circumstances” in a series of experiments examining “confidence in the recommendations of artificial agents regarding decisions to kill.” The implications are obvious: autonomous weapon systems are being developed with ever simpler and broader methods of identifying, selecting, and killing targets, and soldiers place excessive trust in these automated systems in life-critical situations. The minor change in US DoD Directive 3000.09, which removed the word “human” before “operator,” paved the regulatory path for this.


At least some of a pipeline like this is already being used in real war situations on the battlefield. In December 2023, the Guardian, citing reports from Israeli-Palestinian publication +972 Magazine and Hebrew-language media Local Call, revealed “The Gospel: how Israel is using AI to select bomb targets in the Gaza Strip.” “The Gospel” is an “AI-powered military intelligence unit that plays a significant role in Israel’s response to the Hamas massacre in southern Israel on October 7.”

In short, it is an automatic targeting system that sucks up all kinds of intelligence data, including “drone footage, intercepted communications, surveillance data and information from observing the movements and behavior patterns of individuals and large groups.” It allowed the IDF to increase the number of generated targets, meaning “persons authorized to be assassinated,” from 50 per year to 100 per day. The system has also been described as a “mass murder factory” in which “the emphasis is on quantity rather than quality,” where “a human eye will examine the targets before each attack, but will not take much time doing so.” To me, this sounds a lot like the “strong tendency to over-trust unreliable AI for life-critical decisions under uncertain circumstances” found in the study mentioned above.

Palantir, Peter Thiel’s AI warfare company, just announced a “strategic partnership” with Israel to leverage the company’s “advanced technology to support war-related missions,” and it’s safe to assume that “The Gospel” will improve greatly in the future (if you want to call these developments “improvements”). In a 2023 presentation, Palantir showed a ChatGPT-like bot that leads a human operator from notification of a situation and identification of the target, through strategic decision-making, to “passing the options up the chain of command” for the final decision to fire. In this scenario, two people remain in the decision-making chain: the operator who interacts with the AI and the commander. Formally, Palantir remains within the limits set by the UN resolution for autonomous weapon systems. For now.


In the Spring 2023 issue of the US Army War College Quarterly, Robert J. Sparrow and Adam Henschke argued for replacing the framing of future war operations known as “centaur warfighting,” according to which the future of war lies in “human-machine hybrids” (the centaur being the mythological creature that is half human and half horse), with what they call “minotaur framing,” after the mythological creature with the head of a bull and a human body. In this framing, AI systems effectively form the head of future war operations, guiding and making decisions in all phases of combat, which at this point in AI development seems more likely, at least in the medium term, than sci-fi-like human-robot hybrids. Data analysis beats Terminator.

Two weeks ago, OpenAI lifted its ban on using ChatGPT for “military and warfare” purposes and announced that it was working with the Pentagon on “cybersecurity tools.” It’s clear that the darlings of generative AI want to play in the Pentagon’s wargames, and I’m convinced they’re not the only ones. With more and more hot international conflicts, from Israel’s war against Hamas after the October 7 massacre, to Russia’s invasion of Ukraine, to local conflicts with the Houthis, as well as competitive pressure from China, which is certainly already developing its own versions of AI-guided automated weapon systems, I think automated war pipelines are in high demand among many international players with very deep pockets, and Silicon Valley seems more than ready to take advantage of that.

So the AI arms race among AI corporations is actually a competition to become the leading corporate component of the machine head of the new Minotaur.

If you still have doubts about how Silicon Valley’s AI industry will earn the money it needs to pay for its “fun” and exploitative office and marketing toys for office workers, and for its skyrocketing energy and computing costs:

It’s right here, in the nascent AI military complex. And I have a very bad feeling about it.
