
The United States used Artificial Intelligence to carry out airstrikes in the Middle East


The United States used artificial intelligence to identify targets hit by airstrikes in the Middle East this month, a defense official said, revealing the military’s growing use of the technology in combat.

Machine learning algorithms that can learn to identify objects helped pick out the targets of more than 85 US airstrikes on February 2, according to Schuyler Moore, chief technology officer of US Central Command, which directs US military operations in the Middle East. The Pentagon said those strikes were carried out by US bombers and fighter jets against seven facilities in Iraq and Syria.


“We’ve been using computer vision to identify where there might be threats,” Moore said in an interview with Bloomberg News. “We certainly have had more opportunities to strike in the last 60 to 90 days,” she said, adding that the US is currently hunting for “an enormous number” of rocket launchers belonging to hostile forces in the region.


The US military has previously acknowledged using computer vision algorithms for intelligence purposes. But Moore’s comments are the strongest public confirmation to date that the US is using the technology to identify enemy targets that are subsequently hit by weapons fire.

The US strikes, which the Pentagon said destroyed or damaged rockets, missiles, drone storage facilities and militia operations centers, among other targets, were part of the Biden administration’s response to the deaths of three US service members in a January 28 attack on a base in Jordan. The US blamed the attack on Iranian-backed militias.

Moore said that artificial intelligence systems have also helped identify rocket launchers in Yemen and surface vessels in the Red Sea, several of which Central Command, or Centcom, said it destroyed in multiple weapons strikes during February. Iran-backed Houthi militias in Yemen have repeatedly used rockets to attack commercial ships transiting the Red Sea.


Project Maven

The targeting algorithms were developed under Project Maven, a Pentagon initiative launched in 2017 to accelerate the adoption of AI and machine learning across the Department of Defense and to support defense intelligence, with an initial emphasis on prototypes for the US fight against Islamic State militants.

Moore, who works at Centcom headquarters in Tampa, Florida, said US forces in the Middle East have experimented with computer vision algorithms that can locate and identify targets in imagery captured by satellites and other data sources, and that the algorithms were tested in exercises over the past year.

Then they started using them in real operations after the October 7 attack by Hamas on Israel and the retaliatory military action that followed in Gaza, which stoked regional tensions and attacks by Iranian-backed militants.
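To make the computer-vision step concrete, here is a minimal, purely illustrative sketch of object detection over imagery, using an off-the-shelf pretrained detector from the torchvision library. This is not the Maven system or anything Centcom has described; the model choice, class labels, and confidence threshold are assumptions for illustration only.

```python
# Illustrative sketch only: generic object detection on an image,
# NOT the Maven system. Model, labels, and threshold are assumptions.
import torch
from PIL import Image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)
from torchvision.transforms.functional import to_tensor

# A generic pretrained detector trained on COCO's everyday object
# classes, nothing military-specific.
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

def detect_candidates(image_path: str, score_threshold: float = 0.8):
    """Return bounding boxes, class labels, and scores above a threshold."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        out = model([image])[0]  # dict with "boxes", "labels", "scores"
    keep = out["scores"] >= score_threshold
    return out["boxes"][keep], out["labels"][keep], out["scores"][keep]

# Every detection is only a candidate for human review, not an automatic
# decision, consistent with the process Moore describes in this article.
```

A fielded system would presumably be trained on domain-specific classes such as rocket launchers or vessels rather than COCO’s everyday objects, but the basic detect-then-review flow the article describes has this shape.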

“On October 7 everything changed,” Moore said. “We immediately went full speed ahead, at a much higher operational tempo than we had previously,” she said, adding that US forces were able to make “a pretty seamless shift” toward using Maven after a year of digital exercises.

Moore emphasized that Maven’s AI capabilities are being used to help find potential targets, not to verify targets or to deploy weapons against them.

She said exercises late last year, in which Centcom experimented with an AI recommendation engine, showed that such systems often required human involvement to propose the order of attack or the best weapon to use.

That is why humans constantly review the AI’s targeting recommendations, she said. US operators take seriously their responsibility and the risk that AI can make mistakes, she said, and “it tends to be pretty obvious when something is wrong.”


“There is never an algorithm that simply runs, comes to a conclusion and then moves on to the next step,” she explained. “Every step that involves AI has a human check at the end.”
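As a generic illustration of that human-in-the-loop pattern, and not Centcom’s actual software, the sketch below runs a sequence of AI stages in which no stage’s output advances without explicit human approval. All names, types, and the console prompt are hypothetical.

```python
# Hypothetical sketch of a human-in-the-loop pipeline: each AI stage's
# output must be explicitly approved by a person before the next stage runs.
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

@dataclass
class Recommendation:
    """A single AI-generated suggestion awaiting human review."""
    description: str
    confidence: float

def human_review(rec: Recommendation) -> bool:
    """Stand-in for an analyst's check; here just a console prompt."""
    answer = input(f"Approve '{rec.description}' ({rec.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"

def run_with_oversight(
    stages: Iterable[Callable[[object], Recommendation]],
    data: object,
) -> Optional[Recommendation]:
    """Run AI stages in order; halt unless a human approves each output."""
    result = None
    for stage in stages:
        result = stage(data)
        if not human_review(result):
            return None  # a human declined, so nothing proceeds automatically
        data = result
    return result
```

The design point is that the approval check is a blocking gate inside the loop, so an automated conclusion can never flow into the next step on its own.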

Translated by Paulina Steffens.
