
[Current Affairs and Military] Can Autonomous Armed Terminator Robots Enter the Battlefield? | Armed Robots | Artificial Intelligence | Sensor Networks


[The Epoch Times, December 18, 2022] In today's world, weapons and armies carry a meaning deeper than killing: strong military power often serves as a deterrent that preserves world peace and human security. War, though it has become covert, has never ceased. [Current Affairs and Military] takes you to the front lines to see the details and truth of the struggle between good and evil.

The latest weapons and equipment have brought profound changes to today's battlefield, where attacks arrive at unprecedented speed and scale. A swarm of attacking drones may appear without warning, and successive missile strikes at more than five times the speed of sound can outpace any human decision maker. Military commanders may simply have no opportunity to analyze the situation and determine the best course of action, and even after receiving an order, a weapon operator may not have time to establish a kill chain against multiple high-speed, simultaneous targets. The root cause is that the time is simply too short.

All of this, however, can be mitigated or resolved with the participation of artificial intelligence, which raises the question of what role AI should play in a weapon system. AI is maturing rapidly and being applied widely across military technology, weaponry, and high-speed computing. Acting fast, accurately, and ruthlessly, without any human intervention, it seems capable of relieving the time pressure created by any sudden attack and achieving effective defense.

Many people will ask: why not build an armed robot like the "Terminator" as soon as possible, one that can find, track, target, and destroy targets autonomously without human participation? In fact, to some extent the technology already exists; technology is not the key obstacle. What limits the development of autonomous offensive robots is a complex set of conceptual, philosophical, and policy issues that go beyond technology.

Remotely operated armed robots already exist and have been deployed on battlefields for years. However, all of these weapon systems are remotely controlled by humans; no machine can make any decision or judgment about the use of lethal force on its own. At least, none of the U.S. military's existing weapon systems cross that line. The Pentagon's doctrine is that when it comes to decisions about the use of deadly force, a human must always be in the loop.

A complete weapon system, such as a cannon, includes sensors to detect and discover enemy targets; it must then identify and assess those targets, track and lock them, select and load ammunition, and activate, guide, and control the munition until it hits. Today this process has extended from a single, independent gun to an enormous combat system: the gun obtains target information not only from its own fire-control radar but also from a larger battlefield information network formed by many sensors. The targets assigned to the gun come from that shared target database, and the gun then completes the shot. That is a complete timeline from sensor to shooter. The entire process may be handled by artificial intelligence, but at the key link, whether or not to fire, it is still a human who decides.
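The sensor-to-shooter chain described above can be sketched as a pipeline in which every stage except the fire decision is automated. This is a minimal illustration, not any real fire-control software; all names, data structures, and the fixed sample tracks are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Target:
    track_id: str
    kind: str        # e.g. "tank", "drone", "truck"
    range_km: float

def detect(feeds):
    """Fuse raw sensor feeds into candidate tracks (stubbed with fixed data)."""
    return [Target("T-001", "tank", 4.2), Target("T-002", "truck", 9.8)]

def identify(t):
    """Confirm whether the track is a hostile, engageable target."""
    return t.kind in ("tank", "drone")

def select_munition(t):
    """Pick a munition type for the confirmed target."""
    return "anti-armor" if t.kind == "tank" else "fragmentation"

def kill_chain(feeds, authorize):
    """Run the sensor-to-shooter steps; `authorize` is the human in the loop."""
    engaged = []
    for t in detect(feeds):
        if not identify(t):
            continue                   # the machine may filter tracks...
        munition = select_munition(t)
        if authorize(t, munition):     # ...but only a human approves firing
            engaged.append((t.track_id, munition))
    return engaged

# A human operator who approves every request:
print(kill_chain(feeds=None, authorize=lambda t, m: True))
# → [('T-001', 'anti-armor')]
```

Keeping the authorization step as an injected callback, rather than burying it inside the pipeline, mirrors the policy the article describes: every other stage can be accelerated by AI, but the fire decision remains an external, human input.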


Back in 2009, the U.S. Army was developing an armed robot called the Multifunction Utility/Logistics and Equipment (MULE) vehicle, a roughly 10-foot robotic platform armed with Javelin anti-tank missiles. Even then, the robot could use its autonomous navigation system to track, find, target, and destroy an enemy tank without human intervention.

Currently, the U.S. Army is developing a number of paradigm-changing weapon systems, including lasers, hypersonic missiles, robotics and precision weapons.

The U.S. Army has been working on real-time networking of forces across domains for decades, dating back to its broad Future Combat Systems studies in the early 2000s. But the ability to truly network troops in real time and organize information to quickly link sensors to shooters was never fully realized, until now.

The U.S. Army has already demonstrated groundbreaking ways to use artificial intelligence to massively reduce sensor-to-shooter time from 20 minutes to 20 seconds, some of which were achieved in the Army’s Project Convergence, which began in 2020.

Project Convergence has been pushing to shorten the sensor-to-shooter timeline, aiming to win by making critical, time-sensitive decisions ahead of the adversary's decision cycle.

Using high-speed computing, artificial intelligence, and a mesh network of sensors to shorten the sensor-to-shooter pairing process from 20 minutes to 20 seconds is a genuinely paradigm-changing breakthrough, and the key to it is advanced, AI-capable computer algorithms.

For example, a small, dispersed, multi-domain networked force with long-range sensors and high-speed data processing capabilities, perhaps an artillery force, would be capable of a level of lethality never before seen. Using a tiny drone, they can identify a target and transmit that data within seconds to an AI-powered database, instantly establishing the best attack plan to destroy the target before the adversary can react.

That is what the U.S. Army is pushing forward with Rainmaker, an artificial-intelligence project that aims to integrate sensor data through a common architecture for interoperability and data sharing. Rainmaker, which fuses the sensor data, works with an AI-assisted decision-making system called FIRESTORM, which pairs sensor data with shooters or weapon systems to help commanders make near-instant decisions.
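The core of sensor-to-shooter pairing is a ranking problem: given a detected target, score the available shooters and recommend the best match in seconds. The sketch below illustrates that idea only; the scoring weights, field names, and sample shooters are all invented, and bear no relation to how FIRESTORM actually works internally.

```python
def pair_sensor_to_shooter(target, shooters):
    """Rank available shooters for a detected target and return the best match.

    Scoring (invented for illustration): a shooter out of range is ineligible;
    otherwise prefer weapons effective against the target type, breaking ties
    in favor of shorter time-to-fire.
    """
    def score(s):
        if target["range_km"] > s["max_range_km"]:
            return -1                                  # out of range: ineligible
        type_match = 1.0 if target["kind"] in s["effective_against"] else 0.2
        readiness = 1.0 / (1 + s["time_to_fire_s"])    # faster is better
        return type_match + readiness
    eligible = [s for s in shooters if score(s) >= 0]
    return max(eligible, key=score, default=None)

target = {"kind": "tank", "range_km": 12.0}
shooters = [
    {"name": "howitzer",  "max_range_km": 30, "time_to_fire_s": 45,
     "effective_against": {"tank", "infantry"}},
    {"name": "mortar",    "max_range_km": 7,  "time_to_fire_s": 20,
     "effective_against": {"infantry"}},
    {"name": "atgm_team", "max_range_km": 14, "time_to_fire_s": 10,
     "effective_against": {"tank"}},
]
best = pair_sensor_to_shooter(target, shooters)
print(best["name"])   # atgm_team: in range, anti-armor, fastest to fire
```

Even in this toy form, the recommendation is produced in microseconds, which is why automating the pairing step (while leaving the fire decision to a commander) is where the 20-minutes-to-20-seconds gain comes from.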


The combination of the two has helped the U.S. Army build a data space in which artificial intelligence can run from sensors all the way to weapon systems. On the battlefield of the future, data is almost everything: it is the eyes, ears, and even the brain of the force, helping troops identify targets and make decisions. Sitting above this data, Rainmaker also creates a common language for dialogue between the Army and industry, so that the Army's ideas, needs, and tactical and technical requirements can be turned by industry into concrete capabilities on the battlefield.

So far, decision-making authority in these systems has remained in human hands, and technological development has not touched the sensitive issue of excluding humans from the weapon system's operating loop. Even with a human in the loop, artificial intelligence is improving the response speed of weapon systems to an unprecedented degree, and the U.S. military still has enormous room for development here. The Pentagon is not about to abandon its ethical principles and remove humans from the loop of lethal weapon systems. But America's adversaries may not adhere to such moral principles, and if the presence or absence of a human decision link creates a gap in combat capability between the two sides, the Pentagon may have to seriously reconsider the question. That, however, is a matter for the future.

Even for non-lethal weapons, the Pentagon has not decided to hand everything over to artificial intelligence. Although AI decision-making, analysis, and data organization are maturing at an unprecedented rate, autonomy remains a cutting-edge problem. Essentially: when aggregating and analyzing disparate sensor data, can an AI system reliably tell the difference between lethal and nonlethal force? Whether AI interceptors can be used autonomously to defend against drones and missiles, a seemingly trivial question, could turn out to be a big one.

Marc Pelini, head of the Pentagon's Joint Counter-UAS Office, said that right now the military does not have the authority to take humans out of the loop: under existing Department of Defense policy, a human must be authorized to participate at some point in the decision cycle.


Nonetheless, the Pentagon has explored the possibility of weapon and AI autonomy in the defensive and nonlethal arenas; whether AI-supported or autonomous weapon systems could be allowed to fire or use force in nonlethal situations is within the scope of its discussions. The reason is that advanced algorithms can now deliver a level of precision and analytic fidelity that can be trusted: they are able to process, interpret, and analyze vast amounts of disparate data to determine which response best fits a given threat. The implication is that if U.S. forces come under attack, the window of time needed to defend them or deploy countermeasures could shrink exponentially, and the application of autonomous artificial intelligence could save lives in warfare.
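The policy boundary being debated can be expressed as a simple rule: nonlethal, defensive countermeasures may be taken autonomously, while anything lethal falls back to a human decision. The sketch below encodes that rule under stated assumptions; the threat names, countermeasure names, and the mapping between them are all hypothetical.

```python
def best_countermeasure(threat):
    """Invented threat-to-countermeasure mapping, for illustration only."""
    return {"small_drone": "jam_link", "missile": "interceptor"}[threat]

def respond(threat, human_approved=False):
    """Policy sketch: the system may act on its own only when the chosen
    countermeasure is nonlethal; lethal options wait for human approval."""
    NONLETHAL = {"jam_link", "gps_spoof", "net_capture"}
    option = best_countermeasure(threat)
    if option in NONLETHAL:
        return f"auto:{option}"        # machine may act within seconds
    return f"engage:{option}" if human_approved else "hold"

print(respond("small_drone"))          # auto:jam_link
print(respond("missile"))              # hold: a lethal interceptor needs a human
print(respond("missile", True))        # engage:interceptor
```

The open question the article raises sits exactly at the `NONLETHAL` set: the rule is only as sound as the AI's ability to classify a proposed response as lethal or nonlethal in the first place.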

The promise and performance of artificial-intelligence systems are improving rapidly, but there are still too many ineffable or incomputable properties of human cognition that number- and logic-oriented machines cannot accurately capture. Some subjective phenomena unique to human cognition, such as emotion, intuition, imagination, and certain conceptual nuances and ambiguities in language, seem beyond machines' ability to replicate.

In summary, we want to answer an important question: even if Terminator-type robots can be used in warfare, should we use them? The Pentagon, of course, views these technical issues through an ethical and moral lens, but there is no guarantee that America's adversaries, such as China, will do the same. As a result, the U.S. military may need to prepare to fight autonomous robots, which is why modern joint military exercises place so much emphasis on range, networking, and the use of unmanned systems.

The Pentagon has embraced artificial intelligence but has not allowed any weapon system, whether lethal or nonlethal, offensive or defensive, to attack autonomously outside of human decision-making. Some questions remain under discussion, but they do not cross the boundaries of ethics and morality. The bottom line the United States holds to has not collapsed simply because its opponents lack one.

Written by: Xia Luoshan (an Epoch Times reporter with more than ten years of military experience, mainly in military teaching and technical management)
Production: Current Affairs Military Production Team
Follow "Current Affairs Military - Charlotte Mountain": https://www.ganjing.com/zh-TW/channel/1f6pro4fi585ppZp9ySKkwd0W19f0c

Editor in charge: Lian Shuhua
