By Security Television Network, Author: Kris Osborn, Warrior Maven


September 18, 2021 (Security Television Network) – (Washington, DC) The Army’s “Project Convergence”, the Air Force’s “Advanced Battle Management System” (ABMS) and the Navy’s “Project Overmatch” are the names each service gives to its AI- and autonomy-enabled network of interwoven “mesh” nodes operating across a large multi-domain warfare environment.

The defining concept, or strategic impetus, behind each of these efforts is the same one underpinning current technological modernization: the principle that any combat platform or “node”, whether a fighter jet, a tank, a ground control station or a surface ship, can function not only as its own combat-capable “entity” but also as a critical node of surveillance and warfare information, able to gather, process, organize and transmit time-sensitive data across the force in real time.

For example, instead of having to send images through an individual video feed to a ground control center, an advanced surveillance drone could find crucial enemy targets, analyze a multitude of otherwise disconnected but relevant variables, and send new time-sensitive intelligence to multiple locations across the force in seconds.

Pentagon Joint Artificial Intelligence Center (JAIC)

Each of these efforts may have its own name, but they are fundamentally based on a common tactical and strategic approach. The merging of these respective service efforts into a coordinated, highly efficient, high-speed multiservice war machine is now strongly supported by the Pentagon’s Joint Artificial Intelligence Center (JAIC).

“So I think, you know, we’re key partners, you know, with both the ABMS series of exercises and Project Convergence, and we’re also working very closely with the Navy on their Project Overmatch,” Lt. Gen. Michael Groen, director of the Joint Artificial Intelligence Center, told reporters, according to a Pentagon transcript.

This is already happening, and new efforts are rapidly gaining ground as all of the services move toward a massive acceleration of “sensor to shooter” time in the context of a multiservice, networked attack. The faster enemy targets can be seen and evaluated against surrounding terrain, incoming enemy fire, navigational specifics and the most advantageous mode of attack under the circumstances, the more likely the attacking force is to prevail in combat.

This is both obvious and crucial. Getting inside, or ahead of, an enemy’s decision-making cycle is, in layman’s terms, the critical factor determining victory in modern warfare.

Combine AI data analysis and kinetic military action

At the same time, this technical architecture is only as effective as the long-range, high-resolution sensors that find targets and the precision-guided weapons capable of destroying them. Such sensors and weapons can be of little use if highly relevant targets are not found and identified in time to be destroyed.

Information-based detection and AI-enabled data analysis are then, by design, merged with so-called “kinetic” options such as missiles, rockets, rifles, bombs and other weapons to complete the kill chain ahead of an enemy. For the Army, this may start with manned-unmanned teaming, in which forward-operating mini-drones transmit sensor specifics to a larger drone, which then feeds the data into an AI-capable system known as Firestorm that, in seconds, pairs threat or target data provided by a “sensor” with the optimal method of attack, or “shooter.”

Starting with a mini-drone or a SATCOM network miles away, the process of finding, identifying and destroying an enemy with a ground combat vehicle, helicopter or even a dismounted force can be reduced from 20 minutes to 20 seconds.
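The article describes this pairing step only at a high level; as a minimal illustrative sketch (the class names, fields and scoring rule below are assumptions, not a description of Firestorm itself), the “sensor to shooter” match can be pictured as scoring available effectors against a detected target and recommending the best-placed one, with a human still approving the engagement:

```python
from dataclasses import dataclass

@dataclass
class Target:
    """Threat reported by a forward 'sensor' node (e.g., a mini-drone)."""
    target_id: str
    kind: str          # e.g., "armor", "artillery", "uav"
    x_km: float        # position on a shared grid, in kilometers
    y_km: float

@dataclass
class Shooter:
    """Candidate 'shooter' node (artillery battery, helicopter, dismounted team...)."""
    name: str
    effective_range_km: float
    x_km: float
    y_km: float
    suitable_for: set[str]   # target kinds this effector can service

def recommend_shooter(target: Target, shooters: list[Shooter]) -> Shooter | None:
    """Pick the closest in-range shooter whose effects suit the target.

    Stands in for the pairing step the article attributes to the AI-enabled
    system; a human decision-maker would still approve the engagement."""
    def distance(s: Shooter) -> float:
        return ((s.x_km - target.x_km) ** 2 + (s.y_km - target.y_km) ** 2) ** 0.5

    candidates = [
        s for s in shooters
        if target.kind in s.suitable_for and distance(s) <= s.effective_range_km
    ]
    return min(candidates, key=distance, default=None)

# Example: a mini-drone reports an armored vehicle; the best-placed effector is recommended.
target = Target("T-001", "armor", x_km=12.0, y_km=4.0)
shooters = [
    Shooter("155mm battery", 30.0, x_km=0.0, y_km=0.0, suitable_for={"armor", "artillery"}),
    Shooter("attack helicopter", 8.0, x_km=20.0, y_km=10.0, suitable_for={"armor"}),
]
print(recommend_shooter(target, shooters))
```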

The idea behind the Air Force’s ABMS is similar in scope and application, as it involves the collection, processing and dissemination of crucial combat data among satellites, bombers, fighter jets, drones and even ground attack systems.

In one Air Force test event, referred to as an ABMS “on-ramp,” a 155mm ground artillery weapon achieved an unprecedented breakthrough by tracking and destroying a high-speed, maneuvering cruise missile.

“Armed AI capability” is a disturbing term used by Pentagon leaders to explain the serious and growing risks presented by technologically advanced adversaries increasingly capable of building lethal, robotic, AI-enabled weapon systems unconstrained by ethics, moral consideration or human decision-making.

The concern centers on a single issue: countries like Russia and China operate capable and rapidly evolving robots, drones and weapon systems that are potentially detached from human decision-making.

Citing the possibility that “an authoritarian regime like Russia” could develop an armed AI capability, the director of the Pentagon’s Joint Artificial Intelligence Center, Lt. Gen. Michael Groen, said U.S. and allied forces might not be able to use comparable capabilities, especially in the event that a potential adversary attacked with AI-enabled weapons in the absence of ethical or humanitarian concerns.

Could this put American forces at a disadvantage? Of course, especially in a scenario involving decisions about the use of lethal force, since, according to Pentagon doctrine, a human must always be “in the loop.”

Armed AI: United States and Allies

However, Groen appeared to suggest that advanced weapons developers, scientists and futurists are now working with an international group of like-minded allies interested in addressing ethical concerns while still prevailing in combat. Part of this, it seems, concerns doctrinal and technological development that seeks to balance, orient or integrate the best technical capabilities with tactics sufficient to repel an enemy’s AI-led attack.

“We believe that we gain tempo, speed and capability by bringing AI principles and ethics in from the start, and we are not alone in this. We currently have an AI partnership for defense with 16 nations, in which everyone who embraces the same set of ethical principles has come together to help each other think through and figure out how to actually develop AI in this construct,” Groen told reporters, according to a Pentagon transcript.

Referring to what he called an “ethical baseline,” Groen said there are ways to design and use highly effective, AI-enabled weapons within established ethical parameters.

One of these possibilities, currently being considered by scientists, engineers and weapons developers at the Pentagon, is to design artificial intelligence systems capable of instantly employing “defensive” or “non-lethal” force against non-human targets such as incoming anti-ship missiles, mortars, rockets or drones.
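One way to picture such an arrangement (a minimal, purely illustrative sketch; the threat categories and the rule below are assumptions, not Pentagon policy) is as a policy gate that permits autonomous engagement only against incoming, non-human threats and refers everything else to a human operator:

```python
from enum import Enum, auto

class ThreatClass(Enum):
    INCOMING_MUNITION = auto()   # anti-ship missile, mortar, rocket
    UNMANNED_SYSTEM = auto()     # hostile drone
    MANNED_PLATFORM = auto()     # crewed vehicle, aircraft, vessel
    PERSONNEL = auto()

# Assumed policy for the sketch: autonomous, "defensive" engagement is allowed
# only against non-human threats; anything that may involve people is referred
# to a human decision-maker.
AUTONOMOUS_OK = {ThreatClass.INCOMING_MUNITION, ThreatClass.UNMANNED_SYSTEM}

def engagement_decision(threat: ThreatClass) -> str:
    if threat in AUTONOMOUS_OK:
        return "engage_automatically"     # e.g., intercept an incoming rocket
    return "refer_to_human_operator"      # human stays in the loop for lethal force

for t in ThreatClass:
    print(t.name, "->", engagement_decision(t))
```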

Another interesting nuance is that, given the pace and efficiency with which sensors, fire control, weapon systems and data analysis can now operate, keeping a human in the decision cycle does not necessarily “slow down” a decision about whether to attack.

Advanced AI analysis can, for example, compare data about an incoming attack with a large historical database and instantly determine which course of action might best respond to the situation. This is possible because advanced algorithms can draw on a historical database comparing how particular attack circumstances have been handled in the past, weighed against a multitude of impactful variables such as weather, terrain, range and available “effectors.”

Done this way, as was the case during the Army’s Project Convergence exercise last fall, the kill chain can be completed in seconds with human decision-makers still operating “in the loop.”
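The article leaves the analytic step at this level of generality; as an illustrative sketch (the features, weights and “historical” records below are invented for the example), matching an incoming attack against past engagements could resemble a simple nearest-precedent lookup that recommends the response which worked under the most similar conditions:

```python
# Illustrative only: compare an incoming attack against a small "historical
# database" of past engagements and recommend the response used in the most
# similar precedent. A human decision-maker would still confirm the action.

HISTORY = [
    # (conditions of a past attack) -> response that proved effective
    ({"weather": "clear", "terrain": "open",     "range_km": 25, "attack": "cruise_missile"}, "ground_interceptor"),
    ({"weather": "storm", "terrain": "littoral", "range_km": 8,  "attack": "drone_swarm"},    "electronic_warfare"),
    ({"weather": "clear", "terrain": "urban",    "range_km": 2,  "attack": "mortar"},         "counter_battery_fire"),
]

def similarity(a: dict, b: dict) -> float:
    """Crude similarity: count matching categorical features, penalize range gaps."""
    score = sum(1.0 for k in ("weather", "terrain", "attack") if a[k] == b[k])
    score -= abs(a["range_km"] - b["range_km"]) / 50.0
    return score

def recommend_response(situation: dict) -> str:
    _precedent, response = max(HISTORY, key=lambda rec: similarity(situation, rec[0]))
    return response

incoming = {"weather": "clear", "terrain": "open", "range_km": 30, "attack": "cruise_missile"}
print(recommend_response(incoming))
```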

“If we can make good decisions and have informed decision-makers, we believe this is the most important application of artificial intelligence. And then we’ll continue from there to other functions. And the list is endless: you know, moving logistics successfully around the battlefield, understanding what’s going on based on historical patterns and precedents, understanding the implications of weather or terrain on maneuver, all of these things can be assisted by AI,” Groen said.


