The US military wants to protect AI robots from “hostile attacks”
The US Department of Defense is launching technology that will allow artificial intelligence-powered killer machines to stay under control on the field of combat even when visual “noise” tries to mislead the robots. The Pentagon's Office of Innovation wants to protect artificial intelligence systems from “hostile attacks.” The research examines how visual “noise” can lead to fatal errors in AI identification.
Pentagon officials have sounded the alarm about “unique classes of vulnerabilities for artificial intelligence or autonomous systems” that they hope new research can eliminate.
According to the Daily Mail, the program, called Guaranteeing AI Robustness against Deception (GARD), has since 2022 been tasked with determining how visual data or other electronic signals entering an artificial intelligence system can be altered by the calculated introduction of noise.
Computer scientists at a defense contractor working under GARD have been experimenting with kaleidoscopic patches designed to trick artificial intelligence systems into generating false identifications.
“Essentially, by adding noise to an image or sensor, you can disrupt a machine learning algorithm,” one senior Pentagon official who led the research recently explained.
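To make that mechanism concrete, the sketch below shows a generic version of the kind of noise attack Turek describes: the fast gradient sign method applied to an off-the-shelf image classifier in PyTorch. The model, image file name, and perturbation size are illustrative assumptions, not details of the GARD research.

```python
# A minimal sketch (not GARD code) of how near-invisible noise added to an
# image can flip a classifier's prediction, using the fast gradient sign
# method (FGSM) against a pretrained ResNet-18.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

to_tensor = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

image = to_tensor(Image.open("bus.jpg").convert("RGB")).unsqueeze(0)  # hypothetical input photo
image.requires_grad_(True)

# Clean prediction
logits = model(normalize(image))
label = logits.argmax(dim=1)

# Gradient of the loss with respect to the input pixels
loss = F.cross_entropy(logits, label)
loss.backward()

# FGSM: nudge every pixel a tiny step in the direction that increases the loss
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

print("clean prediction:      ", label.item())
print("adversarial prediction:", model(normalize(adversarial)).argmax(dim=1).item())
```

The perturbation stays below what a human would notice, yet it is computed directly from the model's own gradients, which is why knowing the algorithm makes such attacks cheap to mount.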
The news, the Daily Mail notes, comes as fears that the Pentagon has been “building killer robots in the basement” have reportedly led to stricter artificial intelligence rules for the US military, requiring all systems to be approved before deployment.
“Knowing this algorithm, you can also sometimes create physically feasible attacks,” added Matt Turek, deputy director of the Information Innovation Office at the Defense Advanced Research Projects Agency (DARPA).
It is technically possible to “trick” an AI algorithm into making critical mistakes, causing it to misidentify a patterned patch or sticker as a real physical object that is not actually there.
For example, a bus full of civilians could be misidentified by the AI as a tank if it were tagged with the right “visual noise,” as one national security reporter for the website ClearanceJobs suggested.
In other words, such cheap and lightweight “noise-making” tactics could cause vital military AI to mistake enemy fighters for allies, and vice versa, during a critical mission.
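For illustration only, the sketch below shows the general shape of such a patch attack: a small “sticker” image is optimized so that a classifier reports a chosen wrong label whenever the sticker appears in a scene. The model, patch size, placement, backgrounds, and target label index are placeholder assumptions, not details of the published GARD work.

```python
# Generic sketch of an adversarial patch attack: optimize a small "sticker"
# so a pretrained classifier outputs a chosen wrong label whenever the
# sticker is pasted into an image. All sizes, labels, and backgrounds here
# are placeholders for illustration.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)          # only the patch is being optimized

target_class = 717                   # hypothetical ImageNet label index
patch = torch.rand(1, 3, 50, 50, requires_grad=True)   # the sticker
optimizer = torch.optim.Adam([patch], lr=0.01)

# Mask marking where the 50x50 patch sits inside a 224x224 scene
mask = torch.zeros(1, 1, 224, 224)
mask[:, :, 80:130, 80:130] = 1.0

for step in range(200):
    background = torch.rand(8, 3, 224, 224)            # stand-in for real photos
    padded = F.pad(patch.clamp(0.0, 1.0), (80, 94, 80, 94))
    scene = background * (1 - mask) + padded * mask

    logits = model(scene)            # ImageNet normalization omitted for brevity
    # Push every patched image toward the target label
    loss = F.cross_entropy(logits, torch.full((8,), target_class, dtype=torch.long))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the patch is optimized over many backgrounds, a printed version of it can remain effective in the physical world, which is what makes the “physically feasible attacks” Turek mentions possible.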
Researchers in the modestly budgeted GARD program have spent $51,000 studying visual and signal noise tactics since 2022, Pentagon audits show.
Their work was published in 2019 and 2020 studies illustrating how visual noise that appears merely decorative or inconsequential to the human eye, like a 1990s Magic Eye poster, can be interpreted by artificial intelligence as a solid object.
Computer scientists at defense contractor MITRE Corporation managed to create visual noise that artificial intelligence mistook for apples on a grocery store shelf, a bag left on the street, and even people.
“Whether it's physical attacks or noise patterns that are added to artificial intelligence systems,” Turek said Wednesday, “the GARD program has created state-of-the-art defenses against them.”
“Some of these tools and capabilities were provided to the CDAO [Chief Digital and Artificial Intelligence Office],” Turek said.
The Pentagon created the CDAO in 2022; it serves as a hub to facilitate faster adoption of artificial intelligence and related machine learning technologies in the military.
The Department of Defense recently updated its rules on artificial intelligence amid “much confusion” about how it plans to use machines that make autonomous decisions on the battlefield, according to Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities Michael Horowitz.
Horowitz explained at an event in January of this year that “the directive does not prohibit the development of any artificial intelligence systems,” but will “clarify what is and is not permitted” and uphold a “commitment to responsible behavior” in developing lethal autonomous systems.
While the Pentagon believes the changes should reassure the public, some said they were “unconvinced” by the efforts, the Daily Mail noted.
Mark Brakel, director of the advocacy group Future of Life Institute (FLI), told DailyMail.com in January this year: “These weapons carry a huge risk of unintentional escalation.”
He explained that AI-powered weapons can misinterpret something, such as a ray of sunlight, and perceive it as a threat, attacking foreign powers without cause and without any deliberately hostile “visual noise.”
Brakel said the result could be devastating because “without real human control, AI-powered weapons are like the Norwegian rocket incident, a near nuclear Armageddon, on steroids, and they could increase the risk of incidents in hotspots such as the Taiwan Strait.”
The US Department of Defense is pushing to modernize its arsenal with autonomous drones, tanks and other weapons that select and attack targets without human intervention.

