It’s official: The robots are taking over.
Not quite. But in a significant development on August 20, an artificial intelligence (AI) program managed to defeat a human F-16 pilot in simulated dogfights. The AI program, designed by tech firm Heron Systems, was pitted against the human pilot in an environment resembling an elaborate video game during the third and final event of the AlphaDogfight trials organized by the U.S. Defense Advanced Research Projects Agency (DARPA).
Heron Systems’ website notes that the program was based on deep reinforcement learning – an AI technique in which a program learns by trial and error from reward signals, an idea rooted in behavioral psychology, using deep neural networks loosely modeled on the structure and function of the human cortex – along with unspecified innovations. The bested human operator, publicly known only by the callsign “Banger,” was reported to have been trained at the Weapons School at Nellis Air Force Base in Nevada.
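For readers curious what deep reinforcement learning amounts to in practice, the sketch below shows the core loop under generic assumptions: an agent acts, observes a reward, and a small neural network is nudged toward better estimates of each action’s value. It is an illustration of the general technique only, not Heron Systems’ program; the environment interface (reset, step, sample_action) and all parameter values are hypothetical.

```python
# Minimal sketch of a deep reinforcement learning loop (illustrative only).
# The environment interface below is an assumed, gym-style convention.
import numpy as np

class TinyQNet:
    """One-hidden-layer network mapping a state vector to a Q-value per action."""
    def __init__(self, n_inputs, n_actions, hidden=32, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (n_inputs, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, n_actions))
        self.lr = lr

    def forward(self, s):
        self.h = np.maximum(0.0, s @ self.w1)    # ReLU hidden layer
        return self.h @ self.w2                  # estimated Q-values

    def update(self, s, a, target):
        q = self.forward(s)
        grad_q = np.zeros_like(q)
        grad_q[a] = q[a] - target                # TD error on the action taken
        dh = (self.w2 @ grad_q) * (self.h > 0)   # backpropagate through ReLU
        self.w2 -= self.lr * np.outer(self.h, grad_q)
        self.w1 -= self.lr * np.outer(s, dh)

def train(env, net, episodes=500, gamma=0.99, eps=0.1):
    """Q-learning: act, observe reward, nudge estimates toward the target."""
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy: mostly exploit the current estimates,
            # occasionally explore a random action.
            a = env.sample_action() if np.random.rand() < eps \
                else int(np.argmax(net.forward(s)))
            s2, r, done = env.step(a)            # reward drives all learning
            target = r if done else r + gamma * np.max(net.forward(s2))
            net.update(s, a, target)
            s = s2
```

The key point for a lay reader is that nothing in the loop encodes dogfighting doctrine: the program discovers tactics purely by maximizing accumulated reward over millions of simulated trials.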
Eight teams, including one from Lockheed Martin, were selected to participate in the DARPA trials last August. The agency notes that the aim behind the trials was “to demonstrate advanced AI algorithms capable of performing simulated within-visual-range air combat maneuvering.” DARPA has also said the trials were meant to attract and energize more AI developers for its Air Combat Evolution (ACE) program.
The principal aim of ACE is to develop autonomy for unmanned aerial vehicles (UAVs) that would enable them to engage in tactical aerial maneuvers within a prescribed strategy provided by a single human “commanding” pilot in an accompanying platform.
Simply put, in the scenario DARPA envisages, a program of the kind Heron Systems has developed would be deployed onboard UAVs: a pilot in the air would have at her disposal multiple smaller armed UAVs capable of operating independently. ACE assumes this would increase the lethality of the single operator. This is, plainly, about armed drones capable of dogfighting on their own, within the parameters of the larger engagement set by an airborne human pilot.
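To make that division of labor concrete, here is a deliberately simplified sketch of the arrangement: the commanding pilot fixes the engagement parameters once, and each UAV chooses its own maneuvers only within them. Every class name, parameter, and decision rule below is hypothetical and reflects neither DARPA’s nor Heron Systems’ actual designs.

```python
# Illustrative sketch of a commander-constrained autonomous UAV.
# All names and rules here are hypothetical.
from dataclasses import dataclass

@dataclass
class EngagementParams:
    """Constraints set once by the commanding pilot."""
    max_range_km: float   # how far from the commander a UAV may roam
    weapons_free: bool    # whether autonomous weapons employment is allowed

class AutonomousUAV:
    def __init__(self, params: EngagementParams):
        self.params = params

    def choose_action(self, target_range_km: float) -> str:
        # The UAV decides on its own, but every choice is filtered
        # through the commander-set parameters.
        if target_range_km > self.params.max_range_km:
            return "hold"      # target is outside the prescribed box
        if self.params.weapons_free:
            return "engage"    # dogfight autonomously
        return "track"         # shadow the target, await orders

uav = AutonomousUAV(EngagementParams(max_range_km=30.0, weapons_free=False))
print(uav.choose_action(target_range_km=12.0))  # -> "track"
```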
Summing up the utility of AI, the ACE head, Air Force Colonel Dan Javorsek, noted after the trials: “The more that we can enable our unmanned systems to behave, look and act like intelligent, creative entities, the more that causes problems for our adversaries.”
Some experts rightly cautioned against overhyping this milestone. The Hoover Institution’s Jacquelyn Schneider pointed out that AI systems typically perform better in simulated environments, and that the human pilot was essentially playing a video game, outside the physical and psychological context real-world engagements provide.
Both China and Russia have made significant investments in developing lethal autonomous weapons systems, though globally there is little consensus on the legal and ethical implications of their deployment. In February this year, the Pentagon adopted a set of guidelines on the ethical use of artificial intelligence.