An Artificial Intelligence Program Beat an F-16 Pilot in a Dogfight … Five Times in a Row

The virtual contest raises the stakes for weaponized AI

Two F-16 fighter aircraft flying in formation.
CT757fan via Getty Images

While college sports remain in a state of flux due to the pandemic, one of the most consequential games of the year recently took place at the Johns Hopkins Applied Physics Laboratory in Maryland. Dubbed the AlphaDogfight Trials, the contest pitted artificial intelligence algorithms against each other in a virtual dogfight where each AI controlled a simulated F-16 fighter plane.

On August 20, the winning AI from Heron Systems went head-to-head against a real Air Force fighter pilot flying under the call sign “Banger.” The event was streamed live on YouTube, and anyone watching that day saw one of the clearest warning signs yet for weaponized AI: the program beat the human pilot 5-0.

You can watch the final contest on DARPA’s YouTube channel, DARPAtv; the introduction to the five rounds starts at 4:39:12.

Over at Wired, Will Knight has the full story behind the AlphaDogfight Trials, which were put on by the DOD’s Defense Advanced Research Projects Agency (DARPA), as well as the larger ramifications posed by this experiment. 

“The AlphaDogfight contest … shows the potential for AI to take on mission-critical military tasks that were once exclusively done by humans,” Wired explained. “It might be impossible to write a conventional computer program with the skill and adaptability of a trained fighter pilot, but an AI program can acquire such abilities through machine learning.”
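Heron’s winning agent was reportedly trained with deep reinforcement learning, picking up tactics through enormous numbers of simulated engagements rather than from hand-written rules. As a rough illustration of that idea only (not Heron’s actual system), here is a toy Q-learning loop in which an agent learns to chase a moving target in a made-up one-dimensional “pursuit” game; the environment, rewards and parameters are all invented for the sketch.

```python
import random
from collections import defaultdict

# Toy illustration of learning a control policy from trial and error.
# Nothing here is hand-coded strategy: the agent only sees rewards.
ACTIONS = [-1, 0, 1]            # move left, hold, move right
q = defaultdict(float)          # Q(state, action) values, learned online
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(agent, target, action):
    """Advance the game one tick; reward closing in on the target."""
    agent = max(0, min(9, agent + action))
    target = max(0, min(9, target + random.choice(ACTIONS)))
    reward = 1.0 if agent == target else -0.1 * abs(agent - target)
    return agent, target, reward

for episode in range(2000):
    agent, target = random.randrange(10), random.randrange(10)
    for _ in range(20):
        state = agent - target                     # relative position
        if random.random() < epsilon:              # explore occasionally
            action = random.choice(ACTIONS)
        else:                                      # otherwise exploit
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        agent, target, reward = step(agent, target, action)
        next_state = agent - target
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next
                                       - q[(state, action)])

# After training, the policy chases the target even though no
# "if target is left, move left" rule was ever written.
```

A real dogfighting agent would replace this lookup table with a deep neural network and a far richer flight simulation, but the principle is the same: the behavior is acquired through repeated practice, not programmed directly.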

The problems with this technology leave the U.S. military and tech industry in a catch-22. Developing artificial intelligence for military use that can kill without the authorization of human personnel is highly unethical — as Wired noted, military leaders “have no desire to let machines make life-and-death decisions on the battlefield.” On the other hand, as Brett Darcey, vice president of Heron Systems, the defense company based in California, Maryland, that won DARPA’s contest, explained, “If the United States doesn’t adopt these technologies somebody else will.”

For years we’ve seen the tech community grapple with the ethics of working with the military, including high-profile protests at companies like Google. But as the AlphaDogfight Trials show, this technology is coming whether we want it or not. The question now echoes the one long posed by nuclear weapons: can it be regulated?
