The Department of Defense’s Defense Advanced Research Projects Agency (DARPA) held a challenge on August 4 that pitted machines against machines in a quest to automatically discover software flaws.

The Cyber Grand Challenge (CGC) started with more than 100 teams composed of what DARPA called “some of the top security researchers and hackers in the world,” with a finale of seven finalists who slugged it out for nearly twelve hours with their specially engineered systems.

CGC challenged these experts to have their systems compete against each other to evaluate software, test for vulnerabilities, generate security patches and apply them to protected computers on a network.

The bots played capture the flag among themselves, a contest normally fought by human hackers who find, diagnose and fix software flaws in real time in a simulated adversarial environment.

In just over 8 hours of computation and 96 rounds of about 270 seconds each, the machines authored 421 replacement binaries, or new native code, that were more secure than the originals.

They also authored 650 unique proofs of vulnerability: inputs, discovered by navigating the maze of data accepted by the software, that demonstrated the software under analysis was vulnerable, said DARPA.
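CGC defined its own binary format for proofs of vulnerability, but the underlying idea is simple: a PoV is a concrete input that demonstrably triggers a flaw in the target. As a loose illustration only (the buggy parser and function names below are hypothetical, not anything from the competition), a toy Python sketch:

```python
# Toy illustration of a "proof of vulnerability" (PoV): a concrete
# input that reliably triggers a flaw. Here the flaw is a parser that
# trusts a length field in the packet header without validating it.

def parse_packet(data: bytes) -> bytes:
    """Hypothetical buggy parser: trusts the length byte blindly."""
    if len(data) < 1:
        raise ValueError("empty packet")
    length = data[0]
    payload = data[1:1 + length]
    # Bug: no check that the declared length matches the bytes supplied.
    if len(payload) != length:
        # Stand-in for the out-of-bounds read a native binary would hit.
        raise IndexError("out-of-bounds read")
    return payload

def proof_of_vulnerability() -> bytes:
    """A crafted input proving the parser is vulnerable:
    the header declares 255 payload bytes but supplies only 5."""
    return bytes([255]) + b"hello"

if __name__ == "__main__":
    try:
        parse_packet(proof_of_vulnerability())
        print("no crash")
    except IndexError:
        print("vulnerability proven: crafted input crashed the parser")
```

A machine-generated patch for this flaw would amount to inserting the missing length check, which is the kind of targeted binary repair the competing systems automated.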

First prize, worth $2 million, went to ForAllSecure, a startup founded by a team of computer security researchers from Pittsburgh, whose bot is called Mayhem. The team said its technology is the result of more than a decade of program analysis research at Carnegie Mellon University.

Second prize, worth $1 million, went to TECHx, whose bot was called Xandra. The team of software analysis experts was drawn from GrammaTech Inc., a developer of software assurance tools and advanced cybersecurity solutions, and the University of Virginia in Charlottesville.

Shellphish, a group of computer science graduate students at the University of California-Santa Barbara, won $750,000 as the third-place winner, and their bot was called Mechanical Phish.

“Tonight, completely autonomous systems played in an expert contest. In 2013 no such system existed and tonight seven of them played at a very high level,” DARPA CGC Program Manager Mike Walker said at a press briefing immediately after the challenge.

“We have redefined what is possible and we did it in the course of hours with autonomous systems that we challenged the world to build,” he added.

“With self-driving cars I think you saw LIDAR, computer vision, machine learning, imaging, sensing and onboard computing all fused into a prototype, and it’s very difficult to know before a prototype exists what the correct prototyping approach is,” Walker explained.

“That’s kind of where we were with the idea of machines being able to do fundamental computer security tasks in 2013. All these technologies for studying programs — everything from formal methods and automated mathematics to search and Monte Carlo input-generation techniques like fuzzing, directed fuzzing, dynamic analysis, sandboxing, the healing of execution divergence — all these things existed as published research papers saying we could automate this to better inform the analyst,” he added.
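Of the techniques Walker names, fuzzing is the easiest to sketch: generate inputs, run the target, keep anything that crashes. The real CGC systems combined coverage-guided fuzzing with symbolic execution; the fragment below is only a minimal, assumption-laden illustration (the buggy `target` function is invented for the example) of the blind, random variant:

```python
import random

def target(data: bytes) -> None:
    """Hypothetical buggy routine: crashes on any input containing
    a zero byte, standing in for a real memory-safety flaw."""
    if b"\x00" in data:
        raise RuntimeError("crash")

def fuzz(target, trials: int = 10_000, max_len: int = 4, seed: int = 0):
    """Blind random fuzzing: throw random byte strings at `target`
    and collect every input that makes it crash. Each collected
    input is a candidate proof of vulnerability."""
    rng = random.Random(seed)  # seeded so runs are reproducible
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256)
                     for _ in range(rng.randrange(max_len + 1)))
        try:
            target(data)
        except Exception:
            crashes.append(data)
    return crashes

if __name__ == "__main__":
    found = fuzz(target)
    print(f"found {len(found)} crashing inputs out of 10,000 trials")
```

Directed fuzzing and dynamic analysis improve on this by using feedback from each execution (coverage, taint, path constraints) to steer input generation toward unexplored program behavior instead of sampling blindly.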

Future Projects

Cybersecurity is not only important but crucial to DARPA’s ambitions, according to DARPA Director Arati Prabhakar.

“Our mission is to change what’s possible so that we can take huge strides forward in our national security capabilities. We’re going to challenge teams to build radio networks with embedded artificial intelligence that will allow each of those radio networks to dynamically scan and form hypotheses and predict what’s happening in radio spectrum,” she explained.

By competing and collaborating with each other, the AI networks can dramatically expand the capacity available from a fixed amount of spectrum, said the director.

Crowdsourcing is the way to go, if the recent moves by Apple, Uber and Fiat Chrysler are any indication. These companies have turned to so-called bug bounty programs where independent researchers help them find crucial security bugs in their products.

In the end, the CGC validated the concept of automated cyber defense, bridging the gap between the best security software and cutting-edge program analysis research.