On Thursday night, at the Paris Hotel in Las Vegas, Darpa held a $55 million hacking contest open only to bots. After the contest was underway, as these bots began hunting for security bugs planted inside seven supercomputers perched atop the ballroom stage, the agency revealed that some of these bugs were inspired by Internet history. It had planted security holes akin to 2014's Heartbleed, the bug exploited by the 2003 SQL Slammer worm, and the rather subtle and complex Crackaddr bug, also from 2003.
Yan Shoshitaishvili says he should have seen this coming. But he didn't. He and his fellow researchers from the University of California, Santa Barbara, built one of the bots that competed in this hacking contest. Their creation is called Mechaphish, and they didn't think to prepare it for famous bugs from the past. Other hacking contests have included historical bugs, Shoshitaishvili says, and his team should have designed their bot so that it could instantly recognize things like Heartbleed. During a live interview not long after the contest began, as several thousand spectators looked on, he chastised himself for not being smarter.
And yet, as the rather extravagant contest progressed, its color commentators—yes, color commentators—revealed that Mechaphish had found and exploited several of these bugs, including the enormously complicated Crackaddr bug. That very much surprised contest officials and competitors. This type of exploit is “really, really difficult,” says David Brumley, a Carnegie Mellon computer scientist who oversaw the team that won the contest.
It was the key moment in the Cyber Grand Challenge, the first hacking contest to pit bot against bot, rather than human against human. In exploiting the Crackaddr bug, Mechaphish demonstrated just how well these bots can perform without even a hint of human assistance. Shoshitaishvili and his team had no way of knowing what software Darpa would run on the seven supercomputers taking part. And they hadn't guessed that historical bugs would be in the mix. But Mechaphish still cracked Crackaddr.
Mechaphish and its fellow bug-hunting bots can't do everything human hackers can do. Far from it. "Humans are still great at the creative aspects," Brumley says, "thinking outside the box." But the bots can help human hackers fill in the gaps. Following the Cyber Grand Challenge, Shoshitaishvili and his team also competed in Capture the Flag, a hacking contest for humans, and they brought Mechaphish along. It adds another tool to their toolbox: it can handle the small things more quickly than they can, and it might even catch something they've forgotten to handle.
That is the thing about automated systems. They can operate on their own, but they can also complement what we humans do. They can work in tandem with their creators. They can spur us to new heights. We saw this with AlphaGo, the artificially intelligent Google machine that plays one of the most complex games ever devised. It beat a grandmaster at the ancient game of Go, but it also showed him new ways of playing.
The Darpa View
This is how Darpa, the Defense Department's visionary research arm, views the ultimate impact of its Cyber Grand Challenge: The automated bots spawned by the contest won't replace human hackers anytime soon. But they'll provide human hackers with new tools. "We won't go in one fell swoop to fully automated network defense. But think about how powerful it will be for humans to leverage these kinds of machine tools," says Darpa chief Arati Prabhakar. "When humans start being able to do things they've never thought of before. That's when it gets really interesting."
On some level, this is already starting to happen. Mayhem, the Carnegie Mellon bot that won the Cyber Grand Challenge, went on to compete in Capture the Flag, challenging the human teams entirely on its own. It finished the first day in last place, but it spent at least part of the contest ahead of one human team. Separately, a team of Carnegie Mellon researchers competed in Capture the Flag with help from Mayhem, and they won the contest.
What we’ve seen from bots like this is that they can locate and patch simpler bugs far faster than humans. They can deal with the volume while humans deal with the difficult stuff. And in the modern age, as computing devices and online services proliferate across our daily lives, it’s the volume that’s the coming problem. “Can we build security techniques that have the potential to deal with massive scale?” Prabhakar says. “That’s what you need if you’re going to grow faster than the threat is growing.”
The Coming AI
But that's not the only coming problem. We live in a time when artificial intelligence is on the rise. AI creates new security holes, and eventually it will create new ways of attacking them. That means we need new ways of finding and defending those holes as well.
The bots that competed in the Cyber Grand Challenge don't use much machine learning, the breed of AI that is so rapidly reinventing other parts of the digital world. But Darpa believes this is the direction security bots will surely take, and then some. "There are even other types of AI that must be developed beyond statistical learning," says John Launchbury, the director of Darpa's information innovation office. "There is still a huge path ahead of us."
It makes good sense. A bit like Yan Shoshitaishvili before the Cyber Grand Challenge, we don’t quite know what is coming. And we may need a little help when it does.