David Moran was all set to go out that Saturday night. He thought he might hit Parliament House, Orlando’s oldest gay nightclub, or maybe make it over to Pulse, another mainstay. But after he and a friend ended their shift at the restaurant where they both worked, car trouble kept them marooned in the parking lot for an hour. So Moran went home and fell asleep watching Bob’s Burgers on Netflix instead.
He was awakened just before 5 am by the sound of his phone buzzing next to him on his bed. He fished it out from between the covers and found a text message asking if he had heard the news about Pulse. “Mass shooting,” said the message that arrived next. Now wide awake, Moran instinctively thumbed his way to Facebook.
His feed was already lit up with a barrage of messages and updates. He found frantic notes asking if he was OK. And like nearly everyone else who was up at that hour trying to make sense of what was happening at Pulse, he found the eight terrifying words that the nightclub had posted to its own Facebook page at 2:09 am: “Everyone get out of pulse and keep running.”
Moran, 33, had spent several nights a week at Pulse throughout his twenties, in the time just after he came out as gay. “It was my place,” he says. Now he could think of dozens of people he knew who might be there. So Moran dove in and joined the frenzied relay of messages. He toggled between Facebook and a local news station’s live feed, where at one point he saw his friend Drew Leinonen’s mother waiting outside a hospital in tears. After a while he found it unbearable to process so much chaos in a quiet apartment, so he walked the dark streets over to the nearby IHOP—the Gay Hop, people called it, because on a normal night the club crowd would take over the restaurant in the predawn hours.
Moran had just arrived at the diner when an unusual Facebook message arrived from his friend Marcus. “Are you OK?” it read. “It looks like you’re in the area affected by The Shooting in Orlando, Florida. Let friends know that you’re safe.” The message led Moran to a page with two buttons: a green one marked with the words “I’m Safe” and a white one that read “I’m Not in the Area.” Moran tapped the “I’m Safe” button and another message appeared, suggesting he reach out to other people and listing all his Facebook connections in the area. Moran invited scores of people to check in as safe. And then he found himself on a page headlined “The Shooting in Orlando Florida.” The words “Facebook Safety Check” hung just below.
Moran could only vaguely recall having heard of Safety Check before. But as friends began marking themselves as safe, he kept returning to the page. For other Pulse regulars too, that Safety Check portal became the source of news they cared about most. Alex Wall, a graduate of the University of Central Florida who had moved to Brooklyn, sat awake through the early morning hours glued to her Safety Check page in her New York apartment. Alex Schnier, an Orlando barista, was getting ready to work an early shift at a Disney World Starbucks; he obsessively refreshed his Safety Check page as he drove to the theme park that morning. “I didn’t care that I was on my phone while driving,” he says. “I needed to know.”
As it happened, that night last June was the first time Facebook had ever deployed its Safety Check tool for an event on American soil. The company debuted the service after the Nepal earthquake in the spring of 2015; since then its notifications have appeared in the feeds of more than a billion people worldwide, about 14 percent of the humans on earth. According to Patrick Meier, an expert on humanitarian crises and technology, Safety Check has already come to serve a fundamental need in disaster zones—giving people answers about the specific individuals they care about in a mass event—at a scale and speed that was never possible before. But Facebook is getting ready to turn Safety Check into something much bigger.
Think of the way Moran, Wall, and Schnier spent that Sunday morning, watching Safety Check as if it were a personalized breaking-news service—one whose flow of information was narrowly focused on the fate of their friends. In its next move Facebook is going to open the valve a little further. Safety Check product lead Katherine Woo says the company aims to fold the service into what it’s calling a crisis hub: a live, centralized repository for information and media about any given disaster, where people can not only check on the safety of individuals but also coordinate ways of responding in the physical world, follow news and chatter, and perhaps monitor all the live video pouring in from the scene. “All this happens on Facebook anyway,” Woo says. But soon it will be powerfully organized by the company’s algorithms into a single stream, automatically generated almost as soon as people start talking about a crisis.
This gets at a real problem. For years now, social media has been where people go to find out what’s happening during a crisis; even aid agencies and emergency managers have come to rely on hashtags and live video to form a picture of how an event is playing out on the ground. But the hail of updates can be rapid and incoherent. As disaster sociologist Jeannette Sutton points out, for example, there was no consistent hashtag to follow through the Boston Marathon bombings. Facebook’s crisis hub promises to defragment the barrage of information that flies around and out of a disaster zone.
Of course, sometimes there’s no information coming out of a disaster zone—because the internet has gone down, as happened in large parts of New York and New Jersey when Hurricane Sandy landed in 2012. This is another fundamental problem that Facebook is, almost by coincidence, working to solve. For the past two and a half years, the company has been developing a program to deliver the internet via drone to parts of the world that don’t have it. The business reason for this fanciful-sounding project is pretty straightforward: It will speed up Facebook’s efforts to expand globally and serve ads to even more people in what is already the world’s largest audience. But the team has always had the idea that the same technology could be vitally important in, say, an earthquake zone.
The upshot of all this is that Facebook—that place where you watch clips from American Ninja Warrior—is fast becoming one of the world’s most important emergency response institutions. This development has taken even the company itself somewhat by surprise. The story of how it happened is partly one of sheer scale—about 23 percent of the global population is on Facebook, and people naturally turn to the platform en masse when disaster strikes—and partly one of rapid, ad hoc, seat-of-the-pants adaptation. “In some cases,” Facebook CEO Mark Zuckerberg told me on a recent afternoon, sitting in the glass bubble of his conference room, “we don’t realize how useful things are going to be.”
When Typhoon Haiyan hit the Philippines in November 2013, Sharon Zeng was in Northern California, working for the Facebook payments team—the unit that fashions tools for shuttling money across the social network. In response, Zeng’s team built a service that collected Red Cross donations for typhoon relief. A while later, almost as an aside, her boss asked if there was anything else the company could have done.
Zeng didn’t have an answer, but the question stayed with her. The company was gearing up for its next hackathon—a quarterly tradition whereby employees load up on Chinese takeout and then stay up until all hours, working in small teams to build prototypes of software ideas that aren’t directly relevant to anyone’s day job. And Zeng needed an idea. At one point she remembered a makeshift disaster message board that a group of Tokyo Facebookers had built after the Japanese earthquake two years earlier, trying to accelerate communication among the more than 12.5 million people affected by the quake and the tsunami that followed. “What if you could do that for every disaster?” she thought.
Zeng wasn’t a coder, though; she needed someone who could actually hack the thing together, so she asked a Facebook ad engineer named Peter Cottle. “He was this guy who would say hi to me in the hallways,” she tells me—by way of explaining “how ad hoc some of this is.” Cottle agreed, and over the 72-hour hackathon he, Zeng, and a few other engineers built the very first version of Safety Check (they called it Crisis Center). Eventually the prototype reached Zuckerberg’s desk.
In October 2014, Facebook formally unveiled Safety Check under the aegis of a new division of the company called Social Good; it described the feature as “a simple and easy way to say you’re safe and check on others” during times of emergency. Over the next year, the team deployed the service a handful of times around the world, always during natural disasters: earthquakes in Afghanistan, Chile, and Nepal; Tropical Cyclone Pam in the South Pacific; Typhoon Ruby in the Philippines.
But then the company changed tack. In November 2015 the Social Good division turned the service on after a team of ISIS gunmen and suicide bombers in Paris attacked a string of cafés, a stadium, a music hall, and other public venues, leaving 130 people dead. It was the first time Facebook had deployed Safety Check for something other than a natural disaster—and the decision proved surprisingly controversial.
Paris, as it happened, wasn’t the only city that saw attacks that weekend. A double suicide bombing had hit Beirut the previous day, killing 43 people. And on the same day as the Paris assault, 26 people were killed by a pair of bombs in Baghdad. The assaults on all three cities were carried out by ISIS. But only one event—the attack in Paris—instantly received massive amounts of attention worldwide. Because Facebook had turned on Safety Check in one city and not the other two, it was accused of being just another media organization showing its Western bias.
Zuckerberg publicly acknowledged the complaints, saying the company would “work hard to help people suffering in as many of these situations as we can.” This turned out not to be a hollow promise: On the night of the attack in Orlando, Facebook tested a new version of the service—one that didn’t rely on the case-by-case discretion of the company’s engineers.
This new incarnation of Safety Check begins with an algorithm that monitors an emergency newswire—a third-party program that aggregates information directly from police departments, weather services, and the like. Then another Safety Check algorithm begins looking for people in the area who are discussing the event on Facebook. If enough people are talking about the event, the system automatically sends those people messages inviting them to check in as safe—and asks them if they want to check the safety of other people as well. In other words, the system is driven by Facebook algorithms first, and then it’s driven by the choices and behavior—and white-knuckle worries—of people on the ground.
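The two-stage trigger described above can be sketched in a few lines of code. This is purely an illustrative reconstruction of the logic as reported, not Facebook's actual implementation; every name, data shape, and threshold here is a hypothetical stand-in.

```python
# Illustrative sketch of the two-stage Safety Check trigger:
# stage one, an event arrives from an emergency newswire; stage two,
# activation happens only if enough local people are discussing it.
# All names and thresholds are hypothetical assumptions.

from dataclasses import dataclass


@dataclass
class NewswireEvent:
    """An incident reported by a third-party emergency newswire."""
    name: str      # e.g. "shooting"
    region: str    # e.g. "Orlando, FL"


def people_discussing(event, posts):
    """Stage two: find distinct users in the event's region talking about it."""
    return {
        p["user"]
        for p in posts
        if p["region"] == event.region
        and event.name.lower() in p["text"].lower()
    }


def run_safety_check(event, posts, threshold=100):
    """If chatter crosses the threshold, return everyone in the region
    who should be prompted to mark themselves safe; otherwise no one."""
    if len(people_discussing(event, posts)) < threshold:
        return []
    # Prompt all users located in the affected region, not just the talkers.
    return sorted({p["user"] for p in posts if p["region"] == event.region})
```

In this sketch the algorithm never asks a human for permission: once the chatter in the region crosses the threshold, the prompts go out, which is exactly what makes the system fast and, as later events showed, occasionally indiscriminate.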
In Orlando, Facebook’s algorithms automatically turned on Safety Check at 3:47 am, 11 minutes before police officially announced that there had been a shooting at Pulse.
At the IHOP, David Moran kept monitoring the news on his phone. By now the police had identified the shooter as Omar Mateen, a Floridian who had pledged allegiance to ISIS. But what mattered most to Moran was that 183 of his friends had checked in as safe—each bringing a new wave of relief.
At the same time, the process of elimination focused his anxieties on the dwindling list of people he knew who hadn’t yet checked in. One of them was Drew Leinonen, the friend whose mother Moran had seen crying on video earlier that morning. Through the grapevine he heard that Leinonen’s boyfriend, Juan Guerrero, was in the hospital; but no one had heard from Leinonen.
He was someone everyone in the Orlando gay community seemed to know, a guy who was both sharp-witted and silly, who could talk foreign films as easily as he could dance through the night at a club. Wall, the former Pulse regular in Brooklyn, and Schnier, the one who was working at Disney World, had both tried to check on him as well. “He was the friendliest guy in the world,” Wall says. “We ran into him everywhere we went.”
Finally Moran decided to get out of IHOP and head over to the Orlando LGBT Center, a few blocks away, where he knew he’d find people in a similar state of mind. He had sat at the all-night diner for an hour and a half with a breakfast getting cold in front of him—pancakes, eggs, bacon. When he left, he took it to go but tossed the whole thing, uneaten, into a roadside trash can. At the LGBT Center, Moran sat down with a group in the corner trying to organize a response; in practice they did little more than juggle information on Facebook and other social media services. “They were trying to post updates about how to donate blood, how to get information about someone who was missing, how to help,” he says.
In the future, Facebook says, this is the kind of thing that will become easier to do through its new crisis hubs. To Sutton, the disaster sociologist, the idea is powerful because such a critical mass of aid agencies and people affected by crises is already using Facebook. Groups like the Red Cross already coordinate donations and relief efforts as best they can on the social network; at the same time, when disaster victims can grab just a few minutes on the internet, they often spend them on Facebook. The crisis hub will help channel them all into the same space. Facebook itself is cagey about what its crisis hubs will look like—as usual, it’s figuring things out as it goes along—but Woo does let on that Facebook Live will play a part.
In the beginning, Facebook Live was for celebrities—a way for people like Kevin Hart, Gordon Ramsay, and Deepak Chopra to send real-time video to their fans. Then in April, Facebook expanded the service to everyone. For a time, the medium’s most notable hit was a video of BuzzFeed employees blowing up a watermelon by wrapping it in rubber bands. But the real power of the service didn’t become clear until a couple of months later, when a woman named Diamond Reynolds turned on Facebook Live moments after police shot her boyfriend, Philando Castile, in Falcon Heights, a suburb of Saint Paul, Minnesota—letting the rest of the world watch as the scene played out. That was not something the company anticipated, says Fidji Simo, director of product for Facebook Live, but it now sees that kind of video—a raw feed from an unfolding event—as the way forward.
Disaster response professionals are already starting to use Facebook Live and other real-time video services to get “eyes on the ground” and decide where to send resources. “As a situational awareness tool, I think it’s absolutely huge,” says Don Campbell, an emergency manager in North Carolina. Compared to even the most strongly worded public advisory message, live video is a much more powerful way to warn members of the public away from danger. “You can hear a report that I-40 is closed,” Campbell says, “but until you see the giant picture of the sinkhole, it doesn’t really hit home for a lot of people.” News organizations have begun using the same services to cover fast-developing stories.
In a world where all those videofeeds are organized together into one of Facebook’s crisis hubs, they could become an even more powerful tool for emergency responders. It’s also likely these one-stop crisis hubs will further sideline the traditional media organizations that report on these kinds of events. But of course, all of that is moot if the internet goes dark.
Abhishek Tiwari, a Facebook engineer, is standing on the roof of a three-story building in Woodland Hills, California, in between Los Angeles and the coastal mountains of Malibu. The roof is flat, and beside him, a white bathtub-sized dish antenna sits atop a rotating robotic arm. Nearby there’s a small network of PCs, flat-panel displays, and other electronic gear. Facebook employees designed much of this stuff during hackathons, Tiwari says.
We can’t see it from here, but the dish antenna on this roof is locked in a staring contest with a sister antenna perched somewhere on one of those Malibu mountains, about 8 miles away. The two antennas are trading data at a rate of 19 Gbps, about 400 times faster than your home internet connection, an unprecedented speed for equipment so small, light, and energy-efficient.
To demonstrate how quickly and precisely the antennas can aim at each other, Tiwari tells another Facebook engineer to push the dish down toward the concrete roof. “We’re going to mis-point it,” he tells me. When they do, the robotic arm automatically swivels the dish back into position—a move with an infinitesimal margin of error, given the distance and the size of the targets.
This is the Southern California outpost of the Facebook Connectivity Lab, the research operation that’s building the company’s massive internet-dispensing drone, Aquila. Today these two antennas send data between the roof and a nearby mountaintop, but in the future, if all goes well, they’ll transmit data from a station on the ground like this one to a flying drone on the other side of the country. Then the drone will beam the signal down, like a flying cell tower, to people’s devices on the ground below.
Getting two stationary antennas to lock eyes across a single area code might seem like a far cry from that final goal, but Facebook is making progress. Three weeks later Tiwari and his team attach an antenna to the underbelly of a two-seat Cessna for a first stab at trading data with a moving target. With a Facebook engineer riding shotgun, the plane does several circles over the San Fernando Valley. It takes a while, but the airborne antenna eventually establishes a connection to the ground station in Woodland Hills, sending and receiving relatively modest amounts of data—for now.
Zuckerberg created the Connectivity Lab in 2014, with the vague idea that he wanted to build new ways of delivering the internet to unconnected parts of the world. He organized the lab around an employee named Yael Maguire, a physicist who had helped oversee the company’s recent sweeping effort to rebuild all the hardware that underpins its social network. At first Zuckerberg and Maguire thought they’d use satellites for the internet project. But one afternoon in 2013, an engineer on Maguire’s team had come across the records of an old Defense Department project called Darpa Vulture that proposed to build a high-altitude, solar-powered drone that could stay in the air for months. One of many applications that Darpa envisioned was that the drone might fly over the eye of a hurricane and drop in a swarm of sensors to study the inner dynamics of the storm.
Inspired in large part by that program, Facebook set out to build a drone that looks very much like the one imagined by Darpa. The company completed a prototype of the aircraft—a solar-powered V-shaped prop, with a wingspan greater than that of a Boeing 737 and the weight of a grand piano—earlier this year. And while delivering internet access to chronically underserved areas is still the company’s primary aim, the idea that these drones might be useful in natural disasters never left the engineers’ minds. Maguire says the company has already discussed the possibility with telecom companies in island nations vulnerable to earthquakes and tsunamis. Facebook isn’t alone in thinking this way. Google, which is already relatively far along in a project to deliver the internet to remote places via high-altitude balloons, is also exploring ways to deliver cellular signals in disaster areas. According to the International Federation of the Red Cross, information networks are just as important as food, water, and shelter in a disaster zone—and the lack of them can be just as catastrophic. If a hurricane or an earthquake or a terrorist attack knocks out communications, one of Facebook’s drones could be deployed relatively quickly. “The plane can move,” Maguire says.
People who work in disaster response instinctively worry about the idea that one major company might control access to the internet in the field. But Facebook plans to open source its antenna and drone designs, so that anyone—from the United Nations and the Red Cross to government agencies and providers like Verizon or AT&T—will be able to build and operate the same hardware. The idea is that hundreds or even thousands of these drones will be in the sky at any given time, forming a high-altitude web of internet transmission.
That alone sounds futuristic, but there’s more: As time goes on, Facebook Live and Safety Check will generate an unprecedented amount of data about disasters. Along with the hours of video that Facebook Live can deliver from inside a crisis zone, Safety Check could provide a map of who is safe and where and perhaps why. Meier, whose recent book, Digital Humanitarians, discusses the challenges of dealing with what he calls “big crisis data,” believes these services will eventually generate far too much data for humans to hunt through and understand manually. His hope is that modern machine learning—the kind of technology that has already proven so useful at recognizing faces in photos and finding what’s relevant in internet data—will start to discern important patterns in crises too. And of course Facebook, along with Google and a few others, is at the forefront of machine learning research. Zuckerberg didn’t seem to have any specifics, but he went out of his way to tell me he thought artificial intelligence was going to play a big role in identifying moments of crisis on the network.
As daylight spread across the sky on that Sunday morning in June, millions of Central Floridians who slept peacefully through the night woke up to the news that Orlando had become the site of the worst mass shooting in American history. For Moran, it was surreal to watch as thousands upon thousands of people checked in as safe—people in the suburbs, many of whom had probably never even heard of Pulse. People who had never really been in danger.
For Moran, the flood of check-ins was especially hard to take given that he was still waiting to hear about Drew Leinonen. Facebook can do a lot to speed up the information that ripples through a social network, but it can’t do anything to speed up a crime scene investigation. It wasn’t until Monday afternoon—roughly 36 hours after the shooting started—that the city of Orlando said Leinonen was among those shot and killed by the gunman.
There is plenty more uneasiness to come as Facebook continues to feel out its role in mass emergencies. Already, for instance, people are starting to consider the psychological toll that graphic live videofeeds can take on viewers. And there are other concerns as well.
This fall, after police shot Keith Scott, an African American man in Charlotte, North Carolina, a number of protests broke out in the city, and some became violent. The demonstrations, predominantly by African Americans, were fairly localized, and the violence was even more so, but the live video that streamed out from the scene gave a different impression. “When you start seeing a lot of video showing the same types of protest, it gives the impression that the entire city is engulfed,” says Campbell, the emergency manager in North Carolina. At a certain point, Facebook’s algorithms turned on Safety Check because the number of people talking about the event had reached a critical mass. But the particular makeup of the alarmed community in this “community-driven” version of Safety Check became uncomfortably clear as a sea of white people from the suburbs began checking themselves in as safe.
While the new version of Safety Check may be superior to the one that relied solely on Facebook’s corporate discretion, Meier argues that it’s not enough to put out a product that’s purely “community-driven.” There needs to be someone with experience at the helm—preferably experience with disasters, civil unrest, and conflict—who can at least keep an eye on where the algorithms are leaning and look for red flags. But that kind of person is scarce in Silicon Valley. “You don’t have experts in-house who have worked in disaster response, crisis response, conflict zones,” he says. “You have policy folks, privacy folks, and engineers.” That’s starting to change a little: Tech companies are beginning to consult with people from the humanitarian aid world. Meier himself has consulted for Facebook. But the company, by nature, still has a strong bias toward crowdsourced knowledge; Naomi Gleit, director of the Social Good team, says she’s skeptical that any one social scientist or disaster expert could know better than the community.
For Zuckerberg, the only way to find the optimal formula is to keep pushing services into the world and then watch what happens. “If you believe that something must be fully perfect just to get started,” he says, “a lot of the time you’ll never get started.” The company’s guiding philosophy is that it should draw the line where the community wants it drawn. “Our job is to learn—as quickly as we can—what we can do for the community.”
At the same time, the community is learning as well. Facebook turned Safety Check on once again in Orlando in October when Hurricane Matthew buzzed Florida’s eastern coast. As the edge of the storm moved over the city, Moran saw a friend check in as safe on Facebook. It was uncanny to see that notification again. Memories flooded back. But Moran knew what to do. Without anyone having to ask him his status, he navigated over and checked himself in as safe.
This article appears in the December issue.