How much are social media companies responsible for the content they publish? There’s a common argument that the monster is beyond their control: YouTube gets 65 years’ worth of footage uploaded every day, so who could possibly watch every minute to ensure none of it breaks the law? Whether or not earlier regulatory scrutiny could have avoided this position, we are where we are now, and it’s hard to envisage the genie being stuffed back into the bottle, no matter how much the government may want it to be.
But is that actually the case?
Two things have made me revisit this belief. The first is reading Jonathan Taplin’s excellent book Move Fast and Break Things, which highlights one simple counterpoint: when was the last time you spotted porn on YouTube? The answer is, most likely, never. And that’s not because people don’t upload it. Somehow Google has managed to neutralise that threat in a way it hasn’t matched with terrorist propaganda videos and neo-Nazi thuggery. Maybe those are more complex problems, but it appears that where there’s a will, there’s a way.
“Change your location to Germany and, thanks to the power of Germany’s Strafgesetzbuch section 86a law, accounts with swastika imagery magically disappear”
The second is related. If you use Twitter, you’ll no doubt have noticed two things recently. One: it’s hard to know who’s real and who’s a bot programmed to pursue a political agenda. Two: the site has a real problem with neo-Nazis and white supremacists. My gut feeling was that Twitter could do diddlysquat about these problems for the same reason YouTube can wash its hands: there are simply not enough people to moderate it all while maintaining even a semblance of profitability.
But then Twitter user @christinapeterso found something interesting. If you change your location to Germany, then thanks to the power of Germany’s Strafgesetzbuch section 86a law, white supremacist accounts and accounts with swastika imagery magically disappear from the site.
I confirmed this myself. As far as Twitter knows, right now, I’m sat in Germany. And it works. Witness this screenshot from my digging around earlier…
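Twitter hasn’t published how its country-withheld system works under the hood, but the basic idea it implies – tagging an account with the countries where local law requires it to be hidden, then filtering the feed against the viewer’s declared location – can be sketched in a few lines. This is purely an illustrative sketch; the account names, field names and structure below are my own assumptions, not Twitter’s actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    handle: str
    # ISO country codes where this account must be withheld (assumption:
    # a simple per-account block list, populated per local law)
    withheld_in: set = field(default_factory=set)

def visible_accounts(accounts, viewer_country):
    """Return only the accounts not withheld in the viewer's declared country."""
    return [a for a in accounts if viewer_country not in a.withheld_in]

accounts = [
    Account("@ordinary_user"),
    Account("@banned_in_de", withheld_in={"DE"}),  # e.g. Strafgesetzbuch 86a content
]

# A viewer whose location is set to Germany never sees the withheld account;
# a viewer located elsewhere sees everything.
print([a.handle for a in visible_accounts(accounts, "DE")])  # ['@ordinary_user']
print([a.handle for a in visible_accounts(accounts, "US")])
```

The point of the sketch is how little machinery the filter itself needs: once the per-country block list exists, applying it to other countries is a policy decision, not an engineering one.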
Twitter has the power, so why doesn’t it use it?
So, if Twitter can auto-block white supremacist accounts, why doesn’t it? Its terms and conditions on the subject are pretty clear:
Examples of what we do not tolerate includes, but is not limited to behaviour that harasses individuals or groups of people with:
- violent threats;
- wishes for the physical harm, death, or disease of individuals or groups;
- references to mass murder, violent events, or specific means of violence in which/with which such groups have been the primary targets or victims;
- behaviour that incites fear about a protected group;
- repeated and/or non-consensual slurs, epithets, racist and sexist tropes, or other content that degrades someone.
We know that historically the site has been pretty lax about actively enforcing these things, but here’s an answer that seemingly blocks hateful accounts automatically… so why doesn’t Twitter let non-German residents access the secret sauce?
When I contacted Twitter to ask them directly, I got back a canned response pointing me to the terms and conditions. Specifically this section. “With hundreds of millions of Tweets posted every day around the world, our goal is to respect our users’ expression, while also taking into consideration applicable local laws”, a spokesman summarised.
“There’s a difference between protecting oppressed groups in authoritarian regimes and amplifying the voices of hate groups in open democracies, and conflating one with the other feels disingenuous.”
That feels suspiciously like paraphrasing “we won’t do anything unless we absolutely have to,” only with more cuddly idealistic shading. The company is perhaps still basking in the glow of 2011’s Arab Spring when it was taking credit for dictatorships collapsing across the Middle East. But there’s a difference between protecting oppressed groups in authoritarian regimes and amplifying the voices of hate groups in open democracies, and conflating one with the other feels disingenuous.
Freedom of speech > All else (except the threat of being banned)
This has long been an internal philosophical dilemma for Twitter, which is founded on the idea of free speech. “The people that run Twitter … are not stupid,” the company’s former head of news, Vivian Schiller, told Buzzfeed last year. “They understand that this toxicity can kill them, but how do you draw the line? Where do you draw the line? I would actually challenge anyone to identify a perfect solution. But it feels to a certain extent that it’s led to paralysis.” Another anonymous former employee put it more succinctly, commenting that Twitter’s “product inaction created a honeypot for assholes.”
So, what can we learn from this? Twitter is super-reluctant to do anything to damage its free speech credentials, even though it knows inaction is killing the platform. Meanwhile, we know that it can act if forced to by law – as it has done with its automatic filtering in Germany.
So it seems that governments do still have the power to enact significant change on social networks, but there are two caveats. Firstly, Germany’s laws against Nazi iconography and literature have been in effect since 1945 – as such, they are deeply ingrained in German culture, and harder for tech giants, no matter how big, to circumvent. New, reactive laws can be lobbied against, and are harder to meaningfully implement if they’re already being broken on a daily basis.
“New reactive laws can both be lobbied against, and are harder to meaningfully implement if they’re already being broken on a daily basis.”
Secondly, any such laws depend on strong government to enforce them – the kind of strong government that doesn’t fear an opposition party pulling the rug out from under it.
A decade’s worth of public pressure alongside potential buyers being spooked by Twitter’s reputation wasn’t enough to make it change its ways. A proposed voluntary code of practice isn’t so much a slap on the wrist as a gentle caress.