Where the battle over free speech is being waged online

Meet the CEOs who decide what you see online

The delicate balance between supporting open expression and shutting down abusive content has become a flashpoint for the tech industry.

You don't need to look any further than Twitter's recent decision to suspend Rose McGowan's account after she tweeted a phone number in the wake of the Harvey Weinstein scandal. Decisions are politicized. Transparency is questioned. And tensions are rising.

"Are we like the phone company where it's creepy for us to listen in?" asked one prominent tech CEO, Matthew Prince. "Or are we like a newspaper that should have an editorial decision?" (Watch the whole series Divided We Code)

These questions have thrust companies into the crosshairs of an evolving discussion on how to monitor content. Companies like Facebook and Twitter have argued they're tech platforms -- not media companies. In the early days of the internet, the idea of an open web, free from censorship, was key to its success. But that hands-off approach is becoming less defensible, as we see Russian troll farms buying political ads on Facebook, disgruntled exes posting revenge porn and ISIS recruits being radicalized online.

For Prince, that tension culminated one morning this summer. His company, Cloudflare, is a large but mostly invisible web infrastructure firm that helps websites load faster and protects them from attacks; roughly 10% of all internet requests pass through its network. Without that protection, sites can be left vulnerable to cyberattacks. But when one of his customers claimed Prince wouldn't kick him off because Cloudflare's senior leadership was itself full of white supremacists (it's not), Prince took decisive action.

He terminated service for neo-Nazi site The Daily Stormer, saying it was a one-time decision not meant to serve as a precedent.

"I woke up in a bad mood and decided someone shouldn't be allowed on the internet," Prince wrote in a memo to employees in August, "No one should have that power." The son of a journalist, Prince told CNN Tech that he takes issues of free speech "very, very seriously," but he also has the right "not to do business with jerks."

Matthew Prince is the CEO of Cloudflare

The Cloudflare CEO sat down with CNN Tech to describe what's happened since that one controversial decision. Most notably, he said Cloudflare has received requests to terminate service for more than 3,500 customers -- from governments, from people trying to enforce copyright law, and from others who simply find particular content problematic.

"And the amazing thing is that it's not just neo-Nazis," Prince said. "It's far right, far left, middle people, things that people just think are disgusting, things that people might disagree with because they don't like one point of view or another."

He said part of the reason Cloudflare has been able to deflect these types of requests in the past is that the company has never taken a political position -- it has treated all content equally. Now, he worries it will be much harder to use that defense. He's concerned, for instance, that because he kicked off one site, he will no longer be able to fend off foreign governments' requests to censor LGBT organizations in countries where they're persecuted.

As tech companies grow in their ability to shape culture and communication, the question of who should have the power to make these weighty decisions becomes even harder to answer. Meanwhile, social networks are starting to accept responsibility for writing algorithms that better detect hate speech and online abuse, according to Andrew McLaughlin, a former policy director at Google and former deputy chief technology officer for President Obama.

"I think the obvious trigger for it is the Trump election and the spread of fake news. And that has caused of lot of these companies to do some soul-searching where they say, 'Alright, we now have to accept that we can't be neutral,'" McLaughlin said. "We're making choices that are incredibly consequential for what speech gets aired and seen by ordinary people."

Tech companies are also attempting to roll out stopgap measures to combat harassment. Last month, Instagram introduced a tool that allows users to filter comments. Users say it's a step in the right direction, but there's still a lot to be done to root out the trolls on social media.

And earlier this month, Twitter outlined policy changes, including one that addresses how the site plans to treat hateful imagery. The content in question will be blurred, and users will have to opt in manually to view it. But exactly what Twitter defines as a hate symbol wasn't clearly spelled out.

Some tech executives, including Prince, argue that the responsibility falls on political institutions to set clearer guidelines. While he said he's not arguing for more regulation, Prince noted that tech CEOs don't face the same accountability as elected officials. Others, like McLaughlin, are warier of the government becoming the gatekeeper of online speech.

"I'm not a big fan of governments getting directly involved in the management of tech companies," said McLaughlin. "History shows that that power tends to be abused pretty easily."

It's not just issues of online abuse and harassment that have cropped up in recent years. The 2016 election thrust the issue of fake news onto center stage.

"There's a line between abuse and misinformation, and most of these companies for a while, and including Twitter, were more focused on abuse," said Ev Williams, cofounder of Twitter and CEO of Medium. "I think the misinformation thing is something that's come up really in the last year much more dramatically."

Williams has long believed in the internet's role in the free exchange of information. But, lately, he said, identifying trustworthy sources is "something that we really need to work on building into these systems more."

"Silicon Valley is a place of optimism, [but] it can be blind optimism," Williams said. "That's part of the evolution that we're going through -- we're no longer as blind."

Controlling and labeling misinformation is one of the biggest challenges facing tech companies today. Facebook, Twitter and Google increasingly must determine the difference between diverse political viewpoints and things that are just plain inaccurate, Williams said.

"That's when some people are calling for editorial guidelines," he said. "And you get into an area where most tech companies would be like, 'It's not something that really fits in our model or that we would even be good at.'"

But whether they like it or not, tech platforms are being called on to take a more active role in identifying abuse, harassment and fake news.

"There's a principle that evil festers in darkness. One of the things you don't want to do is basically suppress racist speech in a world where they can just go elsewhere, and do their evil in darkness," McLaughlin said. "The product balance being struck here is when we find ways to elevate and suppress without censoring, and I actually think that is possible."

This story was originally published on October 29, 2017. (Watch the whole series Divided We Code)
