Tech's biggest companies are spreading conspiracy theories. Again.

Facebook, YouTube fail to blunt conspiracy theories

The most common line in Silicon Valley right now may be: "We'll try to do better next time."

When Facebook (FB) inadvertently promoted conspiracy theories shared by users following a recent Amtrak crash, the company said it was "going to work to fix the product."

When Google (GOOGL) shared a conspiracy theory in its search results after a mass shooting last year in Texas, the company said it would "continue to look at ways to improve."

And when Google's YouTube spread conspiracy theories in the aftermath of the devastating shooting in Las Vegas, the video service decided to update its algorithm to prevent it from happening again.

But then it did happen again.

On Wednesday, YouTube and Facebook were each forced to issue yet another mea culpa for promoting conspiracy theories about David Hogg, a student who survived the mass shooting at a Florida high school last week.

The top trending video on YouTube early Wednesday suggested in all capital letters that Hogg, who has emerged as a leading voice for gun control since the shooting, was actually an "actor."

YouTube later removed the video, but not before it was viewed hundreds of thousands of times. In a statement, YouTube said its system "misclassified" the video because it featured footage from a reputable news broadcast.

"This video should never have appeared in Trending," a YouTube spokesperson said in a statement provided to CNN, which concluded with a familiar line: "We are working to improve our systems moving forward."

On Facebook, Hogg was a trending topic for users Wednesday. But several of the top results for his name surfaced similar theories claiming Hogg was a paid actor. (Hogg, for his part, has knocked down these claims.)

Mary deBree, Facebook's head of content policy, called the posts "abhorrent" in a statement and said Facebook was removing the content.


Conspiracy theories are nothing new, least of all in American life. But the potential for tech platforms to supercharge the reach of these theories is a societal threat unique to the modern era.

This familiar cycle of grandiose promises and atonement speaks to deeper concerns about whether tech companies are able, or willing, to adequately police their own massive platforms.

In the last week alone, Google has been called out for offensive search suggestions about black culture and poverty, and several big tech companies have been named in a federal indictment over Russian election meddling.

To use Silicon Valley's preferred parlance, it's now hard to escape the conclusion that the spreading of misinformation and hoaxes is a feature, not a bug, of social media platforms -- and their business models.

Facebook and Google built incredibly profitable businesses by serving billions of users content they neither pay for nor vet, with ads placed against that content. The platforms developed better and better targeting to buoy their ad businesses, but not necessarily better content moderation to buoy user discourse.

Under pressure from regulators and advertisers in recent months, the two companies have finally pledged to hire thousands of additional workers to moderate their platforms. Mark Zuckerberg, Facebook's CEO and co-founder, told investors in November the move could cut into the company's profit margins.

But even this could prove to be a drop in the bucket compared to what's needed.


Facebook has said it expects to have 20,000 people working on safety and security issues by the end of this year. With 2.1 billion users, that works out to roughly one cop on patrol for every 105,000 citizens.
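The arithmetic behind that comparison is simple; a quick sketch, using only the 20,000 and 2.1 billion figures cited above, makes the ratio concrete:

```python
# Back-of-the-envelope ratio: Facebook's planned safety staff vs. its user base.
# Both figures come from the statements above; everything else is simple division.
safety_workers = 20_000           # planned headcount by end of year
users = 2_100_000_000             # roughly 2.1 billion users

users_per_worker = users / safety_workers
print(f"One safety worker per {users_per_worker:,.0f} users")
# Output: One safety worker per 105,000 users
```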

"People overestimate Facebook's resources and underestimate just how much content Facebook handles," Antonio Garcia Martinez, a former product manager at Facebook who helped develop its ad targeting system, said in an email. "Facebook has to go through literally billions of posts (photos, text, check-ins, etc.) a day."

As a result, he says, Facebook "simply cannot manually review each and every news post." The company often ends up relying on its users to flag questionable content, which Garcia Martinez says can create a "time lag" before a post gets the attention of Facebook staff.

Google faces a similar problem. Even as it ramps up hiring, YouTube does not have humans curating which videos appear in its trending lists because it maintains so many trending tabs, constantly updated all over the world.

As with so many things in the tech industry, the preferred solution is more technology, which allows the companies to keep operating at the massive scale that makes them attractive propositions to investors and advertisers.

Both Facebook and Google are investing in artificial intelligence to clean up their platforms. AI may allow the companies to better police content at scale without having to hire hundreds of thousands of workers.

But if this kind of panacea arrives at all, it will be in a distant future. Zuckerberg has said it will take "many years to fully develop these systems." In the meantime, new conspiracy theories are waiting to trend.