Facebook, Twitter and Google have been accused by lawmakers in the U.K. of allowing racist content and hate speech to spread via their platforms.
A parliamentary committee report published Monday alleged that the social media firms have prioritized profit over user safety by continuing to host unlawful content. The report also called for "meaningful fines" if the companies do not quickly improve.
"The biggest and richest social media companies are shamefully far from taking sufficient action to tackle illegal and dangerous content," the Home Affairs Committee report said. "Given their immense size, resources and global reach, it is completely irresponsible of them to fail to abide by the law."
The report, which was commissioned after lawmaker Jo Cox was murdered by a far-right extremist, found "repeated examples" of social media companies failing to remove "dangerous terrorist recruitment material," as well as content that promoted the sexual abuse of children or incited racial hatred. In some cases, the items remained on the sites even after being flagged.
The report's recommendations are not binding, but they come as social media firms face increased pressure over hate speech in Europe.
Facebook (FB) policy director Simon Milner agreed "that there is more we can do to disrupt people wanting to spread hate and extremism online." He said the company was working closely with its partners to improve its approach.
Twitter (TWTR) executive Nick Pickles said his company had introduced a "range of brand new tools to combat abuse" and hired additional staff.
Google (GOOGL), which owns YouTube, did not immediately respond to a request for comment.
The tech titans all say they adhere to local laws. But the industry has also expressed worries about the dangers of censorship, and about how authoritarian regimes might treat the services.
The parliamentary committee, however, is unlikely to accept that tech giants are already doing all they can to police hate speech. The report points to the recent example of YouTube, which faced an advertiser exodus in the U.K. after ads were found running alongside content including videos from former Ku Klux Klan leader David Duke.
"We believe it to be a reflection of the laissez-faire approach that many social media companies have taken," the report said. "We note that Google can act quickly to remove videos from YouTube when they are found to infringe copyright rules, but that the same prompt action is not taken when the material involves hateful or illegal content."
The lawmakers went on to criticize the social platforms for relying on users to report extremist and hateful content. The lack of effective moderation means, for example, that the Metropolitan Police's Counter Terrorism Internet Referral Unit has to monitor the sites for extremist content.
"They are, in effect, outsourcing the vast bulk of their safeguarding responsibilities at zero expense," the report said.
Many European countries have strict hate speech laws that make it illegal to deny crimes committed by the Nazis and other despotic governments.
In April, the German cabinet approved a plan to start fining social media companies as much as €50 million ($53 million) if they fail to quickly remove posts that breach German law.
In December, the European Union also criticized the companies for failing to remove hate speech promptly.