The New York Times wants you to read the comments

Now, internet trolls can eat their words

There's a frequently cited catchphrase -- "don't read the comments" -- that's become shorthand for steering clear of internet trolls.

It's all too common these days for comment sections on news sites to turn toxic -- in fact, many publishers (including CNN) have nixed them altogether. But the New York Times announced Tuesday that it is enabling comments on more articles rather than scaling back.

Comments can be a big pain point for publishers. Human moderators do most of the work of reviewing them, deciding whether each one is offensive, spammy or hateful and therefore doesn't belong on the site.

The New York Times currently has a team of 14 moderators reviewing approximately 12,000 comments manually every day. And that's just on the 10% of stories where comments are enabled.

Starting Tuesday, The Times will open up comments on about 25% of articles; it hopes to grow that to 80% by the end of the year.

"If The Times has innovated in the comments space, it is by treating reader submissions like content," community manager Bassey Etim wrote in a New York Times article announcing the news.

It will do that not by beefing up its staff, but by using specialized technology developed through a partnership with Alphabet (GOOGL) offshoot Jigsaw.

In February, Jigsaw and Google unveiled Perspective, an online moderation tool that uses machine learning to vet comments based on their level of "toxicity." The New York Times, which helped test Perspective along with other publications like The Economist, has since been working with Jigsaw to develop Moderator, a customized tool that evaluates comments on toxicity, obscenity level and the likelihood that the post will be rejected.
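Perspective works as a public web API: a client sends a comment to its comments:analyze endpoint and gets back probability scores for the attributes it requested. Here is a minimal sketch in Python of that kind of toxicity check, assuming an API key obtained through Jigsaw's signup process; the key placeholder and the example score in the final comment are illustrative, not guaranteed outputs.

```python
# Minimal sketch: score one comment's toxicity with the Perspective API.
# Assumes an API key from Jigsaw's signup process (placeholder below).
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; issued when access is granted
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return the model's toxicity probability (0.0 to 1.0) for a comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    request = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity_score("If you think differently you're an idiot."))  # e.g. ~0.9
```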


These three models were trained, in part, on data from 16 million moderated New York Times comments. To evaluate toxicity, Jigsaw built a spectrum of sample responses; for example, it asked internet users about climate change. A comment like "Crooked science" or "I don't care. They are usually Democrats" scores relatively low, whereas "Climate change is happening and it's not changing in our favor. If you think differently you're an idiot" scores higher. (A public demo of the tool is available on the Perspective website.)

The tool then prioritizes which comments the New York Times' human moderators should review first based on how likely they are to be approved. Previously, comments were reviewed as they came in. The goal is to eventually get to the point where Moderator can evaluate the comments and publish them without any human review.
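The Times hasn't published its actual triage code, but the idea is simple to illustrate: sort the queue by the model's score instead of by arrival time. A hypothetical sketch follows, in which the Comment fields and the rejection-likelihood values are invented for illustration and are not the Times's real data model.

```python
# Hypothetical sketch of the triage idea: surface the comments most
# likely to be approved first, rather than reviewing in arrival order.
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    reject_likelihood: float  # model's estimate a moderator would reject it

def review_queue(comments: list[Comment]) -> list[Comment]:
    """Order comments so the likeliest approvals come first."""
    return sorted(comments, key=lambda c: c.reject_likelihood)

queue = review_queue([
    Comment("Thoughtful point about the policy.", 0.05),
    Comment("You people are all idiots.", 0.95),
    Comment("Source? I'd like to read more.", 0.10),
])
for c in queue:
    print(f"{c.reject_likelihood:.2f}  {c.text}")
```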

According to Etim, comments are an important aspect of engaging readers. "A subscriber-based business needs high engagement to stay healthy," said Etim, who has seen the company's moderation efforts evolve since he started at the New York Times as a moderator in 2008.


Lucas Dixon, research scientist at Jigsaw, said "it's one of the first times in a long time that a newspaper is turning on more comments rather than turning them off."

Dixon noted that the way the New York Times is using Perspective is just one application of the tool. Publishers could also opt to warn people that their comments read as toxic before they post, for example (a rough sketch of that idea follows below). Publishers must apply for access and are placed on a waitlist. Dixon declined to say how many publishers are currently using the tool.
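That pre-submit warning could be as simple as checking a draft against a toxicity threshold before accepting it. A hypothetical sketch, where toxicity_score stands in for a real Perspective call (see the earlier sketch) and the 0.8 threshold is an assumption:

```python
# Hypothetical sketch of the "warn before posting" use Dixon describes.
def toxicity_score(text: str) -> float:
    # Stand-in for the Perspective API call sketched earlier.
    return 0.9 if "idiot" in text.lower() else 0.1

def presubmit_check(draft: str, threshold: float = 0.8) -> str:
    """Nudge the commenter if the draft scores above the threshold."""
    if toxicity_score(draft) >= threshold:
        return "This comment may violate our guidelines. Edit before posting?"
    return "Comment submitted."

print(presubmit_check("If you think differently you're an idiot."))
```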

Jigsaw is also working on other machine learning models for moderating, like determining whether comments are off-topic so that only the most relevant ones are posted.
