Harassment is the thorn in Twitter's side. It has stymied the platform's growth and continues to frustrate users.
To step up enforcement of its rules against hate speech, the company announced a new reporting tool and an improved mute feature on Tuesday.
Twitter has played whack-a-mole with harassment for years, trying to balance free speech against the need to determine exactly what line tweets shouldn't cross. Over the years, it has slowly improved its reporting tools, and in February, the company established a council to advise it on how to handle harassment.
Yet the company routinely fails to address ongoing harassment that very obviously threatens people.
Hate speech has always been against Twitter's rules, and users could already report it, whether it targeted them or another account. The new option lets users flag hateful content as its own category and is meant to make it easier to report hate speech to Twitter.
Last year, the company updated its policies to define hate speech, noting that language meant to "harass, intimidate, or use fear to silence another user's voice" is not allowed on the platform.
"Our hope is that by creating this option it will not only make it clearer to people that they can report this kind of content, but also make it clearer why they are reporting certain types of content," Del Harvey, Twitter's vice president of trust and safety, told CNNMoney.
The new feature appears as a button in the reporting flow. Instead of saying something is "abusive or harassing," it lets you mark tweets as hateful. Harvey said the new tool will also improve how Twitter handles bystander reports -- reports filed by people who witness harassment but aren't its target.
Because hate speech varies by language, culture and context, Twitter trained its global support teams on its historical and cultural implications. For instance, Twitter employees learned common tropes and phrases associated with anti-Semitism, as well as slang terms used to refer to refugees or migrants, Harvey said.
Anyone at Twitter (TWTR) who works on safety issues or handles support completed the training. Twitter declined to give exact numbers on how many people deal with harassment reports, but a spokeswoman said there are people working on it around the clock. Humans review each issue and decide whether or not it's abusive.
Twitter activity during the 2016 U.S. presidential election is an unfortunate example of the overwhelming hate speech spewed on the platform. Anti-Semitic Twitter attacks increased drastically as millions of tweets targeted journalists in particular.
Screenshots frequently shared on the platform itself show Twitter failing to recognize abusive and harmful language as harassment, and even high-profile celebrities have left the service or taken breaks from it.
To give users more control over the content they see, Twitter is expanding its mute feature to let people filter out words and phrases they don't want to see in their mentions. Over time, the company will expand the feature to anywhere you see tweets, like your timeline or search results. Twitter launched the original mute feature -- which lets you mute accounts without blocking them -- in 2014.
The new features are part of an ongoing effort to curb harassment. The company knows it still has considerable work to do to make Twitter better.
"Every single that thing we do is going to be a work in progress," Harvey said. "Our reporting system is going to continue changing, we're going to continue iterating on it, continue making improvements to it."