Every social website has harassment. It's difficult to combat and even define.
As hate speech and online bullying come to a head, the tech industry is working on solutions while keeping free speech alive.
At Fusion's Real Future Fair in Oakland on Tuesday, technologists discussed how to protect people online, and what sorts of constraints -- technical and sociological -- handicap safety and privacy.
"A big flaw with engineers in particular is relying too much on data and thinking that code can be impartial," said Danielle Leong, an application engineer on the community and safety team at GitHub.
"And it's not -- it's built by people, and people have their own internal biases. They're going to build that into their code no matter how impartial they think they are."
Leong joined Caroline Sinders, a Buzzfeed Eyebeam Open Lab fellow and machine learning designer, on "An Internet without A**holes," a panel that explored companies' responsibilities to cut down on harassment.
The panel's timing coincided with Twitter's rollout of new anti-harassment tools, which included an option for reporting hate speech.
Twitter (TWTR) has been widely criticized for its failure to combat online harassment -- an issue that was particularly salient during the U.S. presidential election and illustrated by the millions of anti-Semitic tweets targeting journalists this year.
Sinders said Twitter's efforts to combat the issue are like trying to turn around the Titanic, noting the social behemoth can't flip a switch to solve widespread harmful activity. When Twitter builds anti-harassment tools, they must work across all platforms: iOS, Android, desktop, mobile web, and wherever tweets appear across the internet, Sinders said.
Social coding site GitHub has also experienced harassment problems, both internally and on its site. Leong's team is responsible for identifying issues and building tools to prevent further harassment, but there is no simple solution. She said creating a document outlining acceptable conduct on GitHub was a 19-month-long project.
"It took a really long time to get to a consensus about what harassment looks like, but that's one of the main things [companies] have to do," Leong added.
Definitions and codes of conduct vary by site: GitHub is a productivity tool that programmers use to do their jobs, while Twitter is a sharing platform. So GitHub's standards for acceptable content will differ from Twitter's.
Leong emphasized that the people making those decisions should reflect the diversity of the people using the products, which provides more holistic insight into how users might abuse one another. At GitHub, the community and safety team is half women of color and 30% trans women -- far more diverse than most companies in Silicon Valley, where the average business is disproportionately white and male.
Establishing a definition of harassment and inappropriate behavior is crucial to anti-harassment efforts, and companies should think about how people could misuse tools or systems throughout the creation process, Sinders said.
When building new features, it's important for Twitter, Facebook and other social companies to think through every possible scenario: not just how people will use a feature as intended, but how it could be abused, she added.
While tools exist to police speech, quelling mean or argumentative behavior entirely isn't necessarily the ideal solution, Sinders stressed.
"I am really hesitant to say you can code out dickishness. Because who gets to decide those parameters and how do we make sure those parameters are ethical?" Sinders told CNNMoney. "I do think that if I want to leave a conversation, there should be a tool to let me do that."