Norman always sees the worst in things.
That's because Norman is a "psychopath" powered by artificial intelligence and developed by the MIT Media Lab.
Norman is an algorithm meant to show how the data behind AI matters deeply.
MIT researchers say they trained Norman on written captions describing graphic images and videos of death posted in the "darkest corners of Reddit," a popular message board platform.
The team then examined Norman's responses to inkblots from a Rorschach psychological test and compared them with the responses of another algorithm that had standard training. That algorithm saw flowers and wedding cakes in the inkblots; Norman saw a man being fatally shot and a man killed by a speeding driver.
"Norman only observed horrifying image captions, so it sees death in whatever image it looks at," the MIT researchers behind Norman told CNNMoney.
Named after the main character in Alfred Hitchcock's "Psycho," Norman "represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms," according to MIT.
We've seen examples before of how AI is only as good as the data it learns from. In 2016, Microsoft (MSFT) launched Tay, a Twitter chat bot. At the time, a Microsoft spokeswoman said Tay was a social, cultural and technical experiment. But Twitter users set out to provoke the bot into saying racist and inappropriate things, and they succeeded: as people chatted with Tay, the bot picked up language from them. Microsoft ultimately pulled the bot offline.
The MIT team thinks Norman can be retrained by learning from human feedback. Humans can take the same inkblot test and add their responses to the pool of training data.
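In practice, retraining along those lines would look like fine-tuning the same captioning model on the crowd-sourced responses. A rough sketch, again using the same hypothetical interface since the project's code is not public:

```python
# Hypothetical sketch of retraining Norman on crowd-sourced inkblot captions.
# None of these names come from MIT's project; they only illustrate the idea
# of fine-tuning a biased caption model on human-provided descriptions.
from captioning import load_captioner  # assumed module, not a real library

norman = load_captioner(training_data="reddit_death_captions")

# Pairs of (inkblot image path, caption submitted by a human volunteer).
human_feedback = [
    ("rorschach_card_01.png", "a vase of flowers on a table"),
    ("rorschach_card_02.png", "two people holding hands"),
    # ... plus the rest of the roughly 170,000 responses collected so far
]

# Fine-tune on the new pairs so benign descriptions gradually
# outweigh the original violent training captions.
norman.fine_tune(human_feedback, epochs=3)
```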
The researchers say they have received more than 170,000 responses to the test, most of which poured in over the past week, following a BBC report on the project.
MIT has explored other projects that incorporate the dark side of data and machine learning. In 2016, some of the same Norman researchers launched "Nightmare Machine," which used deep learning to transform pictures of faces and places so they look like they came from a horror film. The goal was to see if machines could learn to scare people.
MIT has also explored data as an empathy tool. In 2017, researchers created an AI tool called Deep Empathy to help people better relate to disaster victims. It visually simulated what your hometown would look like if the same disaster hit there.