New technology brings great promise, and just as many problems. Smartphones put access to infinite knowledge in our pockets, but they also fueled the rise of tech addiction. The social media platforms that connected billions of people were turned against democracy.
And so it is with artificial intelligence, which could fundamentally change the world while contributing to greater racial bias and exclusion.
Much of the focus on the downsides of artificial intelligence has been on things like crashing self-driving cars and the rise of machines that kill. Or, as CNN commentator Van Jones put it at a discussion on the topic last week, "What about Terminator?"
But many of the researchers behind this technology say it could pose a greater threat to society by adversely impacting the poor, the disenfranchised, and people of color.
"Every time humanity goes through a new wave of innovation and technological transformation, there are people who are hurt and there are issues as large as geopolitical conflict," said Fei Fei Li, the director of the Stanford Artificial Intelligence Lab. "AI is no exception."
These are not issues for the future, but the present. AI powers the speech recognition that makes Siri and Alexa work. It underpins useful services like Google Photos and Google Translate. It helps Netflix recommend movies, Pandora suggest songs, and Amazon push products. And it's the reason self-driving cars can drive themselves.
One part of AI is machine learning, in which a system analyzes massive amounts of data to make decisions and recognize patterns on its own. That data must be carefully considered so that it doesn't reflect or contribute to existing biases.
"In AI development, we say garbage in, garbage out," Li said. "If our data we're starting with is biased, our decision coming out of it is biased."
We've already seen examples of this. A recent study by Joy Buolamwini at the M.I.T. Media Lab found facial recognition software has trouble identifying women of color. Tests by The Washington Post found that accents often trip up smart speakers like Alexa. And an investigation by ProPublica revealed that software used to sentence criminals is biased against black Americans.
Addressing these issues will grow increasingly urgent as things like facial recognition software become more prevalent in law enforcement, border security, and even hiring.
Many of those who gathered at last week's discussion, "AI Summit - Designing a Future for All," said new industry standards, a code of conduct, greater diversity among the engineers and computer scientists developing AI, and even regulation would go a long way toward minimizing these biases.
Technical approaches can help, too. The Fairness Tool, developed by Accenture, scours data sets to find any biases and correct problematic models.
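The article doesn't describe how Accenture's tool works internally. As an illustration of the kind of check a fairness tool might run, here is a hedged sketch of one standard audit, the disparate impact ratio, which compares positive-outcome rates across groups; the data and function name are hypothetical:

```python
import numpy as np

def disparate_impact(outcomes: np.ndarray, groups: np.ndarray) -> float:
    """Ratio of positive-outcome rates: group 1 relative to group 0."""
    rate_0 = outcomes[groups == 0].mean()
    rate_1 = outcomes[groups == 1].mean()
    return rate_1 / rate_0

# Hypothetical audit data: 1 = favorable decision.
outcomes = np.array([1, 1, 1, 1, 0, 1, 1, 0, 0, 0])
groups   = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print(f"disparate impact: {disparate_impact(outcomes, groups):.2f}")
# 0.50 here; ratios below 0.8 are a common red flag (the "four-fifths rule")
```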
"One naive way people were thinking about removing bias in algorithms is just, 'Oh, I don't include gender in my models, it's fine. I don't include age. I don't include race,'" said Rumman Chowdhury, who helped develop the tool. But biases aren't created solely feeding a facial recognition algorithm a diet of white faces.
"Every social scientist knows that variables are interrelated," said. "In the US for example, zip code [is] highly related to income, highly related to race. Profession [is] highly related to gender. Whether or not that's the world you want to be in that is the world we are in."
Diversifying the backgrounds of those creating artificial intelligence and applying it to everything from policing to shopping to banking will go a long way toward addressing the problem, too. This goes beyond diversifying the ranks of engineers and computer scientists building these tools to include the people pondering how they are used.
"We need technologists who understand history, who understand economics, who are in conversations with philosophers," said Marina Gorbis, executive director of the Institute for the Future. "We need to have this conversation because our technologists are no longer just developing apps, they're developing political and economic systems."
Those conversations, she said, are essential to ensuring AI does more good than harm.