Thousands of the world's foremost experts on artificial intelligence, worried that any technology they develop could be used to kill, vowed Wednesday to play no role in the creation of autonomous weapons.
In a letter published online, 2,400 researchers in 36 countries joined 160 organizations in calling for a global ban on lethal autonomous weapons. Such systems pose a grave threat to humanity and have no place in the world, they argue.
"We would really like to ensure that the overall impact of the technology is positive and not leading to a terrible arms race, or a dystopian future with robots flying around killing everybody," said Anthony Aguirre, who teaches physics at the University of California-Santa Cruz and signed the letter.
Flying killer robots and weapons that think for themselves remain largely the stuff of science fiction, but advances in computer vision, image processing, and machine learning make them all but inevitable. The Pentagon recently released a national defense strategy calling for greater investment in artificial intelligence, which the Defense Department and think tanks like the Center for a New American Security consider the future of warfare.
"Emerging technologies such as AI offer the potential to improve our ability to deter war and enhance the protection of civilians in the form of fewer civilian casualties and less collateral damage to civilian infrastructure," Pentagon spokesperson Michelle Baldanza said in a statement to CNNMoney.
"This initiative highlights the need for robust dialogue among [the Department of Defense], the AI research community, ethicists, social scientists, impacted communities, etc. and having early, open discussions on ethics and safety in AI development and usage."
Although the US holds the advantage in this field, China is catching up. Other countries are gaining ground as well. Israel, for example, has sold fully autonomous drones capable of attacking radar installations to China, Chile, India, and other countries.
The development of artificially intelligent weapons surely will continue despite the opposition of leading researchers such as Demis Hassabis and Yoshua Bengio and premier laboratories like DeepMind Technologies and Element AI. Their refusal to "participate in [or] support the development, manufacture, trade, or use" of autonomous killing machines amplifies similar calls by others, but may be largely symbolic.
"This may have some impact on the upcoming United Nations meetings on autonomous weapons at the end of August," said Paul Scharre at the Center for a New American Security and author of "Army of None," a book on autonomous weapons. "But I don't think it will materially change how major powers like the United States, China, and Russia approach AI technology."
The researchers announced their opposition during the International Joint Conference on Artificial Intelligence in Stockholm. The Future of Life Institute, an organization dedicated to ensuring artificial intelligence doesn't destroy humanity, drafted the letter and circulated it among academics, researchers, and others in the field.
"Artificial intelligence (AI) is poised to play an increasing role in military systems," the letter states in its opening sentence. "There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI."
Delegating lethal force to machines, the letter states, falls squarely in the unacceptable category: "we the undersigned agree that the decision to take a human life should never be delegated to a machine."
Machines that think and act on their own raise all sorts of chilling scenarios, especially when combined with facial recognition, surveillance, and vast databases of personal information. "Lethal autonomous weapons could become powerful instruments of violence and oppression," the letter states.
Many of the leading US tech companies are grappling with the very issues the Future of Life Institute (which is funded in part by Elon Musk) raises in its letter. In June, Google (GOOG) CEO Sundar Pichai outlined the company's "AI principles," making clear that Google will not develop AI for weapons or other technologies whose principal purpose is to cause injury. The announcement followed an employee backlash against Google's role in a Pentagon research project that critics considered a step toward autonomous weapons. Jeff Dean, Google's head of AI research, is among those who have signed the letter.
Aguirre said he's hopeful that major companies will add their names to Wednesday's letter, or at least follow Google's lead in stipulating where and how their AI technology can be used.
"There's a limited window between now and when these things really start to be widely deployed and manufactured," Aguirre said. "Consider nuclear weapons —lots of people would like to not have them, but getting rid of them now is extraordinarily hard."