
Top AI Developers Sign Pledge Against the Development of Killer Robots

Since the advent of AI, a persistent fear among futurists has been that the technology could be misused to the detriment of mankind, specifically through the development of killer robots. Considering the destructive weapons humans have created in under a century, the threat of an army of AI-powered killing machines has always seemed real.

Now, a group of top AI researchers from around the world has come together to sign a pledge declaring that they will never take part in the development or manufacture of robots that can kill human beings.

The moral issue

The pledge was signed by some of the industry’s leading visionaries and brightest minds, including Elon Musk, Toby Walsh, and Stuart Russell. Organizations such as Google DeepMind, the European Association for AI (EurAI), and the XPRIZE Foundation have also signed the declaration, lending their support to the cause. In total, the pledge has garnered the support of 2,400 individuals and 160 AI companies. The effort was organized by the Future of Life Institute (FLI).

A group of top AI researchers from around the world has come together to sign a pledge declaring that they will never take part in the development or manufacture of robots that can kill human beings. (Image: publicdomainpictures / CC0 1.0)

Toby Walsh, a leading thinker and professor of AI at UNSW in Sydney, was driven to sign the pledge by the moral implications of using artificial intelligence to take human lives. “We cannot hand over the decision as to who lives and who dies to machines… They do not have the ethics to do so,” he said in an interview with Business Insider.

While simple machines have no ethics of their own, since their behavior is entirely determined by what their creators have coded into them, advanced AI is an altogether different ballgame. Given that it can think and decide for itself, the possibility that it might judge the killing of a large number of people to be “ethical” is something that could fracture the moral worldview of many people.

Meanwhile, Max Tegmark, president of FLI, welcomed the support the pledge has garnered. “I’m excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect,” he told Techvibes.

Effectiveness of implementation

While many have praised the pledge as a step toward a global framework against the misuse of AI, others question whether it will be enough to prevent the creation of killer robots.

While simple machines have no ethics of their own, since their behavior is entirely determined by what their creators have coded into them, advanced AI is an altogether different ballgame. (Image: pixabay / CC0 1.0)

After all, weapons development firms, like any other businesses, can only survive by making profits. If they determine that they can make more money by manufacturing and selling AI-powered killing machines, it would be naïve to assume they will pass up the opportunity. Moreover, governments themselves might fund or encourage the development of such dangerous technology.

Yoshua Bengio of the Montreal Institute for Learning Algorithms is hopeful that such a pledge can, to a large extent, deter businesses from investing in AI killing machines. “This approach actually worked for landmines, thanks to international treaties and public shaming, even though major countries like the U.S. did not sign the treaty banning landmines. American companies have stopped building landmines,” he told The Guardian.

While this is definitely positive news, it is far too early to assume that companies and governments will halt the development of AI as a weapon altogether.


Vision Times Staff
