
Google says it won't use artificial intelligence for weapons

Google announced it would not use artificial intelligence for weapons or to "cause or directly facilitate injury to people," as it unveiled a set of principles for these technologies.

Chief executive Sundar Pichai, in a blog post outlining the company's artificial intelligence policies, noted that even though Google won't use AI for weapons, "we will continue our work with governments and the military in many other areas" including cybersecurity, training, and search and rescue. 

The news comes as Google faces pressure from employees and others over a contract with the US military, which the California tech giant said last week would not be renewed.

Recent reports, citing unnamed sources, said an executive on Google's cloud team told employees that the company would not seek to renew the controversial contract after it expires next year. The contract was reported to be worth less than $10 million to Google, but was thought to have the potential to lead to more lucrative technology collaborations with the military.

Google did not respond to a request for comment and has remained mum about Project Maven, which reportedly uses machine learning and engineering talent to distinguish people and objects in drone videos for the Defense Department. "We believe that Google should not be in the business of war," the employee petition reads, according to copies posted online.

"Therefore, we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology." The Electronic Frontier Foundation, an internet rights group, and the International Committee for Robot Arms Control (ICRAC) were among those who have weighed in with support. "As military commanders come to see the object recognition algorithms as reliable, it will be tempting to attenuate or even remove human review and oversight for these systems," ICRAC said in an open letter.

With inputs from AFP Relax News
