Google said Thursday that it feels a sense of responsibility and will not design or deploy its AI technology in areas “whose principal purpose is to cause or directly facilitate injury to people.” The company said it will avoid uses of AI where there is a material risk of harm, except where it believes the benefits substantially outweigh the risks.
The company will continue its work with governments and the military in many areas, including cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue, Chief Executive Sundar Pichai said in a blog post.
Pichai released what he termed “objectives for AI applications,” following a revolt by thousands of employees against the company’s work with the U.S. military to identify objects in drone video.
According to Pichai, Google understands the potential impact of such a powerful technology and, as a self-proclaimed leader in AI, wants to get it right.
“We believe these principles are the right foundation for our company and the future development of AI,” said Pichai.
“While this is how we’re choosing to approach AI, we understand there is room for many voices in this conversation. As AI technologies progress, we’ll work with a range of stakeholders to promote thoughtful leadership in this area.”