Faced with the controversy generated after Google employees demonstrated against the company’s participation in the Pentagon’s Project Maven, claiming that it ran counter to the company’s “Don’t be evil” principle, its managers not only confirmed that they would not continue to participate in the project, but also published a series of “AI (Artificial Intelligence) Ethics Principles” to make clear the main objective of their AI projects.
Google publishes its “AI Ethics Principles” and assures that it will not use AI in weapons.
In an extensive article, Sundar Pichai, Google’s CEO, announced the seven principles on which the company will base its projects in the future; according to Pichai, these will act as standards or commandments that will guide the research and development of its products.
Among these principles, related primarily to the industrial, educational, political, and social benefits of AI, the CEO states: “We will seek to avoid unfair impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.” He further assured that they will continue to develop appropriately cautious AI systems, and that “we will seek to develop them in accordance with best practices in AI safety research.”
In addition, Pichai emphasized the areas in which they will not apply their AI work, among which are:
Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
Technologies that cause or are likely to cause overall harm.
Technologies that collect or use information for surveillance that violates internationally accepted standards.
Technologies whose purpose violates the widely accepted principles of international law and human rights.
Although Google’s CEO said they would not work on weapons, he clarified: “While we are not developing AI for use in weapons, we will continue to work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ health care, and search and rescue.”
It should be noted that this section would include Project Maven, since it focuses on the use of military drones that employ algorithms based on Artificial Intelligence techniques, such as deep learning and machine learning, to identify people or objects in video from war zones more precisely (contradicting what was said earlier about not applying AI to “Technologies that collect or use information for surveillance that violates internationally accepted standards”).
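For illustration, the following is a minimal sketch of what such frame-by-frame object detection in video can look like, using a generic pretrained detector from PyTorch/torchvision. The model choice, the file name (example.mp4), the sampling rate, and the confidence threshold are all illustrative assumptions; nothing here reflects the actual Maven system, whose details are not public.

import torch
from torchvision.io import read_video
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

# Generic pretrained detector (COCO classes); purely illustrative.
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

# Hypothetical input video; frames come back as uint8 [T, C, H, W].
frames, _, _ = read_video("example.mp4", output_format="TCHW")
frames = frames.float() / 255.0  # model expects floats in [0, 1]

with torch.no_grad():
    # Sample roughly one frame per second, assuming ~30 fps footage.
    for i, frame in enumerate(frames[::30]):
        detections = model([frame])[0]
        for label, score in zip(detections["labels"], detections["scores"]):
            if score > 0.8:  # arbitrary confidence threshold
                name = weights.meta["categories"][label]
                if name == "person":
                    print(f"frame {i * 30}: person detected ({score:.2f})")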
This does not make it clear whether the company will actually terminate its Project Maven contract in 2019, as Google representatives told Gizmodo. Since the contract with the U.S. military is far from over, it is unclear whether Google will enforce these principles immediately or only after the agreement ends. This certainly leaves much to discuss.