Cinema has taught us to fear artificial intelligences like Skynet, capable of wielding all kinds of weapons at will. To keep fiction from becoming reality, a group of experts in the field has signed a pledge promising never to develop intelligent weapons. The signatories include Elon Musk (whose past week was not the best of his life), the three co-founders of DeepMind, Google's most advanced artificial intelligence project, a number of experts from well-known technology companies such as Skype, and scientists from many American universities.
Humans vs. Killer Robots
The text warns of the dangers of weapon systems that use artificial intelligence to “select and attack targets without human intervention”, a threat it frames as both moral and pragmatic. The signatories argue that the decision to take a human life should “never be delegated to a machine” and that the widespread use of this technology would be “dangerously destabilizing for every country and individual”.
The letter was originally presented at the International Joint Conference on Artificial Intelligence in Stockholm, organized by the Future of Life Institute. Yet despite the experts' agreement, there is still no international regulation on how autonomous and intelligent weapons should be treated. One reason is the difficulty of determining which systems can and cannot be considered intelligent. An anti-aircraft gun that locates targets on its own but does not fire autonomously, for example, is hard to place in either category.
The convergence of the technology world and the world of war has already stirred unrest at companies like Google and Microsoft. Their employees have protested against Pentagon development programs that they believe could be used to wage war with autonomous weapons.