The artificial intelligence company DeepMind has decided to open an ethics unit to address fears about the control AI could exert over humans. Artificial intelligence (AI) has gained considerable momentum in recent years, thanks to new software technologies, particularly those based on deep neural networks. These have solved problems that seemed unsolvable for many years, the game of Go being one example.
On the other hand, as AI began to achieve significant successes, it also drew criticism that at times reveals ignorance of the subject. Even Hawking has repeatedly said that AI will kill the human race and that we are in obvious danger. What Hawking perhaps overlooks is that these much-publicized advances often cannot be extended to other activities or domains, so the alleged danger of machines taking the lead and deciding "smartly," according to their own criteria, is far from happening any time soon.
However, DeepMind, the Alphabet (Google) company, has decided to set up an ethics unit to address all these fears about the control AI could exert over humans. The idea is to launch an "ethics and society" group to study the impact of new technologies on our current social environments.
The announcement was made by DeepMind, which is based in London and was bought by Google a few years ago. "As scientists developing AI technologies, we have a responsibility to conduct and support open research into the implications of our work," said DeepMind's Verity Harding and Sean Legassick.
"At DeepMind, we start from the premise that all AI applications should remain under meaningful human control and be used for socially beneficial purposes. Understanding what this means in practice requires rigorous scientific inquiry into the most sensitive challenges we face," Harding and Legassick point out, adding, "If AI technologies are to serve society, they must be shaped by society's priorities and concerns."