Facebook had to disable this AI because it had created a language of its own

A Facebook research team developed an AI to improve Facebook's chatbots. To test it, they set two of these agents to converse freely with each other. The result was entirely unexpected: the bots created a new language. At first the researchers thought it was an error, but it turned out the agents were communicating in a language developed by 'them'.


A language we do not understand could be dangerous.

Far from treating this as a good result, the researchers in charge of the project decided to 'switch it off', though not out of fear that it was the beginning of a malicious artificial intelligence.

Loss of control?

The reason was much more straightforward: if the AI decided to communicate in its own language, we would lose control over it.

It is true that the language the bots had created was much more efficient. However, we could never communicate in it, or at least doing so would be very complicated. The transcript of the conversation may look absurd at first glance, but it is not:

Bob: "i can i i everything else"
Alice: "balls have zero to me to me to me to me to me to me to me"


Repeating 'to me' so often does not mean that Alice has gone mad; it means that, in this case, she wants seven units. This language is in its own way more logical than ours (or, in this case, than English, the language the bots started with). The trick, as we said, is learning to understand it.
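As a purely illustrative toy, not a description of how Facebook's negotiation bots actually work internally, one could imagine decoding such a repetition-based message by simply counting the repeated token; the sketch below, with a made-up message and a hypothetical `decode_quantity` helper, shows the idea:

```python
# Toy sketch of a repetition-based "quantity" encoding.
# Hypothetical example only: it illustrates the idea that repeating a
# token could stand for a number of units, nothing more.

def decode_quantity(utterance: str, marker: str = "to me") -> int:
    """Read the number of non-overlapping occurrences of `marker` as a quantity."""
    return utterance.count(marker)

# Made-up message in the style of the excerpt above, asking for 7 units.
message = "balls have zero " + "to me " * 7
print(decode_quantity(message.strip()))  # -> 7
```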


This is not the first time something like this has happened. In the past, Google had to resort to the same solution when faced with a similar problem. As Facebook has now done, the safest course is to deactivate the system and look for a solution before control is lost completely.