Elon Musk’s AI text generator, a “deepfake for text”, is too dangerous for publication

Artificial intelligence continues to change our lives at an accelerating pace, with advanced applications we had not even imagined. Researchers develop these applications for the benefit of users, to make many tasks easier. But they can also be put to bad use by malicious actors seeking to cause harm or to profit at others’ expense. This risk can lead some researchers to withhold their work for fear that it will be diverted from its intended use.

Following this logic, OpenAI, the non-profit organization backed by Elon Musk, has decided not to publish the full results of its latest research for fear of misuse, according to an article published yesterday in the Guardian. The creators of a revolutionary artificial intelligence system capable of writing news reports and works of fiction, dubbed “deepfakes for text”, felt obliged not to release their work publicly because of the potential for abuse, breaking with their usual practice of publishing both their research and the related source code.

In fact, the new AI model developed by OpenAI’s researchers, called GPT-2, is so good, and the risk of malicious use so high, that the organization wants to take more time to discuss the consequences of this technological advance before sharing the full research with the public. “This seems very real,” says David Luan, Vice President of Engineering at OpenAI, of the text generated by the system. He and his colleagues began to imagine how such a system could be used for hostile purposes. “A malicious actor would be able to produce high-quality fake messages,” says Luan.
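
For readers who want a feel for what such a text generator does, OpenAI did release a much smaller version of GPT-2 alongside its paper. As a minimal sketch, assuming the open-source Hugging Face transformers library and its hosted “gpt2” checkpoint (neither is mentioned in the article), generating a continuation of a prompt looks like this:

    # Minimal sketch: text generation with the small, publicly released
    # GPT-2 checkpoint. The Hugging Face "transformers" library and the
    # "gpt2" model name are assumptions of this example, not part of the
    # article.
    from transformers import pipeline

    # Download the small GPT-2 model and build a text-generation pipeline.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "Scientists announced today that"
    # Sample one continuation of up to 50 tokens from the model.
    outputs = generator(prompt, max_length=50, num_return_sequences=1)
    print(outputs[0]["generated_text"])

The model continues the prompt with fluent but entirely invented text, which is precisely the property that worried OpenAI’s researchers.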

For this reason, OpenAI chose to publish a research paper on its results rather than the complete model and the 8 million web pages used to train it. This hesitation, shared by OpenAI and many other organizations, stems from the lack of a basic ethical framework for AI, one that would make it possible to assess the impact of a trained model in advance.