For some time now, false posts and deepfakes have been circulating across social networks. Sometimes they are harmless, little more than a bad joke, but they can also cause serious problems. For this reason, companies like Facebook are choosing to deploy technologies that help identify this type of content.
Zuckerberg announced today that the company is working on new ways to combat this false content by partnering with third-party fact-checkers: 27 partners based in 17 countries around the world, whose mission is to verify the legitimacy of posts and catch this kind of false information.
However, we have seen how this content has evolved from simple text into well-produced videos and photos that look real at first glance and feature specific people, such as the recent DeepFakes involving porn actresses.
For this reason, Facebook is now presenting a plan against this new generation of false content: it has developed a machine learning model that uses various “engagement signals”, including user reports, to identify potentially false content.
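Facebook has not published the details of this model, but the idea of scoring content from engagement signals can be illustrated with a minimal sketch. The features below (report counts, share velocity, a comment-skepticism score) are hypothetical assumptions, not Facebook's actual inputs:

```python
# Minimal sketch: flag content for fact-checker review from engagement signals.
# The features and training data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per post: [user_reports, shares_per_hour, comment_skepticism]
X_train = np.array([
    [25, 300, 0.8],   # heavily reported, viral, skeptical comments
    [0,   5,  0.1],   # quiet, uncontested post
    [12, 150, 0.6],
    [1,  20,  0.2],
])
y_train = np.array([1, 0, 1, 0])  # 1 = was sent to fact-checkers

model = LogisticRegression()
model.fit(X_train, y_train)

# A high probability would queue the new post for manual review.
new_post = np.array([[18, 220, 0.7]])
print(model.predict_proba(new_post)[0][1])
```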
“We are working on new ways to detect whether a photo or video has been tampered with. These technologies will help us identify more potentially misleading photos and videos that we can send to fact-checkers for manual review.”
This machine learning works alongside a technique called OCR, or optical character recognition, which recognizes and extracts the text embedded in photos and then compares it against the headlines of fact-checkers' articles.
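A rough sketch of that OCR-and-compare step is below. The article does not name Facebook's tools, so pytesseract and difflib are illustrative choices, and the headlines and image file are made up:

```python
# Sketch: extract text from an image with OCR, then find the most similar
# fact-check headline. Library choices are assumptions, not Facebook's stack.
import difflib
import pytesseract
from PIL import Image

def extract_text(image_path: str) -> str:
    """Run OCR over the image and return the recognized text."""
    return pytesseract.image_to_string(Image.open(image_path))

def best_headline_match(image_text: str, headlines: list[str]) -> tuple[str, float]:
    """Return the fact-check headline most similar to the extracted text."""
    scored = [
        (h, difflib.SequenceMatcher(None, image_text.lower(), h.lower()).ratio())
        for h in headlines
    ]
    return max(scored, key=lambda pair: pair[1])

headlines = [
    "No, drinking bleach does not cure the flu",
    "Viral photo of shark on highway is fake",
]
text = extract_text("suspicious_meme.jpg")  # hypothetical input image
headline, score = best_headline_match(text, headlines)
print(f"Closest fact-check: {headline!r} (similarity {score:.2f})")
```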
According to the company, these “engagement signals” include comments from people on Facebook, and the flagged photos and videos are then sent to fact-checkers for review. In addition, the company notes that its third-party fact-checking partners have experience in evaluating false information and are trained in visual verification techniques such as reverse image search and image metadata analysis.
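The metadata-analysis part of that toolkit is easy to demonstrate: a photo's EXIF tags can reveal when and with what device it was taken. The sketch below uses Pillow; the specific checks and the file name are assumptions:

```python
# Sketch: read a photo's EXIF metadata, one verification technique the
# article mentions. The interpretation of the tags is up to the reviewer.
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(image_path: str) -> dict:
    """Return the photo's EXIF metadata as a {tag_name: value} dict."""
    exif = Image.open(image_path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

metadata = read_exif("suspicious_photo.jpg")  # hypothetical input image
# A capture date predating the claimed event is a strong out-of-context signal.
print(metadata.get("DateTime"), metadata.get("Model"), metadata.get("Software"))
```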
These expert reviewers are equipped to assess the truth or falsity of a photo or video by combining these skills with other journalistic practices, such as drawing on research from experts, academics, or government agencies.
The company has divided this false content into three categories:
- Manipulated or fabricated.
- Taken out of context.
- Making a false claim in text or audio.