Seeing the Round Corners

July 29, 2019

PART V – FAKE NEWS AND THE SYNTHETIC MEDIA

While last week’s column and the continuing ones in coming weeks may seem to be more to the benefit of journalists, the typical reader should take advantage of what they offer.

Today’s world of information provides readers with a wide array of sources not available decades ago, before computers and the internet. This series of columns on Fake News and the Synthetic Media affords the typical reader a way of determining whether on-air and print media are fake and/or synthetic.

How can you detect deep fakes -
The Wall Street Journal’s (WSJ’s) Media Forensics Committee is now “working on solutions and testing new tools that can help detect or prevent forged media. News organizations can consider multiple approaches to help authenticate media if they suspect alterations.” Here are the WSJ’s suggestions, provided by Natalia V. Osipova, a senior video journalist, for detecting deep fakes:

  • employ technical ways to check whether footage has been altered;
  • go through the footage frame by frame in a video editing program to look for any unnatural shapes and added elements; or
  • do a reverse image search.

Osipova offered this sage piece of advice about how to detect deep fakes:  “Reach out to the source and the subject directly, and use your editorial judgment.”

While such a tactic is feasible for a journalist or news organization, it gives the reader reason to question, if such checks are done, how so much fake news gets the airplay that it does:

  • upon receipt of suspicious footage, first contact the source that submitted it;
  • find out how the person obtained the footage and when and where it was filmed;
  • obtain as much information as possible;
  • ask for further proof of the claims;
  • the key is to verify everything;
  • when the video is online and the uploader is unknown, explore additional questions: Who allegedly filmed the footage? Who published and shared it, and with whom?

Tools such as InVID or other metadata viewers can help provide answers (a minimal metadata-reading sketch follows the list below):

  • content verification organizations such as Storyful and the Associated Press are used internally by the WSJ when examining the source;
  • the WSJ handles authentication of photos similarly, using new tools such as TruePic and Serelay, which use blockchain to do so; and
  • the WSJ is adamant and stresses this point – humans in the newsrooms are at the center of the process of examining the source.
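
For readers curious what a metadata viewer actually surfaces, here is a minimal sketch (an illustration only, not InVID or the WSJ’s internal tooling) that reads the EXIF tags of a hypothetical “photo.jpg” using Python’s Pillow package:

    # A minimal sketch of metadata inspection; an illustration only,
    # not InVID or the WSJ's internal tooling. Assumes the Pillow
    # package and a hypothetical file named "photo.jpg".
    from PIL import Image
    from PIL.ExifTags import TAGS

    img = Image.open("photo.jpg")
    exif = img.getexif()

    if not exif:
        print("No EXIF metadata found (social platforms often strip it).")
    for tag_id, value in exif.items():
        # Map numeric EXIF tag IDs to readable names, e.g. DateTime, Model.
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

Fields such as the capture date, camera model and editing software can corroborate or contradict a source’s account, though metadata is easily stripped or forged and is never proof on its own.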

The chief technology officer at the WSJ, Rajiv Pant, states, “Technology alone will not solve the problem,” and “The way to combat deep fakes is to augment humans with artificial intelligence tools.”

Finding older versions of the footage -
In olden days, finding possible older versions of a video to determine whether any aspects of it had been manipulated was a daunting task. With today’s technology, reverse image search engines like Tineye or Google Image Search make such a task relatively simple, as “deep fakes are often based on footage that is already available online.”
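
A reverse image search itself runs on the search engine’s servers, but the underlying idea of matching footage against known images can be illustrated locally. Here is a minimal sketch, assuming the third-party Pillow and imagehash Python packages and hypothetical file names, that compares a perceptual hash of a suspect frame against reference stills:

    # A minimal local analog to reverse image search (not how Tineye or
    # Google work internally): compare perceptual hashes of images.
    # Assumes the Pillow and imagehash packages; file names are hypothetical.
    from PIL import Image
    import imagehash

    suspect = imagehash.phash(Image.open("suspect_frame.png"))

    for ref in ("reference_a.png", "reference_b.png"):
        distance = suspect - imagehash.phash(Image.open(ref))
        # Small Hamming distances suggest the same underlying footage,
        # even after resizing or re-encoding.
        print(f"{ref}: distance {distance}")

Because a perceptual hash survives resizing and mild re-encoding, a match against archival frames can reveal the original footage a deep fake was built on.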

Examining the footage -
Editing programs such as Final Cut “enable journalists to slow footage down, zoom the image and look at it frame by frame or pause multiple times.” Such a process allows journalists to find obvious glitches such as “glimmering and fuzziness around the mouth or face, unnatural lighting or movements, and differences between skin tones,” which are telltale signs of a deep fake.
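
An editing program is the natural tool for this inspection, but the stills can also be prepared programmatically. Here is a minimal sketch, assuming Python’s OpenCV bindings (the opencv-python package) and a hypothetical file name, that saves every tenth frame of a video for close review:

    # A minimal sketch of preparing frame-by-frame review with OpenCV;
    # an assumption, not the WSJ's workflow. The file name is hypothetical.
    import cv2

    STEP = 10                              # save every 10th frame
    cap = cv2.VideoCapture("suspect.mp4")

    index = 0
    while True:
        ok, frame = cap.read()             # ok is False at end of video
        if not ok:
            break
        if index % STEP == 0:
            cv2.imwrite(f"frame_{index:05d}.png", frame)
        index += 1
    cap.release()

Saving stills rather than scrubbing makes it easy to compare skin tones and lighting across distant frames side by side.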

Unnatural movements such as a shifting chin or a growing neck are facial details that show footage is faked. Additional clues may be small edits in the foreground or background of the footage. Care should be exercised to determine whether an object may have been inserted into or deleted from a scene in a way that changes the context of the video. Sometimes it may be difficult to conclusively determine whether a video has been forged because of audio or video compression.

Hints that audio may have been generated by artificial intelligence are discernible by listening for unnatural intonation, irregular breathing, metallic-sounding voices and obvious edits. Other hints of creation by artificial intelligence are image artifacts, glitches and imperfections, though these can also be introduced by video compression.
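
On the audio side, a spectrogram often makes edits visible that are hard to hear. Here is a minimal sketch, assuming the librosa and matplotlib Python packages and a hypothetical file name:

    # A minimal sketch for visual audio inspection; assumes the librosa
    # and matplotlib packages. "clip.wav" is a hypothetical file name.
    import librosa
    import librosa.display
    import matplotlib.pyplot as plt
    import numpy as np

    y, sr = librosa.load("clip.wav", sr=None)   # load at native sample rate
    S = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)

    fig, ax = plt.subplots(figsize=(10, 4))
    librosa.display.specshow(S, sr=sr, x_axis="time", y_axis="hz", ax=ax)
    ax.set_title("Inspect for hard seams, missing breaths, metallic bands")
    plt.tight_layout()
    plt.show()

Hard vertical seams, missing breath noise between phrases and unnaturally clean frequency bands are the visual counterparts of the audible hints above.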

Next week, more on the techniques used to create deep fakes.

The reader's comments or questions are always welcome. E-mail me at doris@dorisbeaver.com.