Seeing the Round Corners

July 22, 2019

PART IV – FAKE NEWS AND THE SYNTHETIC MEDIA

During and since the 2016 presidential election, fake news has been a far greater problem than most Americans realize. Trust in the media (on-air and print) has been a valued tradition, grounded in the constitutional guarantee of freedom of the press and in freedom-of-information legislation.

The Wall Street Journal (WSJ) has taken the threat of deep fakes seriously. Misinformation came to the forefront during the 2016 presidential election. Verification poses a major challenge with the new type of synthetic media known as deep fakes, which is viewed as difficult to track, according to the WSJ.

To deal with deep fakes and the misinformation they are capable of creating, the WSJ has formed the WSJ Media Forensics Committee (the WSJ Committee), led by the Ethics and Standards and Research and Development teams. The WSJ Committee is “comprised of video, photo, visuals, research, platform and news editors who have been trained in deep fake detection.”

The approach to dealing with the problem is to host training seminars for reporters, develop newsroom guides and collaborate with academic institutions such as Cornell Tech to identify ways technology can be used in deep fake detection. “Raising awareness in the newsroom about the latest technology is critical,” said Christine Glancey, a deputy director on the Ethics and Standards team who spearheaded the forensics committee.

What follows is an overview of deep fakes, with details compiled by the WSJ Committee:

  • How most deep fakes are created;
  • Techniques used to create deep fakes;
  • How you can detect deep fakes;
  • Finding older versions of the footage;
  • Examining the footage;
  • The democratization of deep fake creation adds to the challenge;
  • Object extraction;
  • Weather alteration;
  • Artificial voices; and
  • Deep fakes’ ramifications for society.

How most deep fakes are created -
Production of most deep fakes is based on a machine learning technique called “generative adversarial networks” (GANs). Such an approach can be used by forgers to swap the faces of two people (say, a politician and an actor). For reference, an algorithm is defined as a set of unambiguous instructions that, given some set of initial conditions, can be performed in a prescribed sequence to achieve a certain goal, and that has a recognizable set of end conditions.

For deep fakes, the algorithm “looks for instances where both individuals showcase similar expressions and facial positioning. In the background, artificial intelligence algorithms are looking for the best match to juxtapose both faces.”
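
To make the GAN idea concrete, here is a minimal, generic adversarial training loop written in PyTorch. This is an illustrative sketch of the technique only, not the WSJ's tooling or any real deep fake pipeline; the tiny fully connected networks, the random stand-in data and all of the sizes are assumptions chosen to keep the example short.

```python
# Minimal sketch of a generative adversarial network (GAN): a generator learns
# to produce convincing samples while a discriminator learns to tell real from
# generated. Toy sizes and random stand-in "real" data throughout.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM, BATCH = 16, 64, 32    # assumed toy dimensions

generator = nn.Sequential(                  # the "forger": noise -> fake sample
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(              # the "detective": sample -> real/fake score
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(BATCH, DATA_DIM)     # stand-in for real training images
    fake = generator(torch.randn(BATCH, LATENT_DIM))

    # Train the discriminator to score real samples 1 and generated samples 0.
    d_loss = bce(discriminator(real), torch.ones(BATCH, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(BATCH, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the (just-updated) discriminator.
    g_loss = bce(discriminator(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

In a real face-swap system the networks operate on images rather than toy vectors and are far larger, but the adversarial back-and-forth between generator and discriminator is the same idea.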

NYU Tisch now offers a class, “Faking News,” that exposes students to the dangers of deep fakes by teaching them how to forge content using artificial intelligence techniques. The course is meant to ensure that students studying the technology not only “understand the potential implications but also limitations,” according to graduate student Chloe Marten, a product manager at Dow Jones.

Techniques used to create deep fakes -
Faceswap, lip sync, facial reenactment and motion transfer are a few of the techniques used to create deep fakes.

A faceswap is created when an algorithm is used to “seamlessly insert a face of a person into a target video. This technique could be used to place a person’s face on an actor’s body and put them in situations that they were never really in.”
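
For a rough feel of the compositing involved, below is a deliberately simplified, classical face-swap sketch in Python with OpenCV. It is an illustration under stated assumptions: the three landmark points are assumed to come from a separate face-landmark detector, and production deep fakes use learned networks rather than a single affine warp.

```python
# Classical (non-deep-learning) face paste: warp a source face onto a target
# frame using three matched landmarks, then blend the seam away.
import cv2
import numpy as np

def swap_face(source_img, target_img, src_pts, dst_pts):
    """src_pts/dst_pts: three matching (x, y) landmarks, e.g. eye corners and nose tip."""
    # Affine transform mapping the source landmarks onto the target landmarks.
    matrix = cv2.getAffineTransform(np.float32(src_pts), np.float32(dst_pts))
    h, w = target_img.shape[:2]
    warped = cv2.warpAffine(source_img, matrix, (w, h))

    # Mask of the pasted region in target coordinates (triangle of the landmarks).
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(dst_pts), 255)

    # Poisson (seamless) cloning blends the pasted face into the target frame.
    x, y, bw, bh = cv2.boundingRect(np.int32(dst_pts))
    center = (x + bw // 2, y + bh // 2)
    return cv2.seamlessClone(warped, target_img, mask, center, cv2.NORMAL_CLONE)
```

The seamless-cloning step is what hides the paste boundary; crude swaps that skip such blending tend to leave visible seams around the face.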

Lip sync is a technique where forgers can graft a lip-syncing mouth onto someone else’s face, then combine the footage with new audio to make it look like they are saying things they are not.
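
Synthesizing the mouth movements themselves requires a trained model and is beyond a short sketch, but the last step described above, combining the doctored footage with new audio, can be illustrated with a plain ffmpeg call from Python. The file names below are hypothetical.

```python
# Replace the audio track of a (hypothetically already doctored) video with a
# new recording, leaving the video stream untouched. Requires ffmpeg on PATH.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "doctored_video.mp4",    # footage with the grafted, lip-synced mouth
    "-i", "new_audio.wav",         # the words the person never actually said
    "-map", "0:v", "-map", "1:a",  # video from the first input, audio from the second
    "-c:v", "copy",                # copy the video stream without re-encoding
    "-shortest",                   # stop at the end of the shorter stream
    "lip_synced_output.mp4",
], check=True)
```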

Facial reenactment is a technique where forgers transfer facial expressions from one person into another video, allowing them to toy with a “person’s appearance to make them seem disgusted, angry or surprised.”

Motion transfer is a technique for transferring the body movements of a person in a source video to a person in a target video; the motions of a dancer, for example, can be captured and used to make target actors appear to move in the same way.
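
As a sketch of the first half of that pipeline, the snippet below captures per-frame body keypoints from a source video with the MediaPipe pose estimator; the choice of tool and the file name are assumptions for illustration. A complete motion-transfer system would then feed the keypoint sequence to a trained renderer that draws the target person in the same poses, which is not shown here.

```python
# Capture one list of normalized (x, y) body keypoints per frame of the source
# video. Assumes the "mediapipe" and "opencv-python" packages are installed.
import cv2
import mediapipe as mp

def extract_pose_sequence(video_path):
    poses = []
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.pose.Pose() as estimator:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
            result = estimator.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks:
                poses.append([(lm.x, lm.y) for lm in result.pose_landmarks.landmark])
    cap.release()
    return poses

keypoints_per_frame = extract_pose_sequence("dancer_source.mp4")  # hypothetical file
```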

The WSJ Committee emphasizes the important role journalists have in “informing the public about the dangers and challenges of artificial intelligence technology. Reporting on these issues is a way to raise awareness and inform the public.”

Next week, more on the techniques used to create deep fakes.

The reader's comments or questions are always welcome. E-mail me at doris@dorisbeaver.com.