Seeing the Round Corners

HEADS UP, the new day for Seeing the Round Corners “GOING LIVE” is Tuesday each week.

February 18, 2020

PART XXVII – FAKE NEWS AND THE SYNTHETIC MEDIA

The series on Deep Fakes took much longer than expected when it began and, from this writer’s viewpoint, proved far more interesting and far scarier than anticipated.

The technology involved in creating deep fakes and deep-fake videos is not totally new; what is new is that the enabling technology has become so much more sophisticated. At the present level of sophistication, individuals and businesses must be on guard for “novel forms of exploitation, intimidation and personal sabotage.”

The rise of cyber attacks in recent years and the attacks by foreign countries during the 2016 presidential election will most likely escalate in 2020. Chesney and Citron, the authors of the Deep Fakes papers, wrote with the intent and hope of sounding an alarm about what they call a “much-magnified impact,” warning that “today’s social media-oriented information environment interacts with our cognitive biases in ways that exacerbate the effect still further.”

In this writer’s opinion, looking back over the 2016 election and with the 2020 election fast approaching, we haven’t even “scratched the surface of how bad fake news is going to get.”

Emergent technology for robust deep fakes has advanced well beyond Photoshop, which tweaked images in both superficial and substantive ways; detecting such digital alterations is a challenge digital forensics has been grappling with for some time.

Emergent generative technology capitalizing on machine learning promises to shift the balance away from earlier methods that depended on the human eye to spot discrepancies. The resulting fakes are more realistic and more difficult to debunk than before, even as forensic techniques become more automated and less dependent upon the human eye.

That advance in technology has already produced a neural network tool that “alters videos so speakers say something different from what they said at the time.” A further advance known as generative adversarial networks (GANs) will most assuredly lead to deep fakes that are more convincing and nearly impossible to debunk.
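
To make the adversarial idea concrete, the sketch below shows a minimal GAN in Python using the PyTorch library; the framework, the tiny networks and the toy one-dimensional “data” are this writer’s illustrative assumptions, not anything named in the Chesney and Citron papers. A generator network learns to produce samples that a discriminator network cannot tell apart from real data, and that tug-of-war, scaled up to images and audio, is what makes the resulting fakes steadily harder to debunk.

    # Minimal GAN sketch: a generator learns to fool a discriminator on toy data.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Generator: random noise in, fake "sample" out
    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    # Discriminator: sample in, estimated probability that it is real out
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

    opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) + 4.0      # "real" data: samples clustered near 4.0
        noise = torch.randn(64, 8)
        fake = G(noise)

        # Train the discriminator to separate real from fake
        opt_D.zero_grad()
        d_loss = bce(D(real), torch.ones(64, 1)) + \
                 bce(D(fake.detach()), torch.zeros(64, 1))
        d_loss.backward()
        opt_D.step()

        # Train the generator to fool the discriminator
        opt_G.zero_grad()
        g_loss = bce(D(G(noise)), torch.ones(64, 1))
        g_loss.backward()
        opt_G.step()

    # After training, generated samples should drift toward the real data's range.
    print(G(torch.randn(5, 8)).detach())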

The same technology used to create deep fakes will be available to bad actors, too – those with intentions of harmful use. Government efforts to safeguard classified research in this area may be ongoing, but “the volume and sophistication of publicly available academic research and commercial services will ensure the steady diffusion of deep-fake capacity no matter the efforts to safeguard it.”

There was a time when technology like that used to create deep fakes was pretty much in the hands of trusted media companies, and the public could believe the news broadcasts. That is no longer true. Individuals can now peddle deep fakes with ease, and such content can quickly reach a massive, even global audience, with virtually no way of retraction.

What comes into play at this point is referred to as the “information cascade” dynamic, described as the result of the human tendency to credit what others know, since everyday interactions involve the sharing of information. Unfortunately, at some point people stop paying attention to their own information and rely too much on what they assume others know. The problem is compounded when the information is shared onward in the belief that something valuable has been learned – “the cycle repeats and the cascade strengthens.”
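
A toy simulation helps show how quickly the cascade takes over; the rules below (thirty agents, a seventy-percent-accurate private hunch, and a “follow the crowd once it leads by two” threshold) are this writer’s illustrative assumptions, not a model from the Deep Fakes papers.

    # Toy information-cascade simulation: each agent weighs a noisy private
    # signal about a story against what earlier agents did, and once the crowd
    # leans far enough, private information stops mattering.
    import random

    random.seed(1)

    def run_cascade(n_agents=30, story_is_true=False, signal_accuracy=0.7):
        actions = []  # 1 = agent shared the story, 0 = agent ignored it
        for _ in range(n_agents):
            # Private signal: correct with probability signal_accuracy
            signal = story_is_true if random.random() < signal_accuracy else not story_is_true
            lead = sum(actions) - (len(actions) - sum(actions))
            if lead >= 2:        # crowd clearly sharing: follow the crowd
                action = 1
            elif lead <= -2:     # crowd clearly ignoring: follow the crowd
                action = 0
            else:                # otherwise rely on one's own signal
                action = 1 if signal else 0
            actions.append(action)
        return actions

    # Count how often a false story ends up widely shared simply because a few
    # early agents happened to get misleading signals.
    trials = [run_cascade(story_is_true=False) for _ in range(1000)]
    widely_shared = sum(1 for t in trials if sum(t) > len(t) / 2)
    print(f"false story widely shared in {widely_shared} of 1,000 runs")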

Social media platforms are prime examples of how information cascades form, spreading content of “all stripes and quality.” The spread from there to traditional mass-audience outlets overcomes whatever gate-keeping exists. Mobs such as Black Lives Matter and Never Again were formed on just such information cascades, but the information cascade dynamic itself makes no such distinctions – not every mob is smart or laudable.

Human nature also plays a role in how easily harmful deep fakes are accepted. Social science research has shown that people are more willing to credit and remember negative information than positive information.

Social media researcher Danah Boyd explains it in terms of how our bodies work: just as we are “programmed to consume fat and sugar because they are rare in nature,” we are biologically programmed to be attentive to things that stimulate – content that is gross, violent or sexual, and gossip that is humiliating, embarrassing, or offensive. The result is that we consume the content that is least beneficial for ourselves and society as a whole.

Filter bubbles arise in this way and can serve as powerful insulators (a toy sketch of the loop follows the list):

  • human beings start with information that confirms their existing beliefs;
  • then, social media platforms supercharge this tendency by empowering users to endorse and re-share content;
  • platforms’ algorithms highlight popular information, especially if it has been shared by friends, surrounding us with content from relatively homogeneous groups;
  • as endorsements and shares accumulate, the chance for an algorithmic boost increases;
  • after seeing friends’ recommendations online, individuals tend to share them with their networks; and
  • because people tend to share information with which they agree, social media users are surrounded by information confirming their preexisting beliefs.
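
The loop in the list above can be sketched in a few lines of Python; the scoring rule, the “endorse only what you agree with” behavior and the made-up posts are purely illustrative assumptions of this writer, not any platform’s actual ranking code.

    # Toy feed-ranking sketch of the filter-bubble loop.
    posts = [{"id": i, "view": "blue" if i % 2 == 0 else "red", "shares": 0}
             for i in range(20)]
    user_view = "blue"
    friend_views = ["blue"] * 8 + ["red"] * 2   # a mostly like-minded circle

    def rank(posts):
        # Popular posts rise; accumulated shares act as the algorithmic boost.
        return sorted(posts, key=lambda p: p["shares"], reverse=True)

    for _ in range(5):
        feed = rank(posts)[:10]              # the user sees only the top ten posts
        for post in feed:
            # The user and their friends endorse only content they already agree
            # with, which feeds straight back into the next round's ranking.
            post["shares"] += sum(1 for v in friend_views if v == post["view"])
            if post["view"] == user_view:
                post["shares"] += 1

    print([p["view"] for p in rank(posts)[:5]])
    # Prints ['blue', 'blue', 'blue', 'blue', 'blue']: the top of the feed ends
    # up filled with content confirming the user's preexisting beliefs.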

Information contrary to our beliefs runs up against these filter bubbles, which serve as powerful insulators. Studies of Facebook users show that those who read fact-checking articles had generally not consumed the fake news at issue, while those who consumed fake news in the first place almost never read a fact-check that might debunk it.

Chesney and Citron state: “Information cascades, natural attraction to negative and novel information, and filter bubbles provide an all-too-welcoming environment as deep-fake capacities mature and proliferate.”

The reader's comments or questions are always welcome. E-mail me at doris@dorisbeaver.com.