Seeing the Round Corners

HEADS UP, the new day for Seeing the Round Corners “GOING LIVE” is Tuesday each week.

March 3, 2020

PART XXVIII – FAKE NEWS AND THE SYNTHETIC MEDIA

Since the 2016 election, journalists have lost sight of what their job is: to report the news on an impartial basis, so that when the reporter has finished giving the news/report, the public cannot tell what side of the issue the reporter is on. The job of the reporter is to present the news in just that manner, not to persuade anyone to any “side.”

As the abilities of the creators of deep fakes broaden, news organizations may become reluctant to report really disturbing events, also known as “breaking news,” for fear that they are fake, thus increasing the need for a quick and reliable way to authenticate video and audio. Without such a way, reluctance by the press may hamper its “ethical and moral obligation to spread truth.”

What may be classified as the other side of the coin is the use of dangerous lies that take the form of denials. Altered video or audio evidence created by a person accused of something can be used to rebut the claim, and deep fakes thus offer a way of escaping accountability for the truth.

As deep fakes and the threats they pose become more commonplace, even an enlightened public will most assuredly come to doubt the authenticity of real audio and video evidence, especially those unable or unwilling to gain the skills to distinguish authentic video and audio from deep fakes.

Probably one of the most detrimental results of deep fake news is its “fueling” of truth skepticism – the public’s loss of trust in what it sees and hears from traditional news sources. The ultimate threat may be to erode “the trust necessary for democracy to function effectively.”

The risk of the public losing faith in what it hears and sees means “truth becomes a matter of opinion.” That’s when authoritarianism benefits and objective truths lose their power. Cognitive bias also reinforces the dynamics of truth decay and trust decay, as people tend to trust facts that comport with their own pre-existing beliefs. As people gain knowledge and understanding of the existence of deep fakes, many may accept a deep fake rather than information that is in fact true.

By now, readers may be wondering what protections are possible against the harms of deep fakes and deep fake videos. Here are the possible responses:

  • technological solutions;
  • current and potential criminal and civil liability;
  • role of regulators;
  • ways the government might respond using active measures; and
  • market development of ways to protect individuals and the considerable threat to privacy such services themselves might entail.


The development of deep fakes and deep fake videos seems to stay one step ahead of the available technology to “debunk” them. Such debunking technology, if it existed and could be deployed through social media platforms, would go far toward reducing the systemic harms discussed throughout this series.

A time frame for development of such technology is uncertain. Until it exists and proves effective enough for dominant platforms to incorporate it into their content-screening systems (and to make its use mandatory for posting), existing technology will have limited effectiveness.

Development of generally applicable technology that can detect manipulation in content, without an expectation that the content comes with an internal certification, is still in the future. Some experts believe it is decades away, with the defense “faring poorly at the moment in the deep-fake technology arms race.” (Dartmouth professor Hany Farid, the pioneer of PhotoDNA.)

The need for such technology may drive market forces to more aggressively pursue shifting “the current balance of power between technologies to create and to detect deep fakes.” Grants for development from agencies such as the National Science Foundation and the Defense Advanced Research Projects Agency will also drive the market.

New market forces may give companies incentives to pursue such capabilities, but for now, technology alone offers little hope of a reliable way to debunk deep fakes and so prevent the harms they might cause.

Typically, potential harms to the general population such as those discussed throughout this series on deep fakes and deep fake videos could or would be dealt with via criminal law or civil liability. But no current criminal law or civil liability regime bans the creation or distribution of deep fakes.

Deep fakes cause significant harm in some contexts, but not in all. A flat ban on deep fakes would prevent legitimate uses of digitally altered content and would stifle experimentation in a diverse array of fields, from history and science to art and education.

The use of a deep-fake ban by government to censor unpopular or dissenting views is also a real possibility. Deep-fake technology can be used by those on both sides of an issue – good actors and bad – and a ban wielded against the American free speech tradition would give the bad actors a real weapon.

Free speech advocates would oppose such a ban as certain to be abused. Justice Oliver Wendell Holmes wrote in a dissenting opinion (Abrams v. United States, 1919):  “Persecution for the expression of opinions is perfectly logical … [i]f you have no doubt of your premises or your power and want a certain result with all your heart.” Holmes opposed this certainty and power’s tendency to sweep away disagreement; that principle of epistemic doubt is a defining hallmark of First Amendment law.

A constitutional challenge is another obstacle on which an effort to ban deep fakes would most likely fail. Believe it or not, false speech is protected!! In New York Times v. Sullivan (1964), the Supreme Court held that “false speech enjoys constitutional protection insofar as its prohibition would chill truthful speech.”

In another well-known case, United States v. Alvarez, the Supreme Court Justices were unanimous in the view that lies that cause no real harm are protected speech “unless those lies concern narrow categories of speech that are not covered by the First Amendment,” and “unanimous on the notion that lies cannot be punished if no harm results.”

Next week, more of the deep fake and deep-fake video recap.

The reader's comments or questions are always welcome. E-mail me at doris@dorisbeaver.com.