Seeing the Round Corners

HEADS UP, the new day for Seeing the Round Corners “GOING LIVE” is Tuesday each week.

January 7, 2020


The world is now – finally – in the year 2020, which holds promise for the most volatile election in this country's history. Sad to say, vicious may be more accurate.

Maybe a recap by category would be a good refresher on deep fakes and deep fake videos.

Beneficial Uses:  Education, Art and Autonomy.

Harmful Uses of Deep Fake Technology:

  • Harm to individuals and organizations through exploitation and sabotage;
  • Harm to society;
  • Distortion of democratic discourse;
  • Manipulation of elections;
  • Eroding trust;
  • Exacerbating social divisions;
  • Undermining public safety;
  • Undermining diplomacy and jeopardizing national security; and
  • Undermining journalism.


The writers of Deep Fakes: A Looming Challenge for Privacy, Democracy and National Security summarized the efforts to reduce opportunities for abuse by platforms and to allow platforms to be held accountable for deep fakes:

  • “The cabining features that are needed to control the scope of platform liability ensure that this approach can be no more than a partial solution to the deep-fakes challenge. Other policy responses will be necessary.”


That old cliché “be careful what you wish for” hangs over much of the previous discussion and analysis about deep fakes and deep fake videos. Civil liability was previously covered as one means by which the legal system deters the creation and distribution of harmful deep fakes. Now for the other means – criminal liability.

There is a difference in the deterrent effectiveness of civil liability and criminal liability. As an example, being judgment proof protects one from the consequences of a civil suit, but it offers no protection from a prison sentence and the resulting consequences of a criminal conviction. Limited resources and priorities for the use of funds also affect how cyber stalking complaints are handled, because state and local law enforcement often lack training in the relevant laws and in the investigative techniques necessary to track down online abusers.

There are several cyber stalking laws: the federal cyber stalking law, 18 U.S.C. § 2261A, and analogous state statutes. Under 18 U.S.C. § 2261A, it is a felony to use any “interactive computer service or electronic communication system” to “intimidate” a person in ways “reasonably expected to cause substantial emotional distress.” Even without fear of bodily harm, cyber stalking victims have “their lives totally disrupted . . . in the most insidious and frightening ways.” Such cases are punishable by up to five years in prison and fines up to $250,000.00, with enhanced penalties for repeat offenders and for defendants whose offense violates a restraining order.

Impersonation may also fit these criteria. Laws vary from state to state, as the reader might imagine. It is criminal in several states to “harm, intimidate, threaten or defraud a person.” A portion of that statute – “harm, intimidate, threaten” – roughly tracks the cyber stalking statute, with certain jurisdictions holding creators of deep fakes accountable for criminal defamation if they posted videos knowing they were fake or if they were reckless as to their truth or falsity. Someone’s face in a violent deep fake sex video might support charges for both impersonation and defamation “if the defendant intended to terrorize or harm the person and knew the video was fake.”

Another type of harm caused by deep fakes is when one is distributed broadly across society – a deep fake calculated to spur an audience to violence. Some platforms ban content calling for violence, but not all do.

Turning now to the creator of a deep fake being prosecuted under a statute such as 18 U.S.C. § 2101: the statute “criminalizes the use of facilities of interstate commerce, such as the internet, with intent to incite a riot.” The determining factor is that incitement charges must comport with the First Amendment constraints identified in Brandenburg v. Ohio, a well-recognized case, including whether the speech in question was likely to produce imminent lawless action. Such criteria would leave many deep fakes beyond the law’s reach.

Elections, oh yes, those devilish things whose winners fill the halls of Congress and of state and local governments. While lying by candidates for office is nothing new, deep fakes present a troubling development. Using lies to impact elections has come to be a part of the democratic process – criminalizing such lies has met with constitutional hurdles, and as Chesney and Citron say, “for good reason.”

Free speech scholars such as Helen Norton have warned that laws forbidding lies “threaten significant First Amendment harm because they regulate expression in a context in which we especially fear government overreaching and partisan abuse.” In Brown v. Hartlage, the Supreme Court held that the State’s fear that voters might make an ill-advised choice does not provide the State with a compelling justification for limiting speech. Unfortunately, it does not look hopeful for a change in attempts to ban election-related lies.

Chesney and Citron surmise that criminal liability is not likely to be a particularly effective tool against deep fakes that pertain to elections. Most likely, the only “capable actors” with the motive and means to deploy deep fakes in a high-impact manner in an election setting will include the intelligence services of foreign governments engaging in such activity as a form of covert action, as we saw with Russia in relation to the American election of 2016. Chesney and Citron went on to say that criminal prosecution will mean little to foreign government agents involved in such activity so long as they are not likely to end up in U.S. custody. Did the world ever get a really straight answer about the Mueller investigation and the foreign agents charged?

Next week, more on criminal liability.

The reader's comments or questions are always welcome. E-mail me at