Seeing the Round Corners

HEADS UP, the new day for Seeing the Round Corners “GOING LIVE” is Tuesday each week.

March 10, 2020


To impose civil liability for a deep fake or deep fake video, the plaintiff must be able to identify the creator, which is not always possible, nor is it always possible to track the various posts on social media through which other people can access the content. The result is that victims end up seeking a remedy against the platform (Facebook, Yelp and others) publishing the deep fake.

A further obstacle to civil liability is the inability to reach a creator of a deep fake or deep fake video located outside the United States, or in a jurisdiction where local legal action is “unlikely to be effective.” Such is the nature of online platforms generally, but it is especially so in the deep fake context.

Finally, for most ordinary Americans, the cost of civil litigation is beyond their means, not to mention the possibility of embarrassment or further reputational harm, no doubt adding to the victim’s injury.

Two areas of law – intellectual property and tort law – provide a victim a possible avenue to sue the creator of a deep fake. Within intellectual property, copyright law is one area exploited by deep fakes. The tort “right of publicity” is another theory that can be pursued against the creator of a deep fake, but fair warning, there is not a lot of case law on the right of publicity.

As Chesney and Citron state, the harms associated with deep fakes do not typically generate direct financial gain for their creators. A case may arise where a deep fake is posted, goes viral and incites violence in a community, and the victim sues the creator but cannot locate that creator. Who, then, does the plaintiff go after? The platform that posted the deep fake. Read on.

The threshold for recovery in such a suit is quite high, and success is not very likely when suing the platform that posted the deep fake – Facebook, Yelp and others. Suing a platform is a very complicated undertaking, mind-boggling to put it mildly. Platforms – whether content or online platforms – enjoy protection put in place by Congress in Section 230 of the Communications Decency Act (CDA), which makes it difficult to sue an online platform because it hosted harmful content created by others (with exceptions for federal criminal law, the Electronic Communications Privacy Act and intellectual property law).

The harmful content brings up another obstacle to suing the platform. Section 230 of the CDA gives an online platform a shield from liability: a civil lawsuit against a platform for filtering or blocking problematic content is expressly barred. This protection was designed to remove the “disincentive to self-regulation that liability otherwise might produce.”

“Super-immunity” arose out of the CDA’s drive to protect First Amendment values, even to the point of courts granting immunity when a provider republished content knowing it violated the law; solicited illegal content while ensuring that those responsible could not be identified; altered its user interface to ensure that criminals could not be caught; or sold dangerous products.

Congressional action in 2018 narrowed the immunity provided by Section 230 of the CDA in sex trafficking cases, to the benefit of state attorneys general and victims. Whether similar limits can be applied to online platforms hosting deep fakes is undergoing aggressive debate in Congress.

Of course, there are two sides to this type of situation. Congress is just as concerned about going too far: imposing liability on platforms could cause some to shutter or never open at all, and free expression, innovation and commerce would suffer under that scenario.

Criminal liability has a way to go before online abuse is aggressively pursued. Being judgment proof blunts the deterrent effect of civil liability, but it is no shield in a criminal case, where prison and the other consequences of conviction loom.

Law enforcement is not up-to-date in pursuing cyber-stalking complaints and does not have the resources, or the priority, to pursue such cases. Impersonation crimes and criminal defamation are examples of crimes against individuals that may fit under the federal cyber-stalking law, 18 U.S.C. 2261A, as well as analogous state statutes.

Deep fakes may also be distributed broadly across society, such as when done to spur an audience to violence. Some platforms ban content calling for violence, but some do not. Incitement charges must comply with the First Amendment constraints identified in Brandenburg – the speech must be directed to, and likely to produce, imminent lawless action. And yes, some of this “wiggle room” is due to the public’s fear of government overreach and partisan abuse. Hence the courts’ reluctance to limit speech, striking down attempts to ban election-related lies (lies by candidates running for office).

A foreign intelligence service engaged in covert action is the actor most likely to employ deep fakes in a high-impact manner, such as in the 2016 presidential election, and criminal liability offers little deterrent unless the foreign agent is likely to end up in custody in the United States or plans to travel abroad.

Next week, more on criminal liability.

The reader's comments or questions are always welcome. E-mail me at