Seeing the Round Corners

HEADS UP, the new day for Seeing the Round Corners “GOING LIVE” is Tuesday each week.

December 17, 2019


The discussion on specific categories of civil liability concludes this week. When Seeing the Round Corners returns in January, criminal liability will be the subject at hand.

Sometimes the cliché “damned if you do and damned if you don’t” is the only way to describe the legal situation that arises when asking whether Section 230 should be amended to allow platforms to be held accountable for deep fakes. The way the courts have “stretched” the interpretation of Section 230, a kind of immunity has evolved, one that, among other things, “prevents the civil liability system from incentivizing the best-positioned entities to take action against the most harmful content.”

What this morphs into, for us ordinary citizens, is this: the law’s over-broad interpretation means that platforms have no liability-based reason to take down illicit material and that victims have no legal leverage to insist otherwise. As Rebecca Tushnet so profoundly described it in the George Washington Law Review, Section 230 ensures that platforms enjoy “power without responsibility.” In other words, bluntly put, victims are screwed!

As the wide array of technology evolves, it also improves, or perhaps vice versa in certain respects. Danielle Citron, one of the authors of Deep Fakes . . ., and an associate stated that “while ISPs and social networks with millions of postings a day cannot plausibly respond to complaints of abuse immediately, let alone in a day or two, they may be able to deploy technologies to detect content previously deemed unlawful.”

Citron suggests that the proposed amendment making platforms accountable for deep fakes be made conditional, in contrast to the automatic immunity of the status quo. In practice, this means the entity would have to take “reasonable steps to ensure that its platform is not being used for illegal ends.”

New laws must be “developed” when there is no precedent on the books. Common-law development of a novel standard of care, which is where certain aspects of the proposal to make Section 230 conditional would fall, will without a doubt give rise to certain risks, two in particular: opening the door to liability, and the prospect of runaway juries imposing massive damages.

The dangers that could arise: such liability could

  • drive sites to shutter (or never to emerge);
  • cause undue private censorship at the sites that remain; and
  • chill free expression, innovation and commerce.


To put a narrower frame on the concerns noted above, these possibilities may provide solutions:

  • The amendment to Section 230 could include a sunset provision paired with data-gathering requirements that would empower Congress to make an informed decision on renewal. Data gathering should include the type and frequency of content removed by platforms as well as the extent to which platforms use automation to filter or block certain types of content. This could permit Congress to assess whether the law was resulting in over-broad private censorship akin to the excesses of a heckler’s veto.
  • The amendment could include carefully tailored caps on damages.
  • The amendment could be paired with a federal anti-SLAPP provision, which would deter frivolous lawsuits designed to silence protected speech.
  • The amendment could include an exhaustion-of-remedies provision pursuant to which plaintiffs, as a precondition to suit, must first provide notice to the platform regarding the allegedly improper content, at which point the platform would have a specified window of time to examine and respond to the objection.


The authors of Deep Fakes (Chesney and Citron) arrive at this final analysis on civil liability:

  • a reasonably calibrated standard of care combined with such safeguards could reduce opportunities for abuses without interfering unduly with the further development of a vibrant internet or unintentionally turning innocent platforms into involuntary insurers for those injured through their sites;
  • approaching the problem as one of setting an appropriate standard of care more readily allows differentiating between different kinds of online actors, setting a different rule for websites designed to facilitate illegality from that applied to large ISPs linking millions to the Internet;
  • the cabining features (confining within narrow bounds) needed to control the scope of platform liability ensure that this approach can be no more than a partial solution to the deep fakes challenge.


These are not the only responses; others will be necessary.

With next week being the Christmas holiday and the following week New Year’s, Seeing the Round Corners will take a break and return on January 6, 2020 with Deep Fakes: Specific Categories of Criminal Liability.

The reader's comments or questions are always welcome. E-mail me at