Seeing the Round Corners

HEADS UP, the new day for Seeing the Round Corners “GOING LIVE” is Tuesday each week.

February 11, 2020


Today concludes the discussion of the Market Solutions section of the paper Deep Fakes: A Looming Challenge for Privacy, Democracy and National Security.

Speech policies of platforms probably represent, at this point in the development of deep fakes and deep-fake video technology, the “most salient response mechanism” of all those presented in this series of columns. “The content screening-and-removal policies of the platforms themselves are expressed and established via their terms-of-service (TOS) agreements.”

Chesney and Citron, authors of the Deep Fakes paper, state that “TOS agreements are the single most important documents governing digital speech in today’s world,” in contrast to ages past, when the legal architecture for control of traditional public fora tended to loom largest. “Today’s most important speech forums, for better or worse, are the platforms.” “TOS agreements determine whether speech on a platform is visible, prominent, or viewed, or instead is hidden, muted, or never available.” Important:  “TOS agreements thus will be primary battlegrounds in the fight to minimize the harms that deep fakes may cause.”

Platforms are aware of their responsibility for what subject matter is to be banned pursuant to their terms-of-service agreements:  “Twitter has banned impersonations without regard to the technology involved in making the impersonation persuasive”; “Google’s policy against non-consensual pornography already applies to deep fakes of that kind.”

Technological due process as it relates to deep fakes “requires companies to be transparent – not just notionally but in real practical terms – about their speech policies.”

It should be pointed out that speech that constitutes satire, parody, art or education, as discussed previously, “should not normally be suppressed.”

Sometimes, it seems, American justice overdoes the idea of fairness. According to Chesney and Citron, “Users of platforms would be notified that their (alleged) deep-fake posts have been removed (or muted) and given a meaningful chance to challenge the decision.”

Given time, there is significant risk that growing awareness of the deep fakes threat “will carry with it bad faith exploitation of that awareness on the part of those who seek to avoid accountability for their real words and actions via well-timed allegations of fakery.”

Technological due process brings to the forefront “the challenge of just how platforms can and should identify and respond to content that may be fake.  For now, platforms must rely on users and in-house content moderators to identify deep fakes.”

Crucial to technological due process is the choice between human decision-making and automation. Chesney and Citron state, “Exclusive reliance on automated filtering is not the answer, at least for now, because it is too likely to be plagued both by false positives and false negatives. Automatic filtering may have a useful role to play in flagging specific content for further review by actual analysts, but normally should not serve as the last word or the basis for automatic speech-suppression action (except where content previously has been determined, with due care, to be fraudulent, and software detects that someone is attempting to post that identical content).”
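The triage Chesney and Citron describe — automatic blocking only for identical re-posts of content already vetted as fraudulent, human review for everything the filter merely flags — can be sketched as a simple hash check. This is a minimal illustration, not the authors’ or any platform’s implementation; the function names, classifier score, and threshold are all hypothetical, and a real platform would likely use perceptual rather than exact hashing:

```python
import hashlib

# Hypothetical store of hashes of content previously determined,
# with due care, to be fraudulent. A real platform would use a
# database of perceptual fingerprints, not an in-memory set.
known_fake_hashes = set()

def mark_as_fraudulent(content: bytes) -> None:
    """Record content that human review has determined to be fake."""
    known_fake_hashes.add(hashlib.sha256(content).hexdigest())

def triage_upload(content: bytes, classifier_score: float,
                  threshold: float = 0.8) -> str:
    """Return 'block', 'review', or 'publish'.

    Only an exact re-upload of previously vetted fake content is
    blocked automatically; anything the (hypothetical) automated
    filter flags goes to human analysts rather than straight to
    removal, per the policy quoted above.
    """
    if hashlib.sha256(content).hexdigest() in known_fake_hashes:
        return "block"      # identical re-post of vetted fake content
    if classifier_score >= threshold:
        return "review"     # flag for human moderators, not auto-suppression
    return "publish"
```

The point of the two-tier design is that automation is trusted only where a human judgment already exists; novel content, however suspicious the filter finds it, still reaches a person.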

Facebook, one of the largest platforms, recognizes the problem deep fakes present and is beginning to take steps to respond. Facebook also plans to emphasize video content to a growing degree and has stated it will begin tracking fake videos. Also underway is an effort to emphasize videos from verified sources and de-emphasize those from unverified sources. There may be some cost to the ability of anonymous speakers to be heard via that platform.
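The verified-source emphasis could, in principle, amount to a weighting applied in feed ranking. The sketch below is purely illustrative — Facebook’s actual ranking system is not public, and the weights and names here are invented:

```python
def rank_score(base_engagement: float, source_verified: bool,
               verified_boost: float = 1.5,
               unverified_penalty: float = 0.5) -> float:
    """Weight a video's feed score by the source's verification status.

    Hypothetical weights: verified sources are boosted, unverified
    sources are demoted, so at equal engagement a verified source's
    video always ranks higher.
    """
    weight = verified_boost if source_verified else unverified_penalty
    return base_engagement * weight
```

Such a scheme illustrates the trade-off the column notes: demoting unverified sources also demotes legitimate anonymous speakers.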

Next week, a recap of the deep fakes series.

Readers’ comments or questions are always welcome. E-mail me at