Seeing the Round Corners

HEADS UP, the new day for Seeing the Round Corners “GOING LIVE” is Tuesday each week.

March 24, 2020


Administrative agency solutions are quite limited in their role in addressing deep fakes. The Federal Trade Commission (FTC), the Federal Communications Commission (FCC) and the Federal Election Commission (FEC) are the most appropriate agencies.

The areas under the purview of the FTC are advertising of food, drugs, devices, services or cosmetics, and unfair and deceptive commercial acts and practices under Section 5 of the Federal Trade Commission Act.

State Unfair and Deceptive Acts and Practices (UDAP) statutes are enforced by state attorneys general, who may seek civil penalties, injunctive relief, and attorneys' fees and costs for prohibited deceptive commercial acts and practices, as well as for unfair acts and practices whose costs exceed their benefits.

Conflicts arise due to the Section 230 protection of the Communications Decency Act, which makes it “difficult to sue an online platform based on its hosting of harmful content created or developed by others.”

What must be kept in mind is proper interpretation of the FTC’s jurisdiction – “. . .fake news stories that get circulated or planted or tweeted around are not trying to induce someone to purchase a product, it is to induce someone to believe an idea.” First Amendment concerns arise when a government entity attempts to distinguish real news from fake news.

At the present time, the FCC does not have jurisdiction over content circulated via social media, but this is subject to change. Congress may intervene or the FCC may reinterpret its position as concern escalates over fake news, incitement, radicalization or any number of other hot-button issues. The FCC’s present role as to deep fakes is limited to “potential efforts to deter their appearance on radio and television.”

The Federal Election Commission has campaign financing as its main focus, and "the main thrust of its regulatory efforts relating to speech is to increase transparency regarding sponsorship and funding for political advertising."

Readers should be aware and remember that the FEC does not purport to regulate the truth of campaign-related statements, nor is it likely to assert or receive such jurisdiction anytime soon, for all the reasons discussed previously: the First Amendment obstacles, practical difficulty and political sensitivity of such an enterprise.

Currently, major online social media platforms are not subject to FEC jurisdiction in the context of advertising. This is no doubt why online advertising platforms have long resisted imposition of the FEC's disclosure rules.

First, not all content about a candidate would meet the criteria to be classified as advertising, and second, “the FEC’s disclosure rules in any event are candidate specific and do not encompass generalized ‘issue ads’ that express views on a topic but do not single out particular candidates.”

The coercive responses available to the government extend beyond civil suits, criminal prosecutions and regulatory actions, and such responses can impose significant costs on the perpetrators.

In future armed conflicts, deep fakes will play a role, as information operations of various kinds have long been an important aspect of warfare. Deep fakes provide both sides in a conflict with unlimited ways to undermine each other's credibility. When the creator of the deep fake is located in a third country, the freedom of action for a military response of any kind may be strictly limited by policy and by legal considerations.

Covert actions refer to government-sponsored activity meant to influence events overseas without the United States government's role being apparent or acknowledged. Take note: propaganda and other information operations can be, and frequently are, conducted as covert actions, and they offer advantages over military alternatives.

A prime example is that "covert actions do not require any predicate circumstance of armed conflict – presidents may resort to it when they wish – covert actions are not publicly acknowledged." The second advantage of covert actions is that the rules governing them require compliance with the United States Constitution and federal statutes but are conspicuously silent about international law.

Covert actions can be downright intriguing, as often portrayed so "ruggedly" and "entertainingly" on television and in the movies, but when deep fakes are used to harm the United States, covert action may be used by the United States to deter or punish the foreign actors who created or employed them.

Sanctions are one method of exerting considerable leverage over foreign governments, entities and individuals. The International Emergency Economic Powers Act (IEEPA) established the framework for the executive branch to impose economic sanctions (backed by criminal penalties). The protocol for the President to impose sanctions requires a public proclamation of a "national emergency" relating to an "unusual and extraordinary threat" having its source in whole or in part outside the United States. Whether there are grounds for a national emergency based on deep fakes generates numerous questions.

Could the regulation that allows for IEEPA sanctions apply to foreign hackers, as in the 2016 elections, and also be applied to deep fake creators who post false information "in a way meant to impact American politics"? If so, a precedent would be set.

Setting a precedent is what appeared to happen when President Trump applied former President Obama's Executive Order 13694 of 2015, as amended. The amended order expanded the prohibition to ". . . forbid foreign entities from using cyber-enabled means to 'tamper' with, alter or cause a misappropriation of information with the purpose or effect of interfering with or undermining election processes or institutions . . ."

The idea was muddied by the Treasury Department's explanation of the Russian Internet Research Agency's (IRA) inclusion, with a reference to "misappropriation of information" and illegal use of stolen personally identifiable information.

Chesney and Citron recommend a new national emergency regulation that would address "attempts by foreign entities to inject false information into America's political dialogue, without any need to show that such efforts at some point happened to involve hacking or any other 'cyber-enabled means.'"

Marketplace solutions, as opposed to government ones, may address the deep-fake challenge in a couple of ways: 1) companies may develop and sell services intended to protect customers from at least some forms of deep fake-based harm; and 2) social media companies may, on their own initiative, take steps to police deep-fake harms on their platforms. Such efforts may be motivated not only by market advantage but also by policy preferences and, possibly, by what a contrary course might invite in the way of legislative intervention.

Credible alibis will be one valuable way of proving that a person did or did not say certain things, did or did not do certain things, or was or was not in certain places. Such proof could come in the form of immutable life logs or authentication trails that could enable a victim of deep fakes to produce a certified alibi.

Beware, advances in a variety of technologies – wearable tech, encryption, remote sensing, data compression, transmission and storage, and blockchain-based record-keeping – will make immutable life logs or authentication trails possible, but of interest to only a certain segment of the population: politicians, celebrities, and others whose fortunes depend to an unusual degree on fragile reputations.
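For readers curious how blockchain-style record-keeping makes a life log "immutable," the core idea can be sketched in a few lines of code (a toy illustration with hypothetical names, using only hash chaining – a real service would add signatures and distributed storage): each entry commits to the hash of everything recorded before it, so altering any past entry breaks verification of the chain.

```python
import hashlib
import json


def _digest(entry: dict, prev_hash: str) -> str:
    # Hash the entry together with the previous hash, so each
    # record commits to the entire history before it.
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()


class LifeLog:
    """Toy append-only log: editing any past entry breaks the chain."""

    GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

    def __init__(self):
        self.entries = []          # list of (entry, hash) pairs
        self.prev_hash = self.GENESIS

    def append(self, event: str, timestamp: float) -> None:
        entry = {"event": event, "ts": timestamp}
        h = _digest(entry, self.prev_hash)
        self.entries.append((entry, h))
        self.prev_hash = h

    def verify(self) -> bool:
        # Recompute every hash from the start; any mismatch means tampering.
        prev = self.GENESIS
        for entry, h in self.entries:
            if _digest(entry, prev) != h:
                return False
            prev = h
        return True


log = LifeLog()
log.append("badge swipe, office lobby", 1700000000.0)
log.append("video call, 30 minutes", 1700003600.0)
assert log.verify()

# Tampering with a past entry is immediately detectable:
log.entries[0][0]["event"] = "somewhere else entirely"
assert not log.verify()
```

The point of the sketch is only the tamper-evidence property: an alibi service built this way could certify that a log entry existed, unaltered, at the time it was recorded.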

Surrender of privacy by those mentioned above would be profound, and according to some experts, "it risks what has been called the 'unraveling of privacy' – the outright functional collapse of privacy via social consent despite legal protections intended to preserve it." The resulting consequence may be that the "more people relinquish their privacy voluntarily, the remainder increasingly risk being subject to the inference that they have something to hide." Think long and hard about that!

The other notable drawback to immutable life logging is the position of power in which the provider of such services finds itself. Such a provider would be in possession of a database of human behavior of unprecedented depth and breadth – a "database of ruin," as Paul Ohm (author of "Broken Promises of Privacy") put it.

The terms-of-service (TOS) agreements under which platforms operate express and establish the content screening-and-removal policies of those platforms. Said agreements "determine if speech on the platform is visible, prominent or viewed, or if instead it is hidden, muted or never available at all." TOS agreements will be a primary battleground in the fight to minimize the harms that deep fakes may cause.

TOS agreements already ban certain categories of content, such as impersonation (Twitter) and non-consensual pornography (Google), and platforms should be clear about their speech policies. Deep fakes in the categories of satire, parody, art or education most likely should not be suppressed.

This series of columns on deep fakes and deep fake videos has been long and tedious to write in language reasonably understandable to the reader. Hopefully, it has presented enough information to cause readers to be cautious about trusting all that appears on the internet during the upcoming presidential election.

The reader's comments or questions are always welcome. E-mail me at