Submitted by Bastiaan Vanacker on Thu, 02/21/2019 - 14:18

As 2018 lies firmly behind us, CDEP Program Director Bastiaan Vanacker takes a look at some of the major digital ethics and policy issues of the past year that will shape the debate in 2019.

January: Crucial Court Win for Section 230

Often credited as the law that “made the web,” section 230 of the Communications Decency Act (“section 230”) shields internet service providers from legal liability for what third parties post on their platforms or networks. For example, if a student were to falsely write on a professor review site that Professor Vanacker has a habit of drinking scotch in the classroom, I would not be able to successfully sue the review site. Even if the site owners did not verify the information or refused to take this info down after I showed it to be false, section 230 would preclude me from successfully suing them. The only person I could potentially sue would be the anonymous poster (provided I could find out who it is), which might be more trouble than it is worth.

Section 230 was passed in 1996 after lawmakers grew concerned about a court verdict that held a financial bulletin board responsible for a libelous comment made by one of its users. Since moderators edited the posts, the court argued it had editorial control over the content and therefore could be held liable. Lawmakers thought this ruling sent the wrong message as it created an incentive not to moderate the content of online communications by creating liability for those who tried to regulate content. Consequently, they enacted a Good Samaritan law of sorts that shields intermediaries from legal liability in these instances.

Section 230 stems from the notion that, just as a library cannot be held responsible for a libelous statement contained in one of the publications it houses, internet providers should not be held responsible for what takes place on their platforms. The law has been generous towards social media networks and web sites in protecting them from a wide range of civil suits, excluding federal criminal law, intellectual property law, and electronic communications privacy law.

Some think the law has gone too far. For example, when a Cincinnati Bengals cheerleader sued TheDirty.com for an anonymous post suggesting that she had had intercourse with half of the football team and had contracted STDs in the process, her lawsuit failed because of section 230. Even though the web site actively solicited this type of unverified gossip, and added commentary to the posts, it found protection under section 230 when the 6th U.S. Circuit Court of Appeals overturned a lower court’s ruling.

On the other hand, some have argued that no line of federal code has generated more wealth and prosperity than section 230. They contend that this law has enabled the tech industry’s boom, by allowing companies to dedicate their resources to innovation and development, rather than to compliance and legal fees.

But section 230 has been chipped away at; last April Congress enacted a law that excluded from section 230’s protection certain activities linked to human sex trafficking and prostitution. In other words, sites can now be held civilly and criminally responsible for content posted by their users that promotes or facilitates prostitution and sex trafficking. Critics have argued that this would result in sites shutting down parts of their platforms rather than policing them, and some fear this is the beginning of the end for section 230.

However, while this legislation was pending, section 230 netted an important victory last January when a U.S. district court in California ruled that Airbnb could not be held liable for renters violating their lease agreements by subletting their units on the platform. An L.A. housing company sued the online home rental service because it allowed this practice to continue, profited from it, and refused to reveal the names or addresses of the renters subleasing their units. The court ruled in Airbnb’s favor, granting it protection under section 230.

How the courts will interpret section 230 and whether or not lawmakers will further limit its scope will be crucial policy issues for years to come.

February: Embedding a Tweet Might Violate Copyright

One of many yet-to-be resolved issues in digital copyright law is whether or not one can use images from social media platforms in news reporting. Since these images are copyrighted, the question is whether or not this practice would be covered under fair use. There is not enough case law available to answer this question with certainty.

However, until recently, embedding content was a way to side-step these copyright concerns. When one embeds content, for instance a tweet, one does not make an actual copy of the tweet. Instead, a line of code is inserted instructing a user’s browser where to locate the image to be displayed, in this case on Twitter’s servers. Since the creator of the site never makes a copy or displays the image on her own site, there is no copyright infringement and the fair use issue is never reached.

From a copyright perspective, this practice of inline linking, where a line of code links to an object or image hosted on another site, was generally considered to be safe, as courts applied this “server test” to copyright questions. As long as you only inline link and don’t make or host a copy on your server, a judge applying the server test would not consider this a “public display” of a copyrighted work.
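The distinction the server test draws can be made concrete in code. The sketch below is purely illustrative (the URL, file names, and function names are invented): the first function embeds by reference, leaving the image on the original host’s server, while the second downloads a copy, which is the version the server test treats as a potentially infringing display.

```python
import urllib.request

# A made-up placeholder URL standing in for an image hosted on a social platform.
IMAGE_URL = "https://social-platform.example/media/photo123.jpg"

def inline_link(url: str) -> str:
    """Embed by reference: the page contains only a pointer. The image is
    served by the original host, and no copy ever touches our server."""
    return f'<img src="{url}" alt="embedded post">'

def host_a_copy(url: str, local_path: str) -> str:
    """Embed by copying: download the file and serve it ourselves. Under
    the server test, this is the version that counts as a public display."""
    urllib.request.urlretrieve(url, local_path)  # a copy now lives locally
    return f'<img src="/{local_path}" alt="copied post">'

# Only the reference version is exercised here; no copy is ever made.
print(inline_link(IMAGE_URL))
```

The page that inline-links looks identical to the reader, which is precisely why the legal line between the two approaches has proved so contentious.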

This test was adopted in 2007 by the 9th Circuit in a case in which a pornographic site sued Google for copyright infringement for displaying images of the site through image search. According to cyberlaw expert Eric Goldman, this ruling, as well as a ruling from the 7th Circuit, “dating back over a decade and with minimal conflicting precedent, led to many settled expectations and the proliferation of in-line linking.”

However, in February, a giant asterisk was added to this general rule, as a New York district court ruled that inline linking of a social media post could constitute an infringing public display of a copyrighted work. The case stemmed from a Tom Brady-related Snapchat post that subsequently made the rounds on Twitter, leading numerous media organizations to embed the tweet in stories they posted about the matter. (The post contained newsworthy information.)

The February ruling merely answered “yes” to the question of whether or not embedding the tweet constituted a “public display.” The court could still rule in later proceedings that this use was protected by fair use or accept other defenses from those defendants who did not settle. However, even if defendants were to prevail on those defenses, this ruling still means that inline linking is much riskier in the states covered by the Second Circuit than was previously thought.  In the summer of 2018, an appeal was denied, leaving the applicability of the server test in instances of embedding social content very much in doubt.

March: Can The President Block Twitter Users?

In March, oral arguments were held in a case questioning whether the president could block Twitter users from his account under the First Amendment. The case was brought by seven citizens who were blocked by President Trump after replying critically to his tweets. Represented by the Knight First Amendment Institute, the plaintiffs argued that the space on Twitter where users can interact with the president’s tweets is a government-controlled space. Two months later, the U.S. District Court for the Southern District of New York issued an opinion agreeing with the plaintiffs.

Applying the forum doctrine, the court argued that the part of the president’s account where people can interact with the president is a designated public forum. It is a long-standing principle in American jurisprudence that government officials cannot deny access to a forum designated for public discussion on the basis of viewpoint. The case is currently being appealed.

Government officials taking to privately-owned social media platforms to communicate with their constituents should realize that the nature of social media allows for feedback, reactions and criticism. And if government officials open themselves up to reactions, they cannot then selectively close themselves off from them.

Government officials would probably not be held responsible if a social medium deleted a reaction to one of their posts because it violated its own policies, even if that speech would be protected by the First Amendment. Since the government officials do not exercise control over the social medium’s acceptable uses policies and their enforcement, they could not be held accountable for a platform’s decision to delete a reaction or an account. But in this case, Twitter did not have a problem with the tweets, the president did.

There are currently a number of lawsuits stemming from public officials blocking people from their social media accounts. Public officials should realize that they cannot use their accounts as a pulpit to preach to the converted, without expecting some amount of criticism and dissent.

At the center of cases like this one is the question of who truly controls these accounts. The government entity’s control will always be limited and easily overridden by the social media platform. Whether or not this amount of control qualifies these accounts as (designated) public forums will be an important question for courts to answer.

April: The Changing Narrative Surrounding Facebook

In April, a contrite Mark Zuckerberg received a symbolic lashing from senators and congressmen during his testimony in the wake of the Cambridge Analytica scandal. This was only the beginning of what turned out to be an annus horribilis for the social media giant, during which a steady drumbeat of media exposés called Facebook’s ethics into question.

Zuckerberg’s standard line of defense in these types of PR crises (when denial is no longer a credible option) is to paint the company as a victim of its own innovativeness. To paraphrase: “When charting new territories and connecting billions of people in a way that has never been accomplished in human history, mistakes are unavoidable, even by a company that aims to do the right thing. Let’s learn from them and then try to do better next time.”

From an ethical point of view, I have always found this a cop-out. Being a trailblazer requires a higher standard of ethical care and does not justify the trial-and-error approach that Silicon Valley seems to embrace at times. Risking exposure to significant harm should not be the price we pay for technological progress. But worse for Zuckerberg is that even this shaky defense seems to have run its course. Media reports have told the tale, not of an earnest company making a ham-fisted mistake from time to time, but of a cynical powerbroker whose main concern is finding new ways to monetize its users’ data. Changing this narrative will be the company’s main challenge for 2019. Deciding which narrative to believe will be the main challenge for its users.

May: The GDPR Goes into Effect

Those advocating the need for stricter privacy laws often point approvingly to the European approach to regulating privacy, which has been much more hands-on than the American one. In 2014, for example, the European Court of Justice ruled that EU data protection laws recognize a right to be forgotten, requiring search engines to grant requests to delist information pertaining to individuals that is no longer relevant. Another privacy milestone was reached on May 25, when the General Data Protection Regulation took effect in Europe, updating the Data Protection Directive of 1995.

The law strengthens the provisions of its predecessor by requiring unambiguous consent from data subjects before collecting their data, which can only be used for specific purposes. Secondary uses or selling of data also cannot occur without permission, and European citizens can demand that their information be corrected or deleted. The law also comes with teeth in the form of steep fines that can run up to 20 million euro or 4% of a company’s worldwide annual turnover, whichever is greater.
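The “whichever is greater” cap is a simple maximum of two prongs. A minimal sketch (the company figures are invented for illustration):

```python
def gdpr_fine_cap(annual_turnover_eur: float) -> float:
    """Upper bound on a fine for the most serious GDPR infringements:
    EUR 20 million or 4% of worldwide annual turnover, whichever is greater."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# For a hypothetical firm with EUR 40 billion in annual turnover,
# the 4% prong dominates: a cap of EUR 1.6 billion.
print(gdpr_fine_cap(40_000_000_000))  # 1600000000.0

# For a small firm with EUR 5 million in turnover, the flat floor applies.
print(gdpr_fine_cap(5_000_000))       # 20000000
```

Note that this is only the statutory ceiling; the actual fine in any given case is set by the supervisory authorities.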

What the full effect of the law will be remains to be seen. Critics have pointed out that the high cost of compliance could stifle innovation, particularly in the field of A.I. There’s also some degree of confusion about how the law will be applied and enforced. On the other hand, the experience of recent years on this side of the Atlantic has made it amply clear that free market mechanisms fail to provide adequate privacy protections.

In the wake of the many privacy scandals, American lawmakers are pondering federal privacy regulation. If a bill passes, which is by no means certain, Silicon Valley’s economic concerns are likely to weigh more heavily on the draft than they did in the case of the GDPR, which reflects a strong German pro-privacy stance. How federal and state lawmakers balance the hard-to-ignore calls for tougher data protection laws with the technology sector’s resistance to burdensome regulations will be the defining feature of any (potential) new privacy legislation in this country.

June: Digital Forensics Help Authorities to Identify Leakers

In June, former Air Force linguist and NSA contractor Reality Winner pleaded guilty to leaking a classified document about Russian interference with the election to The Intercept.  Two months later, she would be sentenced to 63 months in prison, the longest sentence ever imposed for such an offense. As the Trump administration continues its war on leakers, this case serves as a reminder for journalists and leakers alike to cover their digital tracks. As I wrote at the time of her arrest, there are numerous indications that the carelessness of Winner and The Intercept all but revealed her identity to investigators.

Last October, Terry James Albury, a former Minneapolis FBI agent, was sentenced to four years in prison for leaking national security information. Again, the recipient of the leaks was The Intercept. While the media outlet was not widely criticized this time around for the role it played in the arrest, investigators were able to zero in on Albury by comparing the leaked documents posted on The Intercept’s website with the logs of Albury accessing those documents on the bureau’s computer system. (For more details, see pages 8-11 of the search warrant affidavit, pp. 14-18 in the document.)
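The investigative technique described above is, at bottom, a set intersection over audit logs: find everyone whose access history covers all of the leaked material. A hypothetical sketch (the names, document IDs, and function name are invented and are not drawn from the actual affidavit):

```python
def leak_candidates(leaked_docs: set[str],
                    access_log: dict[str, set[str]]) -> set[str]:
    """Return the users whose access history covers every leaked document."""
    return {user for user, docs in access_log.items() if leaked_docs <= docs}

# Invented audit log: which internal documents each user opened.
access_log = {
    "user_a": {"doc1", "doc2"},
    "user_b": {"doc1", "doc2", "doc3"},
    "user_c": {"doc3"},
}

# Only user_b accessed both leaked documents.
print(leak_candidates({"doc1", "doc3"}, access_log))  # {'user_b'}
```

The more documents leaked, the smaller this candidate set tends to get, which is why publishing several documents at once can be so identifying.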

The hunt for leakers did not start with President Trump. It is well documented that more leakers were prosecuted under President Obama than under all previous administrations combined. This has led many to question why Obama was less tolerant of leakers than other presidents. But that may be the wrong question to ask; we should wonder instead why his administration was so much more successful at identifying leakers to prosecute. Washington lawyer Mark S. Zaid provided a possible explanation in the Washington Post:

"The Obama administration was able to prosecute more leakers because technology is a double-edged sword, making it both easier to leak vast amounts of classified information and to trace those leaks directly to their sources. This, combined with often poor tradecraft by leakers or journalists, means it is now much simpler for the government to collect tangible evidence permitting prosecution in ways that were previously unavailable."

As there is little sign of the Department of Justice backing off from prosecuting leakers, President Obama’s record number of leak prosecutions might well be broken by the current administration. Moving forward, journalists and sources alike will have to (continue to) use digital technologies to cover their tracks if they want to change the tide.

July: #Planebae and the Limits of Privacy

It was the feel-good internet story of the summer that quickly turned dark. Two attractive young people end up on a plane next to each other and seem to hit it off: they talk about their mutual love of fitness, share family pictures, and chat about their moms. And at the end of the friendly plane ride, they leave the airport together. How do we know? Because co-passenger Rosey Blair, whose request to swap seats had brought the two together in the first place, live-tweeted the whole encounter. She snapped pictures (though she made their faces unrecognizable) and provided updates to a rapidly growing number of followers (“Breaking. They both left for the bathroom at the same TIME.”)

Once the nation had regained its collective breath, and after the woman involved in the encounter expressed her displeasure at the exploitative and privacy-invading nature of this narrative, questions about ethics emerged. When is it ok to take pictures in semi-public spaces? Can you live-tweet without permission of subjects? Earlier this year, I discussed the ethical components of this episode, but what about the legal issues? Sandy Davidson from the University of Missouri discussed some of the legal angles of this story for TWiT network.

There are no indications that the woman in this story actually sued, and if she had, she would probably have faced an uphill battle. Ultimately, Blair’s behavior would probably not be considered outrageous enough to meet the standards for intentional infliction of emotional distress. She also couldn’t be held responsible for the fact that the woman on the plane had been doxed and harassed online by third parties.

Had Blair recorded the conversation, this might have constituted a violation of the Federal Wiretap Act, which requires that at least one party to a conversation consent before it can be recorded; since Blair was not a party to the conversation, a surreptitious recording would have lacked that consent. Since this happened in federal airspace, federal law might have applied (some states have more stringent requirements, demanding that all parties to the conversation consent to the recording).

However, the law only applies to conversations in which one has a reasonable expectation of privacy, and it could be argued that this standard is not met for a conversation in an airplane that can be overheard by people in the surrounding rows. For the same reason, an intrusion-upon-seclusion privacy lawsuit might not succeed: Blair did not intrude into a private space.

Another generally recognized privacy tort, publication of private facts, also would not have been applicable as the information that was being divulged was not private enough for its disclosure to meet the standard that it is “offensive to a reasonable person.” Had Blair tweeted intimate revelations she overheard about their personal lives instead of fitness tips, this analysis might be different.

A libel suit also would have little chance of success since a plaintiff would have to show that the information was false and defamatory. Assuming for a minute that the info was false, implying that someone is flirting with someone is unlikely to be considered defamatory (i.e. lowering someone’s standing in a community). Had Blair falsely implied that the pair had sex in the bathroom, for example, then a libel suit could have been successful. (But in that case our hypothetical plaintiff would have to show that people in her community still recognized her despite the fact that Blair had made efforts to make her unrecognizable in the pictures.)

In all likelihood, a lawsuit would not have provided the woman with much redress, and that might not necessarily be a bad thing: it could have a chilling effect on speech if everyone who slips up on social media could be hauled into court. But this does not change the fact that our laws, especially common law, were developed in a pre-digital era, when it could be assumed that most areas were free of recording devices.

As the ubiquity of recording devices is reaching its saturation point, it can be argued that we no longer have an expectation of privacy in semi-public places. At this point in time, should we not expect that wherever we go, someone will be recording on their phone, tablet or any other personal digital device? While this reality seems to necessitate stronger privacy protections, it also seems to raise the bar for privacy plaintiffs to establish that they had a reasonable expectation of privacy in the first place. The less privacy we have, the less it can be invaded.

August: Libel Suit Against Alex Jones Goes Forward

August marked an important victory for the parents and family of several Sandy Hook shooting victims when a judge in Texas allowed their libel suit against Alex Jones to go forward. The parents sued Jones for peddling the conspiracy theory that the shooting never took place and that all the people involved were actors working for some behind-the-scenes dark force.

As a result, numerous parents received death threats, causing some of them to have to move residences several times. Lawsuits were filed in Connecticut (where the shooting took place) and Texas (home of Infowars). The Texas lawsuit survived a motion to dismiss last August; the parents suing in Connecticut have not yet cleared that hurdle but were granted access to Infowars’ internal marketing documents this January.

Jones’ lawyers have argued that his statements were rhetorical hyperbole rather than statements of fact. Indeed, believability is usually an important factor in libel cases, and plaintiffs are required to establish that a reasonable person would take the statements as true. Ironically, an important defense for Jones would be that his audience does not fit that description.

It is far too early to predict how these suits will play out in court, assuming they advance and no settlement is reached. The parents of Sandy Hook victims would make for sympathetic plaintiffs in front of a jury, so lawyers for Jones might very well advise their client to settle. Regardless of the outcome of this case, though, it is clear that libel suits have emerged as powerful tools to combat certain types of extreme speech online.

September:  Libel Suits Target Presidential Tweets

In September, lawyers for the president were busy yet again, fending off libel suits from women whose veracity he had questioned following accusations and revelations they made pertaining to him. Adult film star Stormy Daniels and her colorful lawyer Michael Avenatti dominated the headlines in their effort to procure a favorable judgment; however, their case was dismissed in October by a federal judge in California.

The suit stemmed from a claim by Daniels that she had been threatened back in 2011, when she was about to come forward concerning her alleged affair with the current president. In a tweet, Donald Trump called the claims, as well as the drawing she had made of the man who had allegedly threatened her, a “con job.” In dismissing the lawsuit, the court ruled that the tweet was “a hyperbolic statement against a person who has sought to publicly present herself as a political adversary to him.” Even if Daniels could have the verdict overturned, she would, as a public figure, still have to prove that Trump knew his statement was false, which would require her to establish that Trump knew the intimidation had in fact happened. A steep hill to climb.

In the shadow of the Stormy Daniels case, Summer Zervos has had more success in moving forward with her case against the president. The former “The Apprentice” candidate has accused Trump of groping and kissing her on two different occasions, upon which the president called her claims, and those of others similar to hers, “total lies” and “made up nonsense to steal the election.” Zervos has argued that these denials portray her as a liar and are therefore libelous. The case is currently unfolding in New York state court, with lawyers for the president arguing that a sitting president cannot be sued in state court. As the Tweeter in charge continues to use his Twitter account to disparage opponents and critics, it would undoubtedly be a relief for his legal team if a court followed that argument.

October: A Supreme Court Case Rankles Silicon Valley

As social media platforms collectively kicked Alex Jones off their services last summer, some critics invoked the First Amendment to label this move an attack on freedom of speech. Others were quick to point out the flaw in this censorship argument: social networks are privately-owned enterprises free to decide what speech to ban from their digital spaces. However, last October the Supreme Court decided to hear a case that could upend this analysis.

At first sight, the case does not seem to have anything to do with social media, as it revolves around the question of whether or not public access television networks should be considered state actors. The case stemmed from a pair of videographers who claimed that they had been barred from a public-access TV network because it disapproved of the content of their program. While such actions would be perfectly legal if done by private actors, under the First Amendment government actors may not restrict speech on that basis.

The network in question is privately-owned under a city licensing agreement.  Reversing a lower court, the U.S. Court of Appeals for the Second Circuit ruled that privately-owned public access networks are public forums, as they are closely connected to government authority. As a result, owners of these forums are considered government actors bound by the First Amendment when regulating speech, the court ruled.

It is quite possible that the Supreme Court will issue a narrow ruling that only applies to the peculiar and particular ownership structure of public access TV. However, if these types of networks were to be considered public forums, a broader ruling upholding the decision could have significant consequences for social media companies’ ability to regulate speech on their networks. In that case, only speech that runs afoul of the First Amendment could be removed from their networks, and the government could at the same time dictate certain rules for content moderation. While the chances of the Supreme Court issuing a ruling broad enough to have this consequence seem slim, the mere possibility is sure to make this a closely watched case.

November: Neo-Nazi Gets Sued for Online Hate Campaign

What is the difference between incitement and a threat? At first glance, the answer seems straightforward. Incitement requires that a speaker tells other people to engage in an illegal action, while a threat requires that a speaker credibly communicates to an individual an intent to harm that individual.

Neither type of speech is protected. Incitement is illegal because it leads to illegal actions that can harm someone’s safety. Threats are illegal because they put people in fear for their physical safety (which is why there is no requirement that the sender intend to execute the threat). But on the Internet this distinction is not always clear. What if a person posts the home addresses of abortion clinic staff online, crossing out the names of those who have been killed? Does it matter if the poster claims to have just wanted to act as a record keeper?

Or what if an extremist Muslim group puts the names and work addresses of the creators of South Park on a site after they created an episode mocking the Prophet Muhammad? Does it matter that the group claims it only wanted to “warn” them, if its message is accompanied by a picture of a slain filmmaker, killed by an extremist after being accused of mocking Islam?

In those instances, the messages are a mixture of threats and incitements. They are threats because they put the intended targets in fear of their lives, but at the same time, the senders do not communicate any intention to commit an act of violence. They merely suggest that others might/should commit these acts, rendering them more incitement than threats.

However, ever since Brandenburg v. Ohio (1969), the standard that must be met to establish incitement is that the illegal action advocated in the speech is “directed to inciting or producing imminent lawless action” and is “likely to incite or produce such action.” This is a high bar to clear, particularly for mediated Internet speech, where speakers are rarely in close proximity to one another and where there is often a time lapse between the sending and reception of the message. It is unlikely for an online statement to meet the definition of incitement.

Consequently, speech that appears to be online incitement is often treated as a threat or intimidation. Take for example the case of Tanya Gersh, a Jewish woman from Whitefish, MT who found herself in the crosshairs of a “troll storm” by the neo-Nazi site the Daily Stormer. She had drawn the ire of its founder, Andrew Anglin, after the mother of white nationalist Richard Spencer accused Gersh of strong-arming her into selling her property in Whitefish because of her son’s radical politics.

Through the Daily Stormer, Anglin called on his followers to contact Gersh and to tell her what they “thought about her,” resulting in Gersh and her family being bombarded with vicious anti-Semitic hate messages. Some of these messages clearly constituted illegal threats, but they came from anonymous senders, not from Anglin, who had warned his followers not to engage in threats of violence.

Gersh nevertheless sued Anglin for invasion of privacy, intentional infliction of emotional distress, and violations of Montana's Anti-Intimidation Act. In November, a federal judge denied Anglin’s motion to dismiss the lawsuit on First Amendment grounds. How the Anti-Intimidation Act (essentially an anti-threat statute) will be applied to this case will provide further guidance on the applicability of anti-threat statutes to these types of online incitement.

December: Tumblr Bans Adult Content

In December, Tumblr’s ban on pornography took effect. The ban was rumored to have been precipitated by Tumblr’s removal from the Apple App Store due to the presence of child pornography on its network. Barring all adult content might be more convenient than policing all the accounts containing nudity for the presence of underage subjects. The ban has been criticized because Tumblr was a preferred platform for people interested in less conventional ways of experiencing sexuality, who used it to express themselves and find like-minded souls.

Even though these users might ultimately find a platform and community elsewhere, the issue brought to light yet again the ultimate powerlessness of users against the arbitrary content-restricting decisions made by the powers that be in Silicon Valley. Mark Zuckerberg has a suggestion for how to make this process more democratic and transparent: a Supreme Court for Facebook, in which various stakeholders could be involved in the decision-making process. While this seems more a thought experiment than a concrete plan, the mere suggestion of farming out this crucial decision-making task illustrates how exasperated social media platforms have grown with the damned-if-you-do, damned-if-you-don’t reality of censoring online content, a dilemma that is unlikely to be resolved anytime soon.
