United States

Status: Free
Overall Score: 77/100
A Obstacles to Access: 21/25
B Limits on Content: 31/35
C Violations of User Rights: 25/40
Last Year's Score & Status: 78/100, Free

Overview

Internet freedom in the United States declined for the third consecutive year as immigration and law enforcement agencies expanded their surveillance of the public, eschewing robust oversight and transparency. Officials increasingly monitored social media platforms and conducted warrantless searches of travelers’ electronic devices, in some cases to glean information about constitutionally protected activities such as peaceful protests and critical reporting. While the online environment remains vibrant, diverse, and free from state censorship, disinformation was prevalent during the 2018 midterm elections and other key political events, at times exacerbated by top government officials and political leaders.

The people of the United States benefit from an open and competitive political system, a strong rule-of-law tradition, robust freedoms of expression and religious belief, and a wide array of other civil liberties. However, in recent years its democratic institutions have suffered erosion, as reflected in partisan manipulation of the electoral process, bias and dysfunction in the criminal justice system, flawed new policies on immigration and asylum seekers, and growing disparities in wealth, economic opportunity, and political influence.

Key Developments

June 1, 2018 – May 31, 2019

  • Consolidation of the telecommunications sector—including the expected merger of Sprint and T-Mobile as well as AT&T’s acquisition of Time Warner—threatened to limit consumer access to information and communication technology (ICT) services (see A4).

  • A decision by the Federal Communications Commission (FCC) to repeal the Open Internet Order went into effect in June 2018. State-level officials, technology companies, and civil society groups have since taken efforts intended to protect net neutrality in some states and nationwide (see A5 and B6).

  • Disinformation continued to permeate the online environment during sensitive political events, including the November 2018 midterm elections and congressional confirmation hearings for Supreme Court nominee Brett Kavanaugh. Domestic as opposed to foreign actors represented a growing source of misleading or false content online (see B5).

  • Law enforcement and immigration agencies increasingly monitored social media platforms and conducted warrantless searches of travelers’ electronic devices. In a number of worrisome cases during the coverage period, such monitoring targeted constitutionally protected activities such as peaceful protests and newsgathering (see C5 and C7).

  • In a ruling that was lauded by civil liberties experts, the Supreme Court held in Carpenter v. United States that the government is required to obtain a warrant in order to collect subscriber location information records from third parties like mobile service providers (see C6).

A Obstacles to Access

Access to the internet is widespread, though obstacles persist for people living in rural and low-income areas. The industry has trended toward consolidation, and most fixed-line subscribers only have one or two internet service providers (ISPs) from which to choose. Since the FCC voted in December 2017 to overturn the 2015 Open Internet Order, several states have passed their own legislation aimed at ensuring net neutrality.

A1 0-6 pts
Do infrastructural limitations restrict access to the internet or the speed and quality of internet connections? 6/6

The United States is one of the most connected countries in the world. However, the speed and availability of its broadband networks lag behind those of several other developed countries. According to the latest data available from the International Telecommunication Union, internet penetration in the United States stood at 76 percent at the end of 2017.1 The Pew Research Center reports that 90 percent of American adults use the internet.2 Broadband adoption rates have held steady, with home broadband use hovering around 73 percent as of February 2019.3 While broadband penetration is high by global standards, the fixed-line broadband subscription rate falls short of those in countries such as Switzerland, Denmark, France, the Netherlands, Norway, South Korea, Germany, Canada, and other member states of the Organisation for Economic Co-operation and Development (OECD).4

Uptake rates for internet-enabled mobile devices have increased dramatically throughout the United States in the past decade. In 2019, 96 percent of adults reported that they owned a mobile phone, and 81 percent of adults owned a smartphone, up from 35 percent in 2011.5

A2 0-3 pts
Is access to the internet prohibitively expensive or beyond the reach of certain segments of the population for geographical, social, or other reasons? 2/3

Access, cost, and usability of the internet remain barriers for some Americans, particularly senior citizens, people who live in rural areas, and those in low-income households.1 However, internet access rates for individuals aged 65 and older have steadily increased over the past decade, reaching 73 percent as of 2019, according to data from the Pew Research Center.2

The cost of broadband internet access is higher than in many European countries with similar internet penetration rates.3 In 2016, the FCC announced plans to expand its Lifeline program—which allows companies to offer subsidized phone plans to low-income households—to include broadband internet access as a subsidized utility.4 However, in 2017, the FCC issued a proposal to place restrictions on this program.5 If implemented, the proposal would limit the program to “facilities-based providers,” meaning internet resellers that do not own the network infrastructure would not be able to participate. Public interest advocates argue that the measure would significantly hamper the program’s reach and make it more difficult for low-income households to obtain affordable broadband internet access. As of November 2017, 68 percent of Lifeline recipients received service from nonfacilities providers, and in some cases there was no alternative provider in the area.6 Research from the Brookings Institution notes that the proposed policy would be especially detrimental to indigenous people living on tribal lands.7

Pew Research reported in 2019 that younger adults, people of color, and those with lower household incomes are more likely to be “smartphone dependent,” with limited options for internet access other than their phones.8

A3 0-6 pts
Does the government exercise technical or legal control over internet infrastructure for the purposes of restricting connectivity? 6/6

Internet users in the United States face few government-imposed restrictions on their ability to access content online. The backbone infrastructure is owned and maintained by private telecommunications companies, including AT&T and Verizon. In contrast to countries with only a few connections to the backbone internet infrastructure, the United States has numerous connection points, which would make it nearly impossible to disconnect the entire country from the internet.

At the same time, law enforcement agencies in the United States have occasionally wielded their power to inhibit wireless internet connectivity in emergency situations. The federal government has a secret protocol for shutting down wireless internet connectivity in response to particular events, some details of which came to light following a lawsuit brought under the Freedom of Information Act (FOIA) in 2013.1 The protocol, known as Standard Operating Procedure (SOP) 303, was established in 2006 on the heels of a 2005 cellular-activated train bombing in London. It codifies the “shutdown and restoration process for use by commercial and private wireless networks during national crises.” What constitutes a “national crisis” and what safeguards exist to prevent abuse remain largely unknown, as the full SOP 303 documentation has never been released to the public.2

State and local law enforcement agencies also have tools to jam wireless internet service.3 In 2014, the FCC issued an enforcement advisory clarifying that it is illegal to jam mobile networks without federal authorization, even for state and local law enforcement agencies.4

  • 1. The Electronic Privacy Information Center (EPIC) filed suit against the Department of Homeland Security (DHS) in 2013 for information about the protocol. After DHS won an appeal in the DC Circuit, the agency retained its exemption from disclosing SOP 303, and in July 2015 it released a redacted version of the protocol. Electronic Privacy Information Center, EPIC v. DHS – SOP 303, http://bit.ly/1GscPWS; Electronic Privacy Information Center, SOP 303 Updated Release, http://bit.ly/1WI9hZV.
  • 2. Electronic Privacy Information Center, EPIC v. DHS – SOP 303.
  • 3. Melissa Bell, “BART San Francisco Cut Cell Services to Avert Protest,” The Washington Post, August 12, 2011, http://wapo.st/1GscX8T
  • 4. Federal Communications Commission, WARNING: Jammer Use Is Prohibited, December 8, 2014, http://fcc.us/1L1RV2O.

A4 0-6 pts
Are there legal, regulatory, or economic obstacles that restrict the diversity of service providers? 4/6

While many broadband service providers operate in the United States, the industry has trended toward consolidation. Many consumers only have one choice when it comes to broadband providers, particularly for fixed-line service, allowing these companies to act as de facto monopolies in a given area.

In 2016, the FCC announced that it had voted to approve Charter Communications Inc.’s acquisition of Time Warner Cable and Bright House Networks; the transactions were subsequently approved by the California Public Utilities Commission.1 By the end of 2018, Charter and another cable firm, Comcast, controlled the majority of the market for fixed-line broadband internet access, with approximately 25 million and 27 million customers, respectively, out of an estimated 98 million broadband subscribers.2 AT&T was the third-largest fixed broadband provider, with 15.7 million subscribers, followed by Verizon with 7 million.3

Further consolidation of the telecommunications sector threatens to limit consumer access to ICT services and content. In June 2018, AT&T announced that it had acquired media and entertainment company Time Warner, a major content producer (not affiliated with the broadband provider Time Warner Cable).4 In July 2018, the Justice Department announced that it would appeal the court decision that had allowed the merger to proceed, arguing that it would hurt consumers.5 In February 2019, the US Court of Appeals for the District of Columbia Circuit upheld the lower court’s decision, and the Justice Department stated that it was not planning another appeal.6

The FCC has made some other attempts to address concerns about reduced competition and limited consumer access in recent merger approvals. For example, the commission included provisions within the 2016 Charter–Time Warner Cable deal that required Charter Communications to expand broadband availability to close the digital divide, including by establishing new cable lines in poorly served areas of California and providing affordable access to at least 525,000 low-income families.7 Other conditions prohibit the companies from taking steps that would privilege their cable television services over online video competitors, such as imposing data caps on online content that would discourage subscribers from streaming video.8 In 2015, regulators had blocked a proposed merger between Time Warner Cable and Comcast, citing concerns about Comcast’s ability to interfere with over-the-top services (such as Netflix) as well as increased market concentration.9

Americans increasingly access the internet via mobile technologies, as mobile service providers deploy advanced “long-term evolution” (LTE) networks. Following a decade of consolidation, the US mobile market is dominated by four national providers—AT&T, Verizon, Sprint, and T-Mobile. Verizon leads the market with 154 million subscribers, followed by AT&T with 150 million, T-Mobile with 77 million, and Sprint with 53.5 million.10

In May 2019, the FCC approved a proposed merger between Sprint and T-Mobile, and in July 2019, after the coverage period, the Justice Department granted its approval after reaching a settlement requiring Sprint to divest its prepaid mobile services to Dish Network.11 By September 2019, however, 17 state attorneys general had filed a lawsuit to block the merger.12 The US government had previously opposed further consolidation of mobile networks. Regulators had blocked AT&T’s proposed merger with T-Mobile in 2011 and separately signaled that they would block a rumored merger between Sprint and T-Mobile in 2014.13

The government has promoted mobile broadband through a series of spectrum auctions. In 2016, the FCC began the process of buying back airwaves set aside for television broadcasters to increase the available spectrum for mobile broadband, as outlined in the government’s 2012 National Broadband Plan, which set a goal of establishing universal broadband by 2020.14

In 2015, then president Barack Obama announced an initiative to encourage the development of community-based broadband services and asked the FCC to remove barriers to local investment.15 The FCC quickly preempted state laws in Tennessee and North Carolina that restricted local broadband services, arguing that such laws create barriers to broadband deployment.16 In 2016, a federal court ruled that the FCC does not have the authority to preempt such laws,17 which were also on the books in many other states. Critics contended that the ruling threatened to limit affordable broadband options for small and remote communities.

A5 0-4 pts
Do national regulatory bodies that oversee service providers and digital technology fail to operate in a free, fair, and independent manner? 3/4

The FCC is charged with regulating radio and television broadcasting, interstate communications, and international telecommunications that originate or terminate in the United States. It is formally an independent regulatory body, but critics on both sides of the political spectrum argue that it has become increasingly politicized in recent years. The FCC has jurisdiction over a number of internet-related issues, though this authority was curtailed when the commission voted in 2017 to reverse the 2015 Open Internet Order, which had provided the legal basis for the FCC to regulate broadband internet providers as common carriers.

The FCC is led by five commissioners who are nominated by the president and confirmed by the Senate, with no more than three commissioners from one party. President Donald Trump nominated Republican commissioner Ajit Pai to serve as chair in January 2017.1 The FCC is currently controlled by a Republican majority.

Other government agencies, such as the Commerce Department’s National Telecommunications and Information Administration (NTIA), play advisory or executive roles with respect to telecommunications, economic and technological policies, and regulations.

Since assuming his role as chair of the FCC, Pai has taken a number of steps toward deregulating the telecommunications industry, most notably the decision to diminish the FCC’s ability to regulate internet service providers and roll back a 2015 Order aimed at protecting net neutrality. In March 2017, the commission voted to freeze the broadband privacy guidelines that the FCC had passed the previous October.2 The guidelines would have required broadband providers to obtain opt-in consent from consumers before they could use and share information such as a user’s browsing history and application usage data, and would have given consumers the ability to opt out of the use and sharing of other types of personally identifiable information.3 In late March, Congress went a step further and voted to repeal the broadband privacy guidelines under the Congressional Review Act,4 which effectively prevents the FCC from enacting similar rules in the future.5 In February 2017, the FCC also ended its review of whether zero-rating practices, which provide free internet access under certain conditions, violate net neutrality principles and enabled the practice to continue.6 Critics argue that zero-rating services could harm competition.7

In December 2017, the FCC voted to reverse the 2015 Open Internet Order, often referred to as the net neutrality rules, which had reclassified broadband providers as a “telecommunications service.” The reclassification had empowered the FCC to prohibit “unreasonable discrimination,” meaning that network operators could not give preferential treatment to favored content or block disfavored content on either fixed or mobile networks. The repeal went into effect in June 2018,8 allowing ISPs to speed up, slow down, or block some websites in favor of others at will. The repeal also overturned some of the FCC’s regulatory authority over broadband ISPs.9 Pai argued that the change would reinstate a light-touch regulatory model that is good for innovation and for consumers.10 However, the move was sharply criticized by civil society and public interest groups, which argued that it would harm consumers,11 represented an abandonment of the FCC’s responsibility to protect freedom of expression online,12 and would likely result in a less free and open internet.13 Polls indicate that a majority of Americans support net neutrality.14

Several state legislatures, attorneys general, and civil society groups have since taken up the fight to ensure net neutrality (see B6). Twenty-one state attorneys general filed a lawsuit with the US Court of Appeals for the District of Columbia Circuit, claiming that the FCC’s decision was “arbitrary and capricious” and violated several aspects of federal law.15 Civil society groups and nonprofits—including Mozilla,16 Public Knowledge,17 the Open Technology Institute,18 and Free Press19—filed protective petitions urging the US Courts of Appeals for the First and District of Columbia Circuits to review the FCC’s decision. In October 2019, after the coverage period, the appeals court in Washington upheld the FCC’s repeal of the Open Internet Order,20 though it ruled that the FCC cannot preemptively block states from instituting their own laws intended to safeguard net neutrality.

Meanwhile, the governors of Montana and New York signed executive orders barring state agencies from conducting business with ISPs that violate net neutrality,21 and legislatures in several other states were considering bills that would require ISPs to abide by net neutrality principles.22 In September 2018, California passed its own net neutrality law. The US Justice Department announced plans to sue the state hours after the bill was signed into law,23 but the lawsuit was put on hold after the department and California officials agreed to delay enforcement.24 California’s law was expected to be suspended until the federal lawsuit challenging the FCC’s repeal of the Open Internet Order is resolved.25 The October 2019 federal appeals court ruling bolstered California’s case for enforcing its own law.26

B Limits on Content

The United States is generally free from government censorship of online content, though the passage of the Allow States and Victims to Fight Online Sex Trafficking Act, or SESTA/FOSTA, in March 2018 has had the unintended consequence of pushing companies to preemptively remove legitimate content. Disinformation continues to be prevalent online, ramping up ahead of key political events like elections, and such content is increasingly generated by domestic as opposed to foreign actors.

B1 0-6 pts
Does the state block or filter, or compel service providers to block or filter, internet content? 6/6

In general, the US government does not force ISPs or content hosts to block or filter online material that would be considered protected speech under international human rights law, such as political speech.

B2 0-4 pts
Do state or nonstate actors employ legal, administrative, or other means to force publishers, content hosts, or digital platforms to delete content? 3/4

The government does not directly censor any particular political or social viewpoints online, although legal rules do restrict certain types of content. Generally, illegal material, including child sexual abuse imagery, is subject to removal through a court order or similar legal process if it is hosted within the United States. A new law meant to combat online sex trafficking has drawn criticism for effectively encouraging companies to restrict legitimate content in their efforts to avoid penalties.

In recent years, there has been more public and political scrutiny of technology platforms’ policies regarding content removal and account restrictions. For example, in September 2019, after the coverage period, a group of senators criticized Facebook for a fact-checking review in which three doctors flagged an antiabortion group’s videos claiming that abortion is never medically necessary to save a woman’s life.1 Facebook removed the review after the senators accused the platform of bias against conservative viewpoints.

Generally, however, government pressure on ISPs or content hosts to remove content is not a widespread problem. Social media companies and other content providers may remove content and accounts that violate their terms and conditions of use.2 The most prominent example from the coverage period was an incident in which several companies—Facebook, YouTube, Apple, PayPal, Spotify, and eventually Twitter—separately decided to ban and remove content from the far-right conspiracy theorist Alex Jones, citing hate speech provisions in their terms of service.3

Government officials, including the president and members of Congress, have called on social media companies headquartered in the United States to use their technology to tackle terrorist content online, though the government has not forced companies to take any specific proactive steps.4 In May 2019, White House officials announced that the United States would not sign on to the Christchurch Call, an agreement between social media companies and numerous national governments to combat terrorist content online; the pact was forged after a gunman live-streamed his attacks on mosques in Christchurch, New Zealand.5 US officials cited concerns that the agreement would clash with the constitution’s First Amendment.6

Section 230 of the Communications Decency Act shields providers and content hosts from legal liability for some material created by users, such as standard speech torts like defamation and injurious falsehood.7 However, exceptions to this immunity exist, including under federal criminal law, intellectual property law, and electronic communications privacy laws. Social media companies and other content providers often choose to remove content that violates their terms and conditions or their community guidelines.8 Many of the concerns regarding excessive or insufficient moderation of content on these platforms are centered on how the companies enforce their own rules (see B3).

The Allow States and Victims to Fight Online Sex Trafficking Act, also referred to as SESTA/FOSTA, was passed by Congress in March 2018 and signed into law the following month. The law established legal liability for internet services that are used to promote or facilitate the prostitution of another person.9

While intended to address the problem of sex trafficking facilitated through the internet, the law had the unintended consequence of pushing companies to remove legitimate content. After the bill was passed by the Senate but before it became law, reports surfaced of companies preemptively censoring content: Craigslist announced that it was removing the “personals” section from its website altogether.10 Civil society activists criticized the law for motivating companies to engage in excessive censorship in order to avoid legal action.11 Sex workers and community advocates also argued that the law threatened their safety, since the affected platforms—such as Backpage, sections of Craigslist, and other online forums—had made it possible for sex workers to leave exploitive situations and operate independently, communicate with one another, and build protective communities.12

Under Section 512 of the Digital Millennium Copyright Act (DMCA), companies have an incentive to err on the side of caution and remove any hosted content that is subject to a DMCA notice. This has led to cases in which overly broad or fraudulent DMCA claims resulted in the removal of content that would otherwise be excused under provisions for free expression, fair use, or education.13 In some instances, DMCA complaints have been exploited to take down political campaign advertisements, since their immediate removal means that they will be unavailable during the electoral period, and the claims are unlikely to be challenged in court after the campaign ends.14

Between July and December 2018, Facebook reported that it took no actions to restrict content on its platform in response to any government requests from the United States.15 For the same time period, Twitter reported that it received 83 requests from US actors but complied with none.16 The platform also removed 245 accounts for violating its terms of service. For its most recent report covering January through June 2018, Google reported that it received 1,002 requests from the United States to remove or restrict content across its different services, primarily for defamation and fraud.17 About half of these requests were for the Google search engine. Facebook and Google have not released information about content each platform removed for violating community standards or terms of service.

B3 0-4 pts
Do restrictions on the internet and digital content lack transparency, proportionality to the stated aims, or an independent appeals process? 4/4

The government does not place onerous restrictions on online content, and domestic laws do not allow for broad government blocking of websites or removal of content. Companies that host user-generated content, many of which are headquartered in the United States, have faced criticism in recent years for a lack of transparency and consistency when it comes to enforcing their own rules on content moderation.

One of the most significant protections for online free expression in the United States is Section 230 of the Communications Act of 1934, as amended by the Telecommunications Act of 1996 (commonly known as CDA 230), which generally shields online sites and services from legal liability for the activities of their users, allowing user-generated content to flourish on a variety of platforms.1 However, public concerns about intellectual property violations, child sexual abuse imagery, protection of minors from harmful or indecent content, harassing or defamatory comments, publication of commercial trade secrets, illegal gambling, financial crime, and terrorist content have provided a strong impetus for legislative and executive action. Some of the resulting laws, such as SESTA/FOSTA of 2018, undermine the broad protections for intermediaries under CDA 230.2

Over the past two decades, Congress has passed several laws designed to restrict adult pornography and shield children from harmful or indecent content online, such as the Child Online Protection Act of 1998 (COPA), but these laws have generally been narrowly written or curbed by courts to avoid infringements on the constitution’s First Amendment, which protects freedom of speech and freedom of the press.

By contrast, advertisement, production, distribution, and possession of child sexual abuse images—on the internet and in all other media—are strictly prohibited under federal law and can carry a sentence of up to 30 years in prison. According to the Child Protection and Obscenity Enforcement Act of 1988, producers of sexually explicit material must keep records proving that their models and actors are over 18 years old. In addition to prosecuting individual offenders, the Justice Department, the Department of Homeland Security, and other law enforcement agencies have asserted their authority to seize the domain names of websites allegedly hosting child abuse images after obtaining a court order.3

The SAVE Act, which was intended to help prevent the sex trafficking of children, became law in 2015.4 The final text was changed to make it illegal to knowingly advertise content related to sex trafficking, a higher requirement than an earlier draft that would have established liability for “knowledge of” or “active disregard for the likelihood of” hosting such content.5 Nevertheless, the law establishes federal criminal liability for third-party content, which some civil society groups and tech experts have argued could encourage companies to err on the side of censorship rather than risk criminal penalties, or to limit the practice of monitoring content altogether so as to avoid “knowingly” promoting illegal content.6

The Children’s Internet Protection Act of 2000 (CIPA) requires public libraries that receive certain federal government subsidies to install filtering software that prevents users from accessing child sexual abuse images or other visual materials that are considered obscene or harmful to minors. Libraries that do not receive the specified subsidies from the federal government are not obliged to comply with CIPA, but more public libraries are seeking federal aid in order to mitigate budget shortfalls.7 Under the Supreme Court’s interpretation of the law, adult users can request that the filtering be removed without having to provide a justification. However, not all libraries allow this option, arguing that decisions about filtering should be left to the discretion of individual libraries.8

More recently, much attention has been focused on the seemingly arbitrary censorship conducted by content hosts. Critics from across the political spectrum have noted a lack of transparency from platforms including Facebook, Twitter, and YouTube regarding the enforcement of their respective community standards or terms of service. Some have alleged that rules against hate speech, for example, have led to the removal of comparatively mild content, even as other speech that appears more inflammatory remains accessible.9 In June 2017, the investigative journalism organization ProPublica cited an example in which Facebook censored a Black Lives Matter activist for saying that white people were generally racist, but allowed a post from a US congressman who called for the hunting and killing of “radicalized” Muslims.10 YouTube and Twitter have faced similar critiques.11

Conservatives, including President Donald Trump, have accused social media platforms of deliberately censoring conservative views, though they have offered little evidence of a consistent political bias.12 Amid this pressure, Facebook agreed to commission an audit into the issue, which was led by a former Republican senator and a private law firm.13 In August 2019, Facebook released an inconclusive report on the audit’s findings.14 In May 2019, the administration also launched an online form that allowed people to report instances of perceived social media censorship, though the website is no longer active.15 The White House held a “social media summit” in July to promote discussion of such bias claims, with attendees ranging from more mainstream conservative figures to online personalities who have peddled far-right conspiracy theories.16 In August 2019, after the coverage period, news outlets reported that the Trump administration was circulating a draft executive order titled “Protecting Americans from Online Censorship” that proposed tasking the FCC and the Federal Trade Commission (FTC) with investigating complaints from people who feel that social media platforms have improperly censored them.17 The draft proposal had not been formally released as of October 2019.

The greater scrutiny of content moderation has led companies to be more forthcoming about their internal policies and enforcement actions.18 During the coverage period, Facebook proposed creating its own independent oversight board that could hear user appeals of the company’s content moderation decisions.19 In September 2019, Facebook affirmed plans for the oversight board and provided further details on its charter and structure, which would consist of 40 members serving three-year terms.20 The board is expected to be in operation by early 2020.

B4 0-4 pts
Do online journalists, commentators, and ordinary users practice self-censorship? 3 / 4

There have been reports of self-censorship among journalists, lawyers, and ordinary internet users. Women and minorities are frequently the targets of online harassment and abuse, which is one of the driving forces behind self-censorship (see C7). A 2017 Amnesty International survey of women in eight countries, including the United States, found that 76 percent of respondents who had experienced online harassment changed how they used social media as a result.1

Users also reportedly change their behavior in response to their awareness of extensive government surveillance. A study published in Journalism & Mass Communication Quarterly in 2016 found that priming participants with subtle reminders about mass surveillance had a chilling effect on their willingness to publicly express dissenting opinions online.2 Another study from October 2018 reaffirmed the impact of online surveillance on self-censorship.3

Studies over the past several years have concluded that aggressive leak investigations by the Justice Department—as well as expansive government surveillance programs such as those disclosed by former National Security Agency (NSA) contractor Edward Snowden in 2013—cause journalists and other writers to self-censor, and cast doubt on reporters’ ability to protect the confidentiality of their sources.4

B5 0-4 pts
Are online sources of information controlled or manipulated by the government or other powerful actors to advance a particular political interest? 2 / 4

The proliferation of disinformation—particularly on social media—remains a prominent concern. Both foreign adversaries and domestic actors regularly disseminate misleading or false information online.

Disinformation campaigns orchestrated by foreign powers continue, but they appeared to be less influential during the coverage period than in previous years. Twitter and Facebook deleted hundreds of accounts traced to Russia and Iran that had evidently been used for manipulation efforts, including during the November 2018 midterm elections.1 Similarly, during the prodemocracy protests that began in Hong Kong in March 2019 and intensified in June, Twitter removed a number of accounts that were linked to China but had been impersonating users in the United States.2 Twitter accounts believed to be connected to Russia also spread disinformation related to the congressional confirmation hearings for US Supreme Court nominee Brett Kavanaugh in September 2018.3

Disinformation campaigns related to elections are increasingly homegrown, at times emboldened by government officials and politicians.4 A study by the Oxford Internet Institute (OII) found that ahead of the 2018 midterms, domestic alternative online outlets were the main purveyors of “junk news,” a term defined broadly to encompass deliberately incorrect, deceptive, or misleading information, including content that is propagandistic, ideologically extreme, hyperpartisan, or conspiratorial.5 Disinformation around elections has come from both sides of the political spectrum. In an examination of 2.5 million Twitter posts and nearly 7,000 pages on Facebook, OII found that far-right and conservative pages spread more junk news than all other source categories combined.6 However, in an example from the left, the New York Times reported in December 2018 that in the lead-up to a 2017 special Senate election in Alabama between Democrat Doug Jones and Republican Roy Moore, a group of Democrat-aligned operatives with no connection to Jones’s campaign created a fraudulent Facebook page and used Twitter accounts to support Jones’s candidacy and harm Moore.7 Disinformation targeting candidates in the 2020 elections was already on the rise during the coverage period.8

Political disinformation from domestic sources abounded during other key political events from the coverage period.9 Amid the Senate’s hearings on the Kavanaugh nomination, researchers documented widespread online efforts to discredit Christine Blasey Ford, a witness who testified that Kavanaugh had sexually assaulted her when both were in high school.10 The website Right Wing News, for example, created numerous Facebook accounts and pages under different names to spread false information about Ford, including a rumor that Democrats were paying for her lawyers.11 Facebook later said it had removed Right Wing News from its platform for violating its terms of service.12 Similar disinformation that originated on gossip sites, far-right outlets, and discussion forums like 4chan found its way to prominent conservative users on Twitter, who shared it more broadly.13 Disinformation from hyperpartisan and alternative sites, along with bots, also proliferated in April 2019, after the Justice Department’s special counsel, Robert Mueller, released his full report on Russian interference in the 2016 election.14

Misleading and fraudulent political content is often propagated by President Trump himself through his official social media accounts.15 During the coverage period, he promoted a number of false conspiracy theories through original tweets or retweets from known conspiracist accounts16 that have, for example, smeared Democratic members of Congress.17

The president has also sought to limit access to information for critical journalists and ordinary citizens, and this has extended to the online sphere. In May 2018, a federal judge ruled that Trump’s practice of blocking his critics from following his Twitter account was unconstitutional, finding that the president’s Twitter feed serves as a public forum, and that preventing members of the public from interacting with the account violated the First Amendment.18 In July 2019, the US Court of Appeals for the Second Circuit upheld the decision.19

B6 0-3 pts
Are there economic or regulatory constraints that negatively affect users’ ability to publish content online? 3 / 3

There are no government-imposed economic or regulatory constraints on users’ ability to publish content. Online outlets and blogs do not need to register with or have favorable connections to the government to operate. Media sites can accept advertising from both domestic and foreign sources.

Experts have argued that the FCC’s repeal of the 2015 Open Internet Order will result in new constraints for those wishing to publish online (see A5).1 In response to the repeal, at least 30 states have introduced bills aimed at safeguarding net neutrality, and Washington, Oregon, Vermont, California, and New Jersey have enacted such laws or adopted resolutions to that effect.2 Congressional Democrats have pushed to institute net neutrality at the federal level. The draft Save the Internet Act passed the Democratic-controlled House of Representatives in April 2019, and as of October 2019 it was awaiting a vote in the Republican-controlled Senate.3

B7 0-4 pts
Does the online information landscape lack diversity? 4 / 4

The online environment in the United States continues to be vibrant and diverse, and users can easily find and publish content on a range of issues and in an array of languages. However, the growing prevalence of disinformation and hyperpartisan media over the past several years has affected the information landscape, eroding the visibility and readership of more balanced or objective sources.1 In addition, online harassment and abuse targeting women and minorities who speak out on social media platforms are a persistent threat to the diversity of information and viewpoints (see B4 and C7).

B8 0-6 pts
Do conditions impede users’ ability to mobilize, form communities, and campaign, particularly on political and social issues? 6 / 6

There are no significant restrictions on individuals’ use of digital tools for activism in the United States, which has increasingly moved online.1 Some of the most visible social movements in recent years—environmental activism, Black Lives Matter, the Women’s March, #MeToo, and the student-led campaign for gun control—have combined on-the-ground organizing with social media efforts. A study by Crimson Hexagon and the PEORIA Project at George Washington University found that on Twitter in the fall of 2017, the #MeToo hashtag was used to comment on sexual harassment and assault more than seven million times.2

C Violations of User Rights

The legal framework provides robust protections for online free expression and press freedom. However, during the coverage period, law enforcement and immigration agencies expanded their surveillance of the public—specifically on social media platforms—with limited oversight and transparency. In a positive development, the Supreme Court ruled in June 2018 that law enforcement is required to obtain a warrant to collect subscriber location information records from third parties like mobile service providers.

C1 0-6 pts
Do the constitution or other laws fail to protect rights such as freedom of expression, access to information, and press freedom, including on the internet, and are they enforced by a judiciary that lacks independence? 6 / 6

The First Amendment of the US constitution includes protections for free speech and freedom of the press. In 1997, the Supreme Court reaffirmed that online speech has the highest level of constitutional protection.1 Lower courts have consistently struck down government attempts to regulate online content, with some exceptions for illegal material such as copyright infringement or child sexual abuse images.

  • 1. Reno, Attorney General of the United States, et al. vs. American Civil Liberties Union et al, 521 U.S. 844 (1997), http://bit.ly/1OT33VQ.

C2 0-4 pts
Are there laws that assign criminal penalties or civil liability for online activities? 2 / 4

Despite significant constitutional safeguards, laws such as the Computer Fraud and Abuse Act (CFAA) of 1986 have sometimes been used to prosecute online activity and inflict harsh punishments. Certain states have criminal defamation laws in place, with penalties ranging from fines to imprisonment.1

Some examples of aggressive prosecution under the CFAA have fueled criticism of the law’s scope and application (see C3). It prohibits accessing a computer without authorization, but it fails to define the term “without authorization,” leaving the provision open to interpretation in the courts.2 In one prominent case from 2011, programmer and internet activist Aaron Swartz secretly used Massachusetts Institute of Technology servers to download millions of files from JSTOR, a service providing academic articles. Prosecutors sought harsh penalties for Swartz under the CFAA, which could have resulted in up to 35 years’ imprisonment.3 Swartz committed suicide in 2013 before he was tried. After his death, a bipartisan group of lawmakers introduced “Aaron’s Law,” a piece of legislation that would prevent the government from using the CFAA to prosecute terms-of-service violations and stop prosecutors from bringing multiple, redundant charges for a single crime.4 The bill was reintroduced in 2015, but it did not garner enough support to move forward.5 A number of states also have their own laws related to computer hacking or unauthorized access. Several smaller cases in recent years highlighted the shortcomings and lack of proportionality of these laws.6

C3 0-6 pts
Are individuals penalized for online activities? 4 / 6

Prosecutions or detentions for online activities, particularly for online speech, are relatively infrequent. However, there have been prosecutions related to threats posted on social media, arrests related to recording or live streaming of police interactions, and problematic prosecutions under the CFAA.

Arrests in recent years in relation to online reporting or online speech include the following:

  • In July 2019, Manuel Duran, a journalist who runs the local Spanish-language news website Memphis Noticias, was released on bail from Immigration and Customs Enforcement (ICE) detention.1 He was originally arrested in April 2018 while covering an immigration protest.2 Criminal charges against him were dropped, but he was detained and held by ICE. Duran has requested US asylum due to dangerous conditions for journalists in El Salvador.3 
  • In June 2018, charges against Robert Frese of Exeter, New Hampshire, were dropped. He was arrested and charged with criminal defamation for posting in the Facebook comments section of a local newspaper that the Exeter police chief had “covered up for a dirty cop.”4 The charges were dropped after the state attorney general raised First Amendment concerns.5 On behalf of Frese, the American Civil Liberties Union (ACLU) filed a federal lawsuit in December 2018 to challenge New Hampshire’s criminal defamation law.6
  • In June 2018, photojournalist Michael Nigro, on assignment with the website Truthdig, was arrested while covering a protest in Jefferson City, Missouri. Nigro stated that he was clearly wearing his press credentials at the time of the arrest.7 He was charged with failure to obey police orders. In July 2019, after the coverage period, Nigro was arrested for trespassing in New York City while covering a demonstration on climate change.8 He again said that he was wearing press credentials.

Police have periodically detained individuals who use their mobile devices to upload images or stream live video of law enforcement activity.9 Most of the arrests have been made on unrelated charges, such as obstruction or resisting arrest, since openly recording police activity is a protected right. In 2016, officers in Louisiana detained store owner Abdullah Muflahi for six hours and confiscated his mobile phone after he recorded a fatal shooting by police.10 Chris LeDay, a Georgia-based musician who shared another video of the same incident on Facebook, was arrested soon afterward for unpaid traffic fines.11 In 2017, federal courts upheld the right of bystanders to use their smartphones to record police actions.12

In April 2019, the US government charged WikiLeaks founder Julian Assange with “conspiracy to commit computer intrusion” under the CFAA.13 In May, the Justice Department brought 17 more charges against Assange, this time under the Espionage Act, for his role in publishing classified documents in 2010.14 Some press and internet freedom advocates have expressed concern about these types of charges, and the Assange case specifically,15 arguing that they could have ramifications for the legitimate work of journalists.16 As of the end of May 2019, Assange was detained in the United Kingdom and challenging a US extradition request.

C4 0-4 pts
Does the government place restrictions on anonymous communication or encryption? 3 / 4

There are no legal restrictions on user anonymity on the internet, and constitutional precedents protect the right to anonymous speech in many contexts. There are also state laws that stipulate journalists’ right to withhold the identities of anonymous sources, and at least one such law has been found to apply to bloggers.1 There are no restrictions on encryption technology, but the government has at times indicated that it might seek to undermine encryption for national security purposes.

The terms of service or other contracts enforced by some social media platforms require users to register under their real names.2 Online anonymity has been challenged in cases involving hate speech, defamation, or libel. In 2015, for example, a Virginia court tried to compel the customer-review platform Yelp to reveal the identities of anonymous users, but the Supreme Court of Virginia ruled that it did not have the authority to do so.3

Some developments suggest the government’s intent to undermine encryption. In June 2019, Politico reported that the Trump administration was considering a legal ban on any encryption technology that would not allow law enforcement access.4 The following month, Attorney General William Barr argued that “warrant-proof encryption” degrades law enforcement’s ability to detect, prevent, and investigate crimes.5 Barr went on to request in October that Facebook delay plans to encrypt messaging across its major products, citing public safety.6

Recent cases have also raised questions about the degree to which courts can force technology companies to alter their products so as to enable government access. Following a terrorist attack in San Bernardino in 2015, the federal government sought to compel Apple to unlock a passcode-protected iPhone belonging to one of the perpetrators. Because some iPhones are programmed to permanently block access to all of the phone’s encrypted data once an incorrect passcode is entered too many times, the government obtained a court order that would have compelled Apple to create new software enabling the FBI to access the phone.7 Security experts argued that requiring companies to create “backdoors” for law enforcement would undermine security and public trust.8 Apple resisted, and the case was dropped after the FBI gained access by other means.

Despite vigorous debate, however, the broader legal questions remain unresolved; there have been no legislative or policy changes regarding the use of encryption.9 There have been efforts to codify rules that would bar the government from requiring back doors for surveillance. In June 2018, a bipartisan group of lawmakers renewed an effort to pass the so-called Encrypt Act, which would prohibit state and local governments from mandating backdoor access to devices.10 The bill had originally been introduced in 2016; as of May 2019 it had not been voted on in either the House or the Senate.

The Communications Assistance for Law Enforcement Act (CALEA) currently requires telephone companies, broadband carriers, and interconnected Voice over Internet Protocol (VoIP) providers to design their systems so that communications can be easily intercepted when government agencies have the legal authority to do so, though it does not cover online communications tools such as Gmail, Skype, and Facebook.11 Calls to update CALEA to cover online applications and communications have not been successful. In 2013, 20 technical experts published a paper explaining why such an expansion (known as “CALEA II”) would create significant internet security risks.12

C5 0-6 pts
Does state surveillance of internet activities infringe on users’ right to privacy? 2 / 6

The coverage period featured an uptick in surveillance by government authorities, including monitoring of US citizens and residents engaged in constitutionally protected activity like protests and journalism, with minimal oversight and transparency.

An increasing number of government agencies are monitoring social media. The information collected through these efforts is stored in massive databases and can be shared with local, state, and federal authorities as well as multilateral government organizations and foreign states. The Brennan Center for Justice has detailed the extent to which the Department of Homeland Security (DHS)—specifically ICE, Customs and Border Protection (CBP), US Citizenship and Immigration Services (USCIS), and the Transportation Security Administration (TSA)—monitors social media.1 These programs often employ automated systems, including advanced technology obtained from private contractors such as Palantir and Giant Oak.2

In July 2019, after the coverage period, the Justice Department’s FBI released a tender to purchase new tools that “proactively” and “reactively” monitor platforms in real time.3 In May 2018, the State Department enacted a new policy that vastly expanded its collection of social media information.4 People applying for a US visa, of whom there are about 15 million each year, are now required to provide social media details, email addresses, and phone numbers going back five years.5 The change made mandatory what had been, under the Obama administration, a voluntary disclosure.

Local law enforcement agencies also monitor social media. While some use automated tools,6 investigators may simply create fake Facebook accounts to gain access to targets’ personal networks and follow their activity; in some cases this technique has been used against civic activists.7 In February 2019, it was revealed that the Chicago Police Department paid nearly $1.5 million to use social media surveillance software from the company Dunamai between 2014 and 2018 to monitor city residents.8 In 2016, the ACLU reported that police were conducting surveillance using a tool called Geofeedia, which allows users to aggregate social media content by location (such as a protest site); the company specifically marketed its service to law enforcement agencies.9 Following the ACLU’s report, Facebook, Twitter, and Instagram shut off Geofeedia’s access to their data.10

Peaceful protest movements and civic groups were reportedly targeted for social media surveillance during the coverage period. Documents obtained through a Freedom of Information Act (FOIA) request showed that ICE had pulled information from Facebook to create an “Anti-Trump Protests” spreadsheet, tracking the logistical details, aims, and sponsors of demonstrations in New York City in July and August 2018.11 Similarly, a response to a FOIA request revealed that the private company LookingGlass Cyber Solutions had compiled logistical information from Facebook on more than 600 immigration-related protests across the country in June 2018. The information was then shared with DHS and state law enforcement.12 NBC News reported in March 2019 that CBP agents had compiled dossiers, with some social media content, on 59 people, including reporters, activists, and lawyers, who were flagged for greater scrutiny at the United States’ southern border due to their work on immigration issues. CBP claimed that the list was meant to inform its investigation into an outbreak of violence at the border near Tijuana.13 In August 2019, after the coverage period, Yahoo News reported that the FBI was conducting surveillance on immigration organizations at the border, including by monitoring social media accounts.14

Warrantless searches of electronic devices at the border have escalated in recent years. In fiscal year 2018, CBP conducted warrantless searches of 33,295 devices in border areas, a dramatic increase from only 3,500 searches in 2015. Federal authorities claim to have expansive search and seizure powers, which are traditionally limited by the constitution’s Fourth Amendment, within “border zones,” defined as up to 100 miles from any border, an area encompassing about 200 million residents. The 2018 Directive No. 3340-049a provides CBP with broad powers to conduct such searches at ports of entry, and requires travelers to provide their device passwords to CBP agents. A “basic search” can be conducted “with or without suspicion” of any person’s device, at any time, for any reason or for no reason at all, without a warrant.15 During the search, CBP is technically supposed to put any phone or internet-connected device in “airplane mode” to disable its connectivity, so that officers can only search what is physically “resident” on the device—such as pictures or texts—and not content that is only accessible through the internet.16 The directive also gives CBP the power to conduct an “advanced search,” with the use of external equipment to “review, copy, and/or analyze” the device’s contents. Advanced searches require “reasonable suspicion of activity in violation of the laws enforced or administered by CBP, or in which there is a national security concern, and with supervisory approval.” CBP has purchased technology from the Israeli company Cellebrite that allows agents to extract information stored on the device or in the cloud within seconds.17 This information can then be stored in interagency databases that aggregate data from other monitoring programs.

In a survey of journalists who had been stopped at the border, the Committee to Protect Journalists and Reporters Without Borders found 20 cases in which border agents conducted warrantless searches of the journalists’ electronic devices.18 One reporter wrote in the Intercept that a CBP official at the US-Mexico border spent three hours searching through his iPhone, reviewing and asking questions about photos, emails, videos, calls, texts, and messages on encrypted communication apps, including conversations with colleagues and journalistic sources. The reporter alleged that another CBP agent searched through materials on his laptop, including business and personal financial spreadsheets.19

There have been a number of legislative efforts and lawsuits meant to curb warrantless searches of devices at the border. In 2017, a bipartisan group of senators introduced legislation requiring border protection agents to obtain a warrant before searching the electronic devices of US citizens or permanent residents, and forbidding them from detaining people for more than four hours while trying to persuade them to unlock their phones. The bill, which was not passed during the 2017–18 Congress, was reintroduced in May 2019.20 Civil rights groups have also challenged the searches in court,21 filing a lawsuit on behalf of 10 US citizens and one legal permanent resident. They argued that the searches in recent years have “expanded far beyond the mere enforcement of immigration and customs laws.”22

The legal framework for government surveillance has been open to abuse. Modern surveillance by law enforcement and intelligence agencies in the United States is governed in part by the USA PATRIOT Act, which was passed following the terrorist attacks of September 11, 2001, and expanded official surveillance and investigative powers.23 In 2015, President Obama signed the USA FREEDOM Act into law, extending expiring provisions of the PATRIOT Act, including broad authority for intelligence officials to obtain warrants for roving wiretaps of unnamed “John Doe” targets and surveillance of lone individuals with no evident connection to terrorist groups or foreign powers.24 At the same time, the law significantly reformed the bulk collection of domestic phone records under Section 215, a program detailed in documents leaked by Edward Snowden in 2013,25 which was ruled illegal by the US Second Circuit Court of Appeals in 2015.26 Section 215, the roving wiretaps provision, and the lone wolf amendment will be up for reauthorization in December 2019.27

The USA FREEDOM Act replaced the domestic bulk collection program with a system that allows the NSA to access US call records held by phone companies after obtaining an order from the Foreign Intelligence Surveillance Court, or FISA Court (a body created by the 1978 Foreign Intelligence Surveillance Act).28 Requests for such access require the use of a “specific selection term” (SST) representing an “individual, account, or personal device,”29 which is intended to prohibit broad requests for records based on zip code or other indicators; access can only be extended or renewed in certain circumstances. The SST provision also applies when intelligence agents use FISA pen registers and trap-and-trace devices (instruments that capture a phone’s outgoing or incoming records) and to national security letters (secret administrative subpoenas used by the FBI to demand records).30

The USA FREEDOM Act also required that the FISA Court appoint an amicus curiae, an individual (or individuals) qualified to provide legal arguments that “advance the protection of individual privacy and civil liberties” and who may weigh in against government requests for warrants.31 Five people are currently designated to serve as amici curiae.32

Despite these improvements, various components of the legal framework still allow surveillance by intelligence agencies that lacks oversight, specificity, and transparency:

  • Section 702 of the FISA Amendments Act of 2008: Section 702 was used to authorize “downstream” (also known as PRISM) and “upstream” collection (see below), the controversial foreign intelligence programs under which the NSA reportedly collects users’ communications data—including the content—directly from US technology companies and through the physical infrastructure of undersea cables, respectively.33 Section 702 only authorizes the collection of information pertaining to foreign citizens outside the United States, yet the content of Americans’ communications incidentally swept up in this process is also collected and stored in a searchable database.34 The USA FREEDOM Act made no changes to this practice or to the NSA’s access to the communications content collected. Rather, it limits the use of information about US citizens in court or in other government proceedings, but only if the NSA did not follow existing procedures to minimize the likelihood of collecting that information. The FISA Court determines whether or not those procedures were followed.35 In 2016, during the FISA Court’s annual review and reauthorization of surveillance conducted under Section 702, the government notified a FISA Court judge of widespread violations of protocols intended to limit access to Americans’ communications by NSA analysts.36 “Upstream” collection is more likely than other programs to incidentally collect communications sent between US citizens.37 The report showed that analysts had failed to take steps to ensure that they were not improperly searching the upstream database when conducting certain types of queries. 
In response, the court delayed reauthorizing the program, and in 2017 the NSA director recommended that the agency halt its collection of communications if they merely mentioned a surveillance target (referred to as “about” collection), and instead only collect communications to and from the target.38 Privacy advocates welcomed the NSA decision to halt this type of collection, and emphasized that the government’s findings underscored the need for legislative reform of Section 702. Despite robust advocacy by civil liberties and privacy groups, Section 702 was reauthorized for six years in January 2018 with few changes.39 The renewed legislation did not address the issue of “about” collection, meaning the NSA could legally attempt to resume the practice. However, the final bill did contain a provision requiring a warrant in cases where an FBI agent wants to read the content of emails belonging to an American who is already part of an investigation; observers noted that the wording was too narrow to require a warrant in most cases.40 The final text also included some measures to increase transparency, such as requiring the NSA to notify Congress in the event that it restarts “about” collection, and requiring the attorney general to brief members of Congress about how the government uses information collected under Section 702 in official proceedings such as criminal prosecutions.41 However, privacy and civil liberties advocates warned that the reauthorization effectively codified some of the more problematic aspects of Section 702 surveillance practices.42
  • Executive Order 12333: Originally issued in 1981, Executive Order (EO) 12333 outlines how and when the NSA or other agencies may conduct surveillance on US citizens and other individuals within the United States,43 authorizing the collection of US citizens’ metadata and the content of communications if that information is collected “incidentally.”44 The extent of current NSA practices authorized under EO 12333 is unclear, but documents from the 2013 NSA leaks suggest that EO 12333 was used to support the so-called MYSTIC program, under which all of the incoming and outgoing phone calls of one or more target countries were captured on a rolling basis. The Intercept identified the Bahamas, Mexico, Kenya, and the Philippines as targets in 2014.45 A law passed that year included a requirement that the NSA develop “procedures for the retention of incidentally acquired communications” collected pursuant to EO 12333, and that such communications may not be retained for more than five years except when subject to certain broad exceptions.46 In 2015, Obama updated a 2014 policy directive that put in place important new restrictions relevant to EO 12333 on the use of information collected in bulk for foreign intelligence purposes.47 Civil society groups continue to campaign for a comprehensive reform of the executive order.48

Law enforcement access to metadata generally requires a subpoena issued by a prosecutor or investigator without judicial approval;49 a warrant is only required in California under the California Electronic Communications Privacy Act, which has been in effect since 2016.50 In criminal probes, law enforcement authorities can monitor the content of internet communications in real time only if they have obtained an order issued by a judge, under a standard that is somewhat higher than the one established under the constitution for searches of physical places. The order must reflect a finding that there is probable cause to believe that a crime has been, is being, or is about to be committed.

The status of stored communications is more uncertain. One federal appeals court has ruled that the constitution applies to stored communications, so that a judicial warrant is required for government access.51 However, the 1986 Electronic Communications Privacy Act states that the government can obtain access to email or other documents stored in the cloud with a subpoena.52 In 2016, the House of Representatives passed the Email Privacy Act, which would require the government to obtain a probable cause warrant before accessing email or other private communications stored with cloud service providers.53 The bill was reintroduced in 2017 and again passed the House, but it failed to pass the Senate during the 2017–18 Congress.54

Other legal implications of law enforcement access to devices have been debated in the courts. Stingray devices mimic mobile network towers, causing nearby phones to send identifying information and thus allowing police to track targeted phones or determine the phone numbers of people in the area. In 2016, a Maryland state appellate court ruled that law enforcement bodies must obtain a warrant before using these “covert cell phone tracking devices.”55 Several other court decisions subsequently affirmed that police must obtain a warrant before using them.56 In its decision, the Maryland court rejected the argument that individuals are effectively “volunteering” their private information when they choose to turn on their phones, since doing so allows third parties (the phone company’s towers) to send and receive signals from the phone.57 As of November 2018, the ACLU had identified 75 agencies across the country that use Stingray devices.58

In 2017, the Detroit News obtained court documents showing that federal agents had used Stingray devices to find and arrest an undocumented immigrant.59 Privacy advocates argue that because Stingray devices collect information from mobile phones in the area surrounding the target, and thus constitute mass surveillance, their use by law enforcement agencies should be limited to serious cases involving violent crimes, not immigration violations.60

C6 0-6 pts
Are service providers and other technology companies required to aid the government in monitoring the communications of their users? 4 / 6

There are few legal constraints on the collection, storage, and transfer of data by private or public actors in the United States. Internet service providers and content hosts collect vast amounts of information about users’ online activities, communications, and preferences. This information can be subject to government requests, typically through a subpoena, court order, or search warrant. However, companies are able to challenge or seek to narrow these types of requests.

In a positive development, in June 2018 the Supreme Court ruled in Carpenter v. United States that the government is required to obtain a warrant in order to access subscriber location information records from third parties like mobile service providers.1 Privacy advocates lauded the decision, noting that location information could have a greater impact on privacy than the other types of user data collected by private companies.2 The ruling also significantly diminished the third-party doctrine—the idea that Fourth Amendment privacy protections do not extend to most types of information that one voluntarily hands over to third parties, such as telecommunications companies.3

The United States lacks a robust federal data protection law, though a number of bills have been proposed.4 In 2017, President Trump signed SJ Resolution 34,5 which rolled back FCC privacy regulations introduced in 2016 that would have given consumers more control over how their personal information is collected and used by broadband ISPs (see A5).

To fill the void at the federal level, several states have considered or passed laws to protect internet users’ privacy rights.6 In June 2018, the California legislature enacted AB 375,7 also known as the California Consumer Privacy Act of 2018, which allows Californians to demand information from businesses in the state about how their personal data are collected, used, and shared.8 A Vermont law implemented in February 2019 requires companies that buy or sell the personal data of state residents to register with the state government and disclose whether affected users can opt out of data collection.9 In Maine, a law passed in June 2019, after the coverage period, requires ISPs to obtain consent from customers before using, selling, or distributing their data; it is set to take effect in July 2020.10

The USA FREEDOM Act changed the way private companies publicly report on government requests for user information. Prior to the law, the Justice Department restricted the disclosure of information about national security letters, including in the transparency reports voluntarily published by some internet companies and service providers.11 In 2014, the department reached a settlement with Facebook, Google, LinkedIn, Microsoft, and Yahoo that permitted the companies to disclose the approximate number of government requests they receive, in aggregated bands of 250 or 1,000 rather than precise figures.12 Twitter, not a party to the settlement, sued on the grounds that the rules amounted to a prior restraint that violated the company’s First Amendment rights.13 A judge partially dismissed Twitter’s case in 2016.14 Meanwhile, the USA FREEDOM Act in 2015 granted companies the option of more granular reporting, though reports containing more detail are still subject to time delays, and their frequency is limited.15

Despite the USA FREEDOM Act’s aim of improving transparency, government requests continue to be made in secret. In September 2019, after the coverage period, documents released in response to a Freedom of Information Act (FOIA) request by the Electronic Frontier Foundation revealed that the FBI had been accessing personal data through national security letters from a much broader group of entities than was previously understood.16 Western Union, Bank of America, Equifax, TransUnion, the University of Alabama at Birmingham, Kansas State University, major ISPs, and tech and social media companies were all found to have received such letters.

The government may request that companies store targeted data for up to 180 days under the Stored Communications Act, but practices for general collection and storage of communications content and records vary by company.17

The scope of law enforcement access to user data held by companies was expanded under the Clarifying Lawful Overseas Use of Data Act, or CLOUD Act,18 which was signed into law in March 2018 as part of a government spending bill.19 Introduced with the intention of updating the 1986 Stored Communications Act to clarify policies governing cross-border data transfers,20 the CLOUD Act established that law enforcement requests sent to US companies for user data under the Stored Communications Act apply to records in the company’s possession regardless of where they are stored, including overseas. Previous requests were limited to user data stored within the United States’ jurisdiction. The CLOUD Act also allows certain foreign governments to enter into an executive agreement with the US and then directly petition US companies to hand over user data.21 Proponents of the law, including several large US tech firms,22 argued that the previous legal framework was outdated and cumbersome, requiring law enforcement personnel to go through the potentially lengthy mutual legal assistance treaty (MLAT) process between countries to obtain information pertaining to local crimes simply because it was stored overseas.23 Civil liberties advocates argued that the law further undermined user privacy.24

User information is otherwise protected under Section 5 of the Federal Trade Commission Act (FTCA), which has been interpreted to prohibit internet entities from deceiving users about what personal information is being collected and how it is being used, as well as from using personal information in ways that harm users without offering countervailing benefits. In addition, the FTCA has been interpreted to require entities that collect users’ personal information to adopt reasonable security measures to safeguard it from unauthorized access. State-level laws in 47 states and the District of Columbia also require entities that collect personal information to notify consumers—and, usually, consumer protection agencies—when they suffer a security breach leading to unauthorized access of personal information. Section 222 of the Telecommunications Act prohibits telecommunications firms from sharing or using information about their customers’ activities for other purposes without customer consent. This provision had historically only applied to phone companies’ records about phone customers, but following the FCC’s Open Internet Order, it also applied to ISPs’ records about broadband customers.25 Following the FCC’s decision to repeal the Order, some have suggested that providers may continue operating under Section 222 but without FCC guidance or enforcement.26

C7 0-5 pts
Are individuals subject to extralegal intimidation or physical violence by state authorities or any other actor in retribution for their online activities? 3 / 5

ICT users generally are not subject to extralegal intimidation or violence by state actors. However, journalists, including those working for online outlets, are at times exposed to physical violence or intimidation, particularly while covering protests. In some recent cases, journalists have been harassed by immigration authorities at the border. Women and members of minority groups are often singled out for threats and harassment by other users online.

In March 2019, Hector Amezcua, a photojournalist for the Sacramento Bee, reported that police officers pushed him to the ground while he was live-streaming a protest related to the fatal police shooting of Stephon Clark.1 In September 2018, Associated Press video journalist Josh Replogle was punched in the face by a local resident while reporting on Hurricane Florence in North Carolina.2 In an earlier example from September and October 2017, journalists, photographers, and video bloggers were subjected to pepper spray and arrest or detention by police while covering protests in St. Louis, Missouri, over the acquittal of a former police officer in the fatal shooting of Anthony Lamar Smith.3

In addition to journalists, ordinary citizens who record or live-stream protests or controversial police activity with mobile devices have encountered undue harassment by authorities. Researcher Dragana Kaurin interviewed people who had used their phones to record high-profile videos of the violent arrests and police killings of African Americans—including Freddie Gray, Eric Garner, Walter Scott, Philando Castile, and Alton Sterling—in recent years. Kaurin documented numerous reports of police retaliation, harassment, physical violence, doxing, and other forms of intimidation aimed at deterring community members from sharing evidence of police brutality.4

Some journalists working for online outlets have been subjected to harassment at the border, including through warrantless searches of their electronic devices (see C5). In February 2019, David Mack, a reporter for the online news outlet BuzzFeed, said that a US Customs and Border Protection (CBP) officer aggressively questioned him about his organization’s coverage of special counsel Robert Mueller and President Trump as he passed through a New York City airport.5 CBP’s assistant commissioner for public affairs later apologized to Mack. In October 2019, after the coverage period, a CBP officer repeatedly asked Ben Watson, news editor for the online outlet Defense One, whether he wrote propaganda, after Watson told the officer he was a journalist.6 Watson reported that his passport was not returned until he agreed to state that he wrote propaganda.

Online harassment and threats remain a persistent problem, particularly for certain groups. Female journalists, for instance, face “rampant online gendered harassment” in the course of their work, according to a study published in April 2018.7 A December 2018 Amnesty International study of abuse targeting female journalists and politicians on Twitter found that black women were 84 percent more likely to be mentioned in abusive tweets than white women.8 Journalists also face threats for writing about contentious political topics. Several journalists have reported being doxed—having their home addresses, phone numbers, and other personal details posted online—and have received threats of violence directed at themselves or their family members, causing them to think twice before writing about potentially controversial subjects.9

Such harassment also disproportionately affects ordinary users who are women or members of minority groups, undermining their rights to free speech and access to information. The Pew Research Center found in 2017 that one in four black Americans has faced online harassment because of their race or ethnicity.10 A report by Amnesty International from the same year found that 33 percent of women in the United States had experienced online abuse or harassment at least once.11

C8 0-3 pts
Are websites, governmental and private entities, service providers, or individual users subject to widespread hacking and other forms of cyberattack? 1 / 3

Cyberattacks continue to threaten the security of networks and databases in the United States.

During the coverage period, multiple city governments were subjected to ransomware attacks, in which hackers infiltrate a computer system and encrypt all of the files, demanding a ransom payment for restored access.1 One of the largest such attacks affected Baltimore in May 2019, crippling the city’s public services and disabling thousands of government computers.

Attackers with alleged links to foreign governments continued to pursue political targets in the country. Ahead of the November 2018 midterm elections, Microsoft discovered that a unit associated with Russian military intelligence had created websites resembling those of the US Senate and prominent Republican-linked think tanks in a bid to trick visitors into revealing sensitive information and passwords.2 In January 2019, the Democratic National Committee reported that some of its email addresses were targeted in a spear-phishing campaign that was strikingly similar to those conducted in 2016 by the hacking group known as Cozy Bear, which has also been linked to Russian intelligence.3 In October 2019, after the coverage period, Microsoft reported that it had discovered “significant cyber activity” by Phosphorus, a group believed to have links to the Iranian government.4 Within a 30-day period, Microsoft observed more than 2,700 attempts to identify the email accounts of specific customers, followed by attacks on 241 of those accounts, which belonged to current and former US government officials, journalists, prominent Iranian expatriates, and one major presidential campaign.

Previously, in March 2018, the Trump administration publicly accused Russia of targeting US infrastructure in a series of cyberattacks that began in late 2015. The attacks were aimed at US and European nuclear power plants and water and electrical systems, compromising some of them, though the affected systems were not shut down.5 In 2017, a massive cyberattack dubbed “WannaCry” infected hundreds of thousands of computers and spread through networks around the world, freezing users’ files and demanding payments to unlock them.6 Though the impact in the United States was less severe than in other countries, it did affect several corporations and health care networks.7 The US government officially blamed North Korea for the attack.8

The United States has taken a series of legal and policy measures to address the growing threat of cyberattacks. In September 2018, the White House released a National Cyber Strategy, the first in 15 years, which included priorities such as securing critical infrastructure and partnering with the private sector.9 Critics argued that the strategy was “reckless,” as it called for an increase in preemptive cybersecurity operations rather than focusing on defensive measures, which they said could “escalate conflicts” and prove counterproductive.10 In 2017, President Trump had issued an executive order on “Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure,” which holds government agency heads accountable for securing the information infrastructure of their departments and promotes the sharing of resources across agencies in order to improve overall resilience.11

In 2015, President Obama signed an omnibus bill that included a version of the Cybersecurity Information Sharing Act already passed in the Senate. The law requires the Department of Homeland Security (DHS) to share information about threats with private companies, and allows companies to voluntarily disclose information to federal agencies without fear of being sued for violating user privacy (see C6).12 Civil liberties advocates said that privacy protections in the final text of the bill were not strong enough; provisions deleted from earlier drafts would have required companies to strip from their disclosures any personal information not needed to identify cybersecurity threats. Critics also said that allowing companies to voluntarily disclose data to any federal agency—including the Department of Defense and the NSA—could undermine civilian control of cybersecurity programs and blur the line between cybersecurity and law enforcement applications for the information.13

Country Facts

  • Freedom in the World Status

    Free
  • Networks Restricted

    No
  • Websites Blocked

    No
  • Pro-government Commentators

    No
  • Users Arrested

    Yes
