
    Technology and Pornography

    Over the past decade, legislators and industry players have attempted to employ technology to restrict the availability to minors of sexually-themed Internet content. Legislative efforts have relied on adult verification and software filtering technology. The constitutionality of such schemes generally depends on the level of sophistication, efficacy, and deployment of adult verification technology, the burdens that the required use of such technology imposes on content providers and Internet end users, and the availability of less restrictive but equally effective alternatives for achieving the government's interest. In the case of both the CDA and COPA, challengers pointed to the less restrictive alternative of software filters in convincing the Court to strike down these statutes as constitutionally infirm. Recently, an organization called CP80 has proposed legislation (the Internet Community Ports Act) that would require that all Internet content be classified by content providers into one of two categories: Adult/Inappropriate for Minors or Appropriate for Minors. This proposed legislation relies on port-filtering technology to restrict minors' access to the former category of content. Under this proposed scheme, certain Internet ports would be designated as Adult Ports to transmit adult content, while others would be designated as Community Ports to be used for all other content. Individual users would then direct their ISPs to provide content to them on all ports or only on Community Ports. In this Article, I scrutinize these attempts to use technology to remedy the problem of minors' access to harmful Internet content, focusing on the relationship between the efficacy of the technology and the constitutionality of the legislation at issue. The more effective software filtering becomes in restricting minors' access to harmful content, the less likely the courts will be to uphold other legislative means. I then analyze the foundational First Amendment jurisprudence regarding the regulation of minors' access to sexually-themed content. Next, I examine the fate of Congress's recent efforts to regulate in this area, with particular emphasis on the current status of COPA. Finally, I analyze the constitutionality of the proposed Internet Community Ports Act in light of the scrutiny courts have imposed upon prior legislative efforts and the burdens the Act would impose on content providers and Internet users.

    The Fourth Year of Forgetting: The Troubling Expansion of the Right to Be Forgotten

    In its famous right to be forgotten decision, the Court of Justice of the European Union ruled in 2014 that search engine operators must, upon request from a data subject, remove links that result from searches for an individual’s name when those results are “inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes… carried out by the operator of the search engine.” The initial implementation of the right to be forgotten was limited in several ways. First, it was limited in geographical scope to European domains of search engines. Google—the primary search engine affected by the decision—limited delisting to its European domains (such as Google.es and Google.de) and refrained from implementing such delisting within its global Google.com search engine. While Google has consistently sought to limit the geographical reach of the right to be forgotten decision, European data regulators have insisted upon its global implementation. Second, the implementation of the right to be forgotten was limited to search engines and only imposed delisting requirements on the search engines; it did not extend to the underlying content at issue, such as newspaper archives or other online content. As such, the right to be forgotten decision mandated only indirect—not direct—censorship of the content to be forgotten. Recently, however, European courts have expanded the scope of the right to be forgotten (and related privacy rights) to mandate how newspapers and other Internet content providers make content available on the Internet, in some instances requiring erasure or anonymization of such content. These expansions of the right to be forgotten have imposed greater burdens on freedom of expression, including on the rights of United States citizens and members of the press to access information on the Internet regarding U.S. court decisions. In addition, the European Union’s General Data Protection Regulation—which went into effect in May 2018—imposes even greater infringements on the right to freedom of expression and does not accord the fundamental due process rights of notice or the opportunity to be heard to affected speakers and publishers. Furthermore, the right to be forgotten is expanding beyond Europe, to countries such as India, Russia, Mexico, Japan, and Colombia, and these countries are imposing expansive obligations on search engines and Internet content providers to censor information on the Internet. While the right to be forgotten began as a right that was limited in scope—and had a limited effect on the free flow of information on the Internet—in the past four years it has rapidly expanded into a formidable global threat to freedom of expression.

    Net Neutrality, Free Speech, and Democracy in the Internet Age

    Professor Nunziato's book explains why the growth of the Internet as the most open forum for free expression in history is now threatened by the privatization of the Internet, the gatekeeper control over expression exercised by a handful of corporate owners, and their power to censor what we say and read online. She sets forth how we got to this place and what must be done about it to guarantee meaningful free speech rights in the Internet age.

    First Amendment Protections for Good Trouble

    In the classical era of the Civil Rights Movement in the 1950s, 1960s, and 1970s, activists and protestors sought to march, demonstrate, stage sit-ins, speak up, and denounce the system of racial oppression in our country. This was met not just by counterspeech—the preferred response within our constitutional framework—but also by efforts by the dominant power structure to censor and shut down those forms of public rebuke of our nation’s racist practices. Fast forward seventy years, and the tactics of the dominant power structure have essentially remained the same in response to today’s civil rights activists who seek to protest police brutality, other forms of oppression, and disregard of Black lives, and who seek to educate the public about our nation’s legacy and practice of systemic racism. Today’s civil rights activists have been met not just with counterspeech but with efforts to silence them—for example, by the anti-protest statutes enacted in many states, by efforts to financially cripple protest movements through the novel theory of “negligent protest” liability, and by so-called anti-Critical Race Theory laws that originated in a Trump-era Executive Order and that have now been enacted in many states, which muzzle the teaching of concepts of systemic racism in our public education systems—including at the college level. Fortunately, the successes of the classical era of the Civil Rights Movement were not limited to addressing racial discrimination and segregation: they also brought about powerful changes in First Amendment doctrines and ushered in doctrinal tools that can now be wielded by modern-day civil rights activists to defeat these modern-day efforts to silence messages of antiracism. These doctrines include the prior restraint doctrine, the vagueness and overbreadth doctrines, the public forum doctrine, the expressive conduct doctrine, the right to associate (including anonymously and without incurring liability for protest-related harms), and the right to fairly criticize public officials (without fear of defamation liability). In the context of modern civil rights and social protest movements, such First Amendment doctrines can and should serve as powerful weapons to defeat present-day attempts to inhibit the ongoing quest for racial equality.

    Intergenerational Justice Between Authors in the Digital Age

    Authors' copyright rights have traditionally been limited, because such limitations were believed to be necessary to advance copyright law's constitutionally-mandated utilitarian purpose: to promote the progress of science and the useful arts. But authors today, especially authors of digital works, are increasingly turning to extra-copyright measures, including encryption and clickwrap licenses, to customize their rights in their works of authorship. Because such privately-ordered rights are arguably outside of copyright law's framework, they are not necessarily subject to its utilitarian mandate, and need not be made subject to limitations imposed by copyright law on authors' rights. Even though such private ordering regimes may not be subject to copyright law's utilitarian mandate justifying limitations on authors' rights, other powerful justifications implicit in the copyright regime support the imposition of limitations on authors' rights. This article advances one such theoretical justification. Building upon the foundational work of John Rawls, who has articulated a theory of justice as fairness, the article develops a theory of justice between generations of authors. This theory requires that the rights of each generation of authors, including the rights that they might attempt to assert through private ordering measures, be limited for the benefit of subsequent generations of authors. While not limited to authors' rights in digital works, the theory of intergenerational justice between authors that the article advances is particularly relevant to the burgeoning private ordering regime, in which authors of digital works are increasingly using private ordering measures to create for themselves virtually unlimited rights, in disregard of the interests of future generations of authors, even while they are benefiting from the limitations copyright law has imposed on the rights of their predecessor generation of authors. Authors of electronic books, for example, are increasingly using clickwrap licenses and encryption controls to prohibit their readers from copying any portion of their books, even while these authors have benefited from incorporating elements of earlier works into their own works. Motion picture companies are increasingly using technological measures like encryption devices to control access to and copying of their films released on DVD, even while the filmmakers have benefited from copying elements of earlier works in developing their films. By the use of such private ordering measures, present-day authors are able to reap the benefits of the limitations on authors' rights previously imposed by copyright law, while casting aside any limitations on their rights for the benefit of future authors. The article contends that the use of such private ordering measures to establish unlimited rights in creative works is inconsistent with intergenerational justice obligations imposed upon authors to preserve the raw materials of the creative process for the benefit of future generations of authors.

    Toward a Constitutional Regulation of Minors' Access to Harmful Internet Speech

    In this Article, Prof. Nunziato scrutinizes Congress's recent efforts to regulate access to sexually-themed Internet speech. The first such effort, embodied in the Communications Decency Act, failed to take into account the Supreme Court's carefully-honed obscenity and obscenity-for-minors jurisprudence. The second, embodied in the Child Online Protection Act, attended carefully to Supreme Court precedent, but failed to account for the geographic variability in definitions of obscene speech. Finally, the recently-enacted Children's Internet Protection Act apparently remedies the constitutional deficiencies identified in these two prior legislative efforts, but runs the risk of being implemented in a manner that fails to protect either adults' or minors' right to access protected expression. Although CIPA recently withstood a facial attack on its constitutionality, it is likely that this statute will confront as-applied challenges. Prof. Nunziato analyzes the technology and the First Amendment doctrines at issue in CIPA's implementation, and sets forth recommendations as to how libraries can implement CIPA in a manner that protects both adults' and minors' free speech rights.

    The Marketplace of Ideas Online

    One hundred years ago, in the 1919 case of Abrams v. United States, Justice Oliver Wendell Holmes, Jr. ushered into existence modern First Amendment jurisprudence by introducing the free trade in ideas model of free speech. According to this model, the ultimate good is reached by allowing speakers to engage in the free trade in ideas—free of government intervention in the way of regulation, censorship, or punishment. Ideas must be allowed to compete freely in an unregulated market, and the best ideas will ultimately get accepted by competing with others in this marketplace. As such, government intervention is unnecessary and counterproductive. Thus, instead of punishing the speakers in Abrams—for criticizing the government’s attempts to crush the Russian Revolution and calling for American workers to strike—the government should have taken a hands-off approach and allowed these ideas to compete (and lose) in the marketplace of ideas. The characteristics of our marketplace(s) of ideas have changed dramatically since 1919, when the Russian immigrants in Abrams threw their leaflets from the fourth floor window of a hat factory in lower Manhattan in an effort to widely disseminate their ideas. Russians are still players in our marketplace of ideas, but today’s marketplace suffers from uniquely modern and challenging problems—such as rampant interference in the form of Russian troll farms mass producing tweets and other widely shared content on social media with the intent and the effect of sabotaging U.S. elections. In addition to the widespread dissemination of false political content from both foreign and domestic sources, today’s online marketplace of ideas is besieged by the increased polarization and siloing of thought and opinion, which renders Holmes’s prescribed remedy for harmful speech—counterspeech—increasingly ineffective. In the past two years, we have seen a variety of efforts, both in the United States and across the globe, by governments and by online platform providers themselves, to address the problems, distortions, and imperfections in the online marketplace. Because online platforms like Facebook and Twitter play such a dominant role in the online marketplace of ideas—and the modern marketplace of ideas generally—it is worthwhile to focus specifically on how these platforms are being regulated, as well as how they are regulating themselves. While the United States has essentially taken a hands-off approach to regulating online platforms, the European Union has assumed a relatively aggressive regulatory approach. The EU, as well as several European countries, have generally implemented speech regulations to hold platforms liable for failing to police their sites, and have recently imposed sweeping regulations on such platforms. And, in their efforts to comply with such regulations, online platforms like Facebook and Twitter may end up implementing these European regulations in ways that affect what U.S. audiences can access online—since it is often difficult for platforms to implement national regulations in a geographically targeted manner with no spillover beyond the regulating nation’s borders. Accordingly, it is worthwhile to examine these international efforts in some detail. The EU and European countries have recently undertaken sweeping efforts to remedy perceived imperfections in the marketplace, including by requiring online platforms to rapidly remove a wide swath of harmful content. 
Among European nations, Germany has led the way by enacting drastic legislation requiring social media sites like Facebook and Twitter to remove false news, defamatory hate speech, and other unlawful content within twenty-four hours of receiving notice of the same, upon pain of multi-million euro fines. Other European countries are considering following suit. In addition to government regulation by the EU and by European governments, the online platforms themselves are undertaking self-regulatory measures with respect to content accessible by U.S. audiences (partly in an effort to forestall U.S. government regulation). Although such self-regulatory efforts are not governed by the First Amendment, they are nonetheless inspired by First Amendment values. The leading social media companies have adopted several measures to attempt to address problems in the online marketplace of ideas, including by enabling the flagging of false news for verification by independent third-party fact-checkers, commissioning the development of counterspeech in response to false news, providing contextual information about purveyors of news-related posts, and removing fake sites and purveyors of false news from their platforms. Although the United States has largely taken a hands-off approach to regulating online platforms, in the wake of the severe problems besieging the platforms in the context of the 2016 presidential elections and thereafter, U.S. legislators have recently sought to hold the online platforms responsible for such problems. In addition to extensive legislative hearings during which legislators have sought to hold the companies to account for such problems, legislators have recently proposed new laws to attempt to remedy such problems. In particular, Congress recently proposed the Honest Ads Act in an effort to limit foreign interference in the online marketplace of ideas and to mandate the disclosure of information regarding the source of political advertisements on social media. Finally, in the United States, victims and targets of some of the problems besieging the online marketplace of ideas—including false news, conspiracy theories, and hoaxes—are increasingly turning to defamation law in an effort to hold the purveyors to account for the harms resulting from such online content. This Article surveys the severe problems in today’s online marketplace of ideas and the efforts that regulators—and the online platforms themselves—have recently adopted in an attempt to address such problems. In Part II, this Article examines the historical foundations of the marketplace of ideas model, as articulated in Justice Holmes’s early opinions, as well as the Court’s eventual adoption of the marketplace model and, with it, the adoption of counterspeech, instead of censorship, as the default response to harmful speech. Part III then examines the scope and extent of the problems besieging the modern online marketplace of ideas, focusing on problems that have arisen especially in the context of the 2016 U.S. presidential election and thereafter on social media platforms like Facebook and Twitter. In Part IV.A, this Article examines the sweeping regulatory efforts recently adopted by the EU and by Germany in particular, and the ways in which the online platforms are striving to implement such regulations.
In Parts IV.B and IV.C, the Article turns to an analysis of the self-regulatory efforts undertaken by leading social media platforms Facebook and Twitter, the likely efficacy of such measures in addressing the problems besieging the online marketplace of ideas, and the extent to which such measures are consistent with First Amendment values. In Part IV.D, the Article examines the constitutionality and the likely efficacy of the recently proposed Honest Ads Act. In Part IV.E, the Article examines the extent to which the defamation lawsuits brought by victims of false news, conspiracy theories, and online hoaxes are consistent with the First Amendment. A brief conclusion follows.