
    Disinformation and democracy: The home front in the information war. EPC Discussion Paper, 30 January 2019

    Online disinformation is deliberately false or misleading material, often masquerading as news content, which is designed to attract attention and exert influence through online channels. It may be produced to obtain advertising profit or for political purposes, and its spread is facilitated by social media and an anti-establishment current in European politics that creates a demand for alternative narratives.

    Social Media Accountability for Terrorist Propaganda

    Terrorist organizations have found social media websites invaluable for disseminating ideology, recruiting terrorists, and planning operations. National and international leaders have repeatedly pointed out the dangers terrorists pose to ordinary people and state institutions. In the United States, § 230 of the federal Communications Decency Act provides social networking websites with immunity against civil lawsuits. Litigants have therefore been unsuccessful in obtaining redress against internet companies that host or disseminate third-party terrorist content. This Article demonstrates that § 230 does not bar private parties from recovery if they can prove that a social media company received complaints about specific webpages, videos, posts, articles, IP addresses, or accounts of foreign terrorist organizations; that the company failed to remove the material; that a terrorist subsequently viewed or interacted with the material on the website; and that the terrorist acted on the propaganda to harm the plaintiff. This Article argues that, irrespective of civil immunity, the First Amendment does not limit Congress’s authority to impose criminal liability on content intermediaries that have been notified that their websites are hosting third-party foreign terrorist incitement, recruitment, or instruction. Neither the First Amendment nor the Communications Decency Act prevents this form of federal criminal prosecution. A social media company can be prosecuted for material support of terrorism if it knowingly provides a platform to organizations or individuals who advocate the commission of terrorist acts. Mechanisms will also need to be created to enable administrators to take emergency measures while preserving the due process rights of internet intermediaries to challenge orders to immediately block, temporarily remove, or permanently destroy data.

    New Media, Free Expression, and the Offences Against the State Acts

    New media facilitates communication and creates a common, lived experience. It also carries the potential for great harm on an individual and societal scale. Posting integrates information and emotion, with study after study finding that fear and anger transfer most readily online. Isolation follows, with insular groups forming. The result is an increasing bifurcation of society. Scholars also write about rising levels of depression and suicide that stem from online dependence and the replacement of analogue experience with digital interaction, as well as escalating levels of anxiety rooted in the expectation of validation through the ‘like’ function. These changes generate instability and contribute to a volatile social environment. Significant political risks also accompany this novel genre. Hostile actors can use social media platforms to deepen political schisms, to promote certain candidates, and, as demonstrated by the recent Cambridge Analytica debacle, to swing elections. Extremist groups and terrorist organisations can use online interactions to build sympathetic audiences and to recruit adherents. Since 1939, the Offences Against the State Act (OAS) has served as the primary vehicle for confronting political violence and challenges to state authority. How effective is it in light of new media? The challenges are legion. Terrorist recruitment is just the tip of the iceberg. Social networking sites allow for targeted and global fundraising, international direction and control, anonymous power structures, and access to expertise. These platforms create spaces within which extreme ideologies can prosper, targeting individuals likely to be sympathetic to the cause, 24 hours a day, seven days a week, ad infinitum. They offer an alternative reality, subject to factual manipulation and direction, a problem exacerbated by the risk of so-called deep fakes: autonomously generated content that makes it appear that people acted, or that certain circumstances occurred, when they never did. In November 2019 the Irish Government adopted a new regulation targeting social media. The measure focuses on political advertising and on ensuring that voters have access to accurate information. It does not address the myriad further risks. This chapter accordingly focuses on the ways in which the OAS and related laws have historically treated free expression, as a prelude to understanding how and whether the existing provisions are adequate to the challenges posed by new media.

    Privileging information is inevitable

    Libraries, archives and museums have long collected physical materials and other artefacts. In so doing they have established formal or informal policies defining what they will (and will not) collect. We argue that these activities by their very nature privilege some information over other information, and that the appraisal that underlies this privileging is itself socially constructed. We do not cast this in a post-modernist or negative light, but regard a clear understanding of this fact and its consequences as crucial to understanding what collections are and what the implications are for the digital world. We argue that in the digital world it is much easier for users to construct their own collections from a combination of resources, some privileged and curated by information professionals and some privileged by criteria that include the frequency with which other people link to and access them. We conclude that developing these ideas is an important part of placing the concept of a digital or hybrid paper/digital library on a firm foundation, and that information professionals need to learn from each other, adopting elements of a variety of different approaches to describing and exposing information. A failure to do this will push information professionals towards the margins of the information seeker’s perspective.

    Algorithms and Speech

    One of the central questions in free speech jurisprudence is what activities the First Amendment encompasses. This Article considers that question in the context of an area of increasing importance: algorithm-based decisions. I begin by looking to broadly accepted legal sources, which for the First Amendment means primarily Supreme Court jurisprudence. That jurisprudence provides for very broad First Amendment coverage, and the Court has reinforced that breadth in recent cases. Under the Court’s jurisprudence, the First Amendment (and the heightened scrutiny it entails) would apply to many algorithm-based decisions, specifically those entailing substantive communications. We could of course adopt a limiting conception of the First Amendment, but any nonarbitrary exclusion of algorithm-based decisions would require major changes in the Court’s jurisprudence. I believe that First Amendment coverage of algorithm-based decisions is too small a step to justify such changes. But insofar as we are concerned about the expansiveness of First Amendment coverage, we may want to limit it in two areas of genuine uncertainty: editorial decisions that are neither obvious nor communicated to the reader, and laws that single out speakers but do not regulate their speech. Even with those limitations, however, an enormous and growing amount of activity will be subject to heightened scrutiny absent a fundamental reorientation of First Amendment jurisprudence.