58 research outputs found

    Rangel v. Twitter


    Misinformation Mayhem: Social Media Platforms’ Efforts to Combat Medical and Political Misinformation

    Social media platforms today are playing an ever-expanding role in shaping the contours of today’s information ecosystem. The events of recent months have driven home this development, as the platforms have shouldered the burden and attempted to rise to the challenge of ensuring that the public is informed – and not misinformed – about matters affecting our democratic institutions in the context of our elections, as well as about matters affecting our very health and lives in the context of the pandemic. This Article examines the extensive role recently assumed by social media platforms in the marketplace of ideas in the online sphere, with an emphasis on their efforts to combat medical misinformation in the context of the COVID-19 pandemic as well as their efforts to combat false political speech in the 2020 election cycle. In the context of medical misinformation surrounding the COVID-19 pandemic, this Article analyzes the extensive measures undertaken by the major social media platforms to combat such misinformation. In the context of misinformation in the political sphere, this Article examines the distinctive problems brought about by the microtargeting of political speech and by false political ads on social media in recent years, and the measures undertaken by major social media companies to address such problems. In both contexts, this Article examines the extent to which such measures are compatible with First Amendment substantive and procedural values.

    Social media platforms are essentially attempting to address today’s serious problems alone, in the absence of federal or state regulation or guidance in the United States. Despite the major problems caused by Russian interference in our 2016 elections, the U.S. has failed to enact regulations prohibiting false or misleading political advertising on social media – whether originating from foreign sources or domestic ones – because of First Amendment, legislative, and political impediments to such regulation. And the federal government has failed miserably in its efforts to combat COVID-19 or the medical misinformation that has contributed to the spread of the virus in the U.S. All of this essentially leaves us (in the United States, at least) solely in the hands, and at the mercy, of the platforms themselves, to regulate our information ecosystem (or not), as they see fit.

    The dire problems brought about by medical and political misinformation online in recent months and years have ushered in a sea change in the platforms’ attitudes and approaches toward regulating content online. In recent months, for example, Twitter has evolved from being the non-interventionist “free speech wing of the free speech party” to designing and operating an immense operation for regulating speech on its platform – epitomized by its recent removal and labeling of President Donald Trump’s (and Donald Trump, Jr.’s) misleading tweets. Facebook for its part has evolved from being a notorious haven for fake news in the 2016 election cycle to standing up an extensive global network of independent fact-checkers to remove and label millions of posts on its platform – including by removing a post from President Trump’s campaign account, as well as by labeling 90 million such posts in March and April 2020, involving false or misleading medical information in the context of the pandemic. Google for its part has abandoned its hands-off approach to its search algorithm results and has committed to removing false political content in the context of the 2020 election and to serving up prominent information by trusted health authorities in response to COVID-19 related searches on its platforms.

    These approaches undertaken by the major social media platforms are generally consistent with First Amendment values, both the substantive values in terms of what constitutes protected and unprotected speech, and the procedural values, in terms of process accorded to users whose speech is restricted or otherwise subject to action by the platforms. The platforms have removed speech that is likely to lead to imminent harm and have generally been more aggressive in responding to medical misinformation than political misinformation. This approach tracks First Amendment substantive values, which accord lesser protection for false and misleading claims regarding medical information than for false and misleading political claims. The platforms’ approaches generally adhere to First Amendment procedural values as well, including by specifying precise and narrow categories of what speech is prohibited, providing clear notice to speakers who violate their rules regarding speech, applying their rules consistently, and according an opportunity for affected speakers to appeal adverse decisions regarding their content.

    While the major social media platforms’ intervention in the online marketplace of ideas is not without its problems and not without its critics, this Article contends that this trend is by and large a salutary development – and one that is welcomed by the vast majority of Americans and that has brought about measurable improvements in the online information ecosystem. Recent surveys and studies show that such efforts are welcomed by Americans and are moderately effective in reducing the spread of misinformation and in improving the accuracy of beliefs of members of the public. In the absence of effective regulatory measures in the United States to combat medical and political misinformation online, social media companies should be encouraged to continue to experiment with developing and deploying even more effective measures to combat such misinformation, consistent with our First Amendment substantive and procedural values.

    How Content Moderation May Expose Social Media Companies to Greater Defamation Liability

    This Note will explain the critical distinction between “publishers” and “platforms,” why social media entities are currently considered “platforms,” and why the legal system should reevaluate the liability of social media entities based on how they moderate and regulate content. Part I of this Note will discuss the history of the common-law liability of content providers prior to the invention of the internet. It will also explore the history and rationale for enacting Section 230 of the Communications Decency Act (CDA). Part II of this Note will explain the distinction between “publishers” and “platforms” as it relates to defamation liability. Further, it will discuss the rapid growth of social media during the internet age and its impact on communication and the spread of information. It will also discuss the cryptic and often vague algorithmic process that social media companies use to decide which content is visible to users. Part III of this Note will analyze the current liability of social media companies as a “platform” and will discuss the argument that social media is the twenty-first century’s “town square.” Part IV will explain three key pieces of recently proposed legislation that may affect Section 230 of the CDA. Part V of this Note will explain specific changes that social media companies must make to avoid the enhanced defamation liability of moving from the “platform” category to the “publisher” category. Part VI will discuss a few legislative and executive solutions to allow Section 230 of the CDA to reflect the current internet landscape by focusing on pushing social media companies toward transparent content-moderation practices.

    The Varieties of Counterspeech and Censorship on Social Media

    The year 2020 was without a doubt a remarkable and unprecedented one, on many accounts and for many reasons. Among other reasons, it was a year in which the major social media platforms extensively experimented with the adoption of a variety of new tools and practices to address grave problems resulting from harmful speech on their platforms — notably, the vast amounts of misinformation associated with the COVID-19 pandemic and with the 2020 presidential election and its aftermath. By and large — consistent with First Amendment values of combatting bad speech with good speech — the platforms sought to respond to harmful online speech by resorting to different types of flagging, fact-checking, labeling, and other forms of counterspeech. Only when confronting the most egregiously harmful types of speech did the major platforms implement policies of censorship or removal — or the most extreme response of deplatforming speakers entirely. In this Article, I examine the major social media platforms’ experimentation with a variety of approaches to address the problems of political and election-related misinformation on their platforms — and the extent to which these approaches are consistent with First Amendment values. In particular, I examine what the major social media platforms have done and are doing to facilitate, develop, and enhance counterspeech mechanisms on their platforms in the context of major elections, how closely these efforts align with First Amendment values, and measures that the platforms are taking, and should be taking, to combat the problems posed by filter bubbles in the context of the microtargeting of political advertisements. This Article begins with an overview of the marketplace of ideas theory of First Amendment jurisprudence and the crucial role played by counterspeech within that theory. I then analyze the variety of types of counterspeech on social media platforms — by users and by the platforms themselves — with a special focus on the platforms’ counterspeech policies on elections, political speech, and misinformation in political/campaign speech specifically. I examine in particular the platforms’ prioritization of labeling, fact-checking, and referring users to authoritative sources over the use of censorship, removal, and deplatforming (which the platforms tend to reserve for the most harmful speech in the political sphere and which they ultimately wielded in the extraordinary context of the speech surrounding the January 2021 insurrection). I also examine the efforts that certain platforms have taken to address issues surrounding the microtargeting of political advertising, issues which are exacerbated by the filter bubbles made possible by segmentation and fractionation of audiences in social media platforms.

    WSU Board of Trustees Meeting Minutes, November 15-16, 2007

    Minutes from the Wright State University Board of Trustees Meeting held on November 15-16, 2007

    Leveraging Large Language Models to Detect Influence Campaigns in Social Media

    Social media influence campaigns pose significant challenges to public discourse and democracy. Traditional detection methods fall short due to the complexity and dynamic nature of social media. Addressing this, we propose a novel detection method using Large Language Models (LLMs) that incorporates both user metadata and network structures. By converting these elements into a text format, our approach effectively processes multilingual content and adapts to the shifting tactics of malicious campaign actors. We validate our model through rigorous testing on multiple datasets, showcasing its superior performance in identifying influence efforts. This research not only offers a powerful tool for detecting campaigns, but also sets the stage for future enhancements to keep up with the fast-paced evolution of social media-based influence tactics.
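
    The abstract gives no implementation details, so the following is only a minimal sketch of the general idea it describes: serializing account metadata and local network structure into plain text that a general-purpose LLM could then be asked to classify. All field names, the prompt wording, and the classification labels below are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch only: field names, prompt wording, and the overall
# flow are assumptions, not taken from the paper described above.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Account:
    handle: str
    created: str                    # account creation date, e.g. "2023-11-02"
    followers: int
    following: int
    bio: str
    recent_posts: List[str] = field(default_factory=list)
    amplified_by: List[str] = field(default_factory=list)  # possible coordination signal


def build_prompt(target: Account, neighbors: List[Account]) -> str:
    """Flatten account metadata and local network structure into plain text
    that a general-purpose LLM could be asked to classify."""
    lines = [
        "Assess whether the target account appears to be part of a coordinated",
        "influence campaign. Answer 'campaign' or 'organic' and give a short reason.",
        "",
        f"Target: @{target.handle} (created {target.created}, "
        f"{target.followers} followers / {target.following} following)",
        f"Bio: {target.bio}",
        "Recent posts:",
        *[f"  - {p}" for p in target.recent_posts],
        f"Amplified by: {', '.join(target.amplified_by) or 'none observed'}",
        "",
        "Network neighbors:",
        *[f"  - @{n.handle}: created {n.created}, bio: {n.bio}" for n in neighbors],
    ]
    return "\n".join(lines)


if __name__ == "__main__":
    target = Account(
        handle="example_user", created="2023-11-02",
        followers=12, following=4800,
        bio="News you won't see anywhere else.",
        recent_posts=["Share before it gets deleted!", "Wake up."],
        amplified_by=["acct_0419", "acct_0420", "acct_0421"],
    )
    neighbor = Account(
        handle="acct_0419", created="2023-11-02",
        followers=9, following=4700, bio="Real news only.",
    )
    # The resulting prompt would be sent to an LLM-based classifier;
    # the call itself is omitted here to stay backend-agnostic.
    print(build_prompt(target, [neighbor]))
```

    A system along these lines would presumably batch such prompts across many accounts, send them to whichever LLM backend is in use, and aggregate the per-account judgments into campaign-level detections; those steps are omitted from the sketch.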

    Minding the Gap: Why or How Nova Scotia Should Enact a New Cyber-Safety Act - Case Comment on Crouch v. Snell

    Nova Scotia’s Cyber-safety Act was meant to fill a gap in the law. Where criminal charges and civil claims like defamation were unavailable or undesirable, the Act, it was hoped, would contain a substantive definition of cyberbullying, set out when it was actionable, and provide procedures for victims to obtain remedies. But the statute that was ultimately passed was too blunt a tool to address the problem, from both a substantive and a procedural perspective. That helps explain why Justice McDougall of the Supreme Court of Nova Scotia struck down the entire statute as unconstitutional, in the recent case of Crouch v. Snell. Now that the Cyber-safety Act is no more, the gap is back. Since the statute was enacted, in 2013, there have been amendments to the Criminal Code and developments in tort law that arguably temper the need for a revised statute. So is there still a gap that needs filling? This case comment suggests that there is, in light of the continued prevalence of harmful online speech — but only if it is filled properly. In filling the gap the second time around, the Legislature should take some cues from Justice McDougall’s decision, which, though not perfect, lays the groundwork for what reasonable limits on the substantive definition of “cyberbullying,” and reasonable tweaks to the process, should look like.