Analyzing the Digital Traces of Political Manipulation: The 2016 Russian Interference Twitter Campaign
Until recently, social media was seen to promote democratic discourse on
social and political issues. However, this powerful communication platform has
come under scrutiny for allowing hostile actors to exploit online discussions
in an attempt to manipulate public opinion. A case in point is the ongoing U.S.
Congress' investigation of Russian interference in the 2016 U.S. election
campaign, with Russia accused of using trolls (malicious accounts created to
manipulate) and bots to spread misinformation and politically biased
information. In this study, we explore the effects of this manipulation
campaign, taking a closer look at users who re-shared the posts produced on
Twitter by the Russian troll accounts publicly disclosed by U.S. Congress
investigation. We collected a dataset with over 43 million election-related
posts shared on Twitter between September 16 and October 21, 2016, by about 5.7
million distinct users. This dataset included accounts associated with the
identified Russian trolls. We use label propagation to infer the ideology of
all users based on the news sources they shared. This method enables us to
classify a large number of users as liberal or conservative with precision and
recall above 90%. Conservatives retweeted Russian trolls about 31 times more
often than liberals and produced 36 times as many tweets. Additionally, most
retweets of troll content originated from two Southern states: Tennessee and Texas.
Using state-of-the-art bot detection techniques, we estimated that about 4.9%
and 6.2% of liberal and conservative users respectively were bots. Text
analysis on the content shared by trolls reveals that they had a mostly
conservative, pro-Trump agenda. Although an ideologically broad swath of
Twitter users was exposed to Russian trolls in the period leading up to the
2016 U.S. Presidential election, it was mainly conservatives who helped amplify
their message.
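The label-propagation step described above can be sketched in miniature. This is a hypothetical illustration, not the paper's actual implementation: the user names, source domains, seed scores, and the simple alternating-average update rule are all assumptions made for the example. The idea is that seed ideology labels on a few known news sources spread across the bipartite user–source sharing graph until user scores stabilize.

```python
from collections import defaultdict

# Hypothetical sharing data: each user maps to the news-source
# domains they shared (illustrative names, not real outlets).
shares = {
    "user_a": ["leftnews.example", "centrist.example"],
    "user_b": ["rightnews.example"],
    "user_c": ["rightnews.example", "centrist.example"],
}

# Seed ideology scores for known sources: -1.0 = liberal, +1.0 = conservative.
seeds = {"leftnews.example": -1.0, "rightnews.example": 1.0}

def propagate(shares, seeds, iterations=10):
    """Alternately average scores across the bipartite graph:
    users inherit the mean score of the sources they shared,
    and unseeded sources inherit the mean score of their sharers."""
    source_scores = dict(seeds)
    user_scores = {}
    for _ in range(iterations):
        # Users take the mean score of the sources they shared.
        for user, sources in shares.items():
            known = [source_scores[s] for s in sources if s in source_scores]
            if known:
                user_scores[user] = sum(known) / len(known)
        # Unseeded sources take the mean score of users who shared them.
        sharers = defaultdict(list)
        for user, sources in shares.items():
            for s in sources:
                if s not in seeds and user in user_scores:
                    sharers[s].append(user_scores[user])
        for s, vals in sharers.items():
            source_scores[s] = sum(vals) / len(vals)
    return user_scores

ideology = propagate(shares, seeds)
labels = {u: "conservative" if v > 0 else "liberal" for u, v in ideology.items()}
```

In this toy run, `user_a` ends up labeled liberal and `user_b`/`user_c` conservative, because the unseeded centrist source settles near zero while the seeded sources anchor each side. The actual study would also need a threshold or confidence cutoff to achieve the reported 90%+ precision and recall.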
Social Media Accountability for Terrorist Propaganda
Terrorist organizations have found social media websites to be invaluable for disseminating ideology, recruiting terrorists, and planning operations. National and international leaders have repeatedly pointed out the dangers terrorists pose to ordinary people and state institutions. In the United States, § 230 of the federal Communications Decency Act provides social networking websites with immunity against civil lawsuits. Litigants have therefore been unsuccessful in obtaining redress against internet companies who host or disseminate third-party terrorist content. This Article demonstrates that § 230 does not bar private parties from recovery if they can prove that a social media company received complaints about specific webpages, videos, posts, articles, IP addresses, or accounts of foreign terrorist organizations; that the company failed to remove the material; that a terrorist subsequently viewed or interacted with the material on the website; and that the terrorist acted upon the propaganda to harm the plaintiff. This Article argues that irrespective of civil immunity, the First Amendment does not limit Congress’s authority to impose criminal liability on those content intermediaries who have been notified that their websites are hosting third-party foreign terrorist incitement, recruitment, or instruction. Neither the First Amendment nor the Communications Decency Act prevents this form of federal criminal prosecution. A social media company can be prosecuted for material support of terrorism if it knowingly provides a platform to organizations or individuals who advocate the commission of terrorist acts. Mechanisms will also need to be created that enable administrators to take emergency measures while simultaneously preserving the due process rights of internet intermediaries to challenge orders to immediately block, temporarily remove, or permanently destroy data.
Increasing Copyright Protection for Social Media Users by Expanding Social Media Platforms’ Rights
Social media platforms allow users to share their creative works with the world. Users take great advantage of this functionality, as Facebook, Instagram, Flickr, Snapchat, and WhatsApp users alone uploaded 1.8 billion photos per day in 2014. Under the terms of service and terms of use agreements of most U.S. based social media platforms, users retain ownership of this content, since they only grant social media platforms nonexclusive licenses to their content. While nonexclusive licenses protect users vis-à-vis the social media platforms, these licenses preclude social media platforms from bringing copyright infringement claims on behalf of their users against infringers of user content under the Copyright Act of 1976. Since the average cost of litigating a copyright infringement case might be as high as two million dollars, the average social media user cannot protect his or her content against copyright infringers. To remedy this issue, Congress should amend 17 U.S.C. § 501 to allow social media platforms to bring copyright infringement claims against those who infringe their users’ content. Through this amendment, Congress would create a new protection for social media users while ensuring that users retain ownership over the content they create.
Online Terrorist Speech, Direct Government Regulation, and the Communications Decency Act
The Communications Decency Act (CDA) provides Internet platforms complete protection from liability for user-generated content. This Article discusses the costs of this current legal framework and several potential solutions. It proposes three modifications to the CDA that would use a carrot and stick to incentivize companies to take a more active role in addressing some of the most blatant downsides of user-generated content on the Internet. Despite the modest nature of these proposed changes, they would have a significant impact.
Public Fora Purpose: Analyzing Viewpoint Discrimination on the President’s Twitter Account
Today, protectable speech takes many forms in many spaces. This Note is about the spaces. This Note discusses whether President Donald J. Trump’s personal Twitter account functions as a public forum, and if so, whether blocking constituents from said account amounts to viewpoint discrimination—a First Amendment freedom of speech violation. Part I introduces the core legal devices and doctrines that have developed in freedom of speech jurisprudence relating to issues of public fora. Part II analyzes whether social media generally serves as public fora, whether the President’s personal Twitter account is a public forum, and whether his recent habit of blocking constituents from that account amounts to viewpoint discrimination. In doing so, Part II also addresses the applicability of the recent decision from the U.S. District Court for the Eastern District of Virginia, Davison v. Loudoun County Board of Supervisors—wherein a local county government official was held to have engaged in viewpoint discrimination for banning a constituent from her personal social media account—to the Knight First Amendment Institute at Columbia University’s pending case against the President for the same. Part III then suggests multiple approaches for courts to analyze these claims, while taking account of an analytical mismatch that occurs when trying to apply the Davison case to the case brought against the President.
The Ebb and Flow of Controversial Debates on Social Media
We explore how the polarization around controversial topics evolves on
Twitter - over a long period of time (2011 to 2016), and also as a response to
major external events that lead to increased related activity. We find that
increased activity is typically associated with increased polarization;
however, we find no consistent long-term trend in polarization over time among
the topics we study.
Comment: Accepted as a short paper at ICWSM 2017. Please cite the ICWSM
version and not the arXiv version.