
    Where's the Beef? How Chicago Gang Members Utilize Social Media to Promote Beefs and Incite Gang Violence and Gang Murders

    This literature review examines how gang members utilize social media to incite gang violence offline, with the aim of supporting the creation of a new special unit of the Chicago Police Department that would work with gang violence prevention and reduction organizations to reduce gang conflict originating on social media. Existing studies address different aspects of how gangs and gang members use social media. “Internet banging” or “cyber banging” is a form of gang banging carried out on social media platforms: it is used to threaten, provoke, and taunt rival gang members and gangs, and the exchanges it generates online can incite violent reactions that carry over into violence on the street. This review describes and explains how gang members use social media to incite gang violence and gang murders offline, and how social media and other alternatives can be used to prevent gang violence. It draws on case studies of “internet banging” or “cyber banging”, how gangs and gang members use social media to “gang bang”, how social media has changed the way gangs interact and conduct gang activity, and how the Chicago Police Department polices social media.

    Written Evidence Submitted to the House of Commons Digital, Culture, Media and Sport Select Committee’s Inquiry on Fake News

    Executive Summary: This submission provides evidence on four aspects: 1. What do we know about fake news, fake profiles/accounts, and fake attention on social media? 2. What are the causes of fake news, political bots and fake social media accounts? 3. What are the problems and impacts of fake news, political bots and fake accounts? 4. What can be done against fake news culture? This submission gives special attention to the role of online advertising in fake news culture. (§§1.1-1.16) Reports, research and analyses suggest that fake news, automated social media bots that post content online, fake online attention and fake profiles play a significant role in political communication on social media. (§§2.1-2.6) The proliferation of fake news, political bots and fake accounts on social media has interacting economic, political and ideological causes. There are no technological fixes to the problems associated with fake news. (§§3.1-3.5) There are a number of potential problems associated with fake news, political bots and fake accounts that can limit and negatively impact the public sphere: the undermining of human communication’s validity claims; threats to democracy; one-dimensional, instrumental, highly polarised and symbolically/communicatively violent politics; and spirals of intensifying political aggression and violence. (§§4.1-4.12) There are a number of feasible measures that can be taken to challenge fake news culture. These include: outlawing targeted and behavioural political online advertising; substituting algorithmic activity with the paid human work of fact-checkers and knowledge professionals; a legal requirement to introduce a fake news alert button; support for new types of online platforms and formats that decelerate news and political communication and act as slow media; the advancement of and support for public service Internet platforms; giving the BBC an important role in advancing public service Internet platforms that foster advertising-free political debate that challenges fake news; and the introduction of an online advertising tax on all ads targeted at users accessing the Internet in the UK, in order to provide a resource base for funding public service and alternative Internet platforms that foster a new culture of political debate.

    Children's rights in the digital age: a download from children around the world

    Evidence from across the world is telling us that no matter where they are from, more and more children are relying on digital tools, platforms and services to learn, engage, participate, play, innovate, work or socialise. Foreword: Some two-thirds of the world’s almost three billion internet users are from the developing world, with the numbers growing every day. Many of these new users are children and young people; in fact in many countries, internet users under the age of 24 far outnumber the rest. A growing body of evidence from across the world is also telling us that no matter where they are from, more and more children are relying on digital tools, platforms and services to learn, engage, participate, play, innovate, work or socialise. There are already countless examples of how, when harnessed appropriately, digital tools can help promote human development, by closing gaps in access to information, speeding up service delivery, supporting educational and health outcomes, and creating new entrepreneurship opportunities. The power of technology to jump across borders and time zones, to join the once disparate, and to foster social connectedness, has provided the means for the children and young people of today to participate in a global society in ways previously not possible. Sadly, there are also new or evolving risks: exposure to violence; access to inappropriate content, goods and services; concerns about excessive use; and issues of data protection and privacy. As it becomes increasingly difficult to draw the line between offline and online, it is necessary for us to examine how this changing environment impacts the wellbeing and development of children and their rights. Ensuring that all children are safe online requires approaches that promote digital literacy, resilience and cyber-savvy. It is only in partnership that we can reach consensus on how to create a safe, open, accessible, affordable and secure digital world. Critically, children and young people’s profound insight must help inform, shape and drive this goal, which needs to focus on equity of access, safety for all, digital literacy across generations, identity and privacy, participation and civic engagement. In April of this year, the Berkman Center for Internet & Society at Harvard University and UNICEF co-hosted, in collaboration with PEW Internet, EU Kids Online, the Internet Society (ISOC), Family Online Safety Institute (FOSI), and YouthPolicy.org, a first-of-its-kind international ‘Digitally Connected’ symposium on children, youth, and digital media. The symposium sought to map and explore the global state of research and practice in this field, and to facilitate sharing, discussion and collaboration among the 150 academics, practitioners, young people, activists, philanthropists, government officials, and representatives of technology companies from around the world.

    Social Media Accountability for Terrorist Propaganda

    Terrorist organizations have found social media websites to be invaluable for disseminating ideology, recruiting terrorists, and planning operations. National and international leaders have repeatedly pointed out the dangers terrorists pose to ordinary people and state institutions. In the United States, the federal Communications Decency Act’s § 230 provides social networking websites with immunity against civil lawsuits. Litigants have therefore been unsuccessful in obtaining redress against internet companies who host or disseminate third-party terrorist content. This Article demonstrates that § 230 does not bar private parties from recovery if they can prove that a social media company received complaints about specific webpages, videos, posts, articles, IP addresses, or accounts of foreign terrorist organizations; that the company failed to remove the material; that a terrorist subsequently viewed or interacted with the material on the website; and that the terrorist acted upon the propaganda to harm the plaintiff. This Article argues that irrespective of civil immunity, the First Amendment does not limit Congress’s authority to impose criminal liability on those content intermediaries who have been notified that their websites are hosting third-party foreign terrorist incitement, recruitment, or instruction. Neither the First Amendment nor the Communications Decency Act prevents this form of federal criminal prosecution. A social media company can be prosecuted for material support of terrorism if it is knowingly providing a platform to organizations or individuals who advocate the commission of terrorist acts. Mechanisms will also need to be created that can enable administrators to take emergency measures, while simultaneously preserving the due process rights of internet intermediaries to challenge orders to immediately block, temporarily remove, or permanently destroy data.

    Online Terrorist Speech, Direct Government Regulation, and the Communications Decency Act

    The Communications Decency Act (CDA) provides Internet platforms with complete protection from liability for user-generated content. This Article discusses the costs of this current legal framework and several potential solutions. It proposes three modifications to the CDA that would use a carrot and stick to incentivize companies to take a more active role in addressing some of the most blatant downsides of user-generated content on the Internet. Despite the modest nature of these proposed changes, they would have a significant impact.

    Blog symposium 'Strasbourg Observers turns ten' (2): the Court’s subtle approach of online media platforms’ liability for user-generated content since the ‘Delfi Oracle’

    This blog post, nearly five years after the final Delfi judgment (ECtHR Grand Chamber 16 June 2015), focuses on the impact of the Delfi case and gives a short overview of further developments in the Court’s case law determining the scope of liability of internet platforms and other online intermediaries for user-generated content. Finally, we refer to the initiative by the Committee of Ministers of the Council of Europe recommending that member states respect and apply a set of guidelines when implementing legislative frameworks relating to internet intermediaries, including principles guaranteeing users’ rights to freedom of expression in the online environment.

    The roles of “old” and “new” media tools and technologies in the facilitation of violent extremism and terrorism

    This chapter describes and discusses the roles of media tools and technologies in the facilitation of violent extremism and terrorism. Rather than focusing on how media report on terrorism, we investigate how extremist and terrorist groups and movements themselves have exploited various “traditional” and “new” media tools, from print to digital, outlining the significance these tools have had for extremists’ ability to mark territory, intimidate some audiences, connect with other (sympathetic) audiences, radicalize, and even recruit. We underline that violent extremists and terrorists of all stripes have, over time, used every means at their disposal to advance their communicative goals. It is also worth noting that ‘old’ media tools are not extinct: while ‘new’ media play a prominent role in contemporary violent extremism and terrorism, ‘old’ tools, everything from murals to magazines, continue to be used in tandem with newer ones.

    Terrorist Incitement on the Internet

    I organized this symposium to advance understanding of how terrorist communications drive and influence social, political, religious, civil, literary, and artistic conduct. Viewing terrorist speech through wide prisms of law, culture, and contemporary media can provide lawmakers, adjudicators, and administrators with a better understanding of how to contain and prevent the exploitation of modern communication technologies to influence, recruit, and exploit others to perpetrate ideologically driven acts of violence. Undertaking such a multipronged study requires not only looking at the personal and sociological appeals that extreme ideology exerts but also considering how to create political, administrative, educational, and economic conditions to effect positive change at micro and macro levels. The deep analysis that a symposium provides can paint a more comprehensive picture to explain the effectiveness or ineffectiveness of various memes, videos, interactive websites, group chat rooms, and blogs that justify, glorify, or incite violence. Moreover, understanding the operation of terrorist groups on the internet can help to explain their organizational hierarchies.

    Digital and Mobile Security for Mexican Journalists and Bloggers

    A new survey of 102 journalists and bloggers in 20 Mexican states shows nearly 70 percent have been threatened or have suffered attacks because of their work. In addition, 96 percent say they know of colleagues who have been attacked. Respondents to the survey also say they view cyber-espionage and email-account cracking as the most serious digital risks they face. And while nearly all have access to and rely on the Internet, social networks, mobile phones and blogging platforms for their work, they also admit that they have little or no command of digital security tools such as encryption, use of virtual private networks (VPNs), anonymous Internet navigation and secure file removal. The results of this survey show the urgent need to introduce Mexican journalists and bloggers to new technologies and protocols and help newsrooms develop a culture of digital-security awareness to counter increasingly sophisticated threats and attacks from both governmental agencies and criminal organizations.