
    State of the art 2015: a literature review of social media intelligence capabilities for counter-terrorism

    Overview: This paper is a review of how information and insight can be drawn from open social media sources. It focuses on the specific research techniques that have emerged, the capabilities they provide, the possible insights they offer, and the ethical and legal questions they raise. These techniques are considered relevant and valuable in so far as they can help to maintain public safety by preventing terrorism, preparing for it, protecting the public from it and pursuing its perpetrators. The report also considers how far this can be achieved against the backdrop of radically changing technology and public attitudes towards surveillance. This is an updated version of a 2013 report on the same subject, State of the Art. Since 2013, there have been significant changes in social media, in how it is used by terrorist groups, and in the methods being developed to make sense of it. The paper is structured as follows: Part 1 is an overview of social media use, focused on how it is used by groups of interest to those involved in counter-terrorism; it includes new sections on trends in social media platforms and on Islamic State (IS). Part 2 provides an introduction to the key approaches of social media intelligence (henceforth ‘SOCMINT’) for counter-terrorism. Part 3 sets out a series of SOCMINT techniques; for each, the capabilities and insights it offers are described, its validity and reliability are assessed, and its possible application to counter-terrorism work is explored. Part 4 outlines a number of important legal, ethical and practical considerations to take into account when undertaking SOCMINT work.
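
    To make the idea of drawing insight from open sources concrete, here is a minimal sketch of one of the simplest techniques of this kind: tracking how often a keyword appears in public posts per day. The posts and keyword are invented for illustration; real collection would go through a platform’s API and the legal and ethical constraints the report discusses in Part 4.

        # Minimal illustration (Python): keyword frequency per day over public posts.
        # The posts below are invented; this is not the report's methodology.
        from collections import Counter

        posts = [  # (date, text) pairs from an open social media source
            ("2015-01-02", "new propaganda video circulating"),
            ("2015-01-02", "video shared again by sympathisers"),
            ("2015-01-03", "authorities respond to the video"),
        ]

        def keyword_trend(posts, keyword):
            """Count mentions of a keyword per day."""
            return Counter(date for date, text in posts if keyword in text.lower())

        print(keyword_trend(posts, "video"))  # Counter({'2015-01-02': 2, '2015-01-03': 1})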

    Social Media’s impact on Intellectual Property Rights

    This is a draft chapter. The final version is available in Handbook of Research on Counterfeiting and Illicit Trade, edited by Peggy E. Chaudhry, published in 2017 by Edward Elgar Publishing Ltd, https://doi.org/10.4337/9781785366451. This material is for private use only, and cannot be used for any other purpose without further permission of the publisher. Peer reviewed.

    Operator-based approaches to harm minimisation in gambling: summary, review and future directions

    In this report we give critical consideration to the nature and effectiveness of harm minimisation in gambling. We identify gambling-related harm as both personal (e.g., health, wellbeing, relationships) and economic (e.g., financial) harm that occurs from exceeding one’s disposable income or disposable leisure time. We have elected to use the term ‘harm minimisation’ as the most appropriate term for reducing the impact of problem gambling, given its breadth in regard to the range of goals it seeks to achieve and the range of means by which they may be achieved. The extent to which an employee can proactively identify a problem gambler in a gambling venue is uncertain. Research suggests that indicators do exist, such as sessional information (e.g., duration or frequency of play) and negative emotional responses to gambling losses. However, the practical implications of requiring employees to identify and interact with customers suspected of experiencing harm are questionable, particularly as the employees may not possess the clinical intervention skills that may be necessary. Based on emerging evidence, behavioural indicators identifiable in industry-held data could be used to identify customers experiencing harm. A programme of research is underway in Great Britain and in other jurisdictions.
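
    As an illustration of what ‘behavioural indicators identifiable in industry-held data’ could look like in practice, here is a minimal sketch that flags sessions exceeding simple thresholds. The field names and thresholds are assumptions invented for this example, not indicators validated by the report.

        # Hypothetical sketch: flagging possible gambling-related harm from
        # industry-held session records. Thresholds are illustrative only.
        from dataclasses import dataclass

        @dataclass
        class Session:
            duration_minutes: float  # length of a single play session
            net_loss: float          # money lost in the session
            deposits: int            # deposits made mid-session

        def harm_flags(sessions, max_minutes=180, max_loss=200.0, max_deposits=3):
            """Return simple behavioural flags raised across a customer's sessions."""
            flags = []
            for s in sessions:
                if s.duration_minutes > max_minutes:
                    flags.append("long session")
                if s.net_loss > max_loss:
                    flags.append("heavy loss")
                if s.deposits > max_deposits:
                    flags.append("repeated mid-session deposits")
            return flags

        history = [Session(240, 350.0, 5), Session(30, 10.0, 1)]
        print(harm_flags(history))  # flags raised by the first session only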

    From Social Data Mining to Forecasting Socio-Economic Crisis

    Socio-economic data mining has great potential in terms of gaining a better understanding of problems that our economy and society are facing, such as financial instability, shortages of resources, or conflicts. Without large-scale data mining, progress in these areas seems hard or impossible. Therefore, a suitable, distributed data mining infrastructure and research centers should be built in Europe. It also appears appropriate to build a network of Crisis Observatories. They can be imagined as laboratories devoted to the gathering and processing of enormous volumes of data on both natural systems such as the Earth and its ecosystem, as well as on human techno-socio-economic systems, so as to gain early warnings of impending events. Reality mining provides the chance to adapt more quickly and more accurately to changing situations. Further opportunities arise from individually customized services, which however should be provided in a privacy-respecting way. This requires the development of novel ICT (such as a self-organizing Web), but most likely new legal regulations and suitable institutions as well. As long as such regulations are lacking on a world-wide scale, it is in the public interest that scientists explore what can be done with the huge data available. Big data do have the potential to change or even threaten democratic societies. The same applies to sudden and large-scale failures of ICT systems. Therefore, dealing with data must be done with a large degree of responsibility and care. Self-interests of individuals, companies or institutions have limits where the public interest is affected, and public interest is not a sufficient justification to violate human rights of individuals. Privacy is a high good, as confidentiality is, and damaging it would have serious side effects for society.
    Comment: 65 pages, 1 figure, Visioneer White Paper, see http://www.visioneer.ethz.c
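
    The early-warning function of a Crisis Observatory can be illustrated with a toy detector: compare each new observation of a socio-economic indicator against its recent history and flag large deviations. The series, window, and threshold below are invented; real observatories would use far richer models.

        # Toy early-warning sketch: rolling z-score over a time series.
        import statistics

        def early_warnings(series, window=30, threshold=3.0):
            """Yield indices where a value deviates strongly from its recent history."""
            for i in range(window, len(series)):
                recent = series[i - window:i]
                mean = statistics.mean(recent)
                stdev = statistics.stdev(recent)
                if stdev > 0 and abs(series[i] - mean) / stdev > threshold:
                    yield i

        # A flat indicator with one abrupt shock at index 40.
        data = [100.0 + (i % 3) for i in range(40)] + [160.0]
        print(list(early_warnings(data)))  # [40]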

    Delegated Dictatorship: Examining the State and Market Forces behind Information Control in China

    A large body of literature devoted to analyzing information control in China concludes that we find imperfect censorship because the state has adopted a minimalist strategy for information control. In other words, the state is deliberately selective about the content that it censors. While some claim that the government limits its attention to the most categorically harmful content—content that may lead to mobilization—others suggest that the state limits the scope of censorship to allow space for criticism which enables the state to gather information about popular grievances or badly performing local cadres. In contrast, I argue that imperfect censorship in China results from a precise and covert implementation of the government's maximalist strategy for information control. The state is intolerant of government criticisms, discussions of collective action, non-official coverage of crime, and a host of other types of information that may challenge state authority and legitimacy. This strategy produces imperfect censorship because the state prefers to implement it covertly, and thus, delegates to private companies, targets repression, and engages in astroturfing to reduce the visibility and disruptiveness of information control tactics. This both insulates the state from popular backlash and increases the effectiveness of its informational interventions. I test the hypotheses generated from this theory by analyzing a custom dataset of censorship logs from a popular social media company, Sina Weibo. These logs measure the government's intent about what content should and should not be censored. A systematic analysis of content targeted for censorship demonstrates the broadness of the government's censorship agenda. These data also show that delegation to private companies softens and refines the state's informational interventions so that the government's broad agenda is maximally implemented while minimizing popular backlash that would otherwise threaten the effectiveness of its informational interventions.
    PhD dissertation, Political Science, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/147514/1/blakeapm_1.pd
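
    A sketch of how the breadth of a censorship agenda might be measured from such logs: code each censored post into content categories and tally how many distinct categories are targeted. The categories, keywords, and log entries below are invented; the dissertation’s actual coding scheme is not reproduced here.

        # Hypothetical sketch: tallying censored posts by content category.
        from collections import Counter

        CATEGORIES = {  # invented category -> keyword lists
            "government criticism": ["corrupt", "incompetent"],
            "collective action": ["protest", "strike"],
            "crime coverage": ["murder", "riot"],
        }

        def categorize(post_text):
            """Return every category whose keywords appear in a censored post."""
            text = post_text.lower()
            return [cat for cat, words in CATEGORIES.items()
                    if any(w in text for w in words)]

        censorship_log = [
            "officials are corrupt",
            "join the protest tomorrow",
            "riot downtown tonight",
        ]
        counts = Counter(cat for post in censorship_log for cat in categorize(post))
        print(counts)  # breadth: how many distinct content types are targeted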

    "HOT" ChatGPT: The promise of ChatGPT in detecting and discriminating hateful, offensive, and toxic comments on social media

    Harmful content is pervasive on social media, poisoning online communities and negatively impacting participation. A common approach to address this issue is to develop detection models that rely on human annotations. However, the tasks required to build such models expose annotators to harmful and offensive content and may require significant time and cost to complete. Generative AI models have the potential to understand and detect harmful content. To investigate this potential, we used ChatGPT and compared its performance with MTurker annotations for three frequently discussed concepts related to harmful content: Hateful, Offensive, and Toxic (HOT). We designed five prompts to interact with ChatGPT and conducted four experiments eliciting HOT classifications. Our results show that ChatGPT can achieve an accuracy of approximately 80% when compared to MTurker annotations. Specifically, the model displays a more consistent classification for non-HOT comments than HOT comments compared to human annotations. Our findings also suggest that ChatGPT classifications align with the provided HOT definitions, but ChatGPT classifies "hateful" and "offensive" as subsets of "toxic." Moreover, the choice of prompts used to interact with ChatGPT impacts its performance. Based on these insights, our study provides several meaningful implications for employing ChatGPT to detect HOT content, particularly regarding the reliability and consistency of its performance, its understanding of and reasoning about the HOT concept, and the impact of prompts on its performance. Overall, our study provides guidance about the potential of using generative AI models to moderate large volumes of user-generated content on social media.
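
    The core elicitation step can be sketched as follows. The prompt wording and model name are assumptions for illustration; the paper’s five prompts and experimental setup are not reproduced here.

        # Minimal sketch of eliciting HOT (hateful/offensive/toxic) labels
        # from a chat model, using the openai Python client. The prompt is
        # an invented stand-in for the study's prompts.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        PROMPT = (
            "For the comment below, answer with three yes/no labels, one per line:\n"
            "hateful: ...\noffensive: ...\ntoxic: ...\n\nComment: {comment}"
        )

        def hot_labels(comment):
            """Ask the model whether a comment is hateful, offensive, or toxic."""
            reply = client.chat.completions.create(
                model="gpt-4o-mini",  # assumed model; the study used ChatGPT
                messages=[{"role": "user", "content": PROMPT.format(comment=comment)}],
                temperature=0,  # reduce run-to-run variation in labels
            )
            text = reply.choices[0].message.content.lower()
            return {label: f"{label}: yes" in text
                    for label in ("hateful", "offensive", "toxic")}

        print(hot_labels("You people are all idiots."))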

    Internet Giants as Quasi-Governmental Actors and the Limits of Contractual Consent

    Although the government’s data-mining program relied heavily on information and technology that the government received from private companies, relatively little of the public outrage generated by Edward Snowden’s revelations was directed at those private companies. We argue that the mystique of the Internet giants and the myth of contractual consent combine to mute criticisms that otherwise might be directed at the real data-mining masterminds. As a result, consumers are deemed to have consented to the use of their private information in ways they would not agree to had they known the purposes to which their information would be put and the entities – including the federal government – with whom their information would be shared. We also call into question the distinction between governmental actors and private actors in this realm, as the Internet giants increasingly exploit contractual mechanisms to operate with quasi-governmental powers in their relations with consumers. As regulators and policymakers focus on how to better protect consumer data, we propose that solutions relying upon consumer permission adopt a more exacting and limited concept of the consent required before private entities may collect or make use of consumers’ information where such uses touch upon privacy interests.

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on the information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS Think-tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and a socio-economic perspective. The technical perspective includes an up-to-date view on content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmarking initiatives that measure the performance of multimedia search engines. From a socio-economic perspective, we take stock of the impact and legal consequences of these technical advances and point out future directions of research.
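
    The content-based retrieval paradigm surveyed here reduces each media item to a feature vector and ranks items by similarity to the query’s features. A toy sketch with invented three-dimensional features:

        # Toy content-based retrieval: rank items by cosine similarity of
        # precomputed feature vectors (e.g., colour histograms). All data invented.
        import math

        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
            return dot / norm if norm else 0.0

        index = {  # item id -> content features
            "clip_a": [0.9, 0.1, 0.0],
            "clip_b": [0.2, 0.8, 0.1],
            "clip_c": [0.1, 0.1, 0.9],
        }

        def search(query_features, k=2):
            """Rank indexed items by similarity to the query features."""
            ranked = sorted(index, key=lambda i: cosine(index[i], query_features),
                            reverse=True)
            return ranked[:k]

        print(search([0.85, 0.15, 0.05]))  # ['clip_a', 'clip_b']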