
    False News On Social Media: A Data-Driven Survey

    In the past few years, the research community has dedicated growing attention to the issue of false news circulating on social networks. The widespread interest in detecting and characterizing false news has been motivated by the considerable real-world harm this threat has caused. Compared with traditional news outlets, social media platforms exhibit peculiar characteristics that have been particularly favorable to the proliferation of deceptive information, and they present unique challenges for all kinds of potential interventions on the subject. As this issue becomes a global concern, it is also gaining more attention in academia. The aim of this survey is to offer a comprehensive study of recent advances in the detection, characterization, and mitigation of false news that propagates on social media, as well as the challenges and open questions that await future research in the field. We use a data-driven approach, focusing on a classification of the features each study uses to characterize false information and on the datasets used to train classification methods. At the end of the survey, we highlight emerging approaches that look most promising for addressing false news.

    A Comparison of False-Information Policies in Five Countries before and during the COVID-19 Pandemic

    This study analyzes five countries’ false-information policies before and during the COVID-19 pandemic. Building upon existing discussions of regulation models, this paper uses a qualitative, comparative case study method to unpack the characteristics of false-information policies in each country. The before-after comparisons show that each country has a unique evolving path of false-information regulation and that the state has enhanced, or attempted to enhance, its role in battling the infodemic during the pandemic. Regulatory practice is a dynamic process involving not only governments and social media platforms but also multiple other actors, which is leading to more complex practices and blurring the boundaries of existing models. We discuss the limitations of existing regulation models and suggest a relational perspective for understanding the underlying relations between the state, platforms, and other stakeholders.

    The international development of the ‘Social Norms’ approach to drug education and prevention

    Binge drinking has sparked considerable interest and concern. However, despite this interest, little is known about the lay understanding of binge drinking and whether understanding differs by gender, age, and level of deprivation. Aims: This study investigated the beliefs and attitudes towards binge drinking of a sample in the Inverclyde area. Methods: Using both cluster and quota sampling, 586 subjects completed a structured interview with open questions about their beliefs on binge drinking and whether it was a problem generally and locally. Findings: Definitions of binge drinking tended to concentrate on intoxication, and some described a dependent drinking pattern. The causes and solutions offered were varied but pointed to levels of deprivation in respect of jobs and entertainment. More subjects regarded binge drinking as a problem in society than locally, which is consistent with research suggesting that misperceptions of others’ drinking increase with social distance. Differences in beliefs were found by age and level of deprivation but not gender. Notably, no subject offered the ‘official’ definition of bingeing, or even an approximation of it. Conclusions: Further research is required if future mass media campaigns and interventions are to be relevant to the population.

    Publish, Share, Re-Tweet, and Repeat

    New technologies allow users to communicate ideas to a broad audience easily and quickly, affecting the way ideas are interpreted and their credibility. Each and every social network user can simply click “share” or “retweet” and automatically republish an existing post and expose a new message to a wide audience. The dissemination of ideas can raise public awareness about important issues and bring about social, political, and economic change. Yet, digital sharing also provides vast opportunities to spread false rumors, defamation, and Fake News stories at the thoughtless click of a button. The spreading of falsehoods can severely harm the reputation of victims, erode democracy, and infringe on the public interest. Holding the original publisher accountable and collecting damages from him offers very limited redress since the harmful expression can continue to spread. How should the law respond to this phenomenon and who should be held accountable? Drawing on multidisciplinary social science scholarship from network theory and cognitive psychology, this Article describes how falsehoods spread on social networks, the different motivations to disseminate them, the gravity of the harm they can inflict, and the likelihood of correcting false information once it has been distributed in this setting. This Article will also describe the top-down influence of social media platform intermediaries, and how it enhances dissemination by exploiting users’ cognitive biases and creating social cues that encourage users to share information. Understanding how falsehoods spread is a first step towards providing a framework for meeting this challenge. The Article argues that it is high time to rethink intermediary duties and obligations regarding the dissemination of falsehoods. It examines a new perspective for mitigating the harm caused by the dissemination of falsehood. 
The Article advocates harnessing social network intermediaries to meet the challenge of dissemination from the stage of platform design. It proposes innovative solutions for mitigating careless, irresponsible sharing of false rumors. The first solution focuses on a platform’s accountability for influencing user decision-making processes. “Nudges” can discourage users from thoughtless sharing of falsehoods and promote accountability ex ante. The second solution focuses on allowing effective ex post facto removal of falsehoods, defamation, and fake news stories from all profiles and locations where they have spread. Shaping user choices and designing platforms is value-laden, reflecting the platform’s particular set of preferences, and should not be taken for granted. Therefore, this Article proposes ways to incentivize intermediaries to adopt these solutions and mitigate the harm generated by the spreading of falsehoods. Finally, the Article addresses the limitations of the proposed solutions yet still concludes that they are more effective than current legal practices.

    Use of a controlled experiment and computational models to measure the impact of sequential peer exposures on decision making

    It is widely believed that one's peers influence product-adoption behaviors. This relationship has been linked to the number of signals a decision-maker receives in a social network. But it is unclear whether the same principles hold when the pattern by which these signals arrive varies and when peer influence is directed towards choices that are not optimal. To investigate this, we manipulate social-signal exposure in an online controlled experiment using a game with human participants. Each participant in the game makes a decision among choices with differing utilities. We observe the following: (1) even in the presence of monetary risks and previously acquired knowledge of the choices, decision-makers tend to deviate from the obvious optimal decision when their peers make a similar decision, which we call the influence decision; (2) when the quantity of social signals varies over time, the forwarding probability of the influence decision, and hence responsiveness to social influence, does not necessarily correlate proportionally with the absolute quantity of signals. To better understand how these rules of peer influence could be used in modeling real-world diffusion in networked environments, we use our behavioral findings to simulate spreading dynamics in real-world case studies. We specifically examine how cumulative influence plays out in the presence of user uncertainty and measure its effect on rumor diffusion, which we model as an example of sub-optimal choice diffusion. Together, our simulation results indicate that sequential peer effects from the influence decision overcome individual uncertainty to drive faster rumor diffusion over time. However, when the rate of diffusion is slow in the beginning, user uncertainty can play a substantial role, compared with peer influence, in deciding the adoption trajectory of a piece of questionable information.

    Countering Anti-Vaccination Rumors on Twitter

    This study examined the effects of the counter-rumor on changes in belief about the anti-vaccination claim, anxiety associated with the rumor, and intentions to vaccinate a child and to share the rumor. In particular, we tested whether argument strength, source expertise, and the recipient’s previously held attitude toward vaccination could affect these outcomes. First, pilot tests were conducted to check source expertise (N = 161) and argument strength (N = 74; N = 73) and to select the sources and messages used in the experiment. A 2 (argument strength: strong vs. weak) x 2 (source expertise: high vs. low) between-subjects experimental design was employed, and we conducted an online experiment (N = 400) set up in Qualtrics. Participants were recruited via Prolific, a crowdsourcing website. The results showed that attitude toward mandatory vaccination had an impact on the change in belief about the anti-vaccination claim. We also found that source expertise had a significant impact on the change in anxiety: those who read the counter-rumor from the CDC reported a greater decrease in anxiety than those who read the counter-rumor from a layperson. This finding suggests that heuristic processing occurs in the reception of the anti-vaccination rumor and the counter-rumor that refutes it, such that people are less likely to feel anxious about the anti-vaccination rumor when they receive the counter-rumor from a high-expertise source. Furthermore, the results showed a significant interaction between argument strength and source expertise on the change in vaccination intention. When participants read the counter-rumor from the CDC, they reported a greater increase in their intention to vaccinate a child in response to the strong argument than to the weak argument.
Conversely, when they read the counter-rumor from a layperson, the opposite pattern appeared: they reported a greater increase in vaccination intention in response to the weak argument than to the strong argument. This finding reveals that cue-message congruency plays a crucial role in increasing the effectiveness of the counter-rumor and promoting behavioral change. The theoretical implications of the current findings are discussed in light of cognitive dissonance theory, the dual-process model of information processing, and the online rumor literature. The practical implications of the findings are further discussed with regard to designing strategies and interventions that mitigate the harmful consequences of health-related rumors.

    Institutional vs. Non-institutional use of Social Media during Emergency Response: A Case of Twitter in 2014 Australian Bush Fire

    © 2017, Springer Science+Business Media, LLC. Social media plays a significant role in the rapid propagation of information when disasters occur. Among the four phases of the disaster management life cycle (prevention, preparedness, response, and recovery), this paper focuses on the use of social media during the response phase. It empirically examines the use of microblogging platforms by Emergency Response Organisations (EROs) during extreme natural events and distinguishes the use of Twitter by EROs from that of digital volunteers during a fire hazard that occurred in the Australian state of Victoria in early February 2014. We analysed 7982 tweets on this event. While theories such as World System Theory and Institutional Theory have traditionally focused on the role of powerful institutional information outlets, we found that platforms like Twitter challenge this notion by sharing power between large institutional players (e.g. EROs) and smaller non-institutional players (e.g. digital volunteers) in the dissemination of disaster information. Our results highlight that both large EROs and individual digital volunteers proactively used Twitter to disseminate fire-related information. We also found that the contents of tweets were more informative than directive and that, while the total number of messages posted by the top EROs was higher than that posted by non-institutional users, the non-institutional users attracted a greater number of retweets.

    A Study of Norm Formation Dynamics in Online Crowds

    In extreme events such as the 2011 Egyptian uprising, online social media technology enables many people from heterogeneous backgrounds to interact in response to the crisis. This form of collectivity (an online crowd) usually forms spontaneously, with minimal constraints on the relationships among its members. Theories of collective behavior suggest that the patterns of behavior in a crowd are not just a set of random acts; instead, they evolve toward a normative stage. Because of the uncertainty of such situations, people are more likely to search for norms. Understanding the process of norm formation in online social media is beneficial for any organization that seeks to establish a norm or to understand how existing norms emerged. In this study, I propose a longitudinal, data-driven approach to investigate the dynamics of norm formation in online crowds. In the research model, the formation of recurrent behaviors (behavioral regularities) is recognized as the first step toward norm formation, and this study focuses on that first step. The dataset comprises the tweets posted during the 2011 Egyptian movement. The results show that social structure has an impact on the formation of behavioral regularities, the first step of norm formation. The results also suggest that accounting for different roles in the crowd uncovers a more detailed view of norms and helps to define emergent norms from a new perspective. The outcome indicates that there are significant differences in the behavioral regularities of the different roles formed over time. For instance, users of the same role tend to practice more reciprocity inside their role group than outside it. I contribute to theory, first, by extending the implications of current relevant theories to the context of online social media and, second, by investigating those theoretical implications through an analysis of empirical real-life data.
In this dissertation, I review prior studies and provide the theoretical foundation for my research. I then discuss the research method and the preliminary results from the pilot studies. Finally, I present the results of the analysis and provide a discussion and conclusion.

    Persuading users into verifying online fake news

    Abstract. Checking the authenticity of news before sharing it online can reduce the spread of misinformation. But fact-checking requires cognitive and psychological effort that people are often unwilling to expend, and some fact-checking methods might even be counterproductive, entrenching people further in their deeply held beliefs. Numerous online fact-checking services have emerged recently to verify false claims. While these services are quite efficient technologically, they largely overlook the human behavioral factors associated with fake news. Persuasive systems have proven successful in producing attitudinal and behavioral change and could be applied here as behavioral interventions for fact-checking. A review of current fact-checking services showed that they significantly lack persuasive features, resulting in a passive and linear user experience. Findings from the cognitive science and persuasion literature paved the way for the development of a fact-checking mobile application designed to encourage regular fact-checking. Qualitative and quantitative evaluation of the artifact showed the promise of persuasion in combating fake news. Social-support persuasive features were found most effective, followed by tunnelling and self-monitoring. Implications of these findings and future research directions are discussed.