
    Fake News Flags, Cognitive Dissonance, and the Believability of Social Media Posts

    Despite the increasing relevance of countering fake news on social media, there are only a few studies on the merit of fake news flags. The main goal of this research is therefore to investigate how fake news flags and the reputation of sources affect the believability and information elaboration of news content shared online. Based on data from an online pre-study with 118 participants, we present preliminary results and describe how we intend to test our research model in more detail in an experimental eye-tracking study. Our initial findings suggest that fake news flags have a measurable impact on the believability of news but only partially counteract the established reputation of a trusted information source. These results serve a broader research agenda to develop systems and user interfaces that communicate fact-checking results and debunk fake news more effectively.

    Visual Attention to Fake News Flags in Social Media News Posts: An Eye Tracking Study

    Given the widespread prevalence of fake news on social media, fake news warnings can play a decisive role in combating misinformation. However, research is still debating the extent to which readers of news on social media heed fake news warnings, which matters for evaluating their effectiveness. In this work, we focus on fake news flags with color gradients from green (verification) to red (warning) and investigate the conditions under which they receive visual attention. In an eye-tracking experiment, we assigned fake news flags to three social media post elements (user, source, news article) and manipulated the number of fake news flags indicating a warning or verification. Our results reveal that fake news flags for the news article receive more visual attention than those for the user or source. In addition, we provide evidence that confirmation bias moderates the effect of unique flags (warning or verification) on visual attention.
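
    As a minimal illustration of the kind of area-of-interest (AOI) analysis such an eye-tracking study relies on (this is not the authors' pipeline; the column names and AOI labels below are assumptions), fixation durations can be summed per participant and per flagged post element to obtain dwell time, the attention measure compared across conditions:

        import pandas as pd

        def dwell_time_per_aoi(fixations: pd.DataFrame) -> pd.DataFrame:
            # fixations columns (hypothetical): 'participant', 'aoi' in
            # {'user', 'source', 'article'}, and 'duration_ms' per fixation.
            return (fixations
                    .groupby(["participant", "aoi"], as_index=False)["duration_ms"]
                    .sum()
                    .rename(columns={"duration_ms": "dwell_time_ms"}))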

    Stand for Something or Fall for Everything: Predict Misinformation Spread with Stance-Aware Graph Neural Networks

    Although the pervasive spread of misinformation on social media platforms has become a pressing challenge, existing platform interventions have shown limited success in curbing its dissemination. In this study, we propose a stance-aware graph neural network (stance-aware GNN) that leverages users’ stances to proactively predict misinformation spread. As different user stances can form unique echo chambers, we customize four information passing paths in the stance-aware GNN, while trainable attention weights provide explainability by highlighting each structure’s importance. Evaluated on a real-world dataset, the stance-aware GNN outperforms benchmarks by 32.65% and exceeds advanced GNNs without user stance by over 4.69%. Furthermore, the attention weights indicate that users’ opposition stances have a higher impact on their neighbors’ behaviors than supportive ones, functioning as a social correction that halts misinformation propagation. Overall, our study provides an effective predictive model for platforms to combat misinformation and highlights the impact of user stances on misinformation propagation.
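
    As a rough sketch of the architecture described above (not the authors' code; the four-path layout, layer names, and aggregation details are assumptions for illustration), a stance-aware message-passing layer in PyTorch could combine stance-defined neighbor aggregations with trainable path attention like this:

        import torch
        import torch.nn as nn

        class StanceAwarePassing(nn.Module):
            # One message-passing layer with four stance-defined paths (e.g.,
            # support->support, support->oppose, oppose->support, oppose->oppose),
            # combined with trainable attention weights over the paths.
            def __init__(self, dim: int):
                super().__init__()
                self.path_transforms = nn.ModuleList(nn.Linear(dim, dim) for _ in range(4))
                self.path_attention = nn.Parameter(torch.ones(4))  # explainable path importance

            def forward(self, x, adj_by_path):
                # x: (num_users, dim) user embeddings
                # adj_by_path: four (num_users, num_users) adjacency matrices, one per path
                weights = torch.softmax(self.path_attention, dim=0)
                out = torch.zeros_like(x)
                for w, lin, adj in zip(weights, self.path_transforms, adj_by_path):
                    out = out + w * (adj @ lin(x))  # aggregate neighbor messages along this path
                return torch.relu(out)

    After training, the softmaxed path_attention values indicate which stance structure contributes most to the prediction, which is how the attention weights can serve as the explainability signal mentioned in the abstract.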

    The effects of user assistance systems on user perception and behavior

    The rapid development of information technology (IT) is changing how people approach and interact with IT systems (Maedche et al. 2016). IT systems can increasingly support people in performing ever more complex tasks (Vtyurina and Fourney 2018). However, people's cognitive abilities have not evolved as quickly as technology (Maedche et al. 2016). Thus, different external factors (e.g., complexity or uncertainty) and internal conditions (e.g., cognitive load or stress) reduce decision quality (Acciarini et al. 2021; Caputo 2013; Hilbert 2012). User-assistance systems (UASs) can help compensate for human weaknesses and cope with these challenges. UASs aim to improve the user's cognition and capabilities, benefiting individuals, organizations, and society. To achieve this goal, UASs collect, prepare, aggregate, and analyze information and communicate results according to user preferences (Maedche et al. 2019). This support can relieve users and improve the quality of decision-making. Using UASs offers many benefits but requires successful interaction between the user and the UAS. However, this interaction introduces social and technical challenges, such as loss of control or reduced explainability, which can affect user trust and the willingness to use the UAS (Maedche et al. 2019). To realize the benefits, UASs must be developed based on an understanding and incorporation of users' needs. Users and UASs are part of a socio-technical system that completes a specific task (Maedche et al. 2019). To create a benefit from the interaction, it is necessary to understand the interaction within the socio-technical system, i.e., the interaction between the user, UAS, and task, and to align these components. For this reason, this dissertation aims to extend the existing knowledge on UAS design by better understanding the effects and mechanisms at work during the interaction between UASs and users in different application contexts. To this end, theory and findings from different disciplines are combined and new theoretical knowledge is derived. In addition, data is collected and analyzed to validate the new theoretical knowledge empirically. The findings can be used to reduce adaptation barriers and realize positive outcomes. Overall, this dissertation addresses the four classes of UASs presented by Maedche et al. (2016): basic UASs, interactive UASs, intelligent UASs, and anticipating UASs.

    First, this dissertation contributes to understanding how users interact with basic UASs. Basic UASs do not process contextual information and interact little with the user (Maedche et al. 2016). This makes basic UASs suitable for application contexts, such as social media, where little interaction is desired. Social media is primarily used for entertainment and focuses on content consumption (Moravec et al. 2018). As a result, social media has become an essential source of news but also a target for fake news, with negative consequences for individuals and society (Clarke et al. 2021; Laato et al. 2020). This thesis therefore presents two approaches to how basic UASs can be used to reduce the negative influence of fake news. Firstly, basic UASs can provide interventions by warning users about questionable content and providing verified information, but the order in which the intervention elements are displayed influences how the fake news is perceived; to achieve an effective intervention, the intervention elements should be displayed after the fake news story. Secondly, basic UASs can present social norms to motivate users to report fake news and thereby stop its spread. However, social norms should be used carefully, as they can backfire and reduce the willingness to report fake news.

    Second, this dissertation contributes to understanding how users interact with interactive UASs. Interactive UASs incorporate limited information from the application context but focus on close interaction with the user to achieve a specific goal or behavior (Maedche et al. 2016). Typical goals include more physical activity, a healthier diet, and less tobacco and alcohol consumption to prevent disease and premature death (World Health Organization 2020). To increase goal achievement, previous research often utilizes digital human representations (DHRs) such as avatars and embodied agents to form a socio-technical relationship between the user and the interactive UAS (Kim and Sundar 2012a; Pfeuffer et al. 2019). However, understanding how the design features of an interactive UAS affect the interaction with the user is crucial, as each design feature has a distinct impact on the user's perception. Based on existing knowledge, this thesis highlights the most widely used design features and analyzes their effects on behavior. The findings reveal important implications for future interactive UAS design.

    Third, this dissertation contributes to understanding how users interact with intelligent UASs. Intelligent UASs prioritize processing user and contextual information to adapt to the user's needs rather than focusing on intensive interaction with the user (Maedche et al. 2016). Intelligent UASs with emotional intelligence can thus provide people with task-oriented and emotional support, making them well suited for situations in which interpersonal relationships are neglected, such as crowd working. Crowd workers frequently work independently without significant interactions with other people (Jäger et al. 2019). In crowd work environments, traditional leader-employee relationships are usually not established, which can have a negative impact on employee motivation and performance (Cavazotte et al. 2012). This thesis therefore examines the impact of an intelligent UAS with leadership and emotional capabilities on employee performance and enjoyment. The leadership capabilities of the intelligent UAS increase enjoyment but decrease performance, while its emotional capabilities reduce the stimulating effect of the leadership characteristics.

    Fourth, this dissertation contributes to understanding how users interact with anticipating UASs. Anticipating UASs are intelligent and interactive, providing users with task-related and emotional stimuli (Maedche et al. 2016). They also have advanced communication interfaces and can adapt to current situations and predict future events (Knote et al. 2018). Because of these advanced capabilities, anticipating UASs enable collaborative work settings and often use anthropomorphic design cues to make the interaction more intuitive and comfortable (André et al. 2019). However, these anthropomorphic design cues can also raise expectations too high, leading to disappointment and rejection if they are not met (Bartneck et al. 2009; Mori 1970). To create a successful collaborative relationship between anticipating UASs and users, it is important to understand the impact of anthropomorphic design cues on the interaction and on decision-making processes. This dissertation presents a theoretical model that explains the interaction between anthropomorphic anticipating UASs and users, together with an experimental procedure for its empirical evaluation. The experimental design lays the groundwork for testing the theoretical model in future research.

    To sum up, this dissertation contributes to information systems knowledge by improving the understanding of the interaction between UASs and users in different application contexts. It develops new theoretical knowledge based on previous research and empirically evaluates user behavior in order to explain and predict it. In addition, this dissertation generates new knowledge by prototypically developing UASs and provides new insights for the different classes of UASs. Researchers and practitioners can use these insights to design more user-centric UASs and realize their potential benefits.

    Combating Fake News on Social Media: A Framework, Review, and Future Opportunities

    Social media platforms allow users to share vast amounts of information within seconds. However, false information, generally referred to as “fake news”, also spreads widely, which can have major negative impacts on individuals and societies. Unfortunately, people are often unable to correctly distinguish fake news from truth. There is therefore an urgent need to find effective mechanisms to fight fake news on social media. To this end, this paper adapts the Straub Model of Security Action Cycle to the context of combating fake news on social media and uses the adapted framework to classify the vast literature on fake news into the action cycle phases (i.e., deterrence, prevention, detection, and mitigation/remedy). Based on a systematic and interdisciplinary review of the relevant literature, we analyze the status and challenges in each stage of combating fake news and then introduce future research directions. These efforts allow the development of a holistic view of the research frontier on fighting fake news online. We conclude that this is a multidisciplinary issue and that, as such, a collaborative effort from different fields is needed to address it effectively.

    How Does Anonymizing Crowdsourced Users' Identity Affect Fact-checking on Social Media Platforms? A Regression Discontinuity Analysis

    The rapid spread of misinformation on social media platforms has affected many facets of society, including presidential elections, public health, the global economy, and human well-being. Crowdsourced fact-checking is an effective method to mitigate the spread of misinformation on social media. A key factor that affects user behavior on crowdsourcing platforms is users' anonymity or identity disclosure. Within the crowdsourced fact-checking context, however, it is still unknown whether and how identity anonymity affects users' fact-checking contribution performance. Leveraging a policy change on Twitter as a natural experiment, we adopt a regression discontinuity design to investigate two research questions: whether and how identity anonymity affects crowdsourced fact-checking quantity and quality, and how the characteristics of the crowdsourced users moderate this main effect. We find that the identity anonymization policy may not increase fact-checking users' contribution quantity, but it does increase fact-checking quality. Our research has both theoretical and practical implications.
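
    For readers unfamiliar with the method, the following is a minimal sketch of a sharp regression discontinuity in time around a policy date, of the kind described above; the variable names, bandwidth, and local-linear specification are illustrative assumptions, not the paper's actual model or data:

        import pandas as pd
        import statsmodels.formula.api as smf

        def rdd_jump(df: pd.DataFrame, cutoff: str, bandwidth_days: int = 60):
            # df needs a 'date' column (datetime) and an 'outcome' column
            # (e.g., daily fact-check count) -- hypothetical names.
            d = df.copy()
            d["days"] = (d["date"] - pd.Timestamp(cutoff)).dt.days  # running variable, centered at the cutoff
            d["post"] = (d["days"] >= 0).astype(int)                # 1 once the policy takes effect
            local = d[d["days"].abs() <= bandwidth_days]            # local window around the cutoff
            # separate linear trends on each side; the 'post' coefficient is the estimated jump
            fit = smf.ols("outcome ~ post + days + post:days", data=local).fit()
            return fit.params["post"], fit.bse["post"]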

    What Measures Can Government Institutions in Germany Take Against Digital Disinformation? A Systematic Literature Review and Ethical-Legal Discussion

    Disinformation campaigns spread rapidly through social media and can cause serious harm, especially in crisis situations, ranging from confusion about how to act to a loss of trust in government institutions. The prevention of digital disinformation campaigns therefore represents an important research topic. However, previous research in the field of information systems has focused on technical possibilities for detecting and combating disinformation, while ethical and legal perspectives have so far been neglected. In this article, we synthesize previous information systems literature on disinformation prevention measures and discuss these measures from an ethical and legal perspective. We conclude by proposing questions for future research on the prevention of disinformation campaigns from an IS, ethical, and legal perspective. In doing so, we contribute to a balanced discussion on the prevention of digital disinformation campaigns that equally considers technical, ethical, and legal issues, and we encourage increased interdisciplinary collaboration in future research.

    Three essays on malicious consumer deviance: The creation, dissemination, and elimination of misleading information

    With the explosion of social media, consumers have gained control over their social reach and can use online platforms to create and share misleading information when doing so helps them meet an end. This dissertation, consisting of three separate essays, addresses how misleading information is created, how it is disseminated, and how it can be eliminated. Essay One (Chapter 2) uses a mixed-method approach to explore the roles of the Dark Triad, proactivity, and vigilantism in driving the sharing of self-created misleading information. Additionally, this essay introduces a dual-process model of inoculation theory to the marketing and consumer literature that shows how consumers autoinoculate when building justification to engage in malicious behavior. This process includes both automatic and analytical components that initiate a Negative Cascade. Without a larger number of posts, these initial messages may be overlooked; however, herd inoculation can develop when a message begins to sway larger groups. Essay Two (Chapter 3) determines that authentic messages from the original poster are the most believable and the most likely to initiate a Negative Cascade. This confirmation through mere exposure can then initiate herd inoculation as it flows to other consumers and develops further credibility. The implicit bystander effect becomes active in the presence of larger groups. Findings suggest that herd inoculation may go unbroken, since posters exposed to a positive counter-cascade are less likely both to participate in a forum and to post positive messages. Essay Three (Chapter 4) shows that when a consumer shares a message that develops into a Negative Cascade, additional effort is required to halt the consumer herd inoculation. The studies uncover the need for an overt response from the original poster to stop future sharing of misleading information, as well as the role of brand-enacted quarantines in preventing the autoinoculation of consumer vigilantes. This dissertation shows how one message can become a much bigger problem for a brand when misinformation spreads. Insights within the dissertation provide numerous outlets for future research and numerous tools and recommendations for academics and practitioners who hope to understand how misleading information is created, disseminated, and eliminated.

    Countering Anti-Vaccination Rumors on Twitter

    This study examined the effects of a counter-rumor on changes in belief in an anti-vaccination claim, anxiety associated with the rumor, and intentions to vaccinate a child and to share the rumor. In particular, we tested whether argument strength, source expertise, and the recipient’s previously held attitude toward vaccination affect these outcomes. First, pilot tests were conducted to check source expertise (N = 161) and argument strength (N = 74; N = 73) and to select the sources and messages used in the experiment. A 2 (argument strength: strong vs. weak) x 2 (source expertise: high vs. low) between-subjects experimental design was employed, and we conducted an online experiment (N = 400) set up in Qualtrics. Participants were recruited via Prolific, a crowdsourcing website. The results showed that attitude toward mandatory vaccination had an impact on the change in belief in the anti-vaccination claim. We also found that source expertise had a significant impact on the change in anxiety: those who read the counter-rumor from the CDC reported a greater decrease in anxiety than those who read the counter-rumor from a layperson user. This finding suggests that heuristic processing occurs in the reception of the anti-vaccination rumor and the counter-rumor that refutes it, such that people are less likely to feel anxious about the anti-vaccination rumor when they receive the counter-rumor from a high-expertise source. Furthermore, the results showed a significant interaction between argument strength and source expertise on the change in vaccination intention. When participants read the counter-rumor from the CDC, they reported a greater increase in their intention to vaccinate a child in response to the strong argument than in response to the weak argument. In contrast, when they read the counter-rumor from a layperson user, the opposite pattern appeared: they reported a greater increase in vaccination intention in response to the weak argument than in response to the strong argument. This finding reveals that cue-message congruency plays a crucial role in increasing the effectiveness of a counter-rumor and promoting behavioral change. The theoretical implications of the current findings are discussed in light of cognitive dissonance theory, the dual-process model of information processing, and the online rumor literature. The practical implications are further discussed with regard to designing strategies and interventions that mitigate the harmful consequences of health-related rumors.
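
    As an illustration of how such a 2 x 2 between-subjects design is typically analyzed (not the study's actual code; the column names below are assumptions), a two-way ANOVA with an interaction term tests the main effects of argument strength and source expertise as well as the congruency (interaction) effect on a change score:

        import pandas as pd
        import statsmodels.formula.api as smf
        from statsmodels.stats.anova import anova_lm

        def two_way_anova(df: pd.DataFrame) -> pd.DataFrame:
            # df columns (hypothetical): 'intention_change' (post minus pre),
            # 'argument' in {'strong', 'weak'}, 'source' in {'cdc', 'layperson'}.
            model = smf.ols("intention_change ~ C(argument) * C(source)", data=df).fit()
            return anova_lm(model, typ=2)  # the interaction row tests the congruency effect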