
    The Fake News Spreading Plague: Was it Preventable?

    In 2010, a paper entitled "From Obscurity to Prominence in Minutes: Political Speech and Real-Time Search" won the Best Paper Prize of the Web Science 2010 Conference. Among its findings were the discovery and documentation of what was termed a "Twitter-bomb": an organized effort to spread misinformation about the Democratic candidate Martha Coakley through anonymous Twitter accounts. In this paper, after summarizing the details of that event, we outline the recipe by which social networks are used to spread misinformation. One of the most important steps in this recipe is the "infiltration" of a community of users who are already engaged in conversations about a topic, so that they can be used as organic spreaders of misinformation in their extended subnetworks. We then show how this misinformation-spreading recipe was successfully used to spread fake news during the 2016 U.S. presidential election. The main differences between the two scenarios are the use of Facebook instead of Twitter and the respective motivations (in 2010, political influence; in 2016, financial benefit through online advertising). After situating these events in the broader context of exploiting the Web, we seize this opportunity to address the limited reach of research findings and to start a conversation about how communities of researchers can increase their impact on real-world societal issues.

    Online misinformation about climate change

    Policymakers, scholars, and practitioners have all called attention to the issue of misinformation in the climate change debate. But what is climate change misinformation, who is involved, how does it spread, why does it matter, and what can be done about it? Climate change misinformation is closely linked to climate change skepticism, denial, and contrarianism. A network of actors is involved in financing, producing, and amplifying misinformation. Once in the public domain, characteristics of online social networks such as homophily, polarization, and echo chambers (characteristics also found in the climate change debate) provide fertile ground for misinformation to spread. Underlying belief systems and social norms, as well as psychological heuristics such as confirmation bias, are further factors that contribute to the spread of misinformation. A variety of ways to understand and address misinformation, drawn from a diversity of disciplines, are discussed, including educational, technological, regulatory, and psychology-based approaches. No single approach addresses all concerns about misinformation, and all have limitations, necessitating an interdisciplinary response to this multifaceted issue. Key research gaps include understanding the diffusion of climate change misinformation on social media and examining whether misinformation extends to climate alarmism as well as climate denial. This article explores the concepts of misinformation and disinformation and defines disinformation as a subset of misinformation. A diversity of disciplinary and interdisciplinary literature is reviewed to fully interrogate the concept of misinformation, and within it disinformation, particularly as it pertains to climate change. This article is categorized under: Perceptions, Behavior, and Communication of Climate Change > Communication.

    AN ANALYSIS OF COVID-19 MISINFORMATION ON THE TELEGRAM SOCIAL NETWORK

    The proliferation of misinformation groups and users on social networks has made clear the need for targeted misinformation detection, analysis, and countering techniques. For example, in 2018, Twitter disclosed research that identified more than 50,000 malicious accounts linked to foreign-backed agencies that used the social network to spread propaganda and influence voters during the 2016 U.S. presidential election. Twitter also began removing and labeling content as misinformation during the 2020 U.S. election, which led to an influx of users to other social networks, such as Telegram. Telegram's dedication to free speech and privacy makes it an attractive platform for misinformation groups, and it thus provides a unique opportunity to observe and measure how unabated ideas and sentiments evolve and spread. In this thesis, we create a dataset by crawling Telegram channels and groups centered on COVID-19 and vaccine conversations. We first analyze the topics and sentiments of the data using machine learning models. Next, we analyze the time-series relationship between sentiment and topic trends. Then, we look for relationships between topics by clustering topic-based graph networks. Lastly, we cluster channels using document vectors to identify super-groups of related conversations. We conclude that Telegram communities risk producing echo chamber effects and are potential targets for external actors seeking to embed and grow misinformation without hindrance.
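    The channel-clustering step lends itself to a short sketch. The following is a minimal illustration, assuming gensim's Doc2Vec (gensim >= 4) for the document vectors and k-means for the grouping; the channel names, texts, and all parameters are hypothetical stand-ins, not the thesis's actual pipeline or data.

```python
# A minimal sketch of clustering channels via document vectors, assuming
# each Telegram channel's messages have been concatenated into one document.
# The corpus below is hypothetical; the thesis does not publish its data.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.cluster import KMeans

channel_texts = {
    "channel_a": "vaccine side effects discussion ...",
    "channel_b": "covid lockdown policy news ...",
    "channel_c": "vaccine mandate protest updates ...",
}

# One TaggedDocument per channel, tagged with the channel name.
corpus = [TaggedDocument(words=text.lower().split(), tags=[name])
          for name, text in channel_texts.items()]

model = Doc2Vec(vector_size=64, min_count=1, epochs=40)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

# Embed each channel, then group channels into candidate "super-groups".
names = list(channel_texts)
vectors = [model.dv[name] for name in names]
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
for name, label in zip(names, labels):
    print(name, "-> super-group", label)
```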

    Mitigating Misinformation Spreading in Social Networks Via Edge Blocking

    The wide adoption of social media platforms has brought numerous benefits for communication and information sharing. However, it has also enabled the rapid spread of misinformation, causing significant harm to individuals, communities, and society at large. Consequently, there has been growing interest in devising efficient and effective strategies to contain the spread of misinformation. One popular countermeasure is blocking edges in the underlying network. We model the spread of misinformation using the classical Independent Cascade model and study the problem of minimizing the spread by blocking a given number of edges. We prove that this problem is computationally hard, but we propose an intuitive community-based algorithm that detects well-connected communities in the network and disconnects the inter-community edges. Our experiments on various real-world social networks demonstrate that the proposed algorithm significantly outperforms prior methods, which mostly rely on centrality measures.
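    As an illustration of the approach described above, here is a minimal sketch: detect communities, block inter-community edges under a budget, and compare Monte Carlo estimates of Independent Cascade spread before and after. The community-detection method, the choice of which k inter-community edges to block, the seed set, and the probability p are assumptions for the sketch, not the paper's exact algorithm or parameters.

```python
# Community-based edge blocking plus a Monte Carlo estimate of
# Independent Cascade (IC) spread, on a small benchmark graph.
import random
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def block_inter_community_edges(G, k):
    # Detect well-connected communities, then spend the budget of k
    # blocked edges on edges that cross community boundaries.
    comm_of = {v: i
               for i, comm in enumerate(greedy_modularity_communities(G))
               for v in comm}
    inter = [(u, v) for u, v in G.edges() if comm_of[u] != comm_of[v]]
    H = G.copy()
    H.remove_edges_from(inter[:k])  # assumption: take the first k found
    return H

def independent_cascade(G, seeds, p=0.1):
    # Classical IC: each newly activated node gets one chance to activate
    # each inactive neighbour, independently with probability p.
    active, frontier = set(seeds), list(seeds)
    while frontier:
        new = []
        for u in frontier:
            for v in G.neighbors(u):
                if v not in active and random.random() < p:
                    active.add(v)
                    new.append(v)
        frontier = new
    return len(active)

random.seed(0)
G = nx.karate_club_graph()
runs = 500
before = sum(independent_cascade(G, [0]) for _ in range(runs)) / runs
H = block_inter_community_edges(G, 5)
after = sum(independent_cascade(H, [0]) for _ in range(runs)) / runs
print(f"estimated spread: {before:.1f} nodes -> {after:.1f} nodes")
```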

    Reducing Misinformation on Social Media Networks

    This study examines methods that can potentially reduce the spread of misinformation on major social media networks (SMNs) such as Facebook and Twitter. Research on ways to control the spread of misinformation on SMNs is still emerging. Prior research examined an SMN feature called 'related articles', which provides context directly under SMN posts containing potentially misinformed content about controversial topics. Other research examined how SMN users were encouraged to consume online news sources outside their comfort zone when participating within a socialized environment. Each of these features, taken separately, was found to significantly reduce the misperceptions of SMN users. In this study, we examine how the two features can work together to reduce the spread of misinformation. We use an experimental survey to measure the effectiveness of these SMN features in correcting the misperceptions of SMN users, and we provide results to inform governments, cybersecurity firms, social media companies, and SMN users.
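    To make the combined-feature design concrete, here is a hypothetical sketch of how such an experimental survey could be analyzed as a 2x2 between-subjects design, with the 'related articles' feature and the socialized news environment as factors and a misperception score as the outcome. The data, effect sizes, and variable names are illustrative only, not the study's instrument.

```python
# A 2x2 factorial analysis on synthetic data: main effects of each SMN
# feature plus their interaction (does combining the features help more
# than either feature alone?).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 100  # hypothetical participants per cell
rows = []
for a in (0, 1):            # related articles off/on
    for s in (0, 1):        # socialized news environment off/on
        # Illustrative effect sizes: each feature lowers misperception.
        score = 5.0 - 0.6 * a - 0.4 * s - 0.3 * a * s + rng.normal(0, 1, n)
        rows.append(pd.DataFrame({"related_articles": a,
                                  "socialized_news": s,
                                  "misperception": score}))
df = pd.concat(rows, ignore_index=True)

model = smf.ols("misperception ~ related_articles * socialized_news", df).fit()
print(model.summary())
```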

    Efficient link cuts in online social networks

    Owing to the huge popularity of online social networks, many researchers have focused on adding links, e.g., link prediction to support friend recommendation. So far, little research has been performed on cutting links. Yet the spread of malware and misinformation can cause havoc, so it is worth investigating how to cut links such that malware and misinformation do not run rampant. Many online social networks can be modeled as undirected graphs, with nodes representing users and edges representing relationships between users. In this paper, we investigate different strategies for cutting links between users in undirected graphs so that the spread of viruses and misinformation is slowed as much as possible, or even stopped. Our algorithm is flexible and can be applied to other networks: for example, to email networks to stop the spread of viruses and spam emails, or to neural networks to stop the diffusion of worms and diseases. Two measures are chosen to evaluate the performance of these strategies: Average Inverse of Shortest Path Length (AIPL) and Rumor Saturation Rate (RSR). AIPL measures the communication efficiency of the whole graph, while RSR measures the percentage of users receiving information within a certain time interval. Compared to AIPL, RSR is an even better measure, as it concentrates on the spread of specific rumors in online networks. Our experiments are performed on both synthetic data and Facebook data. Evaluation on the two measures shows that our algorithm performs better than random cuts, and that different strategies perform best in the situations suited to them.
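    The AIPL measure is straightforward to compute. Below is a minimal sketch, assuming AIPL is the mean of 1/d(u, v) over all ordered node pairs, with disconnected pairs contributing zero (this matches the standard notion of global efficiency; the paper's exact normalization may differ). The cutting strategy shown, removing high-betweenness edges, is one illustrative strategy rather than the paper's algorithm.

```python
# Computing AIPL before and after link cuts on a small benchmark graph.
import networkx as nx

def aipl(G):
    # Mean of 1/d(u, v) over ordered pairs u != v; unreachable pairs
    # are simply absent from the BFS results and so contribute 0.
    n = G.number_of_nodes()
    total = 0.0
    for u, dists in nx.all_pairs_shortest_path_length(G):
        for v, d in dists.items():
            if u != v:
                total += 1.0 / d
    return total / (n * (n - 1))

G = nx.karate_club_graph()
print(f"AIPL before cuts: {aipl(G):.4f}")

# Cutting a few high-betweenness edges should lower AIPL, i.e. make the
# graph communicate (and thus spread rumors) less efficiently.
cuts = sorted(nx.edge_betweenness_centrality(G).items(),
              key=lambda kv: kv[1], reverse=True)[:5]
G.remove_edges_from(e for e, _ in cuts)
print(f"AIPL after 5 cuts:  {aipl(G):.4f}")
```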

    Misinformation in Online Health Communities

    The spread of wrong information can be a serious deterrent to information system use, especially in the case of online communities, which typically have thousands of end users. However, the literature has been weak in linking the prevalence of health misinformation on online social networks to the factors contributing to it. This study seeks to reduce this gap by examining the impact of thread characteristics and user characteristics on the extent of misinformation in an online social networking forum related to Parkinson's disease. Our findings show that the correctness of a post is affected by the clarity of the thread question, information richness, and the user's potential for making useful contributions.
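    The finding above implies a model of post correctness as a function of thread and user characteristics. Here is a hypothetical sketch of that kind of analysis as a logistic regression on synthetic data; the variable names, scales, and coefficients are illustrative stand-ins, not the study's actual measures.

```python
# Logistic regression of post correctness on thread and user factors,
# fitted to synthetic data generated from illustrative coefficients.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500  # hypothetical forum posts
df = pd.DataFrame({
    "question_clarity": rng.uniform(1, 7, n),  # rated clarity of thread question
    "info_richness": rng.uniform(1, 7, n),     # richness of the post content
    "user_potential": rng.uniform(0, 1, n),    # user's potential for useful contributions
})
logit_p = (-3 + 0.4 * df.question_clarity + 0.3 * df.info_richness
           + 1.0 * df.user_potential)
df["correct"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("correct ~ question_clarity + info_richness + user_potential",
                  df).fit()
print(model.summary())
```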