
    Understanding misinformation on Twitter in the context of controversial issues

    Social media is gradually supplementing, or even replacing, traditional media outlets such as television, newspapers, and radio. However, social media presents some drawbacks when it comes to circulating information, including the spread of false information, rumors, and fake news. At least three main factors create these drawbacks: the filter bubble effect, misinformation, and information overload. These factors make gathering accurate and credible information online very challenging, which in turn may erode public trust in online information. These issues become even more challenging when the topic under discussion is controversial. In this thesis, four main controversial topics are studied, each from a different domain. This variation across domains gives a broad view of how misinformation manifests on social media and how its manifestation differs by domain. This thesis aims to understand misinformation in the context of controversial issue discussions, both by examining how misinformation is manifested on social media and by understanding people’s opinions towards these controversial issues. Three different aspects of a tweet are studied: 1) the user sharing the information, 2) the information source being shared, and 3) the linguistic cues in the tweet itself, and whether each can help in assessing the credibility of information on social media. Finally, the web application TweetChecker is used to allow online users to gain a more in-depth understanding of the discussions around five controversial health issues. The results and recommendations of this study can inform solutions to the problem of trustworthiness of user-generated content on different social media platforms, especially for controversial issues.

    Stance classification of Twitter debates: The encryption debate as a use case

    Social media have enabled a revolution in user-generated content. They allow users to connect, build community, produce and share content, and publish opinions. To better understand online users’ attitudes and opinions, we use stance classification, a relatively new and challenging approach that deepens opinion mining by classifying a user's stance in a debate. Our use case is tweets related to the spring 2016 debate over the FBI’s request that Apple decrypt a user’s iPhone. In this “encryption debate,” public opinion was polarized between advocates for individual privacy and advocates for national security. We propose a machine learning approach to classify stance in the debate, along with a topic classification, using lexical, syntactic, Twitter-specific, and argumentative features as predictors. Models trained on these feature sets showed significant increases in accuracy relative to the unigram baseline.
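    As a concrete illustration, the sketch below shows how such a feature-based stance classifier might be assembled in Python with scikit-learn. It is a minimal sketch only: the paper's exact feature definitions and model are not given in the abstract, so the Twitter-specific cues, the logistic-regression model, and the toy tweets here are illustrative assumptions.

    import re

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import FeatureUnion, Pipeline
    from sklearn.preprocessing import FunctionTransformer

    def twitter_features(texts):
        """Hand-crafted Twitter-specific cues: hashtags, mentions, URLs, retweets."""
        return np.array([
            [
                text.count("#"),                      # hashtag count
                text.count("@"),                      # mention count
                len(re.findall(r"https?://", text)),  # URL count
                int(text.startswith("RT ")),          # retweet marker
            ]
            for text in texts
        ])

    model = Pipeline([
        ("features", FeatureUnion([
            ("unigrams", TfidfVectorizer()),                     # lexical (unigram) features
            ("twitter", FunctionTransformer(twitter_features)),  # Twitter-specific cues
        ])),
        ("clf", LogisticRegression(max_iter=1000)),
    ])

    # Toy placeholder tweets; real training data would be the labeled debate tweets.
    texts = [
        "Apple should protect user #privacy",
        "The FBI needs access for national security",
        "Encryption keeps everyone safe #privacy",
        "Back doors help catch criminals",
    ]
    labels = ["privacy", "security", "privacy", "security"]

    X_tr, X_te, y_tr, y_te = train_test_split(
        texts, labels, test_size=0.5, stratify=labels, random_state=0)
    model.fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, model.predict(X_te)))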

    Twitter as health information source: exploring the parameters affecting dementia-related tweets

    Compared with other media, research on the credibility of information on social media is limited. This limitation is even more pronounced in the case of healthcare, including dementia-related information. The purpose of this study was to identify user groups that show highly bot-like behavior and profile features that deviate from typical human behavior. We collected 16,691 tweets about dementia posted over the course of a month by 8,400 users. We applied inductive coding to categorize users, and the BotOrNot? API was used to compute a bot score. This work provides insight into the relations between user features and the bot score. We applied analysis techniques such as Kruskal-Wallis tests, stepwise multiple-variable regression, user tweet frequency analysis, and content analysis to the data, and further examined the most frequently referenced URLs in the tweets and the most active users in terms of tweet frequency. Initial results indicated that the majority of users are regular users and not bots. Regression analysis revealed clear relationships between features: independent variables in the user profiles, such as geo_data and favourites_count, correlated with the final bot score. Similarly, content analysis of the tweets showed that the word features of bot profiles account for an overall smaller percentage of words compared to regular profiles. Although this analysis is promising, it needs further enhancement.
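    The statistical step described above could look like the following sketch, assuming a table with one row per account that already holds a precomputed BotOrNot? score alongside profile features. The file name and column names (user_category, bot_score, favourites_count, has_geo_data, followers_count) are illustrative assumptions, not taken from the paper.

    import pandas as pd
    import statsmodels.api as sm
    from scipy.stats import kruskal

    # Hypothetical export: one row per account with coded category, profile
    # features, and the precomputed bot score.
    users = pd.read_csv("dementia_users.csv")

    # Kruskal-Wallis: do bot scores differ across the inductively coded user groups?
    groups = [g["bot_score"].values for _, g in users.groupby("user_category")]
    h_stat, p_value = kruskal(*groups)
    print(f"Kruskal-Wallis H={h_stat:.2f}, p={p_value:.4f}")

    # Regress the bot score on profile features, mirroring the variables named
    # in the abstract (geo data presence, favourites_count).
    X = sm.add_constant(users[["favourites_count", "has_geo_data", "followers_count"]])
    print(sm.OLS(users["bot_score"], X).fit().summary())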

    Scientific Credibility Behind MMR Vaccination Debates on Twitter

    This research analyzes scientific information sharing behaviors on Twitter. Over an eleven-month period, we collected tweets related to the controversy over the supposed linkage between the MMR vaccine and autism. We examined the usage pattern of scientific information resources by both sides of the ongoing debate, then explored how each side uses scientific evidence, analyzing the usage of scientific and non-scientific URLs by both polarized opinions. A domain network, which connects domains shared by the same user, was generated from the URLs tweeted by users engaging in the debate, in order to understand the nature of the different domains and how they relate to each other. Our results showed that people with anti-vaccine attitudes repeatedly linked to the same URLs and provided more links in total, while people with pro-vaccine attitudes provided fewer links overall but drew on a wider range of sources. Moreover, our results showed that vocal journalists have a substantial impact on users’ opinions. This study has the potential to improve understanding of how health information is disseminated via social media by showing how scientific evidence is referenced in discussions about controversial health issues. Monitoring scientific evidence usage on social media can reveal concerns and misconceptions related to the usage of these types of evidence.
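    The domain network described above can be reconstructed roughly as follows: each domain becomes a node, and two domains are connected whenever the same user tweeted URLs from both. This is a minimal sketch with networkx; the input mapping and example URLs are placeholders, not data from the study.

    from itertools import combinations
    from urllib.parse import urlparse

    import networkx as nx

    # Hypothetical input: user id -> URLs that user tweeted during the debate.
    user_urls = {
        "u1": ["https://www.cdc.gov/vaccines", "https://example-blog.com/mmr"],
        "u2": ["https://www.cdc.gov/vaccines", "https://www.who.int/news"],
    }

    G = nx.Graph()
    for urls in user_urls.values():
        domains = {urlparse(u).netloc for u in urls}
        for a, b in combinations(sorted(domains), 2):
            # Accumulate an edge weight so frequently co-shared domains stand out.
            weight = G.get_edge_data(a, b, default={"weight": 0})["weight"]
            G.add_edge(a, b, weight=weight + 1)

    # Central domains are those shared alongside many others in the debate.
    print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:10])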

    Linguistic Cues to Deception: Identifying Political Trolls on Social Media

    The ease with which information can be shared on social media has opened it up to abuse and manipulation. One example of a manipulation campaign that has garnered much attention recently is the alleged Russian interference in the 2016 U.S. elections, with Russia accused of, among other things, using trolls and malicious accounts to spread misinformation and politically biased information. To take an in-depth look at this manipulation campaign, we collected a dataset of 13 million election-related posts shared on Twitter in 2016 by over a million distinct users. This dataset includes accounts associated with the identified Russian trolls as well as users sharing posts in the same time period on a variety of topics around the 2016 elections. To study how these trolls attempted to manipulate public opinion, we identified 49 theoretically grounded linguistic markers of deception and measured their use by troll and non-troll accounts. We show that deceptive language cues can help to accurately identify trolls, with an average F1 score of 82% and a recall of 88%.
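    A minimal sketch of this kind of cue-based troll detection follows. The three cues below merely stand in for the paper's 49 deception markers, and the word lists, toy texts, and random-forest model are illustrative assumptions; real evaluation on held-out accounts would report the F1 and recall figures cited above.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    FIRST_PERSON = {"i", "me", "my", "mine", "we", "us", "our"}
    NEGATIONS = {"no", "not", "never", "none", "nothing"}

    def cue_features(text):
        """Per-token rates of a few deception-related linguistic cues."""
        tokens = text.lower().split()
        n = max(len(tokens), 1)
        return [
            sum(t in FIRST_PERSON for t in tokens) / n,  # self-reference rate
            sum(t in NEGATIONS for t in tokens) / n,     # negation rate
            sum(len(t) for t in tokens) / n,             # mean word length
        ]

    # Toy stand-ins for account-level text with binary troll labels (1 = troll).
    texts = [
        "we never said that and nothing will change",
        "no they are not telling you the truth",
        "I visited my family and we had a great time",
        "my favorite part of the city is the park",
    ]
    labels = [1, 1, 0, 0]

    X = np.array([cue_features(t) for t in texts])
    clf = RandomForestClassifier(random_state=0).fit(X, labels)
    print(clf.predict([cue_features("they never tell the truth, nothing is real")]))
    # With real held-out data, sklearn.metrics.f1_score and recall_score would
    # produce the F1 and recall figures reported in the abstract.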

    A Retrospective Analysis of the COVID-19 Infodemic in Saudi Arabia

    COVID-19 has had broad disruptive effects on economies, healthcare systems, governments, societies, and individuals. Uncertainty concerning the scale of this crisis has given rise to countless rumors, hoaxes, and misinformation. Much of this conversation and misinformation about the pandemic now occurs online, in particular on social media platforms like Twitter. This study took a data-driven approach to map the contours of misinformation and to contextualize the COVID-19 pandemic with regard to socio-religious-political information. The work consists of a combined system bridging quantitative and qualitative methodologies to assess how information-exchanging behaviors can be used to minimize the effects of emergent misinformation. The study revealed that social media platforms were the most significant source of rumors, transmitting information rapidly through the community: WhatsApp accounted for about 46% of rumor sources on online platforms, while Twitter showed a declining trend in rumors of 41%. Moreover, the results indicate that the second-most common type of misinformation related to pharmaceutical companies, while a prevalent type of misinformation spreading worldwide during this pandemic concerned claims of biological warfare. This combined retrospective analysis suggests that social media, through its varying roles in public discourse, can contribute to efficient public health responses.
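    As a toy illustration of the quantitative side of such an analysis, the snippet below tallies coded rumor posts by source platform to produce percentage shares like those reported above; the data frame contents are placeholders, not the study's data.

    import pandas as pd

    # Placeholder coding results: one row per rumor post with its source platform.
    rumors = pd.DataFrame({"source": ["WhatsApp", "WhatsApp", "Twitter", "Facebook"]})
    shares = rumors["source"].value_counts(normalize=True).mul(100).round(1)
    print(shares)  # percentage of coded rumor posts per platform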