11,441 research outputs found

    The Impact of Psycholinguistic Patterns in Discriminating between Fake News Spreaders and Fact Checkers

    Fake news is a threat to society. A huge amount of fake news is posted every day on social networks, where it is read, believed and sometimes shared by a number of users. On the other hand, with the aim of raising awareness, some users share posts that debunk fake news by using information from fact-checking websites. In this paper, we are interested in exploring the role of various psycholinguistic characteristics in differentiating between users who tend to share fake news and users who tend to debunk it. Psycholinguistic characteristics represent the different linguistic information that can be used to profile users and can be extracted or inferred from users' posts. We present the CheckerOrSpreader model, which uses a Convolutional Neural Network (CNN) to differentiate between spreaders and checkers of fake news. The experimental results showed that CheckerOrSpreader is effective in classifying a user as a potential spreader or checker. Our analysis showed that checkers tend to use more positive language and a higher number of terms that show causality, compared to spreaders, who tend to use a higher amount of informal language, including slang and swear words.

    The works of Anastasia Giachanou and Daniel Oberski were funded by the Dutch Research Council (grant VI.Vidi.195.152). The work of Paolo Rosso was in the framework of the XAI-DisInfodemics project on eXplainable AI for disinformation and conspiracy detection during infodemics (PLEC2021-007681), funded by the Spanish Ministry of Science and Innovation, as well as IBERIFIER, the Iberian Digital Media Research and Fact-Checking Hub funded by the European Digital Media Observatory (2020-EU-IA0252).

    Giachanou, A.; Ghanem, BHH.; Rissola, EA.; Rosso, P.; Crestani, F.; Oberski, D. (2022). The Impact of Psycholinguistic Patterns in Discriminating between Fake News Spreaders and Fact Checkers. Data & Knowledge Engineering. 138:1-15. https://doi.org/10.1016/j.datak.2021.101960
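
    A minimal sketch of the kind of CNN-based classifier the abstract describes, combining convolutional features over a user's text with a vector of psycholinguistic features; the vocabulary size, embedding dimension, filter widths and psycholinguistic feature count are illustrative assumptions, not the paper's configuration:

    ```python
    # Sketch of a CNN text classifier for spreader-vs-checker profiling.
    # All hyperparameters here are illustrative assumptions.
    import torch
    import torch.nn as nn

    class CheckerOrSpreaderCNN(nn.Module):
        def __init__(self, vocab_size=20000, embed_dim=100, n_filters=64,
                     filter_sizes=(2, 3, 4), n_psycho=30):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            # One 1-D convolution per filter width, slid over the token sequence.
            self.convs = nn.ModuleList(
                [nn.Conv1d(embed_dim, n_filters, k) for k in filter_sizes])
            # The final layer sees pooled convolutional features concatenated
            # with psycholinguistic features (e.g., LIWC-style category scores).
            self.classifier = nn.Linear(n_filters * len(filter_sizes) + n_psycho, 2)

        def forward(self, token_ids, psycho_feats):
            x = self.embedding(token_ids).transpose(1, 2)    # (batch, embed, seq)
            pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
            features = torch.cat(pooled + [psycho_feats], dim=1)
            return self.classifier(features)                 # logits: checker / spreader

    model = CheckerOrSpreaderCNN()
    logits = model(torch.randint(1, 20000, (8, 120)), torch.rand(8, 30))
    ```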

    $1.00 per RT #BostonMarathon #PrayForBoston: analyzing fake content on Twitter

    Online social media has emerged as one of the prominent channels for the dissemination of information during real-world events. Malicious content is posted online during events, which can result in damage, chaos and monetary losses in the real world. We analyzed one such medium, Twitter, for content generated during the Boston Marathon blasts that occurred on April 15, 2013. A lot of fake content and malicious profiles originated on the Twitter network during this event. The aim of this work is to perform an in-depth characterization of the factors that influenced malicious content and profiles becoming viral. Our results showed that 29% of the most viral content on Twitter during the Boston crisis was rumors and fake content, while 51% was generic opinions and comments; the rest was true information. We found that a large number of users with high social reputation and verified accounts were responsible for spreading the fake content. Next, we used a regression prediction model to verify that the overall impact of all users who propagate a piece of fake content at a given time can be used to estimate the growth of that content in the future. Many malicious accounts that were later suspended by Twitter were created during the Boston event. We identified over six thousand such user profiles and observed that the creation of such profiles surged considerably right after the blasts occurred. We identified closed community structures and star formations in the interaction network of these suspended profiles amongst themselves.
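
    The growth-estimation idea lends itself to a short sketch: regress the next interval's share volume on the aggregate impact of the users who have propagated the content so far. The impact definition (total follower count) and the synthetic data below are assumptions for illustration, not the study's exact model:

    ```python
    # Toy regression sketch: predict an hour's new shares of a fake-content
    # item from the aggregate "impact" of everyone who has shared it so far.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    # impact[t]: total followers of all users who shared the rumor by hour t
    impact = np.cumsum(rng.integers(1_000, 50_000, size=48)).reshape(-1, 1)
    # growth[t]: new shares observed in hour t (toy data, roughly linear in impact)
    growth = (0.002 * impact.ravel() + rng.normal(0.0, 200.0, size=48)).clip(min=0)

    # Fit impact at hour t against shares in hour t+1, then forecast one step ahead.
    model = LinearRegression().fit(impact[:-1], growth[1:])
    print("predicted next-hour shares:", model.predict(impact[-1:])[0])
    ```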

    False News On Social Media: A Data-Driven Survey

    In the past few years, the research community has dedicated growing interest to the issue of false news circulating on social networks. The widespread attention on detecting and characterizing false news has been motivated by the considerable real-world backlash this threat has caused. As a matter of fact, social media platforms exhibit peculiar characteristics, with respect to traditional news outlets, that have been particularly favorable to the proliferation of deceptive information. They also present unique challenges for all kinds of potential interventions on the subject. As this issue becomes of global concern, it is also gaining more attention in academia. The aim of this survey is to offer a comprehensive study of the recent advances in the detection, characterization and mitigation of false news that propagates on social media, as well as the challenges and open questions that await future research in the field. We take a data-driven approach, focusing on a classification of the features that each study uses to characterize false information and on the datasets used for training classification methods. At the end of the survey, we highlight emerging approaches that look most promising for addressing false news.

    CIMTDetect: A Community Infused Matrix-Tensor Coupled Factorization Based Method for Fake News Detection

    Detecting whether a news article is fake or genuine is a crucial task in today's digital world, where it is easy to create and spread a misleading news article. This is especially true of news stories shared on social media, since they do not undergo the stringent journalistic checking associated with mainstream media. Given the inherent human tendency to share information with one's social connections at a mouse-click, fake news articles masquerading as real ones tend to spread widely and virally. The presence of echo chambers (people sharing the same beliefs) in social networks only adds to this problem of the widespread existence of fake news on social media. In this paper, we tackle the problem of fake news detection on social media by exploiting the very presence of echo chambers within the social network of users to obtain an efficient and informative latent representation of the news article. By modeling the echo chambers as closely-connected communities within the social network, we represent a news article as a 3-mode (news, user, community) tensor and propose a tensor factorization based method to encode the news article in a latent embedding space that preserves the community structure. We also propose an extension of the above method, which jointly models the community and content information of the news article through a coupled matrix-tensor factorization framework. We empirically demonstrate the efficacy of our method for the task of fake news detection over two real-world datasets. Further, we validate the generalization of the resulting embeddings over two other auxiliary tasks, namely 1) news cohort analysis and 2) collaborative news recommendation. Our proposed method outperforms appropriate baselines for both tasks, establishing its generalization.

    Comment: Presented at ASONAM'18
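
    To make the factorization step concrete, here is a sketch that decomposes a toy (news × user × community) engagement tensor with off-the-shelf CP decomposition via tensorly and reads off the news-mode factors as article embeddings; the paper's coupled matrix-tensor solver, which jointly factorizes a news-content matrix, is not reproduced here:

    ```python
    # Sketch: CP-decompose a toy (news x user x community) engagement tensor
    # and use the news-mode factor matrix as latent article embeddings.
    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    rng = np.random.default_rng(1)
    n_news, n_users, n_comms = 50, 200, 10
    # X[i, j, k] = 1 if user j, as a member of community k, shared article i (toy)
    X = tl.tensor((rng.random((n_news, n_users, n_comms)) > 0.97).astype(float))

    weights, (news_f, user_f, comm_f) = parafac(X, rank=8, init="random")
    print("news embeddings:", news_f.shape)  # (50, 8): features for a
                                             # downstream fake-vs-genuine classifier
    ```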

    Analyzing the Digital Traces of Political Manipulation: The 2016 Russian Interference Twitter Campaign

    Until recently, social media was seen to promote democratic discourse on social and political issues. However, this powerful communication platform has come under scrutiny for allowing hostile actors to exploit online discussions in an attempt to manipulate public opinion. A case in point is the ongoing U.S. Congress investigation of Russian interference in the 2016 U.S. election campaign, with Russia accused of using trolls (malicious accounts created to manipulate) and bots to spread misinformation and politically biased information. In this study, we explore the effects of this manipulation campaign, taking a closer look at users who re-shared the posts produced on Twitter by the Russian troll accounts publicly disclosed by the U.S. Congress investigation. We collected a dataset with over 43 million election-related posts shared on Twitter between September 16 and October 21, 2016, by about 5.7 million distinct users. This dataset included accounts associated with the identified Russian trolls. We use label propagation to infer the ideology of all users based on the news sources they shared. This method enables us to classify a large number of users as liberal or conservative with precision and recall above 90%. Conservatives retweeted Russian trolls about 31 times more often than liberals and produced 36 times more tweets. Additionally, most retweets of troll content originated from two Southern states: Tennessee and Texas. Using state-of-the-art bot detection techniques, we estimated that about 4.9% and 6.2% of liberal and conservative users, respectively, were bots. Text analysis of the content shared by trolls reveals that they had a mostly conservative, pro-Trump agenda. Although an ideologically broad swath of Twitter users was exposed to Russian trolls in the period leading up to the 2016 U.S. presidential election, it was mainly conservatives who helped amplify their message.
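
    The label-propagation step can be illustrated with a toy sketch: users who shared news sources of known ideology act as seeds, and unlabeled users iteratively adopt the majority label of their already-labeled neighbors. The graph and seeds below are hypothetical; the study ran this over millions of users:

    ```python
    # Toy label propagation on a retweet graph: seeds keep their labels and
    # unlabeled users adopt the majority label of labeled neighbors until stable.
    import networkx as nx
    from collections import Counter

    G = nx.Graph([("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("e", "f")])
    labels = {"a": "liberal", "f": "conservative"}   # seeds from shared news sources

    while True:
        updated = dict(labels)
        for node in G:
            if node in labels:                       # labeled nodes keep their label
                continue
            votes = Counter(labels[n] for n in G[node] if n in labels)
            if votes:
                updated[node] = votes.most_common(1)[0][0]
        if updated == labels:                        # converged
            break
        labels = updated

    print(labels)  # b, c end up liberal; d, e conservative
    ```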

    The role of bot squads in the political propaganda on Twitter

    Social media are nowadays the privileged channel for information spreading and news checking. Unexpectedly for most users, automated accounts, also known as social bots, contribute more and more to this process of news spreading. Using Twitter as a benchmark, we consider the traffic exchanged, over one month of observation, on a specific topic, namely the migration flux from Northern Africa to Italy. We measure the significant traffic of tweets only, by implementing an entropy-based null model that discounts the activity of users and the virality of tweets. Results show that social bots play a central role in the exchange of significant content. Indeed, not only do the strongest hubs have a higher number of bots among their followers than expected, but a group of them, which can be assigned to the same political tendency, also shares a common set of bots as followers. The retweeting activity of such automated accounts amplifies the presence of the hubs' messages on the platform.

    Comment: Under Submission
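
    A crude stand-in for the null-model idea: hold each user's activity and each tweet's virality fixed, randomize who retweeted what, and keep only the user-user co-retweet counts that beat the null distribution. The paper uses an analytical maximum-entropy bipartite model rather than the Monte Carlo shuffle sketched here:

    ```python
    # Monte Carlo stand-in for an entropy-based null model on a bipartite
    # user-tweet network; data and thresholds are toy assumptions.
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(2)
    edges = [(u, t) for u in range(6) for t in range(8) if rng.random() < 0.4]

    def co_counts(edge_list):
        """For each user pair, count tweets that both users retweeted."""
        by_tweet = {}
        for u, t in edge_list:
            by_tweet.setdefault(t, set()).add(u)
        counts = {}
        for users in by_tweet.values():
            for pair in combinations(sorted(users), 2):
                counts[pair] = counts.get(pair, 0) + 1
        return counts

    observed = co_counts(edges)

    # Null ensemble: permuting the user column preserves every user's number
    # of retweets (activity) and every tweet's number of retweeters (virality).
    user_col = [u for u, _ in edges]
    tweet_col = [t for _, t in edges]
    null = [co_counts(list(zip(rng.permutation(user_col), tweet_col)))
            for _ in range(500)]

    significant = {pair: c for pair, c in observed.items()
                   if c > np.quantile([s.get(pair, 0) for s in null], 0.99)}
    print(significant)   # user pairs with significantly overlapping retweets
    ```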

    Social Turing Tests: Crowdsourcing Sybil Detection

    As popular tools for spreading spam and malware, Sybils (or fake accounts) pose a serious threat to online communities such as Online Social Networks (OSNs). Today, sophisticated attackers are creating realistic Sybils that effectively befriend legitimate users, rendering most automated Sybil detection techniques ineffective. In this paper, we explore the feasibility of a crowdsourced Sybil detection system for OSNs. We conduct a large user study on the ability of humans to detect today's Sybil accounts, using a large corpus of ground-truth Sybil accounts from the Facebook and Renren networks. We analyze detection accuracy by both "experts" and "turkers" under a variety of conditions, and find that while turkers vary significantly in their effectiveness, experts consistently produce near-optimal results. We use these results to drive the design of a multi-tier crowdsourcing Sybil detection system. Using our user study data, we show that this system is scalable, and can be highly effective either as a standalone system or as a complementary technique to current tools.
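
    One way to picture a tier of such a system is an accuracy-weighted vote over worker judgments, with borderline profiles escalated to the expert tier; the weighting scheme and thresholds below are illustrative assumptions, not the paper's calibration:

    ```python
    # Sketch of one crowdsourcing tier: aggregate worker votes on a profile,
    # weighting each by that worker's accuracy on ground-truth Sybils.
    def classify_profile(votes, escalate_band=(0.4, 0.6)):
        """votes: list of (says_sybil: bool, worker_accuracy: float) pairs."""
        total = sum(acc for _, acc in votes)
        sybil_score = sum(acc for says_sybil, acc in votes if says_sybil) / total
        low, high = escalate_band
        if sybil_score >= high:
            return "sybil"
        if sybil_score <= low:
            return "legitimate"
        return "escalate"                 # hand off to the expert tier

    print(classify_profile([(True, 0.9), (True, 0.7), (False, 0.6)]))  # sybil
    ```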