4 research outputs found

    Fake accounts detection system based on bidirectional gated recurrent unit neural network

    Get PDF
    Online social networks have become the most widely used medium to interact with friends and family, share news and important events, or publish daily activities. However, this growing popularity has made social networks a target for suspicious exploitation, such as the spreading of misleading or malicious information, making them less reliable and less trustworthy. In this paper, a fake account detection system based on the bidirectional gated recurrent unit (BiGRU) model is proposed. The focus is on the content of users’ tweets to classify a Twitter user profile as legitimate or fake. Tweets are gathered in a single file and transformed into a vector space using the GloVe word embedding technique in order to preserve semantic and syntactic context. Compared with baseline models such as long short-term memory (LSTM) and convolutional neural networks (CNN), the results are promising and confirm that the GloVe-with-BiGRU classifier outperforms them, reaching 99.44% accuracy and 99.25% precision. To demonstrate the efficiency of the approach, the results obtained with GloVe were compared to those obtained with Word2vec under the same conditions. The results confirm that GloVe with the BiGRU classifier yields the best performance for detecting fake Twitter accounts using only the tweet-content feature.
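    Below is a minimal sketch of the kind of pipeline this abstract describes (pretrained GloVe embeddings feeding a bidirectional GRU binary classifier), written in Python with TensorFlow/Keras. It is not the authors' implementation: the vocabulary size, GRU width, GloVe file name, and the assumption that tweets are already integer-encoded via a word_index mapping are illustrative assumptions.

```python
# Sketch only: GloVe embeddings -> BiGRU -> binary (legitimate vs. fake) output.
# Assumes tweets are already integer-encoded with a vocabulary `word_index`
# (word -> id) and that a local GloVe file such as glove.6B.100d.txt exists.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE, EMB_DIM = 20000, 100  # illustrative hyperparameters

def load_glove_matrix(word_index, glove_path="glove.6B.100d.txt"):
    """Build an embedding matrix whose row i is the GloVe vector of word id i."""
    vectors = {}
    with open(glove_path, encoding="utf-8") as f:
        for line in f:
            word, *values = line.split()
            vectors[word] = np.asarray(values, dtype="float32")
    matrix = np.zeros((VOCAB_SIZE, EMB_DIM), dtype="float32")
    for word, idx in word_index.items():
        if idx < VOCAB_SIZE and word in vectors:
            matrix[idx] = vectors[word]
    return matrix

def build_bigru_classifier(embedding_matrix):
    """Frozen GloVe embedding layer -> bidirectional GRU -> sigmoid score."""
    return models.Sequential([
        layers.Embedding(
            VOCAB_SIZE, EMB_DIM,
            embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
            trainable=False),
        layers.Bidirectional(layers.GRU(64)),
        layers.Dense(1, activation="sigmoid"),
    ])

# Usage sketch: X is an array of padded tweet id sequences, y the 0/1 labels.
# model = build_bigru_classifier(load_glove_matrix(word_index))
# model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X, y, epochs=5, validation_split=0.2)
```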

    Multilevel User Credibility Assessment in Social Networks

    Full text link
    Online social networks are one of the largest platforms for disseminating both real and fake news. Many users on these networks, intentionally or unintentionally, spread harmful content, fake news, and rumors in fields such as politics and business. As a result, numerous studies have been conducted in recent years to assess the credibility of users. A shortcoming of most existing methods is that they assess users by placing them in one of only two categories, real or fake. However, in real-world applications it is usually more desirable to consider several levels of user credibility. Another shortcoming is that existing approaches use only a portion of the important features, which degrades their performance. In this paper, due to the lack of an appropriate dataset for multilevel user credibility assessment, we first design a method to collect data suitable for assessing credibility at multiple levels. Then, we develop the MultiCred model, which places users at one of several levels of credibility based on a rich and diverse set of features extracted from users' profiles, tweets, and comments. MultiCred exploits deep language models to analyze textual data and deep neural models to process non-textual features. Our extensive experiments reveal that MultiCred considerably outperforms existing approaches in terms of several accuracy measures.
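    As a rough illustration of the architecture the abstract outlines (textual and non-textual features processed by separate branches and fused to predict one of several credibility levels), a Keras sketch follows. It is not the MultiCred model itself: the input dimensions, layer sizes, number of credibility levels, and the assumption that text has already been encoded by a pretrained language model are all assumptions made here.

```python
# Sketch only: fuse a text-embedding branch with a profile-feature branch and
# predict one of several credibility levels (multiclass, not just real/fake).
import tensorflow as tf
from tensorflow.keras import layers, Model

TEXT_DIM, PROFILE_DIM, NUM_LEVELS = 768, 16, 4  # illustrative sizes

def build_multilevel_model():
    # Text assumed pre-encoded into fixed-size vectors by a pretrained language model.
    text_in = layers.Input(shape=(TEXT_DIM,), name="text_embedding")
    prof_in = layers.Input(shape=(PROFILE_DIM,), name="profile_features")

    # Separate branches for textual and non-textual evidence.
    t = layers.Dense(128, activation="relu")(text_in)
    p = layers.Dense(32, activation="relu")(prof_in)

    # Fuse both views and predict a credibility level.
    h = layers.Concatenate()([t, p])
    h = layers.Dense(64, activation="relu")(h)
    out = layers.Dense(NUM_LEVELS, activation="softmax", name="credibility_level")(h)
    return Model([text_in, prof_in], out)

model = build_multilevel_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit([text_vectors, profile_features], level_labels, epochs=5)
```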

    An Enhanced Scammer Detection Model for Online Social Network Frauds Using Machine Learning

    Get PDF
    The prevalence of online social networking increases the risk of social network scams and fraud. Scammers often create fake profiles to trick unsuspecting users into fraudulent activities. Therefore, it is important to be able to identify these scammer profiles and prevent fraud such as dating scams, compromised accounts, and fake profiles. This study proposes an enhanced scammer detection model that utilizes user profile attributes and images to identify scammer profiles in online social networks. The approach involves preprocessing user profile data, extracting features, and applying machine learning algorithms for classification. The system was tested on a dataset created specifically for this study and was found to have an accuracy rate of 94.50% with low false-positive rates. The proposed approach aims to detect scammer profiles early in order to prevent online social network fraud and ensure a safer environment for society and, in particular, for women’s safety.
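    The following scikit-learn sketch illustrates the general recipe the abstract describes (preprocess profile attributes, extract features, classify with a machine learning algorithm). It is not the authors' model: the column names, the choice of a random forest, and the treatment of image-derived features as extra numeric columns are assumptions for illustration only.

```python
# Sketch only: tabular profile attributes -> preprocessing -> classifier.
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical feature columns; image-derived scores would be numeric columns too.
NUMERIC_COLS = ["friends_count", "account_age_days", "posts_per_day"]
CATEGORICAL_COLS = ["declared_gender", "declared_country"]

preprocess = ColumnTransformer([
    ("num", StandardScaler(), NUMERIC_COLS),
    ("cat", OneHotEncoder(handle_unknown="ignore"), CATEGORICAL_COLS),
])

clf = Pipeline([
    ("prep", preprocess),
    ("model", RandomForestClassifier(n_estimators=200, random_state=0)),
])

# Usage sketch: X is a pandas DataFrame of profile attributes, y the 0/1 labels.
# X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
# clf.fit(X_train, y_train)
# print(clf.score(X_test, y_test))
```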

    Online Social Deception and Its Countermeasures for Trustworthy Cyberspace: A Survey

    Full text link
    We are living in an era in which online communication over social network services (SNSs) has become an indispensable part of people's everyday lives. As a consequence, online social deception (OSD) in SNSs has emerged as a serious threat in cyberspace, particularly for users vulnerable to such cyberattacks. Cyber attackers have exploited the sophisticated features of SNSs to carry out harmful OSD activities, such as financial fraud, privacy threats, or sexual/labor exploitation. Therefore, it is critical to understand OSD and develop effective countermeasures against it for building trustworthy SNSs. In this paper, we conducted an extensive survey covering (i) the multidisciplinary concepts of social deception; (ii) types of OSD attacks and their unique characteristics compared to other social network attacks and cybercrimes; (iii) comprehensive defense mechanisms embracing prevention, detection, and response (or mitigation) against OSD attacks, along with their pros and cons; (iv) datasets/metrics used for validation and verification; and (v) legal and ethical concerns related to OSD research. Based on this survey, we provide insights into the effectiveness of countermeasures and the lessons learned from the existing literature. We conclude this survey with an in-depth discussion of the limitations of the state of the art and recommend future research directions in this area. Comment: 35 pages, 8 figures, submitted to ACM Computing Surveys