397 research outputs found

    Promotional Campaigns in the Era of Social Platforms

    The rise of social media has made it far easier for information to reach millions of users. While some users connect with friends and organically share information and opinions on social media, others have exploited these platforms to gain influence and profit through promotional campaigns and advertising. Promotional campaigns contribute to the spread of misleading information, spam, and fake news, undermining the trustworthiness and reliability of social media and turning it into a crowd advertising platform. This dissertation studies promotional campaigns in social media and explores the different ways users and bots (i.e., automated accounts) engage in them. We design a suite of detection, ranking, and mining techniques. We study user-generated reviews on online e-commerce sites, such as Google Play, to extract campaigns. We identify cooperating sets of bots, classify their interactions in social networks such as Twitter, and rank the bots by the degree of their malevolence. Our study shows that modern online social interactions are largely modulated by promotional campaigns, including political, advertising, and incentive-driven campaigns. We measure how these campaigns can potentially impact the information consumption of millions of social media users.

    Exposing Pernicious Bots in Twitter Utilizing User Profile Attributes and Machine Learning

    With the rampant use of social media, fraudsters deploy malevolent social bots that generate counterfeit tweets, establish relationships with other users by posing as followers, and create multiple counterfeit accounts that engage in malicious activities. They also post malicious URLs that redirect genuine users to malicious web servers. It is therefore essential to distinguish bot accounts from genuine accounts. Bots can be identified more reliably by analyzing profile-based features and the URLs they post (redirected URLs, spam data, frequency of URL sharing, etc.) than by social features. In this project, we propose a novel deep learning approach that uses profile-based features to expose pernicious bots on social networks. We feed a Twitter dataset to this model and observe that it outperforms other algorithms. We also built a web application showing that this approach performs better than existing models.
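As a rough illustration of profile-feature-based bot classification, the sketch below trains a single-layer logistic model on toy accounts. The feature names and data are invented for the example, and this shallow model is only a stand-in for the deeper network the abstract describes:

```python
import math

# Hypothetical profile-based features (illustrative, not the paper's actual
# feature set): followers/friends ratio, account age, posting rate, and
# fraction of tweets containing URLs.
def extract_profile_features(account):
    return [
        account["followers"] / max(account["friends"], 1),
        account["age_days"] / 1000.0,
        account["tweets_per_day"] / 100.0,
        account["url_fraction"],
    ]

def sigmoid(z):
    z = max(-60.0, min(60.0, z))        # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def train(features, labels, lr=0.1, epochs=300):
    """Stochastic gradient descent on a single-layer logistic model,
    a minimal stand-in for a deeper network."""
    w, b = [0.0] * len(features[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= 0.5

# Toy data: URL-heavy, high-volume young accounts are labelled as bots.
bots = [{"followers": 10, "friends": 2000, "age_days": 30,
         "tweets_per_day": 80, "url_fraction": 0.9}] * 20
humans = [{"followers": 300, "friends": 250, "age_days": 2000,
           "tweets_per_day": 5, "url_fraction": 0.1}] * 20
X = [extract_profile_features(a) for a in bots + humans]
y = [1] * 20 + [0] * 20
w, b = train(X, y)
accuracy = sum(predict(w, b, x) == bool(t) for x, t in zip(X, y)) / len(y)
```

The toy classes are linearly separable, so even this single-layer model separates them; the paper's claim is that profile features alone carry enough signal for such a separation on real data.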

    Fake Account Identification Using Machine Learning Approaches Integrated with Adaptive Particle Swarm Optimization

    It is common for humans, bots, and other automated systems to create new user accounts using stolen or otherwise fraudulent personal information. Such accounts are employed in deceitful activities such as phishing and identity theft, as well as in spreading damaging rumors. A malicious actor may create hundreds or thousands of counterfeit accounts with the aim of spreading harmful activity to as many authentic users as possible. Social networking platforms hold a wealth of information about their users, and malicious individuals are readily encouraged to exploit this vast collection of social media data. These cybercriminals fabricate fictitious identities and disseminate meaningless content. Identifying counterfeit profiles is therefore an essential aspect of using social media networks. This study presents a machine learning approach to detect fraudulent Instagram profiles. The approach employs an attribute-selection technique, adaptive particle swarm optimization, and recursive feature elimination (RFE). The results indicate that the proposed adaptive particle swarm optimization method surpasses RFE in terms of accuracy, recall, and F-measure.
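A minimal sketch of binary particle swarm optimization for feature selection, with a time-decaying inertia weight as one common reading of "adaptive"; the fitness function below is a toy stand-in for real classifier accuracy, and all constants are illustrative:

```python
import math
import random

random.seed(1)

N_FEATURES = 10
INFORMATIVE = {0, 2, 5}   # toy ground truth: only these features help

def fitness(mask):
    # Stand-in for wrapper-style classifier accuracy: reward selecting
    # informative features, mildly penalize selecting noisy ones.
    hits = sum(mask[i] for i in INFORMATIVE)
    noise = sum(mask[i] for i in range(N_FEATURES) if i not in INFORMATIVE)
    return hits - 0.3 * noise

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-max(-10.0, min(10.0, v))))

def binary_pso(n_particles=20, iters=60, c1=1.5, c2=1.5):
    pos = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(n_particles)]
    vel = [[0.0] * N_FEATURES for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for t in range(iters):
        inertia = 0.9 - 0.5 * t / iters   # "adaptive": decays 0.9 -> 0.4
        for i in range(n_particles):
            for d in range(N_FEATURES):
                vel[i][d] = (inertia * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                # Binary PSO: velocity sets the probability the bit is 1.
                pos[i][d] = 1 if random.random() < sigmoid(vel[i][d]) else 0
            f = fitness(pos[i])
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f > gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest, gbest_fit

best_mask, best_fit = binary_pso()
```

In a real pipeline the fitness would be cross-validated accuracy of a classifier trained on the selected feature subset; the decaying inertia shifts the swarm from exploration to exploitation over time.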

    SpADe: Multi-Stage Spam Account Detection for Online Social Networks

    In recent years, Online Social Networks (OSNs) have radically changed the way people communicate. The most widely used platforms, such as Facebook, YouTube, and Instagram, claim more than one billion monthly active users each. Beyond these, news-oriented micro-blogging services, e.g., Twitter, are accessed daily by more than 120 million users sharing content from all over the world. Unfortunately, legitimate users of the OSNs are mixed with malicious ones who are interested in spreading unwanted, misleading, harmful, or discriminatory content. Spam detection in OSNs is generally approached by considering the characteristics of the account under analysis, its connection with the rest of the network, and the data and metadata representing the content shared. However, obtaining all this information can be computationally expensive, or even unfeasible, on massive networks. Driven by these motivations, in this paper we propose SpADe, a multi-stage Spam Account Detection algorithm with reject option, whose purpose is to exploit less costly features at the early stages, while progressively extracting more complex information only for those accounts that are difficult to classify. Experimental evaluation shows the effectiveness of the proposed algorithm compared to single-stage approaches, which are much more complex in terms of feature processing and classification time.
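The staged, reject-option idea can be sketched as a cascade that defers uncertain accounts to costlier stages. The stages, thresholds, and account fields below are illustrative stand-ins, not SpADe's actual features:

```python
def classify_cascade(account, stages, low=0.3, high=0.7):
    """Run stages in increasing cost order; stop at the first confident
    score. A score inside (low, high) is 'rejected' and deferred to the
    next, more expensive stage."""
    for score_fn in stages[:-1]:
        p = score_fn(account)
        if p <= low or p >= high:
            return p >= high, score_fn.__name__
    p = stages[-1](account)           # the costliest stage always decides
    return p >= 0.5, stages[-1].__name__

# Hypothetical stages, ordered by feature-extraction cost (all invented).
def profile_stage(a):     # cheap: account metadata only
    return 0.9 if a["age_days"] < 30 and a["tweets_per_day"] > 50 else 0.5

def content_stage(a):     # moderate: shared content
    return 0.8 if a["url_fraction"] > 0.7 else 0.5

def graph_stage(a):       # expensive: network structure
    return 0.9 if a["follower_overlap"] > 0.6 else 0.1

stages = [profile_stage, content_stage, graph_stage]

obvious_bot = {"age_days": 5, "tweets_per_day": 90, "url_fraction": 0.9,
               "follower_overlap": 0.8}
hard_case = {"age_days": 400, "tweets_per_day": 10, "url_fraction": 0.2,
             "follower_overlap": 0.1}

verdict1, stage1 = classify_cascade(obvious_bot, stages)
verdict2, stage2 = classify_cascade(hard_case, stages)
```

The obvious bot is decided at the cheap first stage, while the ambiguous account pays the full feature-extraction cost, which is exactly the saving a multi-stage design targets.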

    Detection of suspicious URLs in online social networks using supervised machine learning algorithms

    This thesis proposes several supervised machine learning classification models built to detect the distribution of malicious content in OSNs. The main focus was on ensemble learning algorithms such as Random Forest, gradient boosting trees, extra trees, and XGBoost. Features used to identify social network posts containing malicious URLs were derived from several sources, such as domain WHOIS records, web page content, URL lexical and redirection data, and Twitter metadata. The thesis describes a systematic analysis of the hyper-parameters of tree-based models, assessing the impact of key parameters, such as the number of trees, the depth of trees, and the minimum size of leaf nodes, on classification performance. The results show that controlling the complexity of Random Forest classifiers applied to social media spam is essential to avoid overfitting and optimise performance. Model complexity could be reduced by removing uninformative features, as the complexity they add to the model outweighs the benefit they provide for making decisions. Moreover, two model-combining methods were tested: voting and stacking. Both show advantages and disadvantages; however, in general, they appear to provide a statistically significant improvement over the best single model. The critical benefit of applying the stacking method to automate model selection is that it effectively gives more weight to top-performing models and is less affected by weak ones. Finally, 'SuspectRate', an online malicious URL detection system, was built to offer a service that gives a suspicion probability for tweets with attached URLs. A key feature of this system is that it can dynamically retrain and expand current models.
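The contrast between voting and stacking described above can be illustrated with toy scores. Here a simple decision-stump meta-learner stands in for the thesis's trained stacking model, and all model names and numbers are invented for the example:

```python
# Hypothetical per-tweet malicious-URL probabilities from three base models.
base_scores = {
    "random_forest":     [0.90, 0.20, 0.80, 0.10, 0.10],
    "gradient_boosting": [0.80, 0.30, 0.70, 0.60, 0.20],
    "xgboost":           [0.95, 0.10, 0.60, 0.55, 0.05],
}
labels = [1, 0, 1, 1, 0]

def accuracy(preds, labels):
    return sum(p == bool(y) for p, y in zip(preds, labels)) / len(labels)

# Soft voting: unweighted mean of base-model scores. The weak random_forest
# score on sample 3 drags the average below 0.5, causing a misclassification.
voted = [sum(col) / len(base_scores) for col in zip(*base_scores.values())]
vote_acc = accuracy([v >= 0.5 for v in voted], labels)

# Stacking (simplified): a decision-stump meta-learner picks the base model
# and threshold that best predict the label, so a weak model cannot drag
# the ensemble down the way it can under unweighted voting.
def best_stump(scores, labels):
    best = (None, 0.5, -1.0)
    for name, s in scores.items():
        vals = sorted(set(s))
        for lo, hi in zip(vals, vals[1:]):
            th = (lo + hi) / 2
            acc = accuracy([p >= th for p in s], labels)
            if acc > best[2]:
                best = (name, th, acc)
    return best

meta_model, meta_threshold, stack_acc = best_stump(base_scores, labels)
```

A real stacking setup would train the meta-learner on out-of-fold base-model predictions rather than reusing the training labels as done here for brevity.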

    Analysing and detecting Twitter spam

    Through in-depth data-driven analysis, we provide insights into deceptive information in Twitter spam, spammers' behaviours, and emerging spamming strategies. We are also the first to identify and solve the "spam drift" problem. Online social network providers can adopt our findings and proposed scheme to redesign their detection systems and improve their efficiency and accuracy.

    Better Safe Than Sorry: An Adversarial Approach to Improve Social Bot Detection

    The arms race between spambots and spambot-detectors consists of several cycles (or generations): a new wave of spambots is created (and new spam is spread), new spambot filters are derived, and old spambots mutate (or evolve) into new species. Recently, with the diffusion of the adversarial learning approach, a new practice is emerging: deliberately manipulating target samples in order to build stronger detection models. Here, we manipulate generations of Twitter social bots to obtain, and study, their possible future evolutions, with the aim of eventually deriving more effective detection techniques. In detail, we propose and experiment with a novel genetic algorithm for the synthesis of online accounts. The algorithm creates synthetic evolved versions of current state-of-the-art social bots. Results demonstrate that the synthetic bots evade current detection techniques. However, they provide all the elements needed to improve such techniques, making possible a proactive approach to the design of social bot detection systems.
    Comment: This is the pre-final version of a paper accepted at the 11th ACM Conference on Web Science, June 30 - July 3, 2019, Boston, U
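A hedged sketch of the evolutionary idea: genomes encode behavioural account features, and fitness rewards spam reach while penalizing detection by a toy classifier. All features, weights, and thresholds below are invented for illustration and are not the paper's actual algorithm:

```python
import random

random.seed(42)

# Genome: (tweets_per_day, url_fraction, reply_fraction), with per-gene
# (lower bound, upper bound, mutation sigma) - all values illustrative.
BOUNDS = [(0.0, 200.0, 20.0), (0.0, 1.0, 0.1), (0.0, 1.0, 0.1)]

def detector_score(g):
    """Toy stand-in for a trained bot detector: high-volume, URL-heavy,
    low-interaction accounts score as bots."""
    tweets, urls, replies = g
    return 0.5 * min(tweets / 100.0, 1.0) + 0.4 * urls - 0.2 * replies

def spam_utility(g):
    tweets, urls, _ = g
    return min(tweets / 100.0, 1.0) * urls   # reach grows with both

def fitness(g):
    # Reward spamming; heavily penalize crossing the detection threshold.
    return spam_utility(g) - 10.0 * max(0.0, detector_score(g) - 0.5)

def random_genome():
    return [random.uniform(lo, hi) for lo, hi, _ in BOUNDS]

def mutate(g):
    return [min(max(v + random.gauss(0.0, s), lo), hi)
            for v, (lo, hi, s) in zip(g, BOUNDS)]

def crossover(a, b):
    return [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]

def evolve(pop_size=30, generations=60):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # elitist truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

The evolved genome keeps spamming while sitting just under the detector's threshold, which is the behaviour the defenders can then study to harden the detector.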