
    Combating Attacks and Abuse in Large Online Communities

    Internet users today are connected more widely and ubiquitously than ever before. As a result, various online communities have formed, ranging from online social networks (Facebook, Twitter), to mobile communities (Foursquare, Waze), to content- and interest-based networks (Wikipedia, Yelp, Quora). While users benefit from easy access to information and social interactions, there is growing concern about users' security and privacy in the face of attacks such as spam, phishing, malware infection, and identity theft. Combating attacks and abuse in online communities is challenging. First, today's online communities are increasingly dependent on users and user-generated content, so securing them demands a deep understanding of complex and often unpredictable human behaviors. Second, online communities can easily have millions or even billions of users, which requires the corresponding security mechanisms to be highly scalable. Finally, cybercriminals constantly evolve to launch new types of attacks, which further demands highly robust defenses. In this thesis, we take concrete steps toward measuring, understanding, and defending against attacks and abuse in online communities. We begin with a series of empirical measurements to understand user behaviors in different online services and the unique security and privacy challenges their users face. This effort covers a broad set of popular online services, including social networks for question answering (Quora), anonymous social networks (Whisper), and crowdsourced mobile communities (Waze). Despite the differences among these communities, our study provides a first look at their user activity patterns based on empirical data, and reveals the need for reliable mechanisms to curate user content, protect privacy, and defend against emerging attacks. Next, we turn our attention to attacks targeting online communities, with a focus on spam campaigns. While traditional spam is mostly generated by automated software, attackers today have started to introduce "human intelligence" into their attacks. This is malicious crowdsourcing (or crowdturfing), where a large group of real users is organized to carry out malicious campaigns, such as writing fake reviews or spreading rumors on social media. Using collective human effort, attackers can easily bypass many existing defenses (e.g., CAPTCHA). To understand the crowdturfing ecosystem, we first use measurements to examine campaign organization, workers, and revenue in detail. Based on insights from empirical data, we develop effective machine learning classifiers to detect crowdturfing activities. At the same time, given the adversarial nature of crowdturfing, we also build practical adversarial models to simulate how attackers can evade or disrupt machine-learning-based defenses. To aid in this effort, we next explore user behavior models to detect a wider range of attacks. Instead of making assumptions about attacker behavior, our idea is to model normal user behaviors and capture (malicious) behaviors that deviate from the norm. In this way, we can detect previously unknown attacks. Our behavior model is based on detailed clickstream data, i.e., the sequences of click events users generate when using the service. We build a similarity graph in which each user is a node and edges are weighted by clickstream similarity. By partitioning this graph, we obtain "clusters" of users with similar behaviors.
We then use a small set of known good users to "color" these clusters and differentiate the malicious ones. This technique has been adopted by real-world social networks (Renren and LinkedIn) and has already detected unexpected attacks. Finally, we extend the clickstream model to capture finer-grained behaviors of attackers (and real users), and to track how user behavior changes over time. In summary, this thesis illustrates a data-driven approach to understanding and defending against attacks and abuse in online communities. Our measurements reveal new insights into how attackers are evolving to bypass existing security defenses. In addition, our data-driven systems provide new solutions for online services to gain a deep understanding of their users and defend them against emerging attacks and abuse.
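
To make the clickstream idea above concrete, here is a minimal, illustrative sketch rather than the thesis's actual system: it measures clickstream similarity with Jaccard overlap of click-event bigrams, uses connected components as a crude stand-in for the graph-partitioning step, and "colors" clusters with a seed set of known good users. The similarity threshold, the good-seed ratio, and all names are assumptions made for this example.

```python
# Minimal sketch of the clickstream-similarity idea described above.
# Assumptions (not from the thesis): Jaccard similarity over click-event
# bigrams, a fixed edge threshold, and connected components standing in
# for graph partitioning; all names and parameters are illustrative.
from itertools import combinations

def bigrams(clicks):
    """Represent a user's clickstream by its set of consecutive event pairs."""
    return {(a, b) for a, b in zip(clicks, clicks[1:])}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_users(clickstreams, threshold=0.3):
    """Link users whose clickstream similarity exceeds the threshold,
    then return connected components as behavioral clusters."""
    users = list(clickstreams)
    sigs = {u: bigrams(clickstreams[u]) for u in users}
    adj = {u: set() for u in users}
    for u, v in combinations(users, 2):
        if jaccard(sigs[u], sigs[v]) >= threshold:
            adj[u].add(v)
            adj[v].add(u)
    seen, clusters = set(), []
    for u in users:
        if u in seen:
            continue
        stack, comp = [u], set()
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters

def color_clusters(clusters, known_good, min_good_ratio=0.1):
    """Flag clusters containing too few known-good seed users as suspicious."""
    flagged = []
    for comp in clusters:
        if len(comp & known_good) / len(comp) < min_good_ratio:
            flagged.append(comp)
    return flagged

# Toy usage: two normal browsing users and two near-identical "spam" sequences.
streams = {
    "alice": ["home", "feed", "post", "feed", "profile"],
    "bob":   ["home", "feed", "profile", "feed", "post"],
    "bot1":  ["login", "msg", "msg", "msg", "msg"],
    "bot2":  ["login", "msg", "msg", "msg", "msg"],
}
clusters = cluster_users(streams)
print(color_clusters(clusters, known_good={"alice", "bob"}))
```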

    Just One Click: The Reality of Internet Retail Contracting

    This Essay explores the enforceability and prevalence of pro-seller contract terms in internet retail contracts. Analyzing case law on internet contract enforceability and a survey of 500 firms' websites, it demonstrates that even the enforceability of many internet contracts is questionable. It then presents new data suggesting that pro-seller contract terms are far less prevalent than usually assumed, and suggests that the benefit of making these terms enforceable is outweighed by the loss of user friendliness required for the necessary interface changes. Finally, it uses fresh statistical analyses to determine what relationships, if any, exist among enforceability, pro-seller contract terms, business size, product channel, and product type. Generally, it concludes that the contract literature has likely overestimated the benefit of pro-seller contract terms and underestimated the role internet contract terms play in informal enforcement; unenforceable terms may still serve as guideposts for dispute resolution.

    Understanding visual content through joint analysis of content and usage

    This thesis focuses on the problem of understanding visual content, whether images, videos, or 3D content. By understanding we mean the ability to infer semantic information about the visual content. The goal of this work is to study methods that combine two types of approaches: 1) automatic content analysis and 2) analysis of the interactions that arise when the content is used (in other words, usage analysis).
We start by reviewing the state of the art from both the Computer Vision and Multimedia communities. Twenty years ago, the dominant approach aimed at fully automatic understanding of images. Today, that approach gives way to various forms of human intervention, whether through the construction of annotated training sets, the interactive resolution of problems (e.g., detection or segmentation), or the implicit collection of information gathered from how the content is used. These forms of human intervention are at the heart of modern research questions: how do we motivate human contributors? How do we design interactive scenarios whose interactions contribute to understanding the content being manipulated? How do we check or ensure the quality of the collected traces? How do we aggregate usage data? How do we fuse inputs obtained from usage analysis with the more traditional outputs of automatic content analysis? Our literature review addresses these questions and positions the contributions of this thesis, which fall into two main parts. The first part revisits the detection of important (or salient) regions through implicit feedback from users who view or capture visual content. In 2D, we design several interactive video interfaces (in particular zoomable video) to coordinate content-based analysis with usage-based analysis. We generalize these results to 3D by introducing a new detector of salient regions derived from the simultaneous recording of the same public artistic performance (dance shows, singing, etc.) by many users. The second contribution targets semantic understanding of still images. We exploit data collected through a game, Ask'nSeek, that we created. Elementary interactions (such as clicks) and the textual data entered by players are, as before, combined with automatic image analysis. In particular, we show the value of interactions that reveal spatial relations between different objects detectable in the same scene. After detecting the objects of interest in a scene, we also address the more ambitious problem of segmentation.
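
As a rough illustration of the content-plus-usage fusion described above (not the thesis's actual pipeline), the sketch below aggregates implicit viewer interactions (normalized click positions) into a usage heatmap and blends it with a content-based saliency map through a simple weighted sum; the grid resolution, the blending weight alpha, and the threshold are invented for the example.

```python
# Illustrative sketch only: fusing an implicit usage signal (viewer click/zoom
# positions) with a content-based saliency map via a weighted combination.
# Grid size, `alpha`, and the threshold are made-up parameters for the example.
import numpy as np

def usage_heatmap(points, shape):
    """Accumulate normalized (x, y) interaction positions into a grid."""
    heat = np.zeros(shape)
    rows, cols = shape
    for x, y in points:  # x, y assumed in [0, 1)
        heat[int(y * rows), int(x * cols)] += 1.0
    return heat / heat.max() if heat.max() > 0 else heat

def fuse_saliency(content_map, usage_map, alpha=0.5):
    """Blend content-based and usage-based evidence into one saliency map."""
    c = content_map / content_map.max()
    return alpha * c + (1 - alpha) * usage_map

# Toy usage: a 4x4 content saliency map and a few simulated viewer clicks.
content = np.array([
    [0.1, 0.2, 0.2, 0.1],
    [0.2, 0.9, 0.8, 0.2],
    [0.2, 0.8, 0.7, 0.1],
    [0.1, 0.1, 0.1, 0.1],
])
clicks = [(0.3, 0.3), (0.35, 0.4), (0.8, 0.8)]  # normalized image coordinates
fused = fuse_saliency(content, usage_heatmap(clicks, content.shape))
print((fused > 0.5).astype(int))  # crude "salient region" mask
```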