
    BPRS: Belief Propagation Based Iterative Recommender System

    In this paper, we introduce the first application of the Belief Propagation (BP) algorithm to the design of recommender systems. We formulate the recommendation problem as an inference problem and aim to compute the marginal probability distributions of the variables that represent the ratings to be predicted. However, computing these marginal probability functions is computationally prohibitive for large-scale systems. Therefore, we utilize the BP algorithm to compute these functions efficiently. Recommendations for each active user are then iteratively computed by probabilistic message passing. Unlike previous recommender algorithms, BPRS does not require solving the recommendation problem for all users in order to update the recommendations of a single active user. Further, BPRS computes the recommendations for each user with linear complexity and without requiring a training period. Via computer simulations (using the 100K MovieLens dataset), we verify that BPRS iteratively reduces the error in the predicted ratings of the users until it converges. Finally, we confirm that BPRS is comparable to state-of-the-art methods such as the correlation-based neighborhood model (CorNgbr) and Singular Value Decomposition (SVD) in terms of rating and precision accuracy. Therefore, we believe that the BP-based recommendation algorithm is a promising new approach that offers a significant scalability advantage while providing competitive accuracy for recommender systems.
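    The message-passing idea above can be illustrated with a small sketch: ratings are treated as discrete random variables, and each candidate item's marginal is approximated from messages contributed by similar users. The graph layout, the Gaussian-shaped potentials, and the similarity weighting below are assumptions for illustration only, not the BPRS implementation (which iterates such updates until the beliefs converge); this sketch performs a single round of message passing.

    ```python
    # Minimal, illustrative sketch of rating prediction via probabilistic message
    # passing on a user-item graph. Potentials and similarity weights are assumed.
    import numpy as np

    RATINGS = np.arange(1, 6)  # discrete rating values 1..5

    def predict_for_active_user(active_user, observed, similarity):
        """observed: {(user, item): rating}; similarity: {user: weight in [0, 1]}."""
        items = {i for (_, i) in observed if (active_user, i) not in observed}
        predictions = {}
        for item in items:
            belief = np.ones(len(RATINGS))  # start from a uniform marginal
            for (user, j), r in observed.items():
                if j != item or user == active_user:
                    continue
                w = similarity.get(user, 0.0)
                # message from a neighbouring user: peaked at that user's rating,
                # sharper the more similar the user is to the active user
                belief *= np.exp(-w * (RATINGS - r) ** 2)
            belief /= belief.sum()
            predictions[item] = float(RATINGS @ belief)  # expected rating
        return predictions

    observed = {("u1", "m1"): 5, ("u2", "m1"): 4, ("u2", "m2"): 2}
    print(predict_for_active_user("u0", observed, {"u1": 0.9, "u2": 0.3}))
    ```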

    Shilling Black-box Review-based Recommender Systems through Fake Review Generation

    Review-Based Recommender Systems (RBRSs) have attracted increasing research interest due to their ability to alleviate well-known cold-start problems. RBRSs utilize reviews to construct user and item representations. In this paper, however, we argue that such a reliance on reviews may instead expose systems to the risk of being shilled. To explore this possibility, we propose the first generation-based model for shilling attacks against RBRSs. Specifically, we learn a fake review generator through reinforcement learning, which maliciously promotes items by forcing prediction shifts after the generated reviews are added to the system. By introducing auxiliary rewards that increase text fluency and diversity with the aid of pre-trained language models and aspect predictors, the generated reviews are effective for shilling with high fidelity. Experimental results demonstrate that the proposed framework can successfully attack three different kinds of RBRSs on the Amazon corpus (three domains) and the Yelp corpus. Furthermore, human studies show that the generated reviews are fluent and informative. Finally, RBRSs adversarially trained with the proposed Attack Review Generators (ARGs) are much more robust to malicious reviews.
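    The core of the reinforcement-learning setup described above is a composite reward that balances the attack objective against plausibility of the text. The sketch below shows one way such a reward could be combined; the weights and the normalised score inputs are illustrative assumptions, not the paper's implementation.

    ```python
    # Hypothetical composite reward for a fake-review generator:
    # attack strength (prediction shift) plus auxiliary fluency/aspect terms.
    def shilling_reward(prediction_shift, fluency, aspect_fit,
                        w_shift=1.0, w_fluency=0.5, w_aspect=0.5):
        """prediction_shift: change in the target item's predicted rating after
        injecting the review; fluency and aspect_fit: scores in [0, 1]."""
        return w_shift * prediction_shift + w_fluency * fluency + w_aspect * aspect_fit

    # toy usage: a review that raises the predicted rating by 0.8 stars
    print(shilling_reward(prediction_shift=0.8, fluency=0.7, aspect_fit=0.9))
    ```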

    A robust reputation-based location-privacy recommender system using opportunistic networks

    Location-sharing services have grown in use commensurately with the increasing popularity of smartphones. As location data can be sensitive, it is important to preserve people’s privacy while using such services, and so location-privacy recommender systems have been proposed to help people configure their privacy settings. These recommenders collect and store people’s data in a centralised system, but such centralised systems can themselves introduce new privacy threats and concerns. In this paper, we propose a decentralised location-privacy recommender system based on opportunistic networks. We evaluate our system using real-world location-privacy traces, and introduce a reputation scheme based on encounter frequencies to mitigate the potential effects of shilling attacks by malicious users. Experimental results show that, after receiving adequate data, our decentralised recommender system’s performance is close to that of traditional centralised recommender systems (3% difference in accuracy and 1% difference in leaks). Meanwhile, our reputation scheme significantly mitigates the effect of malicious users’ input (reducing attack success from 55% to 8%) and makes it increasingly expensive to conduct such attacks.
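    An encounter-frequency reputation weight of the kind described above can be sketched as follows; the saturation constant and the weighted-vote formula are assumptions for illustration, not the paper's exact scheme.

    ```python
    # Toy reputation scheme: peers seen often in opportunistic encounters earn
    # weight, so freshly fabricated shilling identities carry little influence.
    from collections import Counter

    encounters = Counter()  # peer id -> number of opportunistic encounters

    def record_encounter(peer_id):
        encounters[peer_id] += 1

    def reputation(peer_id, saturation=10):
        """Grows with repeated encounters and saturates at 1.0."""
        return min(encounters[peer_id], saturation) / saturation

    def weighted_vote(votes):
        """votes: {peer_id: 0/1 privacy recommendation} -> reputation-weighted share."""
        total = sum(reputation(p) for p in votes)
        return sum(reputation(p) * v for p, v in votes.items()) / total if total else 0.0

    for _ in range(12):
        record_encounter("alice")
    record_encounter("sybil-1")
    print(weighted_vote({"alice": 1, "sybil-1": 0}))  # alice's vote dominates (~0.91)
    ```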

    Multi-Armed-Bandit-based Shilling Attack on Collaborative Filtering Recommender Systems

    Collaborative Filtering (CF) is a popular recommendation technique that makes recommendations based on similar users' preferences. Though widely used, CF is prone to shilling/profile-injection attacks, in which fake profiles are injected into the CF system to alter its outcome. Most existing shilling attacks do not work on online systems and cannot be implemented efficiently in real-world applications. In this paper, we introduce an efficient Multi-Armed-Bandit-based reinforcement learning method to execute online shilling attacks in practice. Our method works by reducing the uncertainty associated with the item selection process and finds the most effective items to enhance attack reach. Such practical online attacks open new avenues for research in building more robust recommender systems. We treat the recommender system as a black box, making our method effective irrespective of the type of CF used. Finally, we experimentally test our approach against popular state-of-the-art shilling attacks.
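    To make the bandit-based item-selection idea concrete, the sketch below uses a standard UCB rule to pick items whose injection appears to maximise attack impact, treating the recommender as a black box that returns noisy reward feedback. The reward simulation, the exploration constant, and the use of plain UCB (rather than the authors' method) are illustrative assumptions.

    ```python
    # UCB-style bandit over candidate items, with simulated black-box feedback.
    import math
    import random

    def ucb_select(counts, rewards, t, c=1.0):
        """Pick the item with the highest upper confidence bound."""
        best, best_score = None, float("-inf")
        for item in counts:
            if counts[item] == 0:
                return item  # try every item at least once
            score = rewards[item] / counts[item] + c * math.sqrt(math.log(t) / counts[item])
            if score > best_score:
                best, best_score = item, score
        return best

    random.seed(0)
    items = ["i1", "i2", "i3"]
    counts = {i: 0 for i in items}
    rewards = {i: 0.0 for i in items}
    true_impact = {"i1": 0.2, "i2": 0.6, "i3": 0.4}  # hypothetical attack reach per item

    for t in range(1, 501):
        item = ucb_select(counts, rewards, t)
        reward = random.random() < true_impact[item]  # simulated black-box feedback
        counts[item] += 1
        rewards[item] += reward

    print(max(counts, key=counts.get))  # typically "i2", the item with the highest simulated impact
    ```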

    Cross-System Personalization

    The World Wide Web provides access to a wealth of information and services to a huge and heterogeneous user population on a global scale. One important and successful design mechanism for dealing with this diversity of users is to personalize Web sites and services, i.e. to customize system content, characteristics, or appearance with respect to a specific user. Each system independently builds up user profiles and uses this information to personalize the service offering. Such isolated approaches have two major drawbacks: firstly, the investment users make in personalizing a system, either through explicit provision of information or through long and regular use, is not transferable to other systems. Secondly, users have little or no control over the information that defines their profile, since user data are deeply buried in personalization engines running on the server side. Cross-system personalization (CSP) (Mehta, Niederee, & Stewart, 2005) allows information to be shared across different information systems in a user-centric way and can overcome the aforementioned problems. Information about users, which is originally scattered across multiple systems, is combined to obtain maximum leverage and reuse of information. Our initial approaches to cross-system personalization relied on each user having a unified profile which different systems can understand. The unified profile contains facets modeling aspects of a multidimensional user and is stored inside a "Context Passport" that the user carries along on his/her journey across information space. The user’s Context Passport is presented to a system, which can then understand the context in which the user wants to use the system. The basis of 'understanding' in this approach is of a semantic nature, i.e. the semantics of the facets and dimensions of the unified profile are known, so that the latter can be aligned with the profiles maintained internally at a specific site. The results of the personalization process are then transferred back to the user’s Context Passport via a protocol understood by both parties. The main challenge in this approach is to establish a common and globally accepted vocabulary and to create a standard with which every system will comply. Machine learning techniques provide an alternative approach that enables CSP without the need for accepted semantic standards or ontologies. The key idea is that one can try to learn dependencies between profiles maintained within one system and profiles maintained within a second system, based on data provided by users who use both systems and who are willing to share their profiles across systems, which we assume is in the interest of the user. Here, instead of requiring a common semantic framework, it is only required that a sufficient number of users cross between systems and that there is enough regularity among users that one can learn within a user population, a fact that is commonly exploited in collaborative filtering. In this thesis, we aim to provide a principled approach towards achieving cross-system personalization. We describe both semantic and learning approaches, with a stronger emphasis on the learning approach. We also investigate the privacy and scalability aspects of CSP and provide solutions to these problems. Finally, we explore in detail the aspect of robustness in recommender systems.
We motivate several approaches for robustifying collaborative filtering and provide the best-performing algorithm for detecting malicious attacks reported so far. The personalization of software systems is of steadily increasing importance, particularly in connection with web applications such as search engines, community portals, or electronic-commerce sites that address large, highly diversified user groups. Since explicit personalization typically entails considerable time and effort for the user, many applications fall back on implicit techniques for automatic personalization, in particular recommender systems, which typically employ methods such as collaborative or social filtering. While these methods require no explicit construction of user profiles through answering questions and giving explicit feedback, the quality of implicit personalization depends strongly on the volume of available data, such as transaction, query, or click logs. If, in this sense, little is known about a user, no reliable personal adaptations or recommendations can be made. This dissertation addresses the question of how personalization can be enabled and supported across system boundaries ("cross-system"), discussing mainly implicit personalization techniques but, to a limited extent, also explicit methodologies such as the semantic Context Passport. The dissertation thus treats an important research question of high practical relevance that has been resolved only incompletely and unsatisfactorily in the recent scientific literature on this topic. Automatic recommender systems using social-filtering techniques have been popularized since roughly the mid-1990s with the arrival of the first e-commerce wave, notably through projects such as Information Tapestry, GroupLens, and Firefly. In the late 1990s and at the beginning of this decade, the main focus of the research literature was on improved statistical procedures and advanced inference methodologies with which implicit observations can be mapped to concrete adaptation or recommendation actions. In recent years, the question of how personalization systems can be better adapted to the practical requirements of specific applications has come to the fore, in particular regarding the suitable adaptation and extension of existing techniques. It is within this framework that the present thesis positions itself
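    The learning-based route sketched above (learning dependencies between the profiles a user maintains on two systems, using the overlap of users active on both) can be illustrated with a toy regression example; the profile dimensions, synthetic data, and linear model below are assumptions for illustration, not the method developed in the thesis.

    ```python
    # Toy cross-system profile mapping: fit A -> B from overlap users, then
    # bootstrap the system-B profile of a user known only to system A.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 8))                        # 50 overlap users' profiles in system A
    true_map = rng.normal(size=(8, 5))
    B = A @ true_map + 0.1 * rng.normal(size=(50, 5))   # their profiles in system B (noisy)

    # learn the mapping by regularised least squares
    lam = 1e-2
    W = np.linalg.solve(A.T @ A + lam * np.eye(8), A.T @ B)

    new_user_a = rng.normal(size=(1, 8))                # user known only to system A
    predicted_b = new_user_a @ W                        # predicted system-B profile
    print(predicted_b.shape)                            # (1, 5)
    ```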

    Privacy-Preserving Crowdsourcing-Based Recommender Systems for E-Commerce & Health Services

    Our society lives in an age where the eagerness for information has resulted in problems such as infobesity, especially after the arrival of Web 2.0. In this context, automatic systems such as recommenders are increasing in relevance, since they help to distinguish noise from useful information. However, recommender systems such as Collaborative Filtering have several limitations, such as non-response and privacy. An important part of this thesis is devoted to the development of methodologies to cope with these limitations. In addition to the previously stated research topics, in this dissertation we also focus on the worldwide process of urbanisation that is taking place and the need for more sustainable and liveable cities. In this context, we focus on smart health (s-health) solutions and efficient wireless channel characterisation methodologies, in order to provide sustainable healthcare in the context of smart cities.

    Probabilistic Modeling and Inference for Obfuscated Network Attack Sequences

    Prevalent computing devices with networking capabilities have become critical network infrastructure for government, industry, academia, and everyday life. As their value rises, the motivation driving network attacks on this infrastructure has shifted from the pursuit of notoriety to the pursuit of profit or political gain, leading to network attacks at various scales. Facing diverse network attack strategies and overwhelming alerts, much work has been devoted to correlating observed malicious events with pre-defined scenarios, attempting to deduce attack plans based on expert models of how network attacks may transpire. We started the exploration of characterizing network attacks by investigating how temporal and spatial features of attack sequences can be used to describe different types of attack sources in a real data set. Attack sequence models were built from the real data set to describe different attack strategies. Based on the probabilistic attack sequence model, attack predictions were made to actively predict the next possible actions. Experiments with these attack predictions revealed that sophisticated attackers can employ a number of obfuscation techniques to confuse the alert correlation engine or classifier. Unfortunately, most existing work treats attack obfuscation by developing ad-hoc fixes to specific obfuscation techniques. To this end, we developed an attack modeling framework that enables a systematic analysis of obfuscations. The proposed framework represents network attack strategies as general finite-order Markov models and integrates them with different attack obfuscation models to form probabilistic graphical models. A set of algorithms is developed to infer the network attack strategies given the models and the observed sequences, which are likely to be obfuscated. The algorithms enable an efficient analysis of the impact of different obfuscation techniques and attack strategies by determining the expected classification accuracy of the obfuscated sequences. The algorithms are developed by integrating the recursion concept of dynamic programming with the Monte Carlo method. The primary contributions of this work include the development of the formal framework and the algorithms to evaluate the impact of attack obfuscations. Several knowledge-driven attack obfuscation models are developed and analyzed to demonstrate the impact of different types of commonly used obfuscation techniques. The framework and algorithms developed in this work can also be applied to contexts beyond network security. Any behavior sequences that might suffer from noise and require matching to pre-defined models can use this work to recover the most likely original sequence or to evaluate quantitatively the expected classification accuracy one can achieve in separating the sequences.
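    As a concrete illustration of the kind of inference described above, the sketch below models the true attack strategy as a first-order Markov chain and obfuscation as a noisy emission matrix, then recovers the most likely original action sequence from obfuscated observations with Viterbi decoding. The states, the toy matrices, and the use of plain Viterbi (rather than the dynamic-programming/Monte Carlo algorithms developed in the work) are illustrative assumptions.

    ```python
    # Recover a likely original attack sequence from obfuscated observations.
    import numpy as np

    states = ["scan", "exploit", "exfiltrate"]
    trans = np.array([[0.6, 0.35, 0.05],   # P(next true action | current true action)
                      [0.1, 0.5,  0.4 ],
                      [0.05, 0.15, 0.8 ]])
    emit = np.array([[0.7, 0.2, 0.1],      # P(observed alert | true action): obfuscation model
                     [0.2, 0.6, 0.2],
                     [0.1, 0.2, 0.7]])
    start = np.array([0.8, 0.15, 0.05])

    def viterbi(obs):
        """obs: list of observed alert indices; returns most likely true action sequence."""
        n, k = len(obs), len(states)
        logp = np.full((n, k), -np.inf)
        back = np.zeros((n, k), dtype=int)
        logp[0] = np.log(start) + np.log(emit[:, obs[0]])
        for t in range(1, n):
            scores = logp[t - 1][:, None] + np.log(trans) + np.log(emit[:, obs[t]])[None, :]
            back[t] = scores.argmax(axis=0)
            logp[t] = scores.max(axis=0)
        path = [int(logp[-1].argmax())]
        for t in range(n - 1, 0, -1):
            path.append(int(back[t][path[-1]]))
        return [states[s] for s in reversed(path)]

    print(viterbi([0, 1, 1, 2]))  # ['scan', 'exploit', 'exploit', 'exfiltrate']
    ```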