5 research outputs found

    Towards a Conceptual Model and Reasoning Structure for Insider Threat Detection

    The insider threat faced by corporations and governments today is a real and significant problem, and one that has become increasingly difficult to combat. From a technology standpoint, traditional protective measures such as intrusion detection systems are largely inadequate given the nature of the ‘insider’ and their legitimate access to prized organisational data and assets. It is therefore necessary to research and develop more sophisticated approaches for accurately recognising, detecting and responding to insider threats. One way in which this may be achieved is by understanding the complete picture of why an insider may initiate an attack, and the indicative elements along the attack chain. This includes behavioural and psychological observations about a potential malicious insider, in addition to technological monitoring and profiling techniques. In this paper, we propose a framework for modelling the insider-threat problem that goes beyond traditional technological observations and incorporates a more complete view of insider threats, common precursors, and human actions and behaviours. We present a conceptual model for insider threat and a reasoning structure that allows an analyst to draw hypotheses about a potential insider threat based on measurable states from real-world observations.
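    As a minimal sketch of how such a reasoning structure might be encoded (the indicators and weights below are hypothetical and not taken from the paper), measurable states drawn from technological, behavioural and psychological observations can be combined into a hypothesis score an analyst could weigh up:

```python
from dataclasses import dataclass

# Hypothetical indicators along an attack chain, mixing technological
# observations with behavioural/psychological precursors; the paper's
# conceptual model defines its own elements and relationships.
INDICATOR_WEIGHTS = {
    "after_hours_logins": 0.20,    # technological observation
    "bulk_file_downloads": 0.40,   # technological observation
    "expressed_grievance": 0.25,   # psychological precursor
    "policy_violations": 0.15,     # behavioural precursor
}

@dataclass
class Observation:
    indicator: str
    observed: bool

def threat_hypothesis_score(observations: list[Observation]) -> float:
    """Combine measurable states into a single hypothesis score in [0, 1]."""
    return sum(INDICATOR_WEIGHTS[o.indicator]
               for o in observations
               if o.observed and o.indicator in INDICATOR_WEIGHTS)

obs = [Observation("after_hours_logins", True),
       Observation("bulk_file_downloads", True),
       Observation("expressed_grievance", False),
       Observation("policy_violations", True)]
print(f"hypothesis score: {threat_hypothesis_score(obs):.2f}")  # 0.75
```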

    Entropy-based privacy against profiling of user mobility

    Location-based services (LBSs) are ubiquitous on mobile phones nowadays, but their use poses an evident privacy risk. The locations accompanying LBS queries can be exploited by the LBS provider to build a profile of the locations a user visits, which might disclose sensitive data, such as work or home locations. The classic concept of entropy is widely used to evaluate privacy in these scenarios, where the information is represented as a sequence of independent samples of categorized data. However, since LBS queries may be sent very frequently, location profiles can be refined by adding temporal dependencies, thus becoming mobility profiles, in which location samples are no longer independent and might disclose the user's mobility patterns. Once the time dimension is factored in, the classic entropy concept falls short of evaluating the real privacy level, which also depends on the time component. We therefore propose extending the entropy-based privacy metric to the entropy rate in order to evaluate mobility profiles. Two perturbative mechanisms are then considered to preserve location and mobility profiles under gradual utility constraints. We use the proposed privacy metric, comparing it to classic ones, to evaluate both synthetic and real mobility profiles when the proposed perturbative methods are applied. The results demonstrate the usefulness of the proposed metric for mobility profiles and the need to tailor the perturbative methods to the features of mobility profiles in order to improve privacy without completely losing utility.

    This work is partially supported by the Spanish Ministry of Science and Innovation through the CONSEQUENCE (TEC2010-20572-C02-01/02) and EMRISCO (TEC2013-47665-C4-4-R) projects. The work of Das was partially supported by NSF Grants IIS-1404673, CNS-1355505, CNS-1404677 and DGE-1433659. Part of the work by Rodriguez-Carrion was conducted while she was visiting the Computer Science Department at Missouri University of Science and Technology in 2013–2014.
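    The gap between classic entropy and the entropy rate can be seen on a synthetic trace; this is a minimal sketch of the underlying idea, not the paper's implementation, and the trace is invented. A perfectly regular commute has maximal entropy over its visited-location histogram, yet a near-zero entropy rate under a first-order Markov model, because the next location is fully predictable.

```python
import math
from collections import Counter, defaultdict

def entropy(trace):
    """Classic Shannon entropy of the visited-location histogram (order ignored)."""
    counts = Counter(trace)
    n = len(trace)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def markov_entropy_rate(trace):
    """Entropy rate of the trace under an empirical first-order Markov model."""
    transitions = defaultdict(Counter)
    for a, b in zip(trace, trace[1:]):
        transitions[a][b] += 1
    n = len(trace) - 1
    rate = 0.0
    for a, nexts in transitions.items():
        total = sum(nexts.values())
        p_a = total / n                      # empirical weight of state a
        for count in nexts.values():
            p = count / total                # transition probability out of a
            rate -= p_a * p * math.log2(p)
    return rate

# A highly regular commute: three distinct places, so the classic entropy is
# maximal (log2(3) ~ 1.585 bits), but the entropy rate is 0 because the next
# location is always determined by the current one.
trace = ["home", "work", "gym"] * 50
print(f"entropy:      {entropy(trace):.3f} bits")
print(f"entropy rate: {markov_entropy_rate(trace):.3f} bits")
```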

    The Proceedings of the European Conference on Social Media ECSM 2014, University of Brighton

    Privacy protection of user profiles in personalized information systems

    In recent times we are witnessing the emergence of a wide variety of information systems that tailor their information-exchange functionality to the specific interests of their users. Most of these personalized information systems capitalize on, or lend themselves to, the construction of user profiles, either directly declared by a user or inferred from past activity. The ability of these systems to profile users is what enables such intelligent functionality, but at the same time it is the source of serious privacy concerns. Although a broad range of privacy-enhancing technologies exists to mitigate many of these concerns, their use is far from widespread. The main reason is a certain ambiguity about these technologies and their effectiveness in terms of privacy protection. In addition, since these technologies normally come at the expense of system functionality and utility, it is challenging to assess whether the gain in privacy compensates for the loss in utility. Assessing the privacy provided by a privacy-enhancing technology is thus crucial to determine its overall benefit, to compare its effectiveness with other technologies, and ultimately to optimize it in terms of the privacy-utility trade-off it poses. Considerable effort has consequently been devoted to investigating both privacy and utility metrics. However, most of these metrics are specific to concrete systems and adversary models, and hence are difficult to generalize or translate to other contexts. Moreover, in applications involving user profiles there are only a few proposals for evaluating privacy, and those that exist either are not appropriately justified or fail to justify their choice of metric.

    The first part of this thesis approaches the fundamental problem of quantifying user privacy. First, we present a theoretical framework for privacy-preserving systems, endowed with a unifying view of privacy in terms of the estimation error incurred by an attacker who aims to disclose the private information that the system is designed to conceal. Our theoretical analysis shows that numerous privacy metrics emerging from a broad spectrum of applications are bijectively related to this estimation error, which permits interpreting and comparing these metrics from a common perspective. Second, we tackle the issue of measuring privacy in the compelling application of personalized information systems. Specifically, we propose two information-theoretic quantities as measures of the privacy of user profiles, and justify these metrics by building on Jaynes' rationale behind entropy-maximization methods and on fundamental results from the method of types and hypothesis testing.

    Equipped with quantifiable measures of privacy and utility, the second part of this thesis investigates privacy-enhancing, data-perturbative mechanisms and architectures for two important classes of personalized information systems. In particular, we study the elimination of tags in semantic-Web applications, and the combination of the forgery and the suppression of ratings in personalized recommendation systems. We design these mechanisms to achieve the optimal privacy-utility trade-off, in the sense of maximizing privacy for a desired utility, or vice versa, proceeding in a systematic fashion by drawing upon the methodology of multiobjective optimization. Our theoretical analysis finds a closed-form solution to the problem of optimal tag suppression, and to the problem of optimal forgery and suppression of ratings. In addition, we provide an extensive theoretical characterization of the trade-off between the contrasting aspects of privacy and utility. Experimental results in real-world applications show the effectiveness of our mechanisms in terms of privacy protection, system functionality and data utility.
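    As an illustration of an information-theoretic privacy measure of this kind, the sketch below computes the Kullback-Leibler divergence between a user's apparent profile and the population's average profile, which quantifies how much the user stands out; whether this matches the thesis' exact formulation is an assumption, and the profiles are invented for the example. Perturbation such as tag suppression pulls the apparent profile toward the population's, reducing the divergence.

```python
import math

def kl_divergence(p, q):
    """D(p || q) in bits, for profiles given as distributions over categories."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical profiles over four interest categories (not taken from the thesis).
population = [0.25, 0.25, 0.25, 0.25]
user       = [0.70, 0.10, 0.10, 0.10]

# Suppressing distinctive tags moves the apparent profile toward the
# population's, lowering the divergence and hence the profiling risk.
suppressed = [0.40, 0.20, 0.20, 0.20]

print(f"risk before suppression: {kl_divergence(user, population):.3f} bits")        # ~0.643
print(f"risk after suppression:  {kl_divergence(suppressed, population):.3f} bits")  # ~0.078
```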