
    Systemization of Pluggable Transports for Censorship Resistance

    An increasing number of countries implement Internet censorship at different scales and for a variety of reasons. In particular, the link between the censored client and the entry point to the uncensored network is a frequent target of censorship due to the ease with which a nation-state censor can control it. A number of censorship resistance systems have been developed to help circumvent blocking on this link, which we refer to as link circumvention systems (LCs). The variety and profusion of attack vectors available to a censor have led to an arms race, driving rapid evolution of LCs. Despite their inherent complexity and the breadth of work in this area, there is no systematic way to evaluate link circumvention systems and compare them against each other. In this paper, we (i) sketch an attack model to comprehensively explore a censor's capabilities, (ii) present an abstract model of an LC, a system that helps a censored client communicate with a server over the Internet while resisting censorship, (iii) describe an evaluation stack that underscores a layered approach to evaluate LCs, and (iv) systemize and evaluate existing censorship resistance systems that provide link circumvention. We highlight open challenges in the evaluation and development of LCs and discuss possible mitigations. Comment: Content from this paper was published in Proceedings on Privacy Enhancing Technologies (PoPETS), Volume 2016, Issue 4 (July 2016) as "SoK: Making Sense of Censorship Resistance Systems" by Sheharbano Khattak, Tariq Elahi, Laurent Simon, Colleen M. Swanson, Steven J. Murdoch and Ian Goldberg (DOI 10.1515/popets-2016-0028).

    A survey of RFID privacy approaches

    A bewildering number of proposals have offered solutions to the privacy problems inherent in RFID communication. This article gives an overview of the currently discussed approaches and their attributes.

    Efficient, Effective, and Realistic Website Fingerprinting Mitigation

    Website fingerprinting attacks have been shown to be able to predict the website visited even if the network connection is encrypted and anonymized. These attacks have achieved accuracies as high as 92%. Mitigations to these attacks include using cover/decoy network traffic to add noise, padding to ensure all network packets are the same size, and introducing network delays to confuse an adversary. Although these mitigations have been shown to be effective, reducing the accuracy to 10%, the overhead is high: the latency overhead is above 100% and the bandwidth overhead is at least 30%. We introduce a new realistic cover traffic algorithm, based on a user's previous network traffic, to mitigate website fingerprinting attacks. In simulations, our algorithm reduces the accuracy of attacks to 14% with zero latency overhead and about 20% bandwidth overhead. In real-world experiments, our algorithm reduces the accuracy of attacks to 16% with only 20% bandwidth overhead.
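    The core idea of history-based cover traffic can be illustrated with a small sketch. This is not the paper's actual algorithm; it is a generic illustration, and the function name, `cover_ratio` parameter, and packet-size representation are all assumptions made here. Decoys are injected alongside real packets (never delaying them), which is why this style of defense can have zero latency overhead while spending a bounded bandwidth budget.

```python
import random

def add_cover_traffic(real_trace, history, cover_ratio=0.2, seed=0):
    """Interleave decoy packets drawn from the user's own past traffic.

    real_trace:  list of packet sizes (bytes) for the genuine page load.
    history:     packet sizes observed in previous sessions, used as the
                 source of realistic-looking cover packets.
    cover_ratio: target bandwidth overhead (0.2 ~ the 20% in the abstract).
    """
    rng = random.Random(seed)
    budget = cover_ratio * sum(real_trace)   # bytes of cover we may spend
    padded, spent = [], 0
    for pkt in real_trace:
        padded.append(("real", pkt))         # real packets are never delayed
        # Probabilistically inject a decoy sampled from past traffic.
        if spent < budget and rng.random() < cover_ratio:
            decoy = rng.choice(history)
            padded.append(("cover", decoy))
            spent += decoy
    return padded, spent / sum(real_trace)   # padded trace + achieved overhead
```

    Because decoys are sampled from the user's own history rather than a fixed distribution, the cover packets resemble traffic the user could plausibly generate, which is what makes the defense "realistic."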

    Privacy in an Ambient World

    Privacy is a prime concern in today's information society. To protect the privacy of individuals, enterprises must follow certain privacy practices while collecting or processing personal data. In this chapter we look at the setting where an enterprise collects private data on its website, processes it inside the enterprise, and shares it with partner enterprises. In particular, we analyse three different privacy systems that can be used in the different stages of this lifecycle. One of them is the recently introduced Audit Logic, which can be used to keep data private when it travels across enterprise boundaries. We conclude with an analysis of the features and shortcomings of these systems.

    On the Privacy Practices of Just Plain Sites

    In addition to visiting high profile sites such as Facebook and Google, web users often visit more modest sites, such as those operated by bloggers, or by local organizations such as schools. Such sites, which we call "Just Plain Sites" (JPSs), are likely to inadvertently represent greater privacy risks than high profile sites by virtue of being unable to afford privacy expertise. To assess the prevalence of the privacy risks to which JPSs may inadvertently be exposing their visitors, we analyzed a number of easily observed privacy practices of such sites. We found that many JPSs collect a great deal of information from their visitors, share a great deal of information about their visitors with third parties, permit a great deal of tracking of their visitors, and use deprecated or unsafe security practices. Our goal in this work is not to scold JPS operators, but to raise awareness of these facts among both JPS operators and visitors, possibly encouraging the operators of such sites to take greater care in their implementations, and visitors to take greater care in how, when, and what they share. Comment: 10 pages, 7 figures, 6 tables, 5 authors, and a partridge in a pear tree.

    Nonadaptive Mastermind Algorithms for String and Vector Databases, with Case Studies

    In this paper, we study sparsity-exploiting Mastermind algorithms for attacking the privacy of an entire database of character strings or vectors, such as DNA strings, movie ratings, or social network friendship data. Based on reductions to nonadaptive group testing, our methods are able to take advantage of minimal amounts of privacy leakage, such as contained in a single bit that indicates if two people in a medical database have any common genetic mutations, or if two people have any common friends in an online social network. We analyze our Mastermind attack algorithms using theoretical characterizations that provide sublinear bounds on the number of queries needed to clone the database, as well as experimental tests on genomic information, collaborative filtering data, and online social networks. By taking advantage of the generally sparse nature of these real-world databases and modulating a parameter that controls query sparsity, we demonstrate that relatively few nonadaptive queries are needed to recover a large majority of each database.
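    To make the single-bit-leak idea concrete, the sketch below shows generic nonadaptive group testing, which the abstract names as the underlying reduction. This is not the paper's own attack: the function name, the fixed pool density, and the use of the classic COMP elimination rule are assumptions made here for illustration. Each oracle call leaks exactly one bit ("does the hidden sparse vector intersect this pool?"), all pools are fixed in advance (nonadaptive), and negative answers rule out every index in their pool.

```python
import random

def recover_sparse_support(oracle, n, num_tests, seed=0):
    """Nonadaptive group-testing recovery of a sparse 0/1 vector's support.

    oracle(pool) answers one bit: does the hidden vector have a 1 at
    any index in `pool`?  All pools are chosen up front (nonadaptive),
    then decoded with the COMP rule: any index appearing in a pool
    answered 0 must itself be 0.
    """
    rng = random.Random(seed)
    pools = [{i for i in range(n) if rng.random() < 0.1}
             for _ in range(num_tests)]
    answers = [oracle(pool) for pool in pools]   # one-bit leaks
    candidates = set(range(n))
    for pool, ans in zip(pools, answers):
        if not ans:                # negative pool: eliminate its members
            candidates -= pool
    return candidates              # a superset of the true support
```

    When the hidden vector is sparse, most pools come back negative and eliminate many indices at once, which is why sublinearly many queries can suffice.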

    ESPOON: Enforcing Encrypted Security Policies in Outsourced Environments

    The enforcement of security policies in outsourced environments is still an open challenge for policy-based systems. On the one hand, taking the appropriate security decision requires access to the policies. However, if such access is allowed in an untrusted environment then confidential information might be leaked by the policies. Current solutions are based on cryptographic operations that embed security policies with the security mechanism. Therefore, the enforcement of such policies is performed by allowing the authorised parties to access the appropriate keys. We believe that such solutions are far too rigid because they strictly intertwine authorisation policies with the enforcing mechanism. In this paper, we want to address the issue of enforcing security policies in an untrusted environment while protecting the policy confidentiality. Our solution, ESPOON, aims to provide a clear separation between security policies and the enforcement mechanism. However, the enforcement mechanism should learn as little as possible about both the policies and the requester attributes. Comment: The final version of this paper has been published at ARES 2011.

    Machine-Readable Privacy Certificates for Services

    Privacy-aware processing of personal data on the web of services requires managing a number of issues arising both from the technical and the legal domain. Several approaches have been proposed for matching privacy requirements (on the client's side) and privacy guarantees (on the service provider's side). Still, the assurance of effective data protection (when possible) relies on substantial human effort and exposes organizations to significant (non-)compliance risks. In this paper we put forward the idea that a privacy certification scheme producing and managing machine-readable artifacts in the form of privacy certificates can play an important role towards the solution of this problem. Digital privacy certificates represent the reasons why a privacy property holds for a service and describe the privacy measures supporting it. Also, privacy certificates can be used to automatically select services whose certificates match the client policies (privacy requirements). Our proposal relies on an evolution of the conceptual model developed in the Assert4Soa project and on a certificate format specifically tailored to represent privacy properties. To validate our approach, we present a worked-out instance showing how the privacy property Retention-based unlinkability can be certified for a banking financial service. Comment: 20 pages, 6 figures.