
    Systemization of Pluggable Transports for Censorship Resistance

    An increasing number of countries implement Internet censorship at different scales and for a variety of reasons. In particular, the link between the censored client and the entry point to the uncensored network is a frequent target of censorship, due to the ease with which a nation-state censor can control it. A number of censorship resistance systems have been developed to help circumvent blocking on this link; we refer to them as link circumvention systems (LCs). The variety and profusion of attack vectors available to a censor have led to an arms race and to rapid evolution of LCs. Despite their inherent complexity and the breadth of work in this area, there is no systematic way to evaluate link circumvention systems and compare them against each other. In this paper, we (i) sketch an attack model to comprehensively explore a censor's capabilities, (ii) present an abstract model of an LC, a system that helps a censored client communicate with a server over the Internet while resisting censorship, (iii) describe an evaluation stack that underscores a layered approach to evaluating LCs, and (iv) systemize and evaluate existing censorship resistance systems that provide link circumvention. We highlight open challenges in the evaluation and development of LCs and discuss possible mitigations.
    Comment: Content from this paper was published in Proceedings on Privacy Enhancing Technologies (PoPETS), Volume 2016, Issue 4 (July 2016) as "SoK: Making Sense of Censorship Resistance Systems" by Sheharbano Khattak, Tariq Elahi, Laurent Simon, Colleen M. Swanson, Steven J. Murdoch and Ian Goldberg (DOI 10.1515/popets-2016-0028).

    Seeking Anonymity in an Internet Panopticon

    Obtaining and maintaining anonymity on the Internet is challenging. The state of the art in deployed tools, such as Tor, uses onion routing (OR) to relay encrypted connections on a detour passing through randomly chosen relays scattered around the Internet. Unfortunately, OR is known to be vulnerable, at least in principle, to several classes of attacks for which no solution is known or believed to be forthcoming soon. Current approaches to anonymity also appear unable to offer accurate, principled measurement of the level or quality of anonymity a user might obtain. To address these shortcomings, we offer a high-level view of the Dissent project, the first systematic effort to build a practical anonymity system based purely on foundations that offer measurable and formally provable anonymity properties. Dissent builds on two key pre-existing primitives - verifiable shuffles and dining cryptographers - but for the first time shows how to scale such techniques to offer measurable anonymity guarantees to thousands of participants. Further, Dissent represents the first anonymity system designed from the ground up to incorporate a systematic countermeasure for each of the major classes of known vulnerabilities in existing approaches, including global traffic analysis, active attacks, and intersection attacks. Finally, because no anonymity protocol alone can address risks such as software exploits or accidental self-identification, we introduce WiNon, an experimental operating system architecture to harden the use of anonymity tools such as Tor and Dissent against such attacks.
    Comment: 8 pages, 10 figures.
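    To make the dining-cryptographers (DC-net) primitive concrete, here is a minimal sketch of a single DC-net round in Python. It is illustrative only: the pad setup, message length, and function names are assumptions, not Dissent's actual protocol or API; Dissent's contribution is precisely the machinery (verifiable shuffles, accountability) needed to scale and harden this basic round.

        import os

        MSG_LEN = 16  # bytes per round (illustrative assumption)

        def xor(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        def dc_round(n: int, sender: int, message: bytes) -> bytes:
            # Pairwise one-time pads; in practice these are derived from
            # a PRG seeded with Diffie-Hellman shared secrets.
            pads = {(i, j): os.urandom(MSG_LEN)
                    for i in range(n) for j in range(i + 1, n)}

            # Each participant publishes the XOR of all pads it shares;
            # the (anonymous) sender additionally XORs in its message.
            outputs = []
            for i in range(n):
                out = bytes(MSG_LEN)
                for j in range(n):
                    if j != i:
                        out = xor(out, pads[(min(i, j), max(i, j))])
                if i == sender:
                    out = xor(out, message)
                outputs.append(out)

            # XORing all outputs cancels every pad (each occurs twice),
            # leaving only the message, with no trace of who sent it.
            result = bytes(MSG_LEN)
            for out in outputs:
                result = xor(result, out)
            return result

        assert dc_round(5, sender=2, message=b"hello, dissent!!") == b"hello, dissent!!"

    An observer of all published outputs learns the message but not which of the n participants sent it; that unconditional sender anonymity is what Dissent scales up.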

    TARANET: Traffic-Analysis Resistant Anonymity at the NETwork layer

    Modern low-latency anonymity systems, whether constructed as overlays or implemented at the network layer, offer limited security guarantees against traffic analysis. High-latency anonymity systems, on the other hand, offer strong security guarantees, but at the cost of computational overhead and long delays that are excessive for interactive applications. We propose TARANET, an anonymity system that implements protection against traffic analysis at the network layer while limiting the incurred latency and overhead. In TARANET's setup phase, traffic analysis is thwarted by mixing. In the data transmission phase, end hosts and ASes coordinate to shape traffic into constant-rate transmission using packet splitting. Our prototype implementation shows that TARANET can forward anonymous traffic at over 50 Gbps using commodity hardware.
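    A minimal sketch of the constant-rate shaping idea in Python, under assumptions of our own: the cell size, rate, and dummy-cell handling below are illustrative, not TARANET's wire format.

        import time
        from collections import deque

        CELL_SIZE = 512   # bytes per cell (illustrative assumption)
        RATE_HZ = 100     # cells per second (illustrative assumption)

        queue: deque[bytes] = deque()

        def enqueue_packet(payload: bytes) -> None:
            # Split the packet into fixed-size cells and pad the last one,
            # so observed cell sizes leak nothing about packet lengths.
            for off in range(0, len(payload), CELL_SIZE):
                queue.append(payload[off:off + CELL_SIZE].ljust(CELL_SIZE, b"\x00"))

        def shaper_loop(send) -> None:
            # Emit exactly one cell per tick: a real cell if one is queued,
            # otherwise a dummy. With encryption making real and dummy cells
            # indistinguishable, the output rate stays constant regardless
            # of the application's actual traffic pattern.
            period = 1.0 / RATE_HZ
            while True:
                send(queue.popleft() if queue else b"\x00" * CELL_SIZE)
                time.sleep(period)  # e.g. run with shaper_loop(sock.send)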

    Towards Provably Invisible Network Flow Fingerprints

    Network traffic analysis reveals important information even when messages are encrypted. We consider active traffic analysis via flow fingerprinting by invisibly embedding information into the packet timings of flows. In particular, assume Alice wishes to embed fingerprints into the flows of a set of network input links, whose packet timings are modeled by Poisson processes, without being detected by a watchful adversary Willie. Bob, who receives the set of fingerprinted flows after they pass through the network, modeled as a collection of independent and parallel M/M/1 queues, wishes to extract Alice's embedded fingerprints to infer the connection between input and output links of the network. We consider two scenarios: 1) Alice embeds fingerprints in all of the flows; 2) Alice embeds fingerprints in each flow independently with probability p. Assuming that the flow rates are equal, we calculate the maximum number of flows in which Alice can invisibly embed fingerprints while having those fingerprints successfully decoded by Bob. Then, we extend the construction and analysis to the case where flow rates are distinct, and discuss the extension of the network model.
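    The Python sketch below conveys the flavor of timing-based fingerprinting on a Poisson flow. The encoding rule (stretch or shrink each inter-arrival time by a small factor per bit) and the crude threshold decoder are illustrative assumptions, not the paper's construction, which is designed to be provably invisible.

        import random

        RATE = 10.0      # packets per second, i.e. lambda (assumption)
        EPSILON = 0.2    # fractional timing perturbation per bit (assumption)

        def poisson_flow(n: int, rate: float = RATE) -> list[float]:
            # Exponential inter-arrivals yield a Poisson process.
            t, times = 0.0, []
            for _ in range(n):
                t += random.expovariate(rate)
                times.append(t)
            return times

        def gaps(times: list[float]) -> list[float]:
            return [b - a for a, b in zip([0.0] + times[:-1], times)]

        def embed(times: list[float], bits: list[int]) -> list[float]:
            # Alice nudges each inter-arrival: bit 1 stretches it, bit 0
            # shrinks it. Keeping EPSILON small keeps the flow close to
            # Poisson, which is what invisibility to Willie demands.
            out, t = [], 0.0
            for gap, bit in zip(gaps(times), bits):
                t += gap * ((1 + EPSILON) if bit else (1 - EPSILON))
                out.append(t)
            return out

        def extract(times: list[float]) -> list[int]:
            # Bob thresholds each inter-arrival against the empirical mean;
            # the M/M/1 queues en route make this a noisy channel.
            g = gaps(times)
            mean = sum(g) / len(g)
            return [1 if x > mean else 0 for x in g]

        flow = poisson_flow(200)
        marked = embed(flow, bits=[1, 0, 1, 1] * 50)
        recovered = extract(marked)   # noisy estimate of the embedded bits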

    The Flow Fingerprinting Game

    Linking two network flows that have the same source is essential in intrusion detection or in tracing anonymous connections. To improve the performance of this process, the flow can be modified (fingerprinted) to make it more distinguishable. However, an adversary located in the middle can modify the flow to impair the correlation by delaying the packets or introducing dummy traffic. We introduce a game-theoretic framework for this problem, which we use to derive the Nash equilibrium. Because obtaining the adversary's optimal delay distribution is intractable, we make some approximations. We study the concrete example where these delays follow a truncated Gaussian distribution, and compare the optimal strategies with other fingerprinting schemes. The results are useful for understanding the limits of flow correlation based on packet timings under an active attacker.
    Comment: Workshop on Information Forensics and Security.
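    As a hedged illustration of the adversary's move in this game, the Python sketch below delays each packet by a truncated Gaussian amount; the mean, standard deviation, and truncation bound are assumptions for illustration, not the paper's derived optimum.

        import random

        SIGMA = 0.05      # std dev of the delay, seconds (assumption)
        MAX_DELAY = 0.2   # truncation bound: delays lie in [0, MAX_DELAY]

        def truncated_gaussian_delay() -> float:
            # Rejection-sample a Gaussian restricted to [0, MAX_DELAY].
            while True:
                d = random.gauss(MAX_DELAY / 2, SIGMA)
                if 0.0 <= d <= MAX_DELAY:
                    return d

        def perturb(times: list[float]) -> list[float]:
            # Delay every packet independently, then re-sort the observed
            # timestamps: a network adversary can delay packets (possibly
            # reordering them) but can never send a packet early.
            return sorted(t + truncated_gaussian_delay() for t in times)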

    Neyman-Pearson Decision in Traffic Analysis

    The increase of encrypted traffic on the Internet may become a problem for network-security applications such as intrusion-detection systems, and may interfere with forensic investigations. This fact has increased the awareness of traffic analysis, i.e., inferring information from communication patterns instead of content. Deciding correctly whether a known network flow is the same as, or part of, an observed one can be extremely useful for several network-security applications, such as intrusion detection and tracing anonymous connections. In many cases, the flows of interest are relayed through many nodes that re-encrypt the flow, making traffic analysis the only possible solution. There exist two well-known techniques to solve this problem: passive traffic analysis and flow watermarking. The former is undetectable but generally performs much worse than watermarking, whereas the latter can be detected and modified in such a way that the watermark is destroyed. In the first part of this dissertation we design techniques where the traffic analyst (TA) is one end of an anonymous communication and wants to deanonymize the other host, under the premise that the arrival times of the TA's packets/requests can be predicted with high confidence. This, together with an optimal detector based on the Neyman-Pearson lemma, allows the TA to deanonymize the other host with high confidence even from short flows. We start by studying the forensic problem of leaving identifiable traces in the log of a Tor hidden service, where the predictor comes from the HTTP header. Afterwards, we propose two different methods for locating Tor hidden services: the first is based on the arrival time of the request cell, and the second uses the number of cells in certain time intervals. In both methods, the predictor is based on the round-trip time and, in some cases, on the position of the cell inside its burst, so the TA does not need access to the decrypted flow. The second part of this dissertation deals with scenarios where an accurate predictor is not feasible for the TA. There, the traffic-analysis technique is based on correlating the inter-packet delays (IPDs) using a Neyman-Pearson detector, and can be used either as a passive analysis or as a watermarking technique. We first make this algorithm robust against adversary models that add chaff traffic, split the flows, or add random delays. Afterwards, we study this scenario from a game-theoretic point of view, analyzing two different games: the first deals with the identification of independent flows, while the second decides whether a flow has been watermarked/fingerprinted or not.
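    A minimal Python sketch of a Neyman-Pearson detector on IPDs, in the spirit of the second part of the dissertation. The Laplacian jitter model under H1 and the exponential IPD model under H0 are illustrative assumptions, not the dissertation's exact models.

        import math

        JITTER_B = 0.01   # Laplacian jitter scale under H1, seconds (assumption)
        RATE = 10.0       # exponential IPD rate under H0 (assumption)

        def ipds(times: list[float]) -> list[float]:
            return [b - a for a, b in zip(times, times[1:])]

        def log_likelihood_ratio(flow_a: list[float], flow_b: list[float]) -> float:
            # H1: flow_b's IPDs equal flow_a's plus Laplacian jitter (linked).
            # H0: flow_b's IPDs are exponential, independent of flow_a.
            llr = 0.0
            for da, db in zip(ipds(flow_a), ipds(flow_b)):
                p1 = math.exp(-abs(db - da) / JITTER_B) / (2 * JITTER_B)
                p0 = RATE * math.exp(-RATE * max(db, 0.0))
                llr += math.log(p1 + 1e-300) - math.log(p0 + 1e-300)
            return llr

        def linked(flow_a, flow_b, threshold: float = 0.0) -> bool:
            # By the Neyman-Pearson lemma, thresholding the LLR is the most
            # powerful test at a given false-positive rate; the threshold is
            # chosen to meet the target rate.
            return log_likelihood_ratio(flow_a, flow_b) > threshold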

    Web Tracking: Mechanisms, Implications, and Defenses

    This article surveys the existing literature on the methods currently used by web services to track users online, as well as their purposes, implications, and possible user defenses. A significant majority of the reviewed articles and web resources are from the years 2012-2014. Privacy seems to be the Achilles' heel of today's web. Web services make continuous efforts to obtain as much information as they can about the things we search for, the sites we visit, the people we contact, and the products we buy. Tracking is usually performed for commercial purposes. We present five main groups of methods used for user tracking, based on sessions, client storage, client cache, fingerprinting, or other approaches. A special focus is placed on mechanisms that use web caches, operational caches, and fingerprinting, as they usually employ particularly creative methodologies. We also show how users can be identified on the web and associated with their real names, e-mail addresses, phone numbers, or even street addresses. We show why tracking is used and its possible implications for users (price discrimination, assessing financial credibility, determining insurance coverage, government surveillance, and identity theft). For each of the tracking methods, we present possible defenses. Apart from describing the methods and tools for keeping personal data from being tracked, we also present several tools that were used for research purposes: their main goal is to discover how and by which entity users are being tracked on their desktop computers or smartphones, provide this information to the users, and visualize it in an accessible and easy-to-follow way. Finally, we present currently proposed future approaches to tracking and show that they can potentially pose significant threats to users' privacy.
    Comment: 29 pages, 212 references.
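    As a concrete taste of the fingerprinting group of methods, the Python sketch below hashes a few quasi-stable browser attributes into a stateless identifier. The attribute set is an illustrative assumption; real trackers combine many more signals (canvas rendering, fonts, audio, WebGL).

        import hashlib

        def browser_fingerprint(attrs: dict[str, str]) -> str:
            # Concatenate attributes in a fixed order so the same browser
            # always hashes to the same identifier, with no cookie stored.
            canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
            return hashlib.sha256(canonical.encode()).hexdigest()[:16]

        fp = browser_fingerprint({
            "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
            "screen": "1920x1080x24",
            "timezone": "UTC+1",
            "language": "en-US",
            "fonts": "Arial,DejaVu Sans,Liberation Serif",
        })
        # fp stays stable across visits as long as the attributes do,
        # which is why clearing cookies does not defeat this method.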