4,605 research outputs found

    Content and popularity analysis of Tor hidden services

    Tor hidden services allow running Internet services while protecting the location of the servers. Their main purpose is to enable freedom of speech even in situations in which powerful adversaries try to suppress it. However, providing location privacy and client anonymity also makes Tor hidden services an attractive platform for every imaginable kind of shady service. The ease with which Tor hidden services can be set up has spurred huge growth in anonymously provided Internet services of both types. In this paper we analyse the landscape of Tor hidden services. We studied Tor hidden services after collecting 39,824 hidden service descriptors on 4 February 2013 by exploiting protocol and implementation flaws in Tor: we scanned them for open ports and, in the case of HTTP services, analysed and classified their content. We also estimated the popularity of hidden services by looking at the rate of client requests for hidden service descriptors. We found that while the content of Tor hidden services is rather varied, the most popular hidden services are related to botnets.
    Comment: 6 pages, 3 figures, 2 tables
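The port-scanning step described above reaches hidden services through Tor's SOCKS5 proxy, which resolves .onion names itself when the client uses domain-name addressing. A minimal sketch of such a check, assuming a local Tor daemon listening on the default 127.0.0.1:9050 (the descriptor harvesting itself relied on protocol flaws and is not reproduced here):

```python
import socket
import struct

def socks5_connect_request(host: str, port: int) -> bytes:
    """Build a SOCKS5 CONNECT request with domain-name (ATYP=0x03)
    addressing, so Tor resolves the .onion name internally."""
    name = host.encode("ascii")
    return b"\x05\x01\x00\x03" + bytes([len(name)]) + name + struct.pack(">H", port)

def is_port_open(onion: str, port: int,
                 proxy=("127.0.0.1", 9050), timeout=60) -> bool:
    """Return True if the hidden service accepts a TCP connection on `port`."""
    with socket.create_connection(proxy, timeout=timeout) as s:
        s.sendall(b"\x05\x01\x00")            # greeting: one method, no auth
        if s.recv(2) != b"\x05\x00":
            raise ConnectionError("SOCKS5 handshake failed")
        s.sendall(socks5_connect_request(onion, port))
        reply = s.recv(10)
        return len(reply) >= 2 and reply[1] == 0x00  # 0x00 = request granted
```

A scan would then call `is_port_open(descriptor_address, 80)` for each harvested address; anything beyond this plumbing (timeout policy, rate limiting) is an assumption, not the paper's method.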

    Measuring and mitigating AS-level adversaries against Tor

    The popularity of Tor as an anonymity system has made it a popular target for a variety of attacks. We focus on traffic correlation attacks, which are no longer solely in the realm of academic research given recent revelations that the NSA and GCHQ are actively working to implement them in practice. Our first contribution is an empirical study that gives a high-fidelity snapshot of the threat of traffic correlation attacks in the wild. We find that up to 40% of all circuits created by Tor are vulnerable to traffic correlation attacks by Autonomous System (AS)-level adversaries, 42% by colluding AS-level adversaries, and 85% by state-level adversaries. In addition, we find that in some regions (notably, China and Iran) there are many cases where over 95% of all possible circuits are vulnerable to correlation attacks, emphasizing the need for AS-aware relay selection. To mitigate the threat of such attacks, we build Astoria, an AS-aware Tor client. Astoria leverages recent developments in network measurement to perform path prediction and intelligent relay selection. Astoria reduces the number of vulnerable circuits to 2% against AS-level adversaries, under 5% against colluding AS-level adversaries, and 25% against state-level adversaries. In addition, Astoria load-balances across the Tor network so as not to overload any set of relays.
    Comment: Appearing at NDSS 201
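The core of AS-aware relay selection is avoiding any circuit where one AS can observe both the client-to-entry path and the exit-to-destination path, since that AS could correlate the two traffic streams. A sketch of that selection rule, assuming the predicted AS-level paths are already available as sets (Astoria derives them from network measurement; the inputs and names below are hypothetical):

```python
def vulnerable(client_entry_ases: set, exit_dest_ases: set) -> bool:
    # A circuit is correlation-vulnerable if any single AS sits on
    # both the entry-side and exit-side paths.
    return bool(client_entry_ases & exit_dest_ases)

def choose_circuit(candidates, as_paths):
    """Pick the first (entry, exit) relay pair with no shared on-path AS.

    candidates: iterable of (entry, exit) relay pairs
    as_paths:   maps ('client', entry) and (exit, 'dest') keys to AS sets
    """
    safe = [(e, x) for e, x in candidates
            if not vulnerable(as_paths[("client", e)], as_paths[(x, "dest")])]
    # When no safe pair exists, Astoria instead minimises risk; that
    # fallback is omitted here.
    return safe[0] if safe else None
```

This is only the decision rule; the hard part Astoria contributes, predicting the AS paths accurately and load-balancing across safe pairs, is not captured in this sketch.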

    An Empirical Study of the I2P Anonymity Network and its Censorship Resistance

    Tor and I2P are well-known anonymity networks used by many individuals to protect their online privacy and anonymity. Tor's centralized directory services facilitate the understanding of the Tor network, as well as the measurement and visualization of its structure through the Tor Metrics project. In contrast, I2P does not rely on centralized directory servers, and thus obtaining a complete view of the network is challenging. In this work, we conduct an empirical study of the I2P network, in which we measure properties including population, churn rate, router type, and the geographic distribution of I2P peers. We find that there are currently around 32K active I2P peers in the network on a daily basis. Of these peers, 14K are located behind NAT or firewalls. Using the collected network data, we examine the blocking resistance of I2P against a censor that wants to prevent access to I2P using address-based blocking techniques. Despite the decentralized characteristics of I2P, we discover that a censor can block more than 95% of peer IP addresses known by a stable I2P client by operating only 10 routers in the network. This amounts to severe network impairment: a blocking rate of more than 70% is enough to cause significant latency in web browsing activities, while blocking more than 90% of peer IP addresses can make the network unusable. Finally, we discuss the security consequences of the network being blocked, and directions for potential approaches to make I2P more resistant to blocking.
    Comment: 14 pages, to appear in the 2018 Internet Measurement Conference (IMC'18)
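The address-harvesting threat above can be illustrated with a toy simulation: each censor-run router stands in for a participant that observes some random fraction of the peer population, and the censor blocks the union of everything its routers saw. All parameters below are illustrative placeholders, not the paper's measurements:

```python
import random

def censor_coverage(n_peers=32000, n_routers=10,
                    seen_per_router=12000, seed=0) -> float:
    """Fraction of the peer population a censor can enumerate (and
    hence block) by operating n_routers, each observing a random
    subset of peers. Purely a back-of-envelope model."""
    rng = random.Random(seed)
    peers = range(n_peers)
    blocked = set()
    for _ in range(n_routers):
        # Each router learns a distinct random sample of peer addresses.
        blocked.update(rng.sample(peers, seen_per_router))
    return len(blocked) / n_peers
```

Even this crude model shows why decentralization alone is weak protection: coverage of the union grows quickly with the number of routers, approaching the >95% enumeration the paper observed with only 10 routers.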

    Privacy Analysis of Online and Offline Systems

    How can we protect people's privacy when our lives are bound together with smart devices, both online and offline? For offline systems like smartphones, we often rely on a passcode to prevent others from accessing our personal data. Shoulder-surfing attacks in which humans try to predict the passcode have been shown to be inaccurate. We therefore propose an automated algorithm that accurately predicts the passcode a victim enters on her smartphone from a video recording. Our algorithm predicts over 92% of entered numbers in fewer than 75 seconds, with training performed only once.
    For online systems like web browsing, anonymous communication networks like Tor encrypt traffic to reduce the risk of losing our privacy. Each Tor client telescopically builds a circuit by choosing three Tor relays and then uses that circuit to connect to a server. The Tor relay selection algorithm ensures that no two relays with the same /16 IP address or Autonomous System (AS) are chosen. Our objective is to determine the popularity of Tor relays when building circuits. With over 44 vantage points and over 145,000 circuits built, we found that some Tor relays are chosen more often than others. Although a completely balanced selection algorithm is not possible, analysis of our dataset shows that some Tor relays are over 3 times more likely to be chosen than others. An adversary could potentially eavesdrop on or correlate more Tor traffic.
    Furthermore, website fingerprinting (WF) has been shown to achieve an accuracy of over 90% when Tor is used as the anonymity network. The common assumption in previous work is that a victim visits one website at a time and that the adversary has access to the complete network trace of that visit. Our main concern about website fingerprinting is its practicality: a victim may visit another website in the middle of visiting the first (overlapping visits), or an adversary may capture only an incomplete network trace. When two website visits overlap, website fingerprinting accuracy falls dramatically. Using our proposed "sectioning" algorithm, the accuracy of predicting the website in overlapping visits improves from 22.80% to 70%. When part of the network trace is missing (either the beginning or the end), the accuracy when using our sectioning algorithm increases from 20% to over 60%.
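The relay-family constraint mentioned above (no two relays in a circuit sharing a /16 network or an AS) can be sketched as a simple pairwise check. The AS numbers below are hypothetical inputs, since in practice they come from routing data rather than the relay descriptors themselves:

```python
import ipaddress

def same_family(relay_a, relay_b) -> bool:
    """relay_* are (ipv4_string, asn) pairs. Two relays are in the
    same family if they share a /16 prefix or an AS number."""
    net_a = ipaddress.ip_network(relay_a[0] + "/16", strict=False)
    net_b = ipaddress.ip_network(relay_b[0] + "/16", strict=False)
    return net_a == net_b or relay_a[1] == relay_b[1]

def valid_circuit(relays) -> bool:
    """True if no two relays in the candidate circuit share a family."""
    return all(not same_family(a, b)
               for i, a in enumerate(relays) for b in relays[i + 1:])
```

Note that even with this constraint enforced, the selection bias measured above (some relays chosen over 3x more often) persists, because relay choice is weighted by bandwidth rather than uniform.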