2007 Circumvention Landscape Report: Methods, Uses, and Tools
As the Internet has exploded over the past fifteen years, recently reaching over a billion users, dozens of national governments from China to Saudi Arabia have tried to control the network by filtering out content they deem objectionable for any of a number of reasons. A wide variety of projects have developed tools that can be used to circumvent this filtering, allowing people in filtered countries to reach otherwise blocked content. In this report, we describe the mechanisms of filtering and circumvention and evaluate ten projects that develop circumvention tools: Anonymizer, Ultrareach, DynaWeb Freegate, Circumventor/CGIProxy, Psiphon, Tor, JAP, Coral, and Hamachi. We evaluated these tools in 2007 -- using both tests from within filtered countries and tests within a lab environment -- for their utility, usability, security, promotion, sustainability, and openness. We find that all of the tools use the same basic mechanisms of proxying and encryption but that they differ in their models of hosting proxies: some tools use centrally hosted proxies, others use peer-hosted proxies, and still others use re-routing methods that combine the two. We find that, in general, the tools work in the sense that they allow users to access pages that are otherwise blocked by filtering countries, but that the performance of the tools is generally poor and that many tools have significant, unreported security vulnerabilities.
The report was completed in 2007 and released to a group of private sponsors. Many of the findings of the report are now out of date, but we present them now, as is, because we think that the broad conclusions of the report about these tools remain valid and because we hope that other researchers will benefit from access to the methods used to test the tools.
Responses from developers of the tools in question are included in the report.
Bugging Out: Darknets as Parasites of Large-scale Media Objects
Platforms and infrastructures have quickly become seminal concepts for understanding large-scale computational systems. The difference between a platform and an infrastructure is subject to debate. In this paper, we use the concept of the darknet to describe how infrastructures tend toward public relations with other things, whereas platforms tend toward private relations. The darknet reveals these relations negatively, as we discuss, by turning these media objects into that which they desire not to be. We analyze these negative relations through the concept of the parasite developed by Michel Serres. By following how darknets parasite both platforms and infrastructures, we suggest a need to develop new concepts to understand the diversity of relations now possible in a network society.
Routing in anonymous networks as a means to prevent traffic analysis
Traditionally, traffic analysis has been used to measure and keep track of a network's condition with regard to congestion, networking hardware failures, and the like. However, largely due to commercial interests such as targeted advertising, traffic analysis techniques can also be used to identify and track a single user's movements on the Internet.
To counteract this perceived breach of privacy and anonymity, several countermeasures have been developed over time, e.g. proxies that obfuscate the true source of traffic, making it harder for others to pinpoint a user's location. Another approach has been the development of so-called anonymous overlay networks: application-level virtual networks running on top of the physical IP network. The core concept is that, by way of encryption and obfuscation of traffic patterns, the users of such anonymous networks gain anonymity and protection against traffic analysis techniques.
In this master's thesis we look at how message forwarding, or packet routing, functions in IP networks and how this is exploited by different analysis techniques to single out a visitor to a website, or simply someone whose message is being forwarded through a network device used for traffic analysis. We then discuss some examples of anonymous overlay networks, examine how well they protect their users from traffic analysis, and consider how their respective models hold up against traffic analysis attacks from a malicious entity. Finally, we present a case study of the Tor network's popularity by running a Tor relay node and gathering information on how much data the relay transmits and where the traffic originates.
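The layered encryption at the heart of such anonymous overlay networks can be sketched as follows. This is a toy illustration only, not the thesis's method or Tor's actual protocol: real networks negotiate per-hop session keys and use authenticated ciphers, while the XOR keystream below is purely pedagogical and insecure. Each relay on the path holds one key and peels off one layer, so no single relay sees both the sender and the plaintext destination.

```python
# Toy sketch of onion-style layered encryption (illustrative, NOT secure):
# the sender wraps the message once per relay; each relay removes one layer.
import hashlib


def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic keystream from a key via SHA-256 in counter mode."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]


def xor_layer(data: bytes, key: bytes) -> bytes:
    """Apply (or remove) one encryption layer; XOR is its own inverse."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))


def wrap(message: bytes, relay_keys: list[bytes]) -> bytes:
    """Encrypt innermost layer first, so the entry relay peels the outermost."""
    for key in reversed(relay_keys):
        message = xor_layer(message, key)
    return message


def unwrap(onion: bytes, relay_keys: list[bytes]) -> bytes:
    """Simulate each relay on the path removing its own layer in turn."""
    for key in relay_keys:
        onion = xor_layer(onion, key)
    return onion


# Hypothetical three-hop circuit: entry, middle, exit relay keys.
keys = [b"entry-key", b"middle-key", b"exit-key"]
onion = wrap(b"GET /index.html", keys)
assert unwrap(onion, keys) == b"GET /index.html"
```

Traffic analysis, as the thesis discusses, targets exactly what this layering does not hide: message sizes and timing, which is why real systems add padding and cover-traffic obfuscation on top of encryption.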
CCS concepts:
- Security and privacy ~ Privacy protections
- Networks ~ Overlay and other logical network structures
- Information systems ~ Traffic analysis
Hardening Tor Hidden Services
Tor is an overlay anonymization network that provides anonymity for clients surfing the web but also allows hosting anonymous services, called hidden services. These enable whistleblowers and political activists to express their opinions and resist censorship. Administering a hidden service is not trivial and requires extensive knowledge, because Tor uses a comprehensive protocol and relies on volunteers. Meanwhile, attackers can spend significant resources to decloak them. This thesis aims to improve the security of hidden services by providing practical guidelines and a theoretical architecture. First, vulnerabilities specific to hidden services are analyzed through an academic literature review. To model realistic real-world attackers, court documents are analyzed to determine their procedures. Both reviews classify the identified vulnerabilities into general categories.
Afterward, a risk assessment process is introduced, and existing risks for hidden services and their operators are determined. The main contributions of this thesis are practical guidelines for hidden service operators and a theoretical architecture. The former provide operators with a clear overview of practices to mitigate attacks. The latter is a comprehensive infrastructure that significantly increases the security of hidden services and alleviates problems in the Tor protocol. Limitations and the transfer into practice are then analyzed. Finally, future research possibilities are identified.
Policing virtual spaces: public and private online challenges in a legal perspective
The chapter concerns public and private policing of online platforms and the current challenges in terms of legislation, policing practices, and the Dark Web.
Increasing the robustness of networked systems
Thesis (Ph.D.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Includes bibliographical references (p. 133-143).

What popular news do you recall about networked systems? You have probably heard about the several-hour failure at Amazon's computing utility that knocked many startups offline, or the attacks that made Estonian government websites inaccessible for several days, or you may have observed inexplicably slow responses or errors from your favorite website. Needless to say, keeping networked systems robust to attacks and failures is an increasingly significant problem. Why is it hard to keep networked systems robust? We believe that uncontrollable inputs and complex dependencies are the two main reasons. The owner of a website has little control over when users arrive; the operator of an ISP has little say in when a fiber gets cut; and the administrator of a campus network is unlikely to know exactly which switches or file servers may be causing a user's sluggish performance. Despite unpredictable or malicious inputs and complex dependencies, we would like a network to manage itself, i.e., diagnose its own faults and continue to maintain good performance. This dissertation presents a generic approach to hardening networked systems by distinguishing between two scenarios. For systems that need to respond rapidly to unpredictable inputs, we design online solutions that re-optimize resource allocation as inputs change. For systems that need to diagnose the root cause of a problem in the presence of complex subsystem dependencies, we devise techniques to infer these dependencies from packet traces and build functional representations that facilitate reasoning about the most likely causes of faults. We present a few solutions, as examples of this approach, that tackle an important class of network failures.
Specifically, we address (1) re-routing traffic around congestion when traffic spikes or links fail in Internet service provider networks, (2) protecting websites from denial-of-service attacks that mimic legitimate users, and (3) diagnosing causes of performance problems in enterprise and campus-wide networks. Through a combination of implementations, simulations, and deployments, we show that our solutions advance the state of the art.

by Srikanth Kandula, Ph.D.
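The re-routing idea in point (1) can be illustrated with a minimal sketch: when a link fails, recompute a shortest path on the topology with that edge excluded. The toy graph, link costs, and Dijkstra routine below are illustrative assumptions, not the dissertation's actual algorithm, which performs online re-optimization of traffic allocation rather than simple shortest-path rerouting.

```python
# Minimal sketch of rerouting around a failed link: shortest-path search
# that skips any edge listed in `failed`. Toy illustration only.
import heapq


def dijkstra(graph, src, dst, failed=frozenset()):
    """graph: {node: {neighbor: cost}}; failed: set of (u, v) links to avoid."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, cost in graph[u].items():
            if (u, v) in failed or (v, u) in failed:
                continue  # link is down; do not route over it
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None  # destination unreachable
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))


# Hypothetical four-router topology with link costs.
net = {
    "A": {"B": 1, "C": 5},
    "B": {"A": 1, "D": 1},
    "C": {"A": 5, "D": 1},
    "D": {"B": 1, "C": 1},
}
print(dijkstra(net, "A", "D"))                       # normal path, via B
print(dijkstra(net, "A", "D", failed={("B", "D")}))  # reroute after B-D fails
```

In a real ISP setting the interesting part, as the abstract notes, is doing this re-optimization online under shifting traffic rather than recomputing a single path per failure.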