Performance and Security Improvements for Tor: A Survey
Tor [Dingledine et al. 2004] is the most widely used anonymity network today, serving millions of users on a daily basis through a growing number of volunteer-run routers. Since its deployment in 2003, there have been more than three dozen proposals that aim to improve its performance, security, and unobservability. Given the significance of this research area, our goal is to give the reader an overview of the current research directions and challenges in anonymous communication systems, focusing on the Tor network. We shed light on the design weaknesses and challenges facing the network and point out unresolved issues.
Emoji Company GmbH v Schedule A Defendants
Declaration of Dean Eric Goldman
Security Hazards when Law is Code
As software continues to eat the world, there is increasing pressure to automate every aspect of society, from self-driving cars to algorithmic trading on the stock market. As this pressure manifests into software implementations of everything, there are security concerns to be addressed across many areas. But are there some domains and fields that are distinctly susceptible to attacks, making them difficult to secure?
My dissertation argues that one domain in particular, public policy and law, is inherently difficult to automate securely using computers. This is in large part because law and policy are written in a manner that expects them to be flexibly interpreted to be fair or just. Traditionally, this interpreting is done by judges and regulators who are capable of understanding the intent of the laws they are enforcing. However, when these laws are instead written in code and interpreted by a machine, this capability to understand goes away. Because they blindly follow written rules, computers can be tricked into performing actions counter to their intended behavior.
This dissertation covers three case studies of law and policy being implemented in code and the security vulnerabilities they introduce in practice. The first study analyzes the security of a previously deployed Internet voting system, showing how attackers could change the outcome of elections carried out online. The second study looks at airport security, investigating how full-body scanners can be defeated in practice, allowing attackers to carry concealed contraband such as weapons or high explosives past airport checkpoints. Finally, this dissertation also studies how an Internet censorship system such as China's Great Firewall can be circumvented by techniques that exploit the methods employed by the censors themselves.
To address these concerns of securing software implementations of law, a hybrid human-computer approach can be used. In addition, systems should be designed to allow attacks or mistakes to be retroactively undone or inspected by human auditors. By combining the strengths of computers (speed and cost) and humans (ability to interpret and understand), systems can be made more secure and more efficient than a method employing either alone.
PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120795/1/ewust_1.pd
Practical Countermeasures Against Network Censorship
Governments around the world threaten free communication on the Internet by building increasingly complex systems to carry out network censorship. Network censorship undermines citizens' ability to access websites and services of their preference, damages freedom of the press and self-expression, and threatens public safety, motivating the development of censorship circumvention tools.
Inevitably, censors respond by detecting and blocking those tools, using a wide range of techniques including enumeration attacks, deep packet inspection, traffic fingerprinting, and active probing. In this dissertation, I study some of the most common attacks actually adopted by censors in practice, and propose novel attacks to assist in the development of defenses against them. I describe practical countermeasures against those attacks, which often rely on empirical measurements of real-world data to maximize their efficiency. This dissertation also reports how this work has been successfully deployed in several popular censorship circumvention tools to help censored Internet users break free of repressive information control.
Privacy in Centralized Systems
The vast majority of online services are run using a centralized infrastructure. The centralized nature of these services allows the provider to have absolute control over the content as well as any profit generated by that content. Centralized services often have servers distributed across the world for reliability, but they function as centralized systems: their functionality is identical across their network, and the data they collect is available to the service as a whole, not only to the server a user interacts with. Users of these services, and the data they generate, are completely at the whim of the provider; companies often offer vague promises about the security and privacy of the data collected from their users, promises the users cannot verify themselves. Moving these services to a decentralized system (where each server acts independently of the others) could address these issues, but decentralized systems often face severe scalability problems as well as cumbersome requirements on users, such as needing specialized software to access the service.
This dissertation demonstrates that user privacy can be inherently built into centralized systems using cryptographic protocols. Centralized systems can offer their services while collecting minimal user information, allowing services actually geared towards privacy to make it a core part of their offering. A service that doesn't have access to users' data cannot abuse or leak it.
Proof of Censorship utilizes Private Information Retrieval to make content providers (such as Twitter) cryptographically auditable as to whether content has been modified or removed. In 2018, Signal (a secure end-to-end messenger) introduced Sealed Sender in an attempt to hide the sender of encrypted messages; Improving Signal's Sealed Sender strengthens this design by guaranteeing that privacy cryptographically using blind RSA signatures. Mind the IP Gap measures how countries utilize their centralized censorship apparatus to restrict content through DNS manipulation.
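The blind-RSA primitive mentioned above can be sketched in a few lines: the client blinds a message with a random factor, the server signs it without ever seeing the message, and unblinding yields an ordinary RSA signature. The following is a textbook sketch with toy parameters, not Signal's actual Sealed Sender protocol:

```python
# Textbook RSA blind signature: a minimal, illustrative sketch.
# Key sizes here are far too small for real use.

import secrets
from math import gcd

# Toy RSA key (real deployments use >= 2048-bit moduli).
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def blind(m, n, e):
    """Client: hide message m under a random blinding factor r."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    return (m * pow(r, e, n)) % n, r

def sign(m_blinded, n, d):
    """Server signs without learning m."""
    return pow(m_blinded, d, n)

def unblind(s_blinded, r, n):
    """Client removes r, leaving a valid signature on m."""
    return (s_blinded * pow(r, -1, n)) % n

m = 424242                      # message representative (must be < n)
m_b, r = blind(m, n, e)
s = unblind(sign(m_b, n, d), r, n)
assert pow(s, e, n) == m        # verifies like an ordinary RSA signature
```

The key property for sender privacy is that the server only ever sees `m_b`, which is uniformly distributed thanks to `r`, yet the unblinded `s` verifies under the server's public key.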
Security and Privacy for the Modern World
The world is organized around technology that does not respect its users. As a precondition of participation in digital life, users cede control of their data to third parties with murky motivations, and cannot ensure this control is not mishandled or abused. In this work, we create secure, privacy-respecting computing for the average user by giving them the tools to guarantee their data is shielded from prying eyes. We first uncover the side channels present when outsourcing scientific computation to the cloud, and address them by building a data-oblivious virtual environment capable of efficiently handling these workloads. Then, we explore stronger privacy protections for interpersonal communication through practical steganography, using it to hide sensitive messages in realistic cover distributions like English text. Finally, we discuss at-home cryptography, and leverage it to bind a user's access to their online services and important files to a secure location, such as their smart home. This line of research represents a new model of digital life, one that is both full-featured and protected against the security and privacy threats of the modern world.
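The idea of hiding messages in English-like cover text can be illustrated with a deliberately simple toy scheme: each hidden bit selects one of two interchangeable words, so the output still reads as ordinary English. Real linguistic steganography draws on far richer cover distributions; the word pairs below are invented purely for illustration:

```python
# Toy cover-text steganography: one hidden bit per cover word.
# The synonym pairs are illustrative, not from the dissertation.

PAIRS = [("big", "large"), ("quick", "fast"), ("happy", "glad")]

def embed(bits):
    """Encode each bit as a choice between two interchangeable words."""
    return " ".join(PAIRS[i % len(PAIRS)][b] for i, b in enumerate(bits))

def extract(text):
    """Recover the bits by checking which word of each pair was used."""
    words = text.split()
    return [PAIRS[i % len(PAIRS)].index(w) for i, w in enumerate(words)]

msg = [1, 0, 1, 1]
cover = embed(msg)              # innocuous-looking word sequence
assert extract(cover) == msg
```

A scheme like this is trivially detectable statistically; practical systems instead sample cover text from a realistic language model so the stegotext matches the expected distribution.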
Internet-Wide Evaluations of Security and Vulnerabilities
The Internet significantly impacts world culture. Since its beginning, it has been a multilayered system, and it gains more protocols year by year. At its core remains the Internet protocol suite, where fundamental protocols such as IPv4, TCP/UDP, and DNS were first introduced. Recently, more and more cross-layer attacks involving features in multiple layers have been reported. To better understand these attacks, e.g., how practical they are and how many users are vulnerable, Internet-wide evaluations are essential.
In this cumulative thesis, we summarise our findings from various Internet-wide evaluations in recent years, with a main focus on DNS. Our evaluations showed that IP fragmentation poses a practical threat to DNS security, regardless of the transport protocol (TCP or UDP). Defense mechanisms such as DNS Response Rate Limiting can facilitate attacks on DNS, even though they are designed to protect it. We also extended the evaluations to a fundamental system that heavily relies on DNS: the web PKI. We found that Certificate Authorities suffered considerably from previous DNS vulnerabilities. We demonstrated that off-path attackers could hijack accounts at major CAs and manipulate resources there using various DNS cache poisoning attacks. The Domain Validation procedure faces similar vulnerabilities; even the latest Multiple-Validation-Agent DV could be downgraded and poisoned.
We also performed Internet-wide evaluations of two important defence mechanisms. One is the cryptographic protocol for DNS security, DNSSEC. We found that fewer than 2% of popular domains were signed, and about 20% of those were misconfigured. This is another example of how poorly deployed defence mechanisms can worsen security. The other is ingress filtering, which stops spoofed traffic from entering a network. We presented the most complete Internet-wide evaluation of ingress filtering to date, covering over 90% of all Autonomous Systems. We found that over 80% of them were vulnerable to inbound spoofing.
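To give a sense of why off-path cache poisoning hinges on randomization, a back-of-the-envelope calculation shows how the attacker's odds collapse when the 16-bit DNS transaction ID is combined with a randomized source port, and recover when a side channel (such as fragmentation) removes the port from the equation. The figures below are illustrative, not measurements from the thesis:

```python
# Blind off-path spoofing: the attacker must match the resolver's
# 16-bit transaction ID and, normally, its randomized source port.

TXID_SPACE = 2 ** 16            # DNS transaction ID values
PORT_SPACE = 2 ** 16            # idealized randomized source-port range

def hit_probability(guesses, space):
    """Chance that at least one of `guesses` spoofed replies matches."""
    return 1.0 - (1.0 - 1.0 / space) ** guesses

# TXID only (e.g., when fragmentation or a side channel fixes the port):
print(hit_probability(1000, TXID_SPACE))                # ~1.5%

# TXID and source port both unknown:
print(hit_probability(1000, TXID_SPACE * PORT_SPACE))   # ~2.3e-7
```

The four-to-five orders of magnitude between the two cases is exactly the margin that fragmentation-based and port-derandomizing attacks erase.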
Experimentation and Characterization of Mobile Broadband Networks
The Internet has brought substantial changes to our lives as the main tool to access a large variety of services and applications. Its distributed nature and ongoing technological improvements lead to new challenges for researchers, service providers, and network administrators. Internet traffic measurement and analysis is one of the most fundamental and powerful tools to study such a complex environment from different aspects.
Mobile BroadBand (MBB) networks have become one of the main means to access the Internet. MBB networks are evolving at a rapid pace with technology enhancements that promise drastic improvements in capacity, connectivity, and coverage, i.e., better performance in general. Open experimentation with operational MBB networks in the wild is currently a fundamental requirement of the research community in its endeavor to address the need for innovative solutions for mobile communications. There is a strong need for objective data on the stability and performance of MBB (e.g., 2G, 3G, 4G, and soon-to-come 5G) networks and for tools that rigorously and scientifically assess their performance. Thus, measuring end-user performance in such an environment is a challenge that calls for large-scale measurements and profound analysis of the collected data. The intertwining of technologies, protocols, and setups makes it even more complicated to design scientifically sound and robust measurement campaigns, and the randomness of the wireless access channel, coupled with often unknown operator configurations, makes the scenario more challenging still.
In this thesis, we introduce the MONROE measurement platform: an open-access and flexible hardware-based platform for measurements on operational MBB networks. The MONROE platform enables accurate, realistic, and meaningful assessment of the performance and reliability of MBB networks.
We detail the challenges we overcame while building and testing the MONROE testbed and justify our design and implementation choices accordingly. Measurements are designed to stress the performance of MBB networks at different network layers through scalable experiments and methodologies. We study: (i) network-layer performance, characterizing and possibly estimating the download speed offered by commercial MBB networks; (ii) end users' Quality of Experience (QoE), specifically targeting the web performance of HTTP1.1/TLS and HTTP2 on various popular web sites; (iii) the implications of roaming in Europe, understanding the roaming ecosystem after the "Roam like Home" initiative; and (iv) a novel family of deadline-aware adaptive schedulers for multihomed devices that require only very coarse knowledge of the wireless bandwidth.
Our results comprise different contributions within the scope of each research topic. In a nutshell, we pinpoint the impact of different network configurations that further complicate the picture and hopefully contribute to the debate about performance assessment in MBB networks. Our large-scale measurements show that MBB users' web performance with HTTP1.1/TLS is very similar to that with HTTP2. Furthermore, we observe that roaming is well supported for the monitored operators, which use the same approach for routing roaming traffic. The proposed adaptive schedulers for content upload in multihomed devices are evaluated in both numerical simulations and on real mobile nodes. Simulation results show that the adaptive solutions can effectively leverage the fundamental tradeoff between upload cost and completion time, despite unpredictable variations in the available bandwidth of the wireless interfaces. Experiments on the real mobile nodes provided by the MONROE platform confirm these findings.
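The cost/completion-time tradeoff behind such deadline-aware schedulers can be illustrated with a toy policy that adds costlier interfaces only when the projected completion time would miss the deadline. The interface names, rates, and costs below are hypothetical; the thesis's actual scheduler family is not reproduced here:

```python
# Toy deadline-aware scheduler for a multihomed upload: use the cheapest
# subset of interfaces whose combined coarse rate estimate still meets
# the deadline. Rates and costs are invented for illustration.

def schedule(bytes_left, time_left, interfaces):
    """Return interface names, cheapest first, until the deadline is met."""
    chosen, rate = [], 0.0
    for iface in sorted(interfaces, key=lambda i: i["cost"]):
        chosen.append(iface["name"])
        rate += iface["rate"]          # coarse rate estimate (bytes/s)
        if rate * time_left >= bytes_left:
            return chosen              # deadline met with current subset
    return chosen                      # best effort: use everything

interfaces = [
    {"name": "wifi",     "rate": 1e6, "cost": 0.0},   # cheap, slower
    {"name": "cellular", "rate": 5e6, "cost": 1.0},   # costly, faster
]

print(schedule(5e8, 600, interfaces))   # ample slack -> ['wifi']
print(schedule(5e8, 100, interfaces))   # tight -> ['wifi', 'cellular']
```

An adaptive scheduler would re-run a decision like this as bandwidth estimates change, which is what lets it trade cost against completion time despite unpredictable wireless conditions.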