32 research outputs found

    Enhancing System Transparency, Trust, and Privacy with Internet Measurement

    Full text link
    While on the Internet, users participate in many systems designed to protect their information’s security. Protection of the user’s information can depend on several technical properties, including transparency, trust, and privacy. Preserving these properties is challenging due to the scale and distributed nature of the Internet; no single actor has control over these features. Instead, the systems are designed to provide them, even in the face of attackers. However, it is possible to utilize Internet measurement to better defend transparency, trust, and privacy. Internet measurement allows observation of many behaviors of distributed, Internet-connected systems. These new observations can be used to better defend the systems they measure. In this dissertation, I explore four contexts in which Internet measurement can be used to aid end-users in Internet-centric, adversarial settings. First, I improve transparency into Internet censorship practices by developing new Internet measurement techniques. Then, I use Internet measurement to enable the deployment of end-to-middle censorship circumvention techniques to a half-million users. Next, I evaluate transparency and improve trust in the Web public-key infrastructure by combining Internet measurement techniques and using them to augment its core components. Finally, I evaluate browser extensions that provide privacy to users on the web, providing insight for designers and simple recommendations for end-users. By focusing on end-user concerns in widely deployed systems critical to end-user security and privacy, Internet measurement enables improvements to transparency, trust, and privacy. (PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies; http://deepblue.lib.umich.edu/bitstream/2027.42/163199/1/benvds_1.pd)
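
    The censorship-transparency work summarized above rests on comparing what different network vantage points observe. As a purely illustrative sketch (not the dissertation's actual tooling), the following Python snippet compares the DNS answers two resolvers give for one domain; the resolver addresses, the test domain, and the third-party dnspython dependency are assumptions made for the example.

        # Illustrative only: compare DNS answers from two resolvers to flag possible
        # DNS-based interference. Requires the third-party "dnspython" package.
        import dns.resolver

        def resolve_with(nameserver: str, domain: str) -> set[str]:
            """Return the set of A records the given resolver reports for the domain."""
            resolver = dns.resolver.Resolver(configure=False)
            resolver.nameservers = [nameserver]
            resolver.lifetime = 5.0  # overall timeout in seconds
            try:
                answer = resolver.resolve(domain, "A")
                return {rr.address for rr in answer}
            except Exception as exc:  # NXDOMAIN, timeout, SERVFAIL, ...
                print(f"{nameserver}: lookup failed ({exc})")
                return set()

        if __name__ == "__main__":
            domain = "example.com"                     # placeholder test domain
            control = resolve_with("8.8.8.8", domain)  # control vantage point
            test = resolve_with("192.0.2.53", domain)  # hypothetical in-network resolver
            if control and test and control.isdisjoint(test):
                print("Answers disagree entirely; possible DNS manipulation, needs follow-up.")
            else:
                print("No obvious disagreement from this single probe.")

    Disagreeing answers alone are weak evidence, since CDNs legitimately return location-dependent records; real censorship measurement aggregates many probes, domains, and vantage points.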

    Towards more Effective Censorship Resistance Systems

    Get PDF
    Internet censorship resistance systems (CRSs) have so far been designed in an ad-hoc manner. The fundamentals are unclear and the foundations are shaky. Censors are increasingly able to take advantage of this situation. Future censorship resistance systems ought to be built from strong theoretical underpinnings and be based on empirical evidence. Our approach is based on systematizing the CRS field and its players. Informed by this systematization, we develop frameworks with broad scope, from which we gain general insight as well as answers to specific questions. We develop theoretical and simulation-based analysis tools 1) for learning how to manipulate censor behavior using game-theoretic tactics, 2) for learning about CRS-client activity levels on CRS networks, and finally 3) for evaluating security parameters in CRS designs. We learn that there are gaps in the CRS designer's arsenal: certain censor attacks go unmitigated and the dynamics of the censorship arms race are not modeled. Our game-theoretic analysis highlights how managing the base rate of CRS traffic can produce stable equilibria in which the censor allows some amount of CRS communication to occur. We design and deploy a privacy-preserving data gathering tool, and use it to collect statistics that help answer questions about the prevalence of CRS-related traffic in actual CRS communication networks. Finally, our security evaluation of a popular CRS exposes suboptimal settings, which have since been optimized according to our recommendations. All of these contributions support the thesis that more formal and empirically driven CRS designs can have better outcomes than the current state of the art.
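
    To make the base-rate observation concrete, the following is a toy expected-utility model, not the thesis's actual game formulation; the detector rates, gain, and collateral-damage cost are invented, illustrative numbers.

        # Toy model: a censor blocks flagged flows only when the expected utility of
        # blocking is positive. With a rare CRS base rate, most flagged flows are
        # false positives, so blocking becomes unattractive. Numbers are illustrative.

        def posterior_crs(base_rate: float, tpr: float, fpr: float) -> float:
            """P(flow is CRS | detector flagged it), via Bayes' rule."""
            flagged = base_rate * tpr + (1.0 - base_rate) * fpr
            return (base_rate * tpr) / flagged if flagged > 0 else 0.0

        def censor_blocks(base_rate: float, tpr=0.95, fpr=0.01,
                          gain=1.0, collateral=4.0) -> bool:
            """Best response of a censor weighing detection gain against collateral damage."""
            p = posterior_crs(base_rate, tpr, fpr)
            return p * gain - (1.0 - p) * collateral > 0

        if __name__ == "__main__":
            for b in (0.0005, 0.005, 0.05, 0.2):
                action = "block" if censor_blocks(b) else "allow"
                print(f"CRS base rate {b:>6}: censor's best response is to {action}")

    The sketch reproduces the qualitative effect the abstract describes: when CRS traffic is kept rare relative to benign traffic, a collateral-averse censor's best response is to allow flagged flows, yielding an equilibrium in which some CRS communication survives.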

    Toward Open and Programmable Wireless Network Edge

    Get PDF
    Increasingly, the last hop connecting users to their enterprise and home networks is wireless. Wireless is becoming ubiquitous not only in homes and enterprises but also in public venues such as coffee shops, hospitals, and airports. However, most publicly and privately available wireless networks are proprietary and closed in operation, and industry has made little effort to move toward the greater openness that innovation requires. We therefore believe it falls to university researchers to enable innovation through openness. In this thesis, we introduce an open framework and argue for its importance in addressing the complexity of the wireless network. The Software Defined Network (SDN) framework has emerged as a popular solution for the data center network, and its promise is to make the network open, flexible, and programmable. To deliver on that promise, SDN must work for all users and across all networks, both wired and wireless. We therefore propose new modules and APIs that extend the standard SDN framework all the way to the end devices (i.e., mobile devices and APs), providing an extensible and programmable abstraction of the wireless network as part of current SDN-based solutions. We design and develop a framework, weSDN (wireless extension of SDN), that extends SDN control capability to the end devices to support client-network interaction and new services. weSDN extends the control plane of wireless networks to mobile devices and allows top-level decisions to be made from an SDN controller with knowledge of the network as a whole, rather than from device-centric configurations. In addition, weSDN easily obtains user application information and can monitor and control application flows dynamically. Based on the weSDN framework, we demonstrate new services such as application-aware traffic management, WLAN virtualization, and security management.
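
    As a rough illustration of the client-network interaction weSDN enables (all class and field names below are hypothetical, not weSDN's actual API), a minimal end-device flow report and controller decision might look like this:

        # Hypothetical sketch: an end-device agent reports per-application flows to a
        # controller, which decides a policy using its network-wide view. Names and
        # thresholds are invented for illustration.
        from dataclasses import dataclass

        @dataclass
        class FlowReport:
            app_name: str        # application that owns the socket, e.g. "zoom"
            dst_port: int
            bytes_per_sec: float

        class ToyController:
            """Stands in for an SDN controller with a global view of the WLAN."""
            def __init__(self, capacity_bps: float):
                self.capacity_bps = capacity_bps
                self.allocated = 0.0

            def decide(self, report: FlowReport) -> str:
                # Prioritize interactive apps; rate-limit bulk flows near capacity.
                if report.app_name in {"zoom", "teams"}:
                    return "priority-queue"
                if self.allocated + report.bytes_per_sec > 0.8 * self.capacity_bps:
                    return "rate-limit"
                self.allocated += report.bytes_per_sec
                return "best-effort"

        if __name__ == "__main__":
            controller = ToyController(capacity_bps=10e6)
            for flow in (FlowReport("zoom", 443, 1.5e6), FlowReport("backup", 22, 9e6)):
                print(flow.app_name, "->", controller.decide(flow))

    The point of the sketch is the division of labour the thesis argues for: the end device supplies application context the network cannot see on its own, while the controller decides with knowledge of the whole network rather than per-device configuration.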

    Empirical and Analytical Perspectives on the Robustness of Blockchain-related Peer-to-Peer Networks

    Get PDF
    The inception of Bitcoin has sparked a large interest in decentralized systems. In particular, popular narratives imply that decentralization automatically leads to high security and resilience against attacks, even against powerful adversaries. In this thesis, we investigate whether these ascriptions are appropriate and whether decentralized applications are as robust as they are made out to be. To this end, we analyze three widely used systems that function as building blocks for blockchain applications: Ethereum as basic infrastructure, IPFS for distributed storage, and lastly "stablecoins" as tokens with a stable value. As recurring building blocks for decentralized applications, these examples significantly determine the security and resilience of the overall application; focusing on them also allows us to look past individual applications and examine inherent systemic properties. The analysis is driven by a strong empirical, mostly network-layer-based perspective, enriched with an economic point of view in the context of monetary stabilization. The resulting practical understanding allows us to delve into the systems' inherent properties.
    The fundamental results of this thesis include the demonstration of a network-layer eclipse attack on the Ethereum overlay, which can be leveraged to impede the delivery of transactions and blocks, with dire consequences for applications built on top of Ethereum. Furthermore, we extensively map the IPFS network through (1) systematic crawling of its DHT and (2) monitoring of content requests. We show that while IPFS' hybrid overlay structure renders it quite robust against attacks, this virtue of the overlay is simultaneously a curse, as it allows for extensive monitoring of participating peers and the data they request. Lastly, we exchange the network-layer perspective for a mostly economic one in the context of monetary stabilization. We present a classification framework to (1) map out the stablecoin landscape and (2) provide means to pigeonhole future system designs. With our work we not only scrutinize ascriptions attributed to decentralized technologies; we also reached out to the IPFS and Ethereum developers to discuss our results and remedy potential attack vectors.
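
    As an abstract sketch of the DHT-crawling approach mentioned above (the actual crawler speaks Kademlia over libp2p; here the network call is a placeholder supplied by the caller):

        # Illustrative breadth-first DHT crawl: repeatedly ask known peers for their
        # routing-table neighbours until no new peers appear. query_neighbours stands in
        # for a FIND_NODE-style RPC; the toy tables below replace a real network.
        from collections import deque
        from typing import Callable, Iterable, Set

        def crawl(bootstrap_peers: Iterable[str],
                  query_neighbours: Callable[[str], Set[str]]) -> Set[str]:
            """Enumerate reachable DHT peers, starting from the bootstrap nodes."""
            seen = set(bootstrap_peers)
            frontier = deque(seen)
            while frontier:
                peer = frontier.popleft()
                try:
                    neighbours = query_neighbours(peer)
                except Exception:
                    continue  # unreachable or misbehaving peers are skipped
                for n in neighbours - seen:
                    seen.add(n)
                    frontier.append(n)
            return seen

        if __name__ == "__main__":
            # Tiny in-memory "network" so the sketch runs without a real DHT.
            toy_tables = {"A": {"B", "C"}, "B": {"A", "D"}, "C": set(), "D": {"E"}, "E": set()}
            print(sorted(crawl({"A"}, lambda p: toy_tables.get(p, set()))))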

    StyleCounsel: Seeing the (Random) Forest for the Trees in Adversarial Code Stylometry

    Get PDF
    Authorship attribution has piqued the interest of scholars for centuries, but historically remained a matter of subjective opinion based upon examination of handwriting and the physical document. Midway through the 20th century, a technique known as stylometry was developed, in which the content of a document is analyzed to extract the author's grammar use, preferred vocabulary, and other elements of compositional style. In parallel, programmers, particularly those involved in education, were writing and testing systems designed to automate the analysis of good coding style and best practice, in order to assist with grading assignments. In the aftermath of the Morris Worm incident in 1988, researchers began to consider whether this automated analysis of program style could be combined with stylometry techniques and applied to source code to identify the author of a program. Recent experiments have suggested that such code stylometry can identify the author of short programs from among hundreds of candidates with up to 98% precision. This potential ability to discern the programmer of a sample of code from a large group of possible authors could have concerning consequences for the open-source community at large, particularly those contributors who may wish to remain anonymous. Recent international events have suggested that the developers of certain anti-censorship and anti-surveillance tools are being targeted by their governments and forced to delete their repositories or face prosecution. In light of this threat to the freedom and privacy of individual programmers around the world, and due to a dearth of published research into practical code stylometry at scale and its feasibility, we carried out a number of investigations into the difficulties of applying this technique in the real world and into how one might effect a robust defence against it. To this end, we devised a system to aid programmers in obfuscating their inherent style and imitating another, overt, author's style in order to protect their anonymity from this forensic technique. Our system utilizes the implicit rules encoded in the decision points of a random forest ensemble to derive a set of recommendations detailing how the user can achieve this obfuscation and mimicry attack (a simplified sketch of this idea follows the abstract). To best test this system, and simultaneously assess the difficulties of performing practical stylometry at scale, we also gathered a large corpus of real open-source software and devised our own feature set, including both novel attributes and those inspired by or borrowed from other sources. Our results indicate that attempting a mass analysis of publicly available source code is fraught with difficulties in ensuring the integrity of the data. Furthermore, we found that ours and most other published feature sets do not capture an author's style independently of content well enough to be very effective at scale, although their accuracy is significantly better than a random guess. Evaluations of our tool indicate that it can successfully extract a set of changes that, if implemented, would result in a misclassification as another user. More importantly, this extraction was independent of the specifics of the feature set, and would therefore still work even with a more accurate model of style. We ran a limited user study to assess the usability of the tool and found that, overall, it was beneficial to our participants and could be even more so if the valuable feedback we received were implemented in future work.
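
    The following is a simplified sketch of the decision-point idea referenced above, using scikit-learn rather than StyleCounsel's own implementation; the stylistic feature names and synthetic data are assumptions made purely for illustration.

        # Hedged sketch: walk the decision paths a code sample takes through a random
        # forest and report the feature/threshold comparisons along the way, which can
        # seed recommendations for changing one's style. Not StyleCounsel's actual code.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def decision_points(forest, x, feature_names):
            """Yield (feature, threshold, direction) triples on the sample's decision paths."""
            sample = x.reshape(1, -1)
            for est in forest.estimators_:
                tree = est.tree_
                for node in est.decision_path(sample).indices:  # nodes visited by the sample
                    feat = tree.feature[node]
                    if feat < 0:                # leaf node, no split to report
                        continue
                    thr = tree.threshold[node]
                    side = "<=" if sample[0, feat] <= thr else ">"
                    yield feature_names[feat], float(thr), side

        if __name__ == "__main__":
            # Tiny synthetic example: 3 hypothetical stylistic features, 2 authors.
            rng = np.random.default_rng(0)
            X = rng.random((40, 3))
            y = (X[:, 0] > 0.5).astype(int)     # author label correlates with feature 0
            names = ["tabs_vs_spaces", "mean_identifier_len", "comment_ratio"]
            forest = RandomForestClassifier(n_estimators=5, random_state=0).fit(X, y)
            for feat, thr, side in sorted(set(decision_points(forest, X[0], names))):
                print(f"currently {feat} {side} {thr:.2f}; crossing it may flip some trees")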

    Last-Mile TLS Interception: Analysis and Observation of the Non-Public HTTPS Ecosystem

    Get PDF
    Transport Layer Security (TLS) is one of the most widely deployed cryptographic protocols on the Internet, providing confidentiality, integrity, and a certain degree of authenticity for the communications between clients and servers. Following Snowden's revelations on US surveillance programs, the adoption of TLS has steadily increased. However, encrypted traffic prevents legitimate inspection, so security solutions such as personal antivirus products and enterprise firewalls may intercept encrypted connections in search of malicious or unauthorized content. The end-to-end property of TLS is thus broken by these TLS proxies (a.k.a. middleboxes) for arguably laudable reasons, yet they may pose a security risk. While TLS clients and servers have been analyzed to some extent, such proxies remained unexplored until recently. We propose a framework for analyzing client-end TLS proxies and apply it to 14 consumer antivirus and parental control applications as they break end-to-end TLS connections. Overall, the security of TLS connections was systematically worsened compared to the guarantees provided by modern browsers. Next, we explore the non-public HTTPS ecosystem, composed of locally trusted, proxy-issued certificates, from the user's perspective and from several countries in residential and enterprise settings, focusing our analysis on the long tail of interception events. We characterize the customers of network appliances, ranging from small/medium businesses and institutes to hospitals, hotels, resorts, insurance companies, and government agencies. We also discover regional cases of traffic-interception malware/adware that mostly rely on the same Software Development Kit (i.e., NetFilter). Our scanning and analysis techniques allow us to identify more middleboxes and intercepting apps than previously found from privileged server vantage points observing billions of connections. We further perform a longitudinal study, spanning six years, of the evolution of a prominent traffic-intercepting adware found in our dataset: Wajam. We expose the TLS interception techniques it has used and the weaknesses it has introduced on hundreds of millions of user devices. This study also (re)opens the neglected problem of privacy-invasive adware by showing how adware sometimes evolves to be stronger than even advanced malware, posing significant detection and reverse-engineering challenges. Overall, whether beneficial or not, TLS interception often has detrimental impacts on security without the end-user being alerted.
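
    As a simplified, client-side illustration of interception detection (not the thesis's analysis framework), the snippet below fetches the certificate the client actually receives for a well-known host and inspects its issuer; the probe host and the small allowlist of public CAs are placeholders, and a real check would consult a full CA list and the whole chain.

        # Illustrative only: if the issuer of the certificate we receive is not a
        # well-known public CA (e.g., it names an antivirus product), a locally trusted
        # proxy may be re-signing the connection. Host and allowlist are placeholders.
        import socket
        import ssl

        KNOWN_PUBLIC_ISSUERS = {"DigiCert Inc", "Let's Encrypt", "Google Trust Services"}

        def issuer_organization(host: str, port: int = 443) -> str:
            ctx = ssl.create_default_context()  # OS trust store, incl. any local proxy root
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    cert = tls.getpeercert()
            issuer = dict(pair for rdn in cert["issuer"] for pair in rdn)
            return issuer.get("organizationName", "unknown")

        if __name__ == "__main__":
            org = issuer_organization("example.com")   # placeholder probe target
            if org in KNOWN_PUBLIC_ISSUERS:
                print(f"Issuer '{org}' looks like a public CA; no interception evident here.")
            else:
                print(f"Issuer '{org}' is not on the (partial) public-CA list; possible local interception.")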

    Data Politics

    Get PDF
    Data has become a social and political issue because of its capacity to reconfigure relationships between states, subjects, and citizens. This book explores how data has acquired such an important capacity and examines how critical interventions in its uses are possible, in both theory and practice. Data and politics are now inseparable: data is not only shaping our social relations, preferences, and life chances, but also our very democracies. Expert international contributors consider political questions about data and the ways it provokes subjects to govern themselves by making rights claims. Concerned with the things (infrastructures of servers, devices, and cables) and language (code, programming, and algorithms) that make up cyberspace, this book demonstrates that without understanding these conditions of possibility it is impossible to intervene in or to shape data politics. Aimed at academics and postgraduate students interested in political aspects of data, this volume will also be of interest to experts in the fields of internet studies, international studies, Big Data, and the digital social sciences and humanities.

    ISCR Annual Report: Fiscal Year 2004

    Full text link