162 research outputs found

    Novel approaches to anonymity and privacy in decentralized, open settings

    Get PDF
    The Internet has undergone dramatic changes in the last two decades, evolving from a mere communication network to a global multimedia platform on which billions of users actively exchange information. While this transformation has brought tremendous benefits to society, it has also created new threats to online privacy with which existing technology is failing to keep pace. In this dissertation, we present the results of two lines of research that developed two novel approaches to anonymity and privacy in decentralized, open settings. First, we examine the issue of attribute and identity disclosure in open settings and develop the novel notion of (k,d)-anonymity for open settings, which we extensively study and validate experimentally. Furthermore, we investigate the relationship between anonymity and linkability using the notion of (k,d)-anonymity and show that, in contrast to the traditional closed setting, anonymity within one online community does not necessarily imply unlinkability across different online communities in the decentralized, open setting. Second, we consider the transitive diffusion of information that is shared in social networks and spread through pairwise interactions of users connected in the social network. We develop the novel approach of exposure minimization to control the diffusion of information within an open network, allowing the owner to minimize its exposure by suitably choosing whom they share their information with. We implement our algorithms and investigate the practical limitations of user-side exposure minimization in large social networks. At their core, both of these approaches mark a departure from the provable privacy guarantees that we can achieve in closed settings and a step towards sound assessments of privacy risks in decentralized, open settings, where such provable guarantees are not possible.
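    The abstract does not specify the diffusion model or the minimization algorithm, so the following is only a rough, hypothetical sketch of user-side exposure minimization: it assumes an independent-cascade diffusion model with a uniform forwarding probability, a Monte Carlo exposure estimate, and a greedy drop heuristic. All names (estimate_exposure, minimize_exposure, the toy graph) are illustrative, not taken from the dissertation.

```python
import random

def estimate_exposure(graph, seeds, p=0.1, trials=200):
    """Monte Carlo estimate of the expected number of users reached when
    an item is initially shared with `seeds`, under an independent-cascade
    model: each informed user forwards the item to each neighbor
    independently with probability p (an assumption, not the thesis' model)."""
    total = 0
    for _ in range(trials):
        informed = set(seeds)
        frontier = list(seeds)
        while frontier:
            u = frontier.pop()
            for v in graph.get(u, ()):
                if v not in informed and random.random() < p:
                    informed.add(v)
                    frontier.append(v)
        total += len(informed)
    return total / trials

def minimize_exposure(graph, friends, budget):
    """Greedy user-side heuristic: start from the full friend list and
    repeatedly drop the friend whose removal shrinks the estimated
    exposure the most, until the exposure budget is met."""
    chosen = set(friends)
    while chosen and estimate_exposure(graph, chosen) > budget:
        chosen.remove(min(chosen, key=lambda f: estimate_exposure(graph, chosen - {f})))
    return chosen

# Toy example: keep expected exposure below 4 users.
graph = {"me": ["a", "b"], "a": ["c", "d"], "b": ["e"], "c": [], "d": [], "e": []}
print(minimize_exposure(graph, {"a", "b"}, budget=4))
```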

    A Framework for Preserving Privacy and Cybersecurity in Brain-Computer Interfacing Applications

    Full text link
    Brain-Computer Interfaces (BCIs) comprise a rapidly evolving field of technology with the potential for far-reaching impact in domains ranging from the medical and industrial to the artistic, gaming, and military. Today, these emerging BCI applications are typically still at early technology readiness levels, but because BCIs create novel technical communication channels for the human brain, they have raised privacy and security concerns. To mitigate such risks, a large body of countermeasures has been proposed in the literature, but a general framework is lacking that would describe how the privacy and security of BCI applications can be protected by design, i.e., as an integral part of the early BCI design process, in a systematic manner, and with a depth of analysis suitable for different contexts such as commercial BCI product development vs. academic research and lab prototypes. Here we propose adapting recent systems-engineering methodologies for privacy threat modeling, risk assessment, and privacy engineering to the BCI field. These methodologies address privacy and security concerns in a more systematic and holistic way than previous approaches and provide reusable patterns for moving from principles to actions. We apply these methodologies to BCI systems and data flows and derive a generic, extensible, and actionable framework for brain-privacy-preserving cybersecurity in BCI applications. This framework is designed for flexible application to the wide range of current and future BCI applications. We also propose a range of novel privacy-by-design features for BCIs, with an emphasis on features promoting BCI transparency as a prerequisite for the informational self-determination of BCI users, as well as design features for ensuring BCI user autonomy. We anticipate that our framework will contribute to the development of privacy-respecting, trustworthy BCI technologies.
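    The paper names privacy threat modeling as one of the adopted methodologies without detailing it in the abstract. As a purely illustrative sketch of what systematic, per-element threat elicitation looks like, the following assumes LINDDUN-style threat categories crossed against a hypothetical BCI data-flow diagram; the elements and table are invented for illustration, not taken from the paper.

```python
# LINDDUN is an established privacy threat modeling methodology, used here
# only as an example; the paper's framework may differ.
LINDDUN_CATEGORIES = [
    "Linkability", "Identifiability", "Non-repudiation", "Detectability",
    "Disclosure of information", "Unawareness", "Non-compliance",
]

# Hypothetical data-flow-diagram elements of a consumer BCI application.
DFD_ELEMENTS = [
    ("EEG headset", "process"),
    ("raw EEG stream", "data flow"),
    ("feature extractor", "process"),
    ("cloud decoder API", "external entity"),
    ("session log store", "data store"),
]

def elicit_threats(elements, categories):
    """Enumerate every (element, category) pair as a candidate threat to be
    assessed, mirroring the systematic per-element analysis that privacy
    threat modeling methodologies prescribe."""
    for name, kind in elements:
        for cat in categories:
            yield {"element": name, "type": kind, "threat": cat, "status": "to assess"}

if __name__ == "__main__":
    for t in elicit_threats(DFD_ELEMENTS, LINDDUN_CATEGORIES):
        print(f'{t["element"]:18} | {t["threat"]}')
```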

    Privacy-preserving Cooperative Services for Smart Traffic

    Get PDF
    Communication technology and the increasing intelligence of things enable new qualities of cooperation. However, it is often unclear how complex functionality can be realized in a reliable and abuse-resistant manner without harming users' privacy in the face of strong adversaries. This thesis focuses on three functional building blocks that are especially challenging in this respect: cooperative planning, geographic addressing, and the decentralized provision of pseudonymous identifiers.
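    The abstract does not describe how its pseudonymous identifiers are actually provisioned, so the following is only a toy baseline for the general idea: per-epoch pseudonyms derived from a long-term secret via an HMAC, so that pseudonyms from different epochs are unlinkable to anyone without the secret. The decentralized issuance the thesis studies is a different, richer mechanism; all names here are hypothetical.

```python
import hmac, hashlib, secrets

# Long-term secret held only by the vehicle/user (illustrative).
long_term_secret = secrets.token_bytes(32)

def pseudonym(epoch: int) -> str:
    """Derive a fresh pseudonymous identifier for the given epoch using an
    HMAC as a pseudorandom function; without the long-term secret,
    pseudonyms from different epochs cannot be linked to each other."""
    mac = hmac.new(long_term_secret, f"epoch:{epoch}".encode(), hashlib.sha256)
    return mac.hexdigest()[:16]

print(pseudonym(1), pseudonym(2))  # two unlinkable identifiers
```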

    Beyond a Fistful of Tumblers: Toward a Taxonomy of Ethereum-based Mixers

    Get PDF
    The role played by decentralised services in the obfuscation of crypto-asset transactions performed on transparent blockchains has increasingly captured the attention of regulators. This is exemplified by the headlines about the U.S. Treasury's sanctions on the Ethereum-based mixer Tornado Cash. Yet, despite the existing controversies on the use of mixers, the different functionalities of these information systems with an inherent dark side remain to be explored by the literature. So far, contributions primarily encompass technical works and studies that focus on the Bitcoin ecosystem. This paper puts forward a multi-layer taxonomy of the smart-contract-based, and therefore functionally richer, family of mixers on Ethereum. Our proposed taxonomy is grounded in (1) a review of existing literature, (2) an analysis of mixers' project documentation, (3) their corresponding smart contracts, and (4) expert interviews. The evaluation included the application of the taxonomy to two mixers, RAILGUN and zkBob. Our taxonomy represents a valuable tool for law enforcement, regulators, and other stakeholders to explore critical properties affecting the compliance and use of Ethereum-based mixers.
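    For readers unfamiliar with how such mixers obfuscate transactions, the following is a deliberately simplified sketch of the commitment/nullifier flow used by pool-based mixers such as Tornado Cash. Real deployments prove set membership with a zk-SNARK over a Merkle tree of commitments; here the proof is replaced by revealing the note's preimage, which provides no privacy at all and serves only to show the mechanics. All names are illustrative.

```python
import hashlib, secrets

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

commitments, spent_nullifiers = set(), set()

def deposit():
    """Deposit: publish a commitment on-chain, keep the note off-chain."""
    nullifier, secret = secrets.token_bytes(16), secrets.token_bytes(16)
    commitments.add(H(nullifier, secret))   # recorded on-chain
    return nullifier, secret                # the user's private "note"

def withdraw(nullifier, secret):
    """Withdraw to a fresh address; the nullifier hash prevents reuse.
    (A real mixer verifies a zero-knowledge membership proof instead of
    checking the preimage directly, which is what breaks linkability.)"""
    if H(nullifier, secret) not in commitments:
        raise ValueError("unknown note")
    nh = H(nullifier)
    if nh in spent_nullifiers:
        raise ValueError("note already spent")
    spent_nullifiers.add(nh)
    return "payout to fresh address"

note = deposit()
print(withdraw(*note))
```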

    Effective Privacy-Preserving Mechanisms for Vehicle-to-Everything Services

    Get PDF
    Owing to the advancement of wireless communication technologies, drivers can rely on smart connected vehicles to communicate with each other, roadside units, pedestrians, and remote service providers to enjoy a large number of vehicle-to-everything (V2X) services, including navigation, parking, ride hailing, and car sharing. These V2X services provide different functions that improve the travel experience and bring a host of benefits. In the real world, even without smart connected vehicles, drivers can use their smartphones and mobile applications to access V2X services and connect their smartphones to vehicles through interfaces such as iOS CarPlay and Android Auto; in this way, they can still enjoy V2X services through the modern car infotainment systems installed in vehicles. Most V2X services are data-centric and data-intensive, i.e., users have to upload personal data to a remote service provider, and the service provider can continuously collect a user's data to offer personalized services. However, the data acquired from users may include sensitive information, which may expose user privacy and cause serious consequences. To protect user privacy, a basic privacy-preserving mechanism, i.e., anonymization, can be applied in V2X services. Nevertheless, a big obstacle arises as well: user anonymization may affect the availability of V2X services. Once users are anonymous, they may behave selfishly or maliciously to break the functions of a V2X service without being detected, and the service may become unavailable. In short, there exists a conflict between privacy and availability, caused by the differing requirements of users and service providers. In this thesis, we identify three major conflicts between privacy and availability for V2X services, namely privacy vs. linkability, privacy vs. accountability, and privacy vs. reliability, and we propose and design three privacy-preserving mechanisms to resolve these conflicts.

    Firstly, the thesis investigates the conflict between privacy and linkability in an automated valet parking (AVP) service, where users can reserve a parking slot for their vehicles so that the vehicles can park themselves. As an optional privacy-preserving measure, users can anonymize their identities when booking a parking slot. Although user privacy is then protected by anonymization, malicious users can repeatedly send parking reservation requests to a parking service provider to make the system unavailable (a "Double-Reservation Attack"). To address this conflict, the thesis gives a security model that clearly defines the necessary privacy requirements and potential attacks in an AVP system, and then proposes a privacy-preserving reservation scheme based on BBS+ signatures and zero-knowledge proofs. In the proposed scheme, users remain anonymous because they only use a one-time unlinkable token, generated from their anonymous credential, to make a reservation. Meanwhile, by utilizing proxy re-signatures, the scheme guarantees that each user can hold only one token at a time, resisting the Double-Reservation Attack.

    Secondly, the thesis investigates the conflict between privacy and accountability in a car sharing service, where users can conveniently rent a shared car without human intervention. One basic requirement of a car sharing service is to check a user's identity to determine his/her validity and to hold the user accountable for improper behavior. If the service provider lets users hide their identities for the sake of privacy, accountability breaks down and the car sharing service becomes unavailable. To address this conflict, the thesis proposes a decentralized, privacy-preserving, and accountable car sharing architecture, where multiple dynamic validation servers are employed to build decentralized trust for users. Under this architecture, the thesis proposes a privacy-preserving identity management scheme that manages users' identities in a dynamic manner based on a verifiable secret sharing/redistribution technique, i.e., the validation servers that manage users' identities change as time advances. Moreover, the scheme enables a majority of the dynamic validation servers to recover a misbehaving user's identity while keeping honest users' identities confidential, achieving privacy preservation and accountability at the same time.

    Thirdly, the thesis investigates the conflict between privacy and reliability in a road condition monitoring service, where users report road conditions to a monitoring service provider to help construct a live map based on crowdsourcing. Usually, a reputation-based mechanism is applied in such a service to measure a user's reliability, but this mechanism cannot easily be combined with a privacy-preserving mechanism based on user anonymization: once anonymous, users can upload arbitrary reports that destroy the service quality and make the service unavailable. To address this conflict, the thesis proposes a privacy-preserving crowdsourcing-based road condition monitoring scheme. By leveraging homomorphic commitments and PS signatures, the scheme supports anonymous user reputation management without the assistance of any third-party authority. Furthermore, the thesis proposes several zero-knowledge proof protocols that let a user remain anonymous and unlinkable while still allowing the monitoring service provider to judge the reliability of the user's reports through his/her reputation score.

    To sum up, as more attention is paid to privacy issues, protecting user privacy in V2X services becomes ever more important. The thesis proposes three effective privacy-preserving mechanisms for V2X services that resolve the conflict between privacy and availability and can be conveniently integrated into current V2X applications, since no trusted third-party authority is required. The proposed approaches should be valuable for achieving practical privacy preservation in V2X services.
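    The second mechanism rests on threshold identity escrow via secret sharing. The sketch below shows only the core idea with plain Shamir secret sharing: a user's identity is split among validation servers so that only a majority can jointly recover it. The verifiability and the dynamic redistribution of shares that the thesis actually uses are omitted, and the prime field and parameters are illustrative.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime, large enough for a short identity value

def split(identity: int, n: int, t: int):
    """Split `identity` into n shares such that any t of them suffice to
    recover it: evaluate a random degree-(t-1) polynomial with constant
    term `identity` at the points x = 1..n."""
    coeffs = [identity] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the identity from any t
    shares; fewer than t shares reveal nothing about it."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

# Five validation servers; any majority of three can de-anonymize a
# misbehaving user, while two or fewer learn nothing.
shares = split(identity=123456789, n=5, t=3)
assert recover(shares[:3]) == 123456789
```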

    Credit Network Payment Systems: Security, Privacy and Decentralization

    Get PDF
    A credit network models transitive trust between users and enables transactions between arbitrary pairs of users. With their flexible design and robustness against intrusions, credit networks form the basis of Sybil-tolerant social networks, spam-resistant communication protocols, and payment settlement systems. For instance, the Ripple credit network is used today by various banks worldwide as their backbone for cross-currency transactions. Open credit networks, however, expose users’ credit links as well as the transaction volumes to the public. This raises a significant privacy concern, which has largely been ignored by the research on credit networks so far. In this state of affairs, this dissertation makes the following contributions. First, we perform a thorough study of the Ripple network that analyzes and characterizes its security and privacy issues. Second, we define a formal model for the security and privacy notions of interest in a credit network. This model lays the foundations for secure and privacy-preserving credit networks. Third, we build PathShuffle, the first protocol for atomic and anonymous transactions in credit networks that is fully compatible with the currently deployed Ripple and Stellar credit networks. Finally, we build SilentWhispers, the first provably secure and privacy-preserving transaction protocol for decentralized credit networks. SilentWhispers can be used to simulate Ripple transactions while preserving the expected security and privacy guarantees.
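    To make the transitive-trust model concrete, here is a minimal sketch of how a payment settles over a path of credit links in the standard credit-network model from the literature: the transaction succeeds only if every link on the path has enough unused credit, and settling shifts credit along each link. The data representation and function names are illustrative, not Ripple's actual interface.

```python
# Directed available credit: credit[(u, v)] is how much u can still route to v.
credit = {("alice", "bob"): 50, ("bob", "carol"): 30, ("carol", "dave"): 80}

def pay(path, amount):
    """Settle `amount` along `path` atomically: first check that every link
    has at least `amount` of available credit (the bottleneck condition),
    then shift credit on each link, increasing capacity in the reverse
    direction so later payments can flow back."""
    links = list(zip(path, path[1:]))
    if any(credit.get(l, 0) < amount for l in links):
        raise ValueError("insufficient credit on some link")
    for u, v in links:
        credit[(u, v)] -= amount
        credit[(v, u)] = credit.get((v, u), 0) + amount
    return "settled"

# alice pays dave 20 transitively, although they share no direct link.
print(pay(["alice", "bob", "carol", "dave"], 20))
```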

    MLCapsule: Guarded Offline Deployment of Machine Learning as a Service

    Full text link
    With the widespread use of machine learning (ML) techniques, ML as a service has become increasingly popular. In this setting, an ML model resides on a server and users can query it with their data via an API. However, if the user's input is sensitive, sending it to the server is undesirable and sometimes even legally impossible. Likewise, the service provider does not want to share the model by sending it to the client, in order to protect its intellectual property and its pay-per-query business model. In this paper, we propose MLCapsule, a guarded offline deployment of machine learning as a service. MLCapsule executes the model locally on the user's side, so the data never leaves the client. Meanwhile, MLCapsule offers the service provider the same level of control and security over its model as the commonly used server-side execution. In addition, MLCapsule is applicable to offline applications that require local execution. Beyond protecting against direct model access, we couple the secure offline deployment with defenses against advanced attacks on machine learning models such as model stealing, reverse engineering, and membership inference.
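    The deployment model can be pictured with the following conceptual sketch: the model runs on the client so raw inputs never leave the device, while the provider keeps control through per-query metering. The hardware-backed isolation MLCapsule actually relies on (a trusted execution environment) and its specific attack defenses are only stubbed here; the class and all names are illustrative, not the paper's API.

```python
class Capsule:
    """Stand-in for an enclave-protected capsule holding the provider's
    model on the client device (isolation is stubbed, not implemented)."""

    def __init__(self, decrypted_model, query_budget):
        self._model = decrypted_model   # would live only inside the enclave
        self._budget = query_budget     # preserves the pay-per-query model

    def predict(self, x):
        if self._budget <= 0:
            raise PermissionError("query budget exhausted; purchase more queries")
        self._budget -= 1
        # The input x is processed locally and never sent to the server.
        return self._model(x)

# Toy model: a threshold classifier, evaluated entirely client-side.
capsule = Capsule(decrypted_model=lambda x: sum(x) > 1.0, query_budget=3)
print(capsule.predict([0.4, 0.9]))
```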