
    Privacy-preserving network path validation

    The end-users communicating over a network path currently have no control over the path. For better quality of service, the source node often opts for a superior (or premium) network path to send packets to the destination node. However, the current Internet architecture provides no assurance that the packets indeed follow the designated path. Network path validation schemes address this issue and enable each node on a network path to validate whether each packet has followed the specific path so far. In this work, we introduce two notions of privacy -- path privacy and index privacy -- in the context of network path validation. We show that if a network path validation scheme does not satisfy these two properties, it is vulnerable to certain practical attacks that affect the reliability, neutrality, and quality of service offered by the underlying network. To the best of our knowledge, ours is the first work that addresses privacy issues related to network path validation. We design PrivNPV, a privacy-preserving network path validation protocol that satisfies both path privacy and index privacy. We discuss several attacks related to network path validation and how PrivNPV defends against them. Finally, we discuss the practicality of PrivNPV based on relevant parameters.
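
    To make the underlying problem concrete, the sketch below shows a plain chained-MAC path validation check in Python: the source derives one tag per hop, and each hop recomputes its tag from its own key and the previous tag carried in the packet header. This baseline is deliberately naive (the per-hop tags and their explicit positions leak exactly the path and index information PrivNPV is designed to hide), and all key handling and names are illustrative assumptions, not the PrivNPV construction.

```python
# Naive chained-MAC path validation sketch (illustrative; NOT PrivNPV).
import hmac, hashlib, os

def mac(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

path = ["R1", "R2", "R3"]
# Assumption: the source shares one symmetric key with every on-path node.
keys = {node: os.urandom(32) for node in path}

def build_proofs(packet: bytes) -> list:
    """Source: one tag per hop, chained so each tag also commits to the previous one."""
    proofs, prev = [], b""
    for node in path:
        prev = mac(keys[node], packet + prev)
        proofs.append(prev)
    return proofs

def validate_at(index: int, packet: bytes, proofs: list) -> bool:
    """Hop at position `index`: needs only its own key plus the tags carried so far."""
    prev = proofs[index - 1] if index > 0 else b""
    expected = mac(keys[path[index]], packet + prev)
    return hmac.compare_digest(expected, proofs[index])

pkt = b"payload"
prf = build_proofs(pkt)
print(all(validate_at(i, pkt, prf) for i in range(len(path))))  # True for an unmodified path
```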

    Synthetic sequence generator for recommender systems - memory biased random walk on sequence multilayer network

    Personalized recommender systems rely on each user's personal usage data in the system in order to assist in decision making. However, privacy policies protecting users' rights prevent these highly personal data from being made publicly available to a wider research audience. In this work, we propose a memory-biased random walk model on a multilayer sequence network as a generator of synthetic sequential data for recommender systems. We demonstrate the applicability of the synthetic data in training recommender system models for cases where privacy policies restrict clickstream publishing. Comment: The new updated version of the paper.
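
    A minimal sketch of the idea, assuming a simplified single-layer bias rule rather than the paper's multilayer model: build a transition graph from real clickstreams, then walk it while boosting the weight of recently visited items. Function and parameter names (memory_bias, memory_size) are illustrative, not the authors'.

```python
# Illustrative memory-biased random walk that emits synthetic item sequences
# from a transition graph learned on real sequences (simplified stand-in).
import random
from collections import defaultdict

def build_transitions(sequences):
    """Count item-to-item transitions observed in the real clickstreams."""
    trans = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            trans[a][b] += 1
    return trans

def synthetic_walk(trans, start, length=10, memory_bias=2.0, memory_size=3):
    """Walk the graph; candidates seen in the last `memory_size` steps get
    their transition weight multiplied by `memory_bias`."""
    walk = [start]
    for _ in range(length - 1):
        cur = walk[-1]
        if cur not in trans or not trans[cur]:
            break
        recent = set(walk[-memory_size:])
        items = list(trans[cur])
        weights = [trans[cur][i] * (memory_bias if i in recent else 1.0) for i in items]
        walk.append(random.choices(items, weights=weights, k=1)[0])
    return walk

real = [["home", "search", "item1", "item2"], ["home", "item1", "search", "item1"]]
print(synthetic_walk(build_transitions(real), "home", length=6))
```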

    Learning Privacy Preserving Encodings through Adversarial Training

    We present a framework to learn privacy-preserving encodings of images that inhibit inference of chosen private attributes while allowing recovery of other desirable information. Rather than simply inhibiting a given fixed pre-trained estimator, our goal is that an estimator be unable to learn to accurately predict the private attributes even with knowledge of the encoding function. We use a natural adversarial optimization-based formulation for this: training the encoding function against a classifier for the private attribute, with both modeled as deep neural networks. The key contribution of our work is a stable and convergent optimization approach that succeeds at learning an encoder with the desired properties, maintaining utility while inhibiting inference of private attributes, not just within the adversarial optimization but also against classifiers that are trained after the encoder is fixed. We adopt a rigorous experimental protocol for verification, wherein classifiers are trained exhaustively until saturation on the fixed encoders. We evaluate our approach on tasks of real-world complexity, learning high-dimensional encodings that inhibit detection of different scene categories, and find that it yields encoders that are resilient at maintaining privacy. Comment: To appear in WACV 201
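
    The naive version of this adversarial game can be sketched in a few lines of PyTorch: alternate between fitting a private-attribute classifier on frozen encodings and updating the encoder to preserve utility while maximizing that classifier's loss. The paper's contribution is precisely a more stable and convergent formulation than this plain negated-loss objective; the architectures, loss weights, and data below are toy placeholders.

```python
# Toy adversarial encoder training loop (naive formulation, not the paper's).
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 8))
util_head = nn.Linear(8, 2)   # head for the desirable (utility) task
priv_clf = nn.Linear(8, 2)    # adversary predicting the private attribute

opt_enc = torch.optim.Adam(list(enc.parameters()) + list(util_head.parameters()), lr=1e-3)
opt_priv = torch.optim.Adam(priv_clf.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

x = torch.randn(256, 32)              # placeholder inputs
y_util = torch.randint(0, 2, (256,))  # desirable label
y_priv = torch.randint(0, 2, (256,))  # private attribute

for step in range(200):
    # (a) adversary step: fit the private-attribute classifier on frozen encodings
    z = enc(x).detach()
    loss_priv = ce(priv_clf(z), y_priv)
    opt_priv.zero_grad()
    loss_priv.backward()
    opt_priv.step()

    # (b) encoder step: keep utility while maximizing the adversary's loss
    z = enc(x)
    loss_enc = ce(util_head(z), y_util) - ce(priv_clf(z), y_priv)
    opt_enc.zero_grad()
    loss_enc.backward()
    opt_enc.step()
```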

    CryptoMaze: Atomic Off-Chain Payments in Payment Channel Network

    Payment protocols developed to realize off-chain transactions in payment channel networks (PCNs) assume that the underlying routing algorithm transfers the payment via a single path. However, a single path may not have sufficient capacity to route a transaction, so splitting the payment across multiple paths becomes inevitable. If we run independent instances of the protocol on each path, the execution may fail on some of the paths, leading to a partial transfer of funds; the payer then has to reattempt the entire process for the residual amount. We propose a secure and privacy-preserving payment protocol, CryptoMaze. Instead of using independent paths, the funds are transferred from sender to receiver across the several payment channels responsible for routing, in a breadth-first fashion. Payments are resolved faster and at reduced setup cost compared to the existing state of the art. Correlation among the partial payments is captured, guaranteeing atomicity. Further, a two-party ECDSA signature is used to establish scriptless locks among the parties involved in the payment, which reduces space overhead by leveraging core Bitcoin scripts. We provide a formal model in the Universal Composability framework and state the privacy goals achieved by CryptoMaze. We compare the performance of our protocol with the existing single-path payment protocol Multi-hop HTLC, applied iteratively on one path at a time over several instances. We observe that CryptoMaze requires lower communication overhead and execution time, demonstrating efficiency and scalability. Comment: 30 pages, 9 figures, 1 table.
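
    The atomicity requirement (no subset of partial payments should be able to settle on its own) can be illustrated with an AMP-style toy in Python: the sender splits a claiming secret into per-path shares whose XOR is needed to open a common hash lock. CryptoMaze itself uses scriptless two-party ECDSA locks rather than hash locks; this sketch only shows why the partial payments must be correlated.

```python
# Toy AMP-style correlation of partial payments (illustrative; not CryptoMaze).
import hashlib, os
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def sender_setup(num_paths: int):
    """Sender draws one share per path; the claiming secret is their XOR."""
    shares = [os.urandom(32) for _ in range(num_paths)]
    secret = reduce(xor, shares)
    lock = hashlib.sha256(secret).digest()  # same lock placed on every path
    return shares, lock

def receiver_can_claim(collected_shares, lock) -> bool:
    """Receiver can open the locks only once every share has arrived."""
    candidate = reduce(xor, collected_shares)
    return hashlib.sha256(candidate).digest() == lock

shares, lock = sender_setup(3)
print(receiver_can_claim(shares[:2], lock))  # False: a strict subset cannot settle
print(receiver_can_claim(shares, lock))      # True: all partial payments settle together
```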

    How Far Removed Are You? Scalable Privacy-Preserving Estimation of Social Path Length with Social PaL

    Social relationships are a natural basis on which humans make trust decisions. Online Social Networks (OSNs) are increasingly used to let users base trust decisions on the existence and strength of social relationships. While most OSNs allow users to discover the length of the social path to other users, they do so in a centralized way, thus requiring users to rely on the service provider and reveal their interest in each other. This paper presents Social PaL, a system supporting the privacy-preserving discovery of arbitrary-length social paths between any two social network users. We overcome the bootstrapping problem encountered in all related prior work, demonstrating that Social PaL allows its users to find all paths of length two and to discover a significant fraction of longer paths, even when only a small fraction of OSN users is in the Social PaL system -- e.g., discovering 70% of all paths with only 40% of the users. We implement Social PaL using a scalable server-side architecture and a modular Android client library, allowing developers to seamlessly integrate it into their apps. Comment: A preliminary version of this paper appears in ACM WiSec 2015. This is the full version.
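
    The core observation behind length-two path discovery can be sketched as a set intersection over friend-list fingerprints: a non-empty intersection implies a common friend and hence a social path of length two. Social PaL performs this with private set intersection and server-issued "ersatz nodes" so that neither party learns the other's friend list; the plain hashing below is only illustrative and is not privacy-preserving against brute force.

```python
# Toy length-two social path check via friend-list fingerprints (illustrative).
import hashlib

def fingerprints(friend_ids):
    """Hash each friend identifier; a real system would use PSI, not plain hashes."""
    return {hashlib.sha256(f.encode()).hexdigest() for f in friend_ids}

alice_friends = {"bob", "carol", "dave"}
eve_friends = {"carol", "frank"}

common = fingerprints(alice_friends) & fingerprints(eve_friends)
if common:
    print(f"social path of length 2 exists ({len(common)} common friend(s))")
else:
    print("no length-2 path found from these friend lists")
```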

    Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers

    Machine Learning (ML) algorithms are used to train computers to perform a variety of complex tasks and improve with experience. Computers learn how to recognize patterns, make unintended decisions, or react to a dynamic environment. Certain trained machines may be more effective than others because they are based on more suitable ML algorithms or because they were trained through superior training sets. Although ML algorithms are known and publicly released, training sets may not be reasonably ascertainable and, indeed, may be guarded as trade secrets. While much research has been performed on the privacy of the elements of training sets, in this paper we focus our attention on ML classifiers and on the statistical information that can be unconsciously or maliciously revealed from them. We show that it is possible to infer unexpected but useful information from ML classifiers. In particular, we build a novel meta-classifier and train it to hack other classifiers, obtaining meaningful information about their training sets. This kind of information leakage can be exploited, for example, by a vendor to build more effective classifiers or to simply acquire trade secrets from a competitor's apparatus, potentially violating its intellectual property rights.
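
    A hedged sketch of the meta-classifier idea, using toy data and scikit-learn rather than the paper's models: train many "shadow" classifiers on datasets that do or do not exhibit some property P, use their learned parameters as feature vectors, and fit a meta-classifier that predicts P for a previously unseen classifier. The property, data, and models below are placeholders.

```python
# Toy property-inference attack via shadow classifiers and a meta-classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_dataset(has_property: bool, n=200, d=10):
    """Toy property P: when P holds, feature 0 also carries label information."""
    X = rng.normal(size=(n, d))
    y = rng.integers(0, 2, size=n)
    X[:, 1] += y            # feature 1 is always informative
    if has_property:
        X[:, 0] += y        # under P, feature 0 correlates with the label too
    return X, y

def shadow_features(has_property: bool):
    """Train a shadow classifier and return its parameters as a feature vector."""
    X, y = make_dataset(has_property)
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return np.concatenate([clf.coef_.ravel(), clf.intercept_])

# Build the meta-training set from many shadow classifiers.
meta_X = np.array([shadow_features(p) for p in ([True] * 30 + [False] * 30)])
meta_y = np.array([1] * 30 + [0] * 30)
meta_clf = LogisticRegression(max_iter=1000).fit(meta_X, meta_y)

# Attack a "target" classifier whose training data secretly had property P.
target = shadow_features(has_property=True)
print("meta-classifier says the training set had P:", bool(meta_clf.predict([target])[0]))
```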