6 research outputs found

    A Decentralized Information Marketplace Preserving Input and Output Privacy

    Data-driven applications are engines of economic growth and essential for progress in many domains. The data involved is often of a personal nature. We propose a decentralized information marketplace where data held by data providers, such as individual users, can be made available for computation to data consumers, such as government agencies, research institutes, or companies that want to derive actionable insights or train machine learning models with the data, while (1) protecting input privacy, (2) protecting output privacy, and (3) compensating data providers for making their sensitive information available for secure computation. We enable this privacy-preserving data exchange through a novel and carefully designed combination of a blockchain that supports smart contracts and two privacy-enhancing technologies: (1) secure multi-party computation, and (2) robust differential privacy guarantees.

    An End-to-End Framework for Private DGA Detection as a Service

    Domain Generation Algorithms (DGAs) are used by malware to generate pseudorandom domain names to establish communication between infected bots and Command and Control servers. While DGAs can be detected by machine learning (ML) models with great accuracy, offering DGA detection as a service raises privacy concerns when it requires network administrators to disclose their DNS traffic to the service provider. We propose the first end-to-end framework for privacy-preserving classification as a service of domain names into DGA (malicious) or non-DGA (benign) domains. We achieve this through a combination of two privacy-enhancing technologies (PETs), namely secure multi-party computation (MPC) and differential privacy (DP). Through MPC, our framework enables an enterprise network administrator to outsource the problem of classifying a DNS domain as DGA or non-DGA to an external organization without revealing any information about the domain name. Moreover, the service provider's ML model used for DGA detection is never revealed to the network administrator. Furthermore, by using DP, we also ensure that the classification result cannot be used to learn information about individual entries of the training data. Finally, we leverage the benefits of quantization of deep learning models in the context of MPC to achieve efficient, secure DGA detection. We demonstrate a significant speed-up, resulting in a 15% reduction in detection runtime without reducing accuracy.
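    To make the threat concrete, a DGA derives its domains deterministically from a seed shared between the malware and its operator, so both sides can compute the same rendezvous names each day. A toy sketch (the algorithm, hash choice, and names here are purely illustrative, not taken from any real malware family):

```python
import hashlib

def toy_dga(seed: str, day: int, count: int = 3, tld: str = ".com") -> list[str]:
    """Illustrative DGA: derive pseudorandom domain names from a shared seed
    and the current date, so bots and the C2 server agree on the same list."""
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}-{day}-{i}".encode()).hexdigest()
        # Map the first 12 hex digits of the digest to lowercase letters.
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:12])
        domains.append(label + tld)
    return domains

print(toy_dga("k3y", 20240101))
```

    Because the output depends only on the seed and date, a defender who recovers the seed can predict (and sinkhole) the domains, which is exactly why detection models instead learn the statistical signature of such pseudorandom labels.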

    Privacy-Preserving Video Classification with Convolutional Neural Networks

    Thesis (Master's)--University of Washington, 2020. Video classification using deep learning is widely used in many applications such as facial recognition, gesture analysis, activity recognition, and emotion analysis. Many applications capture user videos for classification purposes, which raises concerns regarding privacy and potential misuse of the videos. Once videos are available on the internet or with service providers, they can be misused, for example to generate fake videos or to mine information from them that goes beyond the professed scope of the original application or service. The service provider, on the other hand, typically cannot provide the trained video classification model to be run on the client's side either, due to resource constraints, proprietary concerns, and the risk of adversarial attacks. There is a need for technology to perform video classification in a privacy-preserving manner, i.e. such that the client does not have to share their videos with anyone in unencrypted form, and the service provider does not have to reveal their model parameters in plaintext. In this thesis, we propose privacy-preserving single frame-based video classification with a pre-trained convolutional neural network based on Secure Multi-Party Computation (MPC). The pipeline for securely classifying a video involves three major protocols: oblivious selection of frames in a video, secure classification of the individual frames using existing protocols for image classification, and secure label aggregation across frames to obtain a single class label for the video. We perform run-time experiments for the use case of emotion detection in video to demonstrate the feasibility of our proposed methods in practice.
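    The final protocol in the pipeline, secure label aggregation, amounts to a majority vote over the per-frame predictions. A plaintext sketch of that logic (the function name and tie-breaking rule are our own illustration; in the actual pipeline the vote is computed over secret-shared values, never in the clear):

```python
from collections import Counter

def aggregate_labels(frame_labels: list[int]) -> int:
    """Majority vote across per-frame class labels. In the MPC pipeline this
    aggregation runs on secret shares; here it is shown in plaintext."""
    counts = Counter(frame_labels)
    # Break ties deterministically by preferring the smaller label index.
    best_label, _ = max(counts.items(), key=lambda kv: (kv[1], -kv[0]))
    return best_label

# Five frames of an emotion-detection run: label 2 wins the vote.
assert aggregate_labels([2, 2, 0, 2, 1]) == 2
```

    Doing this vote inside MPC matters because even per-frame labels can leak information (e.g. which frames were ambiguous), so only the single aggregated label is revealed.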

    Privacy-preserving video classification with convolutional neural networks

    Many video classification applications require access to personal data, thereby posing an invasive security risk to the users' privacy. We propose a privacy-preserving implementation of video classification with convolutional neural networks, based on the single-frame method, that allows a party to infer a label from a video without requiring the video owner to disclose their video to other entities in an unencrypted manner. Similarly, our approach removes the need for the classifier owner to reveal their model parameters to outside entities in plaintext. To this end, we combine existing Secure Multi-Party Computation (MPC) protocols for private image classification with our novel MPC protocols for oblivious single-frame selection and secure label aggregation across frames. The result is an end-to-end privacy-preserving video classification pipeline. We evaluate our proposed solution in an application for private human emotion recognition. Our results across a variety of security settings, spanning honest- and dishonest-majority configurations of the computing parties, and for both passive and active adversaries, demonstrate that videos can be classified with state-of-the-art accuracy and without leaking sensitive user information.
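    The MPC protocols referred to here typically build on secret sharing, in which each value is split so that no single party learns anything on its own. A minimal two-party additive-sharing sketch (illustrative only; the protocols evaluated in the paper involve more parties, different sharing schemes, and far richer operations than addition):

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a public prime

def share(x: int) -> tuple[int, int]:
    """Split x into two additive shares; either share alone is uniformly
    random and reveals nothing about x."""
    r = random.randrange(PRIME)
    return r, (x - r) % PRIME

def reconstruct(s0: int, s1: int) -> int:
    """Only the sum of both shares recovers the secret."""
    return (s0 + s1) % PRIME

a0, a1 = share(25)
b0, b1 = share(17)
# Each party adds its local shares; addition needs no communication at all.
c0, c1 = (a0 + b0) % PRIME, (a1 + b1) % PRIME
assert reconstruct(c0, c1) == 42
```

    Secure multiplication (and hence CNN inference) requires interaction between the parties, which is where the different honest/dishonest-majority and passive/active settings mentioned above lead to different protocol costs.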

    Training Differentially Private Models with Secure Multiparty Computation

    We address the problem of learning a machine learning model from training data that originates at multiple data owners while providing formal privacy guarantees regarding the protection of each owner's data. Existing solutions based on Differential Privacy (DP) achieve this at the cost of a drop in accuracy. Solutions based on Secure Multiparty Computation (MPC) do not incur such accuracy loss but leak information when the trained model is made publicly available. We propose an MPC solution for training DP models. Our solution relies on an MPC protocol for model training and an MPC protocol for perturbing the trained model coefficients with Laplace noise in a privacy-preserving manner. The resulting MPC+DP approach achieves higher accuracy than a pure DP approach while providing the same formal privacy guarantees. Our work obtained first place in the iDASH2021 Track III competition on confidential computing for secure genome analysis.
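    In the clear, perturbing trained coefficients with Laplace noise is the classic Laplace mechanism of differential privacy. A plaintext sketch (function name and parameters are illustrative; in the paper the noise is sampled and added inside MPC, on secret shares, so no party ever sees the unperturbed model):

```python
import random

def laplace_mechanism(coeffs: list[float], epsilon: float,
                      sensitivity: float) -> list[float]:
    """Release model coefficients perturbed with Laplace(0, sensitivity/epsilon)
    noise, the standard mechanism for epsilon-DP release of numeric values."""
    scale = sensitivity / epsilon
    # A Laplace sample is the difference of two i.i.d. exponential samples.
    return [c + random.expovariate(1 / scale) - random.expovariate(1 / scale)
            for c in coeffs]

weights = [0.13, -2.4, 0.98]  # hypothetical trained coefficients
print(laplace_mechanism(weights, epsilon=1.0, sensitivity=0.1))
```

    Smaller epsilon means a larger noise scale and stronger privacy; the point of the MPC+DP combination is that this is the only noise added, rather than noise injected at every data owner.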