
    Towards the AlexNet Moment for Homomorphic Encryption: HCNN, the First Homomorphic CNN on Encrypted Data with GPUs

    Deep Learning as a Service (DLaaS) stands as a promising solution for cloud-based inference applications. In this setting, the cloud holds a pre-trained model while the user has samples on which she wants to run the model. The biggest concern with DLaaS is user privacy when the input samples are sensitive data. We provide an efficient privacy-preserving system by employing high-end technologies such as Fully Homomorphic Encryption (FHE), Convolutional Neural Networks (CNNs) and Graphics Processing Units (GPUs). FHE, with its widely known ability to compute on encrypted data, empowers a wide range of privacy-sensitive applications, but at a high cost, since it requires enormous computing power. In this paper, we show how to accelerate CNN inference on encrypted data with GPUs. We evaluated two CNNs to homomorphically classify the MNIST and CIFAR-10 datasets. Our solution achieves a sufficient security level (> 80 bits) and reasonable classification accuracy: 99% on MNIST and 77.55% on CIFAR-10. In terms of latency, we can classify an image in 5.16 seconds for MNIST and 304.43 seconds for CIFAR-10. Our system can also classify a batch of images (> 8,000) without extra overhead.
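    To make the encrypted-inference workflow concrete, the sketch below runs a single dense layer with a square activation on an encrypted input using the open-source TenSEAL library (CKKS scheme). It is only an illustration of computing on encrypted inputs with plaintext model weights; it is not the GPU-accelerated system described above, and the encryption parameters, weights, and layer sizes are assumptions made for the example.

```python
# Minimal sketch (not the paper's system): one encrypted dense layer plus a
# square activation with TenSEAL/CKKS. Parameters and weights are illustrative.
import tenseal as ts

# Client side: create a CKKS context and encrypt an input sample.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],  # assumed, not the paper's parameters
)
context.global_scale = 2 ** 40
context.generate_galois_keys()  # needed for the rotations inside dot products

sample = [0.1, 0.5, -0.3, 0.8]          # toy "image" features
enc_sample = ts.ckks_vector(context, sample)

# Server side: plaintext weights for a 4 -> 2 dense layer (hypothetical values).
weights = [[0.2, -0.1, 0.4, 0.3],
           [-0.5, 0.6, 0.1, -0.2]]
bias = [0.05, -0.07]

# Each output neuron is a dot product of the encrypted input with plain weights.
enc_logits = [enc_sample.dot(w) + b for w, b in zip(weights, bias)]

# FHE-friendly activation: squaring instead of ReLU, as in many HE CNN designs.
enc_activated = [z * z for z in enc_logits]

# Client side: decrypt the result.
print([round(z.decrypt()[0], 4) for z in enc_activated])
```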

    SoK: Privacy Preserving Machine Learning using Functional Encryption: Opportunities and Challenges

    With the advent of functional encryption, new possibilities for computation on encrypted data have arisen. Functional encryption (FE) enables data owners to grant third parties access to perform specified computations without disclosing their inputs; unlike Fully Homomorphic Encryption, it also returns computation results in the clear. The ubiquity of machine learning has led to the collection of massive amounts of private data in cloud computing environments, raising privacy concerns and the need for more private and secure computing solutions. Numerous efforts have been made in privacy-preserving machine learning (PPML) to address these security and privacy concerns, with approaches based on fully homomorphic encryption (FHE), secure multiparty computation (SMC), and, more recently, functional encryption. However, FE-based PPML is still in its infancy and has not yet received as much attention as FHE-based approaches. In this paper, we provide a systematization of FE-based PPML work, summarizing the state of the art in the literature. We focus on inner-product-FE- and quadratic-FE-based machine learning models for PPML applications, analyze the performance and usability of the available FE libraries and their applications to PPML, and discuss potential directions for FE-based PPML approaches. To the best of our knowledge, this is the first work to systematize FE-based PPML approaches.
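    To illustrate the capability that inner-product FE provides (a functional key for a vector y lets an evaluator learn only the inner product of x and y, in the clear, from an encryption of x), the toy sketch below implements a DDH-style inner-product FE scheme in plain Python. The group parameters are far too small to be secure and none of the surveyed FE libraries are used; it is meant only to show the Setup/Encrypt/KeyGen/Decrypt interface and why decryption recovers exactly the inner product.

```python
# Toy DDH-style inner-product functional encryption, for illustration only.
# Parameters are insecure and no real FE library is used.
import random

p = 2_147_483_647          # toy prime modulus
g = 7                      # assumed generator for the toy example
n = 4                      # vector length

def setup():
    """Master secret key s and public key h_i = g^{s_i} mod p."""
    msk = [random.randrange(1, p - 1) for _ in range(n)]
    mpk = [pow(g, s_i, p) for s_i in msk]
    return msk, mpk

def encrypt(mpk, x):
    """Encrypt integer vector x component-wise: (g^r, h_i^r * g^{x_i})."""
    r = random.randrange(1, p - 1)
    c0 = pow(g, r, p)
    cts = [(pow(h_i, r, p) * pow(g, x_i, p)) % p for h_i, x_i in zip(mpk, x)]
    return c0, cts

def keygen(msk, y):
    """Functional key for y: sk_y = <s, y>, computed over the integers."""
    return sum(s_i * y_i for s_i, y_i in zip(msk, y))

def decrypt(c0, cts, sk_y, y, max_result=10_000):
    """Recover g^{<x,y>} and solve the small discrete log by brute force."""
    num = 1
    for c_i, y_i in zip(cts, y):
        num = (num * pow(c_i, y_i, p)) % p
    target = (num * pow(pow(c0, sk_y, p), p - 2, p)) % p   # divide by c0^{sk_y}
    acc = 1
    for v in range(max_result + 1):
        if acc == target:
            return v
        acc = (acc * g) % p
    raise ValueError("inner product out of brute-force range")

msk, mpk = setup()
x = [3, 1, 4, 1]                      # data owner's private vector
y = [2, 0, 5, 1]                      # evaluator's query vector
c0, cts = encrypt(mpk, x)
sk_y = keygen(msk, y)
print(decrypt(c0, cts, sk_y, y))      # 2*3 + 0*1 + 5*4 + 1*1 = 27
```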

    Systematizing Genome Privacy Research: A Privacy-Enhancing Technologies Perspective

    Rapid advances in human genomics are enabling researchers to gain a better understanding of the role of the genome in our health and well-being, stimulating hope for more effective and cost-efficient healthcare. However, this also prompts a number of security and privacy concerns stemming from the distinctive characteristics of genomic data. To address them, a new research community has emerged and produced a large number of publications and initiatives. In this paper, we rely on a structured methodology to contextualize and provide a critical analysis of the current knowledge on privacy-enhancing technologies used for testing, storing, and sharing genomic data, using a representative sample of the work published in the past decade. We identify and discuss limitations, technical challenges, and issues faced by the community, focusing in particular on those that are inherently tied to the nature of the problem and are harder for the community alone to address. Finally, we report on the importance and difficulty of the identified challenges based on an online survey of genome data privacy experts. Comment: To appear in the Proceedings on Privacy Enhancing Technologies (PoPETs), Vol. 2019, Issue

    BFV-Based Homomorphic Encryption for Privacy-Preserving CNN Models

    Medical data is frequently highly sensitive in terms of data privacy and security. Federated learning, a type of machine learning technique, has been used to increase the privacy and security of medical data: the training data is distributed across numerous machines, and the learning process is collaborative. However, there are numerous privacy attacks on deep learning (DL) models that attackers can use to obtain sensitive information, so the DL model should be safeguarded from adversarial attacks, particularly in medical data applications. Homomorphic-encryption-based protection of the model from an adversarial collaborator is one answer to this challenge. This research presents a privacy-preserving federated learning system for medical data that uses homomorphic encryption. The proposed technique employs a secure multi-party computation protocol to safeguard the deep learning model from adversaries. In this paper, the proposed approach is evaluated in terms of model performance using a real-world medical dataset.
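    As a rough illustration of the homomorphic building block (not the paper's full federated protocol), the snippet below uses the TenSEAL Python library with the BFV scheme to encrypt integer-quantized model updates from two hypothetical clients and add them under encryption; the scheme parameters and the quantization scale are assumptions made for the example.

```python
# Minimal BFV sketch with TenSEAL: add integer-quantized model updates under
# encryption. Scheme parameters and the quantization scale are illustrative only.
import tenseal as ts

SCALE = 1000  # assumed fixed-point scale: 0.123 -> 123

# Key/context generation (in a real deployment the secret key stays with its owner).
context = ts.context(ts.SCHEME_TYPE.BFV,
                     poly_modulus_degree=4096,
                     plain_modulus=1032193)

def quantize(update):
    return [int(round(w * SCALE)) for w in update]

# Two hypothetical clients encrypt their local updates.
enc_a = ts.bfv_vector(context, quantize([0.12, -0.05, 0.33]))
enc_b = ts.bfv_vector(context, quantize([-0.02, 0.10, 0.25]))

# Aggregator adds the ciphertexts without seeing either client's update.
enc_sum = enc_a + enc_b

# The key holder decrypts and rescales the aggregated update.
aggregated = [v / SCALE for v in enc_sum.decrypt()]
print(aggregated)   # approximately [0.10, 0.05, 0.58]
```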

    Federated Learning for Protecting Medical Data Privacy

    Deep learning is one of the most advanced machine learning techniques, and its prominence has increased in recent years. Language processing, predictions in medical research, and pattern recognition are a few of the numerous fields in which it is widely utilized. Numerous modern medical applications benefit greatly from the implementation of machine learning (ML) models and the disruptive innovations they bring to the entire modern healthcare system. Deep learning is extensively used for constructing accurate and robust statistical models from large volumes of medical data collected from a variety of sources in contemporary healthcare systems [1]. Due to privacy concerns that restrict access to medical data, these deep learning techniques have yet to fully exploit medical data despite their immense potential benefits. Many data proprietors cannot benefit from large-scale deep learning because of the privacy and confidentiality concerns associated with data sharing. Without access to sufficient data, however, deep learning will not realize its maximum potential when transitioning from the research phase to clinical practice [2]. This project addresses the problem by applying federated learning and encrypted computation techniques, such as secure multi-party computation, to text data. SyferText, a Python library for privacy-preserving Natural Language Processing that leverages PySyft to conduct federated learning, is used in this context.
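    As a conceptual sketch of the federated training loop described above (written in plain NumPy rather than SyferText/PySyft, whose APIs are not detailed in this abstract), each site trains on its private data and only model parameters, never the raw text or records, leave the site:

```python
# Conceptual federated averaging sketch (plain NumPy, not SyferText/PySyft):
# each hospital trains locally; only weight vectors are shared and averaged.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local logistic-regression training on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (preds - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, site_data):
    """Server averages the locally trained weights (unweighted FedAvg)."""
    local_ws = [local_update(global_w, X, y) for X, y in site_data]
    return np.mean(local_ws, axis=0)

rng = np.random.default_rng(0)
# Two hypothetical hospitals with private feature/label data.
sites = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50).astype(float))
         for _ in range(2)]

w = np.zeros(3)
for round_id in range(10):
    w = federated_round(w, sites)
print(w)
```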

    Secure Federated Learning with a Homomorphic Encryption Model

    Federated learning (FL) offers collaborative machine learning across decentralized devices while safeguarding data privacy. However, data security and privacy remain key concerns. This paper introduces "Secure Federated Learning with a Homomorphic Encryption Model," which addresses these challenges by integrating homomorphic encryption into FL. The model starts by initializing a global machine learning model and generating a homomorphic encryption key pair, with the public key shared among FL participants. Using this public key, participants then collect, preprocess, and encrypt their local data. During FL training rounds, participants decrypt the global model, compute local updates on encrypted data, encrypt these updates, and securely send them to the aggregator. The aggregator homomorphically combines the updates without revealing participant data and forwards the encrypted aggregated update to the global model owner. In the global model update step, the owner decrypts the aggregated update using the private key, updates the global model, encrypts it with the public key, and shares the encrypted global model with the FL participants. With optional model evaluation, training can iterate for several rounds or until convergence. This model offers a robust solution to FL's data privacy and security issues, with versatile applications across domains. The paper presents the core model components, advantages, and potential domain-specific implementations, making significant strides in addressing FL's data privacy concerns.
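    The sketch below walks through one such round with the TenSEAL library (CKKS scheme): the owner generates the key pair, participants encrypt hypothetical local updates under the public context, the aggregator averages the ciphertexts without decrypting them, and the owner decrypts the aggregate with the private key. All parameters and values are illustrative assumptions, not the paper's implementation.

```python
# Sketch of one FL round with homomorphic aggregation (TenSEAL / CKKS).
# Illustrative only: parameters, model size, and update values are assumptions.
import tenseal as ts

# --- Global model owner: generate the key pair and a public context copy. ---
owner_ctx = ts.context(ts.SCHEME_TYPE.CKKS,
                       poly_modulus_degree=8192,
                       coeff_mod_bit_sizes=[60, 40, 40, 60])
owner_ctx.global_scale = 2 ** 40
secret_key = owner_ctx.secret_key()

public_ctx = owner_ctx.copy()
public_ctx.make_context_public()        # shared with FL participants

# --- Participants: encrypt their local model updates with the public key. ---
local_updates = [[0.02, -0.01, 0.05],   # hypothetical per-participant updates
                 [0.01, 0.03, -0.02],
                 [-0.03, 0.00, 0.04]]
encrypted_updates = [ts.ckks_vector(public_ctx, u) for u in local_updates]

# --- Aggregator: combine updates homomorphically without seeing any of them. ---
enc_aggregate = encrypted_updates[0]
for enc_u in encrypted_updates[1:]:
    enc_aggregate = enc_aggregate + enc_u
enc_aggregate = enc_aggregate * (1.0 / len(encrypted_updates))  # average

# --- Owner: decrypt the aggregate and apply it to the global model. ---
global_model = [0.5, -0.2, 0.1]
avg_update = enc_aggregate.decrypt(secret_key)
global_model = [w + du for w, du in zip(global_model, avg_update)]
print(global_model)
```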