    Confidential Boosting with Random Linear Classifiers for Outsourced User-generated Data

    User-generated data is crucial to predictive modeling in many applications. With a web, mobile, or wearable interface, a data owner can continuously record data generated by distributed users and build predictive models from it to improve operations, services, and revenue. Due to the large size and evolving nature of user data, data owners may rely on public cloud service providers (Cloud) for storage and computation scalability. Exposing sensitive user-generated data and advanced analytic models to Cloud raises privacy concerns. We present a confidential learning framework, SecureBoost, for data owners who want to learn predictive models from aggregated user-generated data while offloading the storage and computational burden to Cloud, without having to worry about protecting the sensitive data. SecureBoost allows users to submit encrypted or randomly masked data directly to the designated Cloud. Our framework uses random linear classifiers (RLCs) as the base classifiers in the boosting framework, which dramatically simplifies the design of the proposed confidential boosting protocols while preserving model quality. A Cryptographic Service Provider (CSP) assists Cloud's processing, reducing the complexity of the protocol constructions. We present two constructions of SecureBoost, HE+GC and SecSh+GC, which combine homomorphic encryption, garbled circuits, and random masking to achieve both security and efficiency. For a boosted model, Cloud learns only the RLCs and the CSP learns only the weights of the RLCs; the data owner collects the two parts to obtain the complete model. We conduct extensive experiments to understand the quality of RLC-based boosting and the cost distribution of the constructions. Our results show that SecureBoost can efficiently learn high-quality boosting models from protected user-generated data.
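
    The boosting scheme itself can be sketched in the clear. Below is a minimal plaintext simulation of AdaBoost-style boosting with random linear classifiers as weak learners; the encrypted protocol between Cloud and CSP is omitted, and parameters such as `n_rounds` and `n_candidates` are illustrative choices, not values from the paper.

```python
# Plaintext sketch of boosting with random linear classifiers (RLCs).
# Each round draws candidate hyperplanes at random and keeps the one
# with the lowest weighted error; only the per-round weights (alpha)
# depend on the data, which is what makes RLCs attractive as base
# learners in a confidential protocol.
import numpy as np

def rlc_boost(X, y, n_rounds=20, n_candidates=50, seed=0):
    """AdaBoost-style boosting; y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    D = np.full(n, 1.0 / n)              # sample weights
    model = []                           # list of (w, b, alpha)
    for _ in range(n_rounds):
        best = None
        for _ in range(n_candidates):
            w = rng.standard_normal(d)
            b = rng.standard_normal()
            pred = np.sign(X @ w + b)
            err = D[pred != y].sum()
            if err > 0.5:                # flipping the RLC halves the error
                w, b, pred, err = -w, -b, -pred, 1.0 - err
            if best is None or err < best[3]:
                best = (w, b, pred, err)
        w, b, pred, err = best
        alpha = 0.5 * np.log((1 - max(err, 1e-12)) / max(err, 1e-12))
        D *= np.exp(-alpha * y * pred)   # reweight misclassified samples up
        D /= D.sum()
        model.append((w, b, alpha))
    return model

def predict(model, X):
    score = sum(alpha * np.sign(X @ w + b) for w, b, alpha in model)
    return np.sign(score)
```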

    Privacy-preserving Distributed Analytics: Addressing the Privacy-Utility Tradeoff Using Homomorphic Encryption for Peer-to-Peer Analytics

    Data is becoming increasingly valuable, but concerns over its security and privacy have limited its utility in analytics. Researchers and practitioners constantly face a privacy-utility tradeoff, where addressing the former often comes at the cost of data utility and accuracy. In this paper, we draw upon the mathematical properties of partially homomorphic encryption, a form of asymmetric-key encryption, to transform raw data from multiple sources into secure yet structure-preserving encrypted data for use in statistical models, without loss of accuracy. We contribute to the literature by: i) proposing a method for secure and privacy-preserving analytics and illustrating its utility by implementing a secure and privacy-preserving version of the Maximum Likelihood Estimator, “s-MLE”, and ii) developing a web-based framework for privacy-preserving peer-to-peer analytics with distributed datasets. Our study has widespread applications in industries including healthcare, finance, and e-commerce, and has multi-faceted implications for academics, businesses, and governments.
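
    To make the homomorphic property concrete, here is a toy, self-contained Paillier sketch of the additive homomorphism such schemes provide: parties encrypt values independently, and an untrusted aggregator combines ciphertexts so that only the sum is ever decrypted. The hardcoded primes are for illustration only and this is not the paper's s-MLE implementation; real deployments use moduli of 2048 bits or more and a vetted library.

```python
# Toy Paillier: E(m1) * E(m2) mod n^2 decrypts to m1 + m2 mod n.
import math, random

p, q = 293, 433                       # toy primes (NOT secure)
n, n2 = p * q, (p * q) ** 2
g = n + 1                             # standard choice g = n + 1
lam = math.lcm(p - 1, q - 1)
L = lambda x: (x - 1) // n
mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse, Python 3.8+

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:        # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

def add(c1, c2):                      # homomorphic addition
    return (c1 * c2) % n2

# Three parties encrypt private values; anyone can aggregate blindly.
values = [17, 25, 8]
agg = encrypt(0)
for v in values:
    agg = add(agg, encrypt(v))
assert decrypt(agg) == sum(values)    # only the total (50) is revealed
```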

    Exploring Privacy-Preserving Disease Diagnosis: A Comparative Analysis

    In the healthcare sector, data is considered a valuable asset, with enormous amounts generated in the form of patient records and disease-related information. Machine learning techniques enable the analysis of extensive datasets, unveiling hidden patterns in diseases, facilitating personalized treatments, and forecasting potential health issues. However, the growth of online diagnosis and prediction still faces challenges related to information security and privacy, as disease diagnosis technologies rely on large volumes of clinical records and sensitive patient data. Hence, it becomes imperative to prioritize the development of innovative methodologies that not only advance the accuracy and efficiency of disease prediction but also ensure the highest standards of privacy protection. This requires collaborative efforts between researchers, healthcare practitioners, and policymakers to establish a comprehensive framework that addresses the evolving landscape of healthcare data while safeguarding individual privacy. Addressing this constraint, numerous researchers integrate privacy-preservation measures with disease prediction techniques to develop systems capable of diagnosing diseases without compromising the confidentiality of sensitive information. This survey conducts a comparative analysis of privacy-preserving techniques employed in disease diagnosis and prediction. It explores existing methodologies across various domains, assessing their efficacy and trade-offs in maintaining data confidentiality while optimizing diagnostic accuracy. The review highlights the need for robust privacy measures in disease prediction, the shortcomings of existing privacy-preserving diagnosis techniques, and promising directions for future research at this critical intersection of healthcare and privacy preservation.

    Privacy-Preserving Chaotic Extreme Learning Machine with Fully Homomorphic Encryption

    Machine learning and deep learning models require large amounts of training data, which in some scenarios includes sensitive information, such as customer records, that organizations may be hesitant to outsource for model building. Privacy-preserving techniques such as Differential Privacy, Homomorphic Encryption, and Secure Multi-Party Computation can be integrated with machine learning and deep learning algorithms to protect both the data and the model. In this paper, we propose a Chaotic Extreme Learning Machine and its encrypted form using Fully Homomorphic Encryption, in which the weights and biases are generated using a logistic map instead of a uniform distribution. Our proposed method performs comparably to or better than the traditional Extreme Learning Machine on most of the datasets.
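
    As a rough illustration of the chaotic weight generation, the sketch below trains a plaintext Extreme Learning Machine whose hidden-layer weights and biases are drawn from a logistic map rather than a uniform distribution; the FHE-encrypted evaluation is omitted, and the map parameters (`r=3.99`, `x0=0.1`) and hidden-layer size are illustrative choices, not values from the paper.

```python
# Plaintext chaotic ELM: random hidden layer from a logistic map,
# trained output weights via least squares (Moore-Penrose pseudoinverse).
import numpy as np

def logistic_map(count, x0=0.1, r=3.99):
    """Generate `count` values of x_{t+1} = r * x_t * (1 - x_t),
    rescaled from (0, 1) to (-1, 1)."""
    xs, x = [], x0
    for _ in range(count):
        x = r * x * (1 - x)
        xs.append(x)
    return 2 * np.array(xs) - 1

def train_elm(X, y, hidden=64):
    d = X.shape[1]
    seq = logistic_map(hidden * d + hidden)
    W = seq[:hidden * d].reshape(d, hidden)   # chaotic input weights
    b = seq[hidden * d:]                      # chaotic biases
    H = np.tanh(X @ W + b)                    # hidden activations
    beta = np.linalg.pinv(H) @ y              # only these are trained
    return W, b, beta

def predict_elm(params, X):
    W, b, beta = params
    return np.tanh(X @ W + b) @ beta
```

    Because the hidden layer is fixed once generated, only the linear solve for the output weights touches the data, which is what makes the ELM family amenable to evaluation under homomorphic encryption.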

    Secure Outsourced Computation on Encrypted Data

    Homomorphic encryption (HE) is a promising cryptographic technique that supports computations on encrypted data without requiring decryption first. This ability allows sensitive data, such as genomic, financial, or location data, to be outsourced for evaluation to a resourceful third party such as the cloud without compromising data privacy. Basic homomorphic primitives support addition and multiplication on ciphertexts. These primitives can be used to represent essential computations, such as logic gates, which in turn can support more complex functions. We propose the construction of efficient cryptographic protocols as building blocks (e.g., equality, comparison, and counting) that are commonly used in data analytics and machine learning. We explore the use of these building blocks in two privacy-preserving applications. One application leverages our secure prefix-matching algorithm, which builds on top of the equality operation, to process geospatial queries on encrypted locations. The other applies our secure comparison protocol to perform conditional branching in private evaluation of decision trees. Many outsourced computations require joint evaluation on private data owned by multiple parties. For example, Genome-Wide Association Studies (GWAS) are becoming feasible because of recent advances in genome-sequencing technology. Due to the sensitivity of genomic data, this data is encrypted using different keys possessed by different data owners. Computing on ciphertexts encrypted with multiple keys is a non-trivial task. Current solutions often require a joint key setup before any computation, as in threshold HE, or incur large ciphertext sizes (at best, growing linearly in the number of involved keys), as in multi-key HE. We propose a hybrid approach that combines the advantages of threshold and multi-key HE to support computations on ciphertexts encrypted with different keys while vastly reducing ciphertext size. Moreover, we propose the SparkFHE framework to support large-scale secure data analytics in the Cloud. SparkFHE integrates Apache Spark with fully HE to support secure distributed data analytics and machine learning, and makes two novel contributions: (1) enabling Spark to perform efficient computation on large datasets while preserving user privacy, and (2) accelerating intensive homomorphic computation through parallelization of tasks across clusters of computing nodes. To the best of our knowledge, SparkFHE is the first framework to address both needs simultaneously.
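
    To illustrate how such building blocks compose, the sketch below realizes bitwise equality purely from ciphertext addition and multiplication, the two basic homomorphic primitives. `Sim` is a plaintext stand-in for a real HE ciphertext (with plaintext modulus 2), not the paper's implementation; it just makes the gate algebra easy to check.

```python
# Equality over encrypted bit-vectors: AND of bitwise XNORs, built only
# from the add/multiply operations an HE scheme exposes on ciphertexts.
from dataclasses import dataclass

@dataclass
class Sim:                 # placeholder ciphertext holding one bit
    v: int
    def __add__(self, o): return Sim((self.v + o.v) % 2)  # homomorphic add
    def __mul__(self, o): return Sim(self.v * o.v)        # homomorphic mul

def enc(bit): return Sim(bit)          # stand-ins for Enc/Dec
def dec(ct): return ct.v

def xnor(a, b):
    # Over bits, XOR is addition mod 2, so XNOR(a, b) = 1 + (a + b).
    return Sim(1) + (a + b)

def equal(xs, ys):
    """Returns Enc(1) iff the two encrypted bit-vectors match."""
    acc = Sim(1)
    for a, b in zip(xs, ys):
        acc = acc * xnor(a, b)         # AND via multiplication
    return acc

x = [enc(b) for b in (1, 0, 1, 1)]
y = [enc(b) for b in (1, 0, 1, 1)]
z = [enc(b) for b in (1, 1, 0, 1)]
assert dec(equal(x, y)) == 1           # equal
assert dec(equal(x, z)) == 0           # not equal
```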

    Confidential Machine Learning on Untrusted Platforms: a Survey

    With ever-growing data and the need to develop powerful machine learning models, data owners increasingly depend on various untrusted platforms (e.g., public clouds, edges, and machine learning service providers) for scalable processing or collaborative learning. As a result, sensitive data and models are in danger of unauthorized access, misuse, and privacy compromises. A relatively new body of research addresses these concerns by training machine learning models confidentially on protected data. In this survey, we summarize notable studies in this emerging area. Using a unified framework, we highlight the critical challenges and innovations in outsourcing machine learning confidentially. We focus on cryptographic approaches for confidential machine learning (CML), primarily model training, while also covering other directions such as perturbation-based approaches and CML in hardware-assisted computing environments. The discussion takes a holistic view, considering the rich context of related threat models, security assumptions, design principles, and the associated trade-offs among data utility, cost, and confidentiality.