
    Privacy preserving distributed optimization using homomorphic encryption

    This paper studies how a system operator and a set of agents can securely execute a distributed projected gradient-based algorithm. In particular, each participant holds a set of problem coefficients and/or states whose values are private to the data owner. This setting raises two questions: how to securely compute given functions, and which functions should be computed in the first place. For the first question, using techniques from homomorphic encryption, we propose novel algorithms that achieve secure multiparty computation with perfect correctness. For the second question, we identify a class of functions that can be securely computed. The correctness and computational efficiency of the proposed algorithms are verified by two case studies of power systems, one on a demand response problem and the other on an optimal power flow problem. Comment: 24 pages, 5 figures, journa
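
    The paper's own constructions are not reproduced here, but the core pattern of running a projected gradient step on encrypted contributions can be illustrated with an additively homomorphic Paillier cryptosystem. The following is a minimal sketch, assuming the python-paillier (phe) library, toy gradient values, and a non-negativity projection; none of these details come from the paper.

```python
# Toy sketch: agents encrypt local gradient contributions; the operator
# aggregates them homomorphically and applies a projected gradient step
# without seeing any individual value. Uses python-paillier (phe);
# the gradient values and the projection set are illustrative assumptions.
import numpy as np
from phe import paillier

pubkey, privkey = paillier.generate_paillier_keypair(n_length=2048)

# Each agent holds a private local gradient for the shared decision variable x.
local_gradients = [np.array([0.4, -1.2]), np.array([-0.1, 0.7]), np.array([0.3, 0.2])]

# Agents send ciphertexts only.
encrypted = [[pubkey.encrypt(float(g)) for g in grad] for grad in local_gradients]

# Operator sums ciphertexts component-wise (addition is homomorphic).
enc_sum = [sum(agent[j] for agent in encrypted) for j in range(2)]

# Only the key holder decrypts, and only the aggregate, not individual terms.
total_gradient = np.array([privkey.decrypt(c) for c in enc_sum])

x = np.array([1.0, 1.0])
step = 0.1
x = x - step * total_gradient
x = np.clip(x, 0.0, None)   # projection onto a toy feasible set (non-negative orthant)
print(x)
```

    Because Paillier only supports addition and multiplication by plaintext scalars, the operator can aggregate but never read individual gradients; the paper's full protocols go further and characterize which functions can be computed securely in this way.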

    Access Control in Publicly Verifiable Outsourced Computation

    Publicly Verifiable Outsourced Computation (PVC) allows devices with restricted resources to delegate expensive computations to more powerful external servers, and to verify the correctness of results. Whilst highly beneficial in many situations, this increases the visibility and availability of potentially sensitive data, so we may wish to limit the sets of entities that can view input data and results. Additionally, it is highly unlikely that all users have identical and uncontrolled access to all functionality within an organization. Thus there is a need for access control mechanisms in PVC environments. In this work, we define a new framework for Publicly Verifiable Outsourced Computation with Access Control (PVC-AC). We formally define algorithms to provide different PVC functionality for each entity within a large outsourced computation environment, and discuss the forms of access control policies that are applicable, and necessary, in such environments, as well as formally modelling the resulting security properties. Finally, we give an example instantiation that (in a black-box and generic fashion) combines existing PVC schemes with symmetric Key Assignment Schemes to cryptographically enforce the policies of interest.
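
    The instantiation above is only described at a high level. As a rough illustration of how a symmetric Key Assignment Scheme can gate who may read inputs or results, the sketch below derives each lower access class's key one-way from the key of the class above it; the hierarchy, class labels, and use of HKDF are assumptions for illustration, not the paper's construction.

```python
# Toy key assignment for a linear hierarchy admin > manager > worker:
# each class key is derived one-way from the class above it, so holders of
# a higher key can recompute all lower keys, but not vice versa.
# HKDF usage and class names are illustrative assumptions.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_child_key(parent_key: bytes, child_label: bytes) -> bytes:
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"kas-class:" + child_label,
    ).derive(parent_key)

admin_key = os.urandom(32)                      # top of the hierarchy
manager_key = derive_child_key(admin_key, b"manager")
worker_key = derive_child_key(manager_key, b"worker")

# Data encrypted under worker_key is readable by workers, managers, and admins
# (who can derive worker_key), while material keyed to admin_key stays hidden
# from the lower classes.
assert derive_child_key(derive_child_key(admin_key, b"manager"), b"worker") == worker_key
```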

    Practical Isolated Searchable Encryption in a Trusted Computing Environment

    Cloud computing has become a standard computational paradigm due to its numerous advantages, including high availability, elasticity, and ubiquity. Both individual users and companies are adopting more of its services, but not without loss of privacy and control. Outsourcing data and computations to a remote server implies trusting its owners, a problem many end-users are aware of. Recent news has shown that data stored on Cloud servers is susceptible to leaks from the provider, third-party attackers, or even government surveillance programs, exposing users' private data. Different approaches to tackle these problems have surfaced throughout the years. Naïve solutions involve storing data encrypted on the server and decrypting it only on the client side. Yet this imposes a high overhead on the client, rendering such schemes impractical. Searchable Symmetric Encryption (SSE) has emerged as a novel research topic in recent years, allowing efficient querying and updating over encrypted datastores in Cloud servers while retaining privacy guarantees. Still, despite relevant recent advances, existing SSE schemes make a critical trade-off between efficiency, security, and query expressiveness, thus limiting their adoption as a viable technology, particularly in large-scale scenarios. New technologies providing Isolated Execution Environments (IEEs) may help improve the SSE literature. These technologies allow applications to run remotely with privacy guarantees, in isolation from other, possibly privileged, processes inside the CPU, such as the operating system kernel. Prominent example technologies are Intel SGX and ARM TrustZone, which are being made available in today's commodity CPUs. In this thesis we study these new trusted hardware technologies in depth, while exploring their application to the problem of searching over encrypted data, primarily focusing on SGX. In more detail, we study the application of IEEs in SSE schemes, improving their efficiency, security, and query expressiveness. We design, implement, and evaluate three new SSE schemes for different query types, namely Boolean queries over text, similarity queries over image datastores, and multimodal queries over text and images. These schemes can support queries combining different media formats simultaneously, envisaging applications such as privacy-enhanced medical diagnosis and management of electronic healthcare records, or confidential photograph catalogues, running in Cloud-based provisioned services without the danger of privacy breaches.
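
    The IEE-assisted schemes themselves are not reproduced here, but the single-keyword SSE building block they extend can be sketched: keywords are mapped to deterministic search tokens with a PRF, and the outsourced index stores encrypted document identifiers under those tokens. Key handling and the libraries used below are simplifying assumptions.

```python
# Minimal single-keyword SSE index: HMAC-SHA256 as the PRF for search tokens,
# Fernet (AES + HMAC) for encrypting document identifiers. This sketches the
# classic inverted-index construction, not the IEE/SGX-assisted schemes above.
import hmac, hashlib
from collections import defaultdict
from cryptography.fernet import Fernet

token_key = b"0" * 32                 # PRF key (use a random key in practice)
enc = Fernet(Fernet.generate_key())   # key for encrypting document ids

def search_token(keyword: str) -> bytes:
    return hmac.new(token_key, keyword.encode(), hashlib.sha256).digest()

# Build the encrypted index on the client, then outsource it.
documents = {"doc1": ["cloud", "privacy"], "doc2": ["privacy", "sgx"]}
index = defaultdict(list)
for doc_id, keywords in documents.items():
    for kw in keywords:
        index[search_token(kw)].append(enc.encrypt(doc_id.encode()))

# Query: the server matches tokens without learning the keyword or the doc ids.
def query(keyword: str):
    return [enc.decrypt(c).decode() for c in index.get(search_token(keyword), [])]

print(query("privacy"))   # ['doc1', 'doc2']
```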

    Private-Key Fully Homomorphic Encryption for Private Classification of Medical Data

    A wealth of medical data is inaccessible to researchers and clinicians due to privacy restrictions such as HIPAA. Clinicians would benefit from access to predictive models for diagnosis, such as classification of tumors as malignant or benign, without compromising patients' privacy. In addition, the medical institutions and companies who own these medical information systems wish to keep their models private when used by outside parties. Fully homomorphic encryption (FHE) enables practical polynomial computation over encrypted data. This dissertation begins with speed and security improvements to existing private-key fully homomorphic encryption methods. Next, it presents a protocol for third-party private search using private-key FHE. Finally, fully homomorphic protocols for polynomial machine learning algorithms are presented, using privacy-preserving Naive Bayes and Decision Tree classifiers. These protocols allow clients to privately classify their data points without direct access to the learned model. Experiments with these classifiers are run on publicly available medical data sets, applying the protocols to the task of privacy-preserving classification of real-world medical data. Results show that private-key fully homomorphic encryption is able to provide fast and accurate results for privacy-preserving medical classification.
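
    The dissertation's private-key FHE protocols are not reproduced here. As a stand-in, the sketch below shows the same interaction pattern for Naive Bayes using an additively homomorphic Paillier scheme (python-paillier): the client encrypts a binary feature vector, the model owner evaluates per-class linear scores on ciphertexts, and only the client can decrypt and take the argmax. The model values and the simplified scoring rule are toy assumptions.

```python
# Private Naive Bayes scoring with an additively homomorphic scheme (Paillier,
# via python-paillier) as a stand-in for the dissertation's private-key FHE.
# The client never reveals its features; the server never reveals its model.
import math
from phe import paillier

# --- Model owner's side: log-priors and log-likelihoods (toy values). ---
log_prior = {"benign": math.log(0.6), "malignant": math.log(0.4)}
log_likelihood = {                      # per class, one weight per binary feature
    "benign":    [math.log(0.2), math.log(0.7), math.log(0.4)],
    "malignant": [math.log(0.8), math.log(0.3), math.log(0.6)],
}

# --- Client's side: encrypt the binary feature vector. ---
pubkey, privkey = paillier.generate_paillier_keypair()
features = [1, 0, 1]
enc_features = [pubkey.encrypt(f) for f in features]

# --- Model owner: encrypted scores (ciphertext * plaintext weight is allowed). ---
enc_scores = {}
for cls in log_prior:
    score = pubkey.encrypt(log_prior[cls])
    for w, enc_f in zip(log_likelihood[cls], enc_features):
        score = score + enc_f * w
    enc_scores[cls] = score

# --- Client: decrypt the scores and pick the most likely class. ---
scores = {cls: privkey.decrypt(c) for cls, c in enc_scores.items()}
print(max(scores, key=scores.get))
```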

    Privacy-preserving machine learning system at the edge

    Data privacy in machine learning has become an urgent problem to be solved, along with machine learning's rapid development and the large attack surface being explored. Pre-trained deep neural networks are increasingly deployed in smartphones and other edge devices for a variety of applications, leading to potential disclosures of private information. In collaborative learning, participants keep private data locally and communicate deep neural networks updated on their local data, but the private information encoded in the networks' gradients can still be exploited by adversaries. This dissertation aims to perform dedicated investigations on privacy leakage from neural networks and to propose privacy-preserving machine learning systems for edge devices. First, a systematization of knowledge is conducted to identify the key challenges and existing/adaptable solutions. Then a framework is proposed to measure the amount of sensitive information memorized in each layer's weights of a neural network based on the generalization error. Results show that, when considered individually, the last layers encode a larger amount of information from the training data compared to the first layers. To protect such sensitive information in weights, DarkneTZ is proposed as a framework that uses an edge device's Trusted Execution Environment (TEE) in conjunction with model partitioning to limit the attack surface against neural networks. The performance of DarkneTZ is evaluated, including CPU execution time, memory usage, and accurate power consumption, using two small and six large image classification models. Due to the limited memory of the edge device's TEE, model layers are partitioned into more sensitive layers (to be executed inside the device TEE) and a set of layers to be executed in the untrusted part of the operating system. Results show that hiding even a single layer can provide reliable model privacy and defend against state-of-the-art membership inference attacks, with only a 3% performance overhead. The thesis then extends these investigations from neural network weights (in on-device machine learning deployment) to gradients (in collaborative learning). An information-theoretical framework is proposed, by adapting usable information theory and considering the attack outcome as a probability measure, to quantify private information leakage from network gradients. The private original information and latent information are localized in a layer-wise manner. After that, a sensitivity analysis of the gradients with respect to private information is performed to further explore the underlying cause of information leakage. Numerical evaluations are conducted on six benchmark datasets and four well-known networks, and further measure the impact of training hyper-parameters and defense mechanisms. Finally, to limit privacy leakage from gradients, a Privacy-preserving Federated Learning (PPFL) framework for mobile systems is proposed and implemented. TEEs are utilized on clients for local training, and on servers for secure aggregation, so that model/gradient updates are hidden from adversaries. This work leverages greedy layer-wise training to train each model layer inside the trusted area until it converges. The performance evaluation of the implementation shows that PPFL significantly improves privacy by defending against data reconstruction, property inference, and membership inference attacks, while incurring only small communication and client-side system overheads.
This thesis offers a better understanding of the sources of private information in machine learning and provides frameworks that guarantee privacy while achieving ML model utility and system overhead comparable to a regular machine learning framework.
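
    DarkneTZ's TrustZone implementation is not included here, but the partitioning idea can be sketched in plain PyTorch: early layers run in the untrusted world, and only their activations cross a boundary standing in for the TEE, which executes the sensitive final layers. The model, the split point, and the run_in_tee stub are illustrative assumptions.

```python
# Sketch of DarkneTZ-style layer partitioning in PyTorch: early layers run in
# the untrusted part, sensitive last layers run behind a boundary that stands
# in for the device TEE. The split index and run_in_tee stub are assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128), nn.ReLU(),   # untrusted layers
    nn.Linear(128, 64), nn.ReLU(),        # untrusted layers
    nn.Linear(64, 10),                    # sensitive layer(s), kept in the TEE
)
split = 5                                  # layers [0, split) run outside the TEE
untrusted_part = model[:split]
trusted_part = model[split:]

def run_in_tee(activations: torch.Tensor) -> torch.Tensor:
    # Placeholder for a world switch into the trusted application; only the
    # intermediate activations, never the protected weights, cross the boundary.
    with torch.no_grad():
        return trusted_part(activations)

x = torch.randn(1, 1, 28, 28)              # a dummy input image
logits = run_in_tee(untrusted_part(x))
print(logits.argmax(dim=1))
```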

    Privacy preservation in Internet of Things: a secure approach for distributed group authentication through Paillier cryptosystem

    I created a Java application for distributed group authentication in resource-constrained environments such as the Internet of Things. The application was tested on a MANET of 2 to 5 nodes.

    Theory and Practice of Cryptography and Network Security Protocols and Technologies

    In an age of explosive worldwide growth of electronic data storage and communications, effective protection of information has become a critical requirement. When used in coordination with other tools for ensuring information security, cryptography in all of its applications, including data confidentiality, data integrity, and user authentication, is a most powerful tool for protecting information. This book presents a collection of research work in the field of cryptography. It discusses some of the critical challenges faced by the current computing world and describes mechanisms to defend against them. It is a valuable source of knowledge for researchers, engineers, and graduate and doctoral students working in the field of cryptography, and will also be useful for faculty members of graduate schools and universities.

    Robust Representation Learning for Privacy-Preserving Machine Learning: A Multi-Objective Autoencoder Approach

    Several domains increasingly rely on machine learning in their applications. The resulting heavy dependence on data has led to the emergence of various laws and regulations around data ethics and privacy, and to growing awareness of the need for privacy-preserving machine learning (ppML). Current ppML techniques utilize methods that are either purely based on cryptography, such as homomorphic encryption, or that introduce noise into the input, such as differential privacy. The main criticism of these techniques is that they are either too slow or they trade off a model's performance for improved confidentiality. To address this performance reduction, we aim to leverage robust representation learning as a way of encoding our data while optimizing the privacy-utility trade-off. Our method centers on training autoencoders in a multi-objective manner and then concatenating the latent and learned features from the encoding part as the encoded form of our data. Such a deep learning-powered encoding can then safely be sent to a third party for intensive training and hyperparameter tuning. With our proposed framework, we can share our data and use third-party tools without the threat of revealing its original form. We empirically validate our results in unimodal and multimodal settings, the latter following a vertical splitting system, and show improved performance over the state of the art.
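
    The paper's exact objectives and architecture are not given in the abstract. The sketch below only illustrates the general pattern: train an autoencoder against more than one loss term, then release a concatenation of the latent code and intermediate encoder features instead of the raw data. The layer sizes, the second objective, and its weight are placeholder assumptions.

```python
# Illustrative multi-objective autoencoder: reconstruction loss plus a second
# placeholder objective (keeping latent codes small), then the data is released
# as [latent || intermediate encoder features] rather than in raw form.
# Architecture, loss weights, and the second objective are assumptions.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 8))
decoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 20))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x = torch.randn(256, 20)                       # toy private dataset

for epoch in range(200):
    z = encoder(x)
    x_hat = decoder(z)
    recon_loss = nn.functional.mse_loss(x_hat, x)
    reg_loss = z.pow(2).mean()                 # placeholder second objective
    loss = recon_loss + 0.1 * reg_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# Encoded form shared with the third party: latent code plus learned features
# from the encoding part (here, the first encoder layer's activations).
with torch.no_grad():
    hidden = torch.relu(encoder[0](x))
    latent = encoder(x)
    encoded_release = torch.cat([latent, hidden], dim=1)
print(encoded_release.shape)                   # torch.Size([256, 24])
```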