
    Applications in security and evasions in machine learning: a survey

    In recent years, machine learning (ML) has become an important component in providing security and privacy in various applications. ML is used to address serious issues such as real-time attack detection and data leakage vulnerability assessment. It supports the demanding requirements of modern security and privacy across areas such as real-time decision-making, big data processing, reduced learning cycle time, cost-efficiency, and error-free processing. In this paper, we therefore review state-of-the-art approaches in which ML is applied effectively to fulfill current real-world security requirements. We examine different security applications in which ML models play an essential role and compare their accuracy results along several possible dimensions. This analysis of ML algorithms in security applications provides a blueprint for an interdisciplinary research area. Even with current sophisticated technology and tools, however, attackers can evade ML models by committing adversarial attacks; the vulnerability of ML models to such attacks therefore needs to be assessed at development time. Accordingly, we also analyze the different types of adversarial attacks on ML models. To give a proper visualization of the security properties, we present the threat model and the defense strategies against adversarial attack methods. Moreover, we categorize adversarial attacks by the attacker's knowledge of the model and identify the points in the model pipeline at which attacks may be committed. Finally, we investigate the different properties of adversarial attacks.
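    As an illustration of the evasion attacks surveyed above, the following is a minimal sketch of the fast gradient sign method (FGSM), a classic white-box adversarial attack. PyTorch is assumed as the framework, and the model, inputs and epsilon value are illustrative placeholders rather than anything prescribed by the paper.

        # Minimal FGSM sketch (PyTorch assumed): nudge an input in the
        # direction of the loss gradient's sign so a trained classifier
        # misclassifies it, while keeping the perturbation small.
        import torch
        import torch.nn.functional as F

        def fgsm_attack(model, x, y, epsilon=0.03):
            x_adv = x.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            loss.backward()
            # One signed gradient step, then clamp to the valid input range.
            return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()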

    M-SSE: an effective searchable symmetric encryption with enhanced security for mobile devices

    Searchable Encryption (SE) allows mobile devices with limited computing and storage resources to outsource data to an untrusted cloud server. Users are able to search and retrieve the outsourced data; however, such schemes suffer from information and privacy leakage, because most previous works rely on a single-cloud model in which the cloud server obtains all of the users' search information. In this paper, we present M-SSE, a new scheme that achieves both forward and backward security based on a multi-cloud technique. By utilizing multiple cloud servers, the scheme is secure against both adaptive file injection attacks and size pattern attacks. Experimental results show that our scheme is effective compared with other existing schemes.
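    To make the underlying idea of searchable symmetric encryption concrete, here is a minimal sketch in which the client derives deterministic keyword tokens so the server can match queries without seeing plaintext keywords. This is the generic single-server SSE pattern, not M-SSE's multi-cloud, forward- and backward-secure construction; all names and data are illustrative.

        # Generic SSE sketch using only the Python standard library.
        import hmac, hashlib, os

        KEY = os.urandom(32)  # secret key, held by the client only

        def keyword_token(keyword: str) -> bytes:
            # Deterministic token: the server compares tokens, never words.
            return hmac.new(KEY, keyword.encode(), hashlib.sha256).digest()

        # Client builds an encrypted index mapping token -> document ids.
        index = {}
        docs = {"doc1": ["cloud", "privacy"], "doc2": ["cloud"]}
        for doc_id, words in docs.items():
            for w in words:
                index.setdefault(keyword_token(w), []).append(doc_id)

        # Server-side search works on tokens alone.
        print(index.get(keyword_token("cloud"), []))  # ['doc1', 'doc2']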

    A Practical Cross-Device Federated Learning Framework over 5G Networks

    The concept of federated learning (FL) was first proposed by Google in 2016. Since then, FL has been widely studied for its feasibility in various fields owing to its potential to make full use of data without compromising privacy. However, limited by the capacity of wireless data transmission, the deployment of federated learning on mobile devices has made slow progress in practice. The development and commercialization of 5th generation (5G) mobile networks have shed some light on this. In this paper, we analyze the challenges that existing federated learning schemes face on mobile devices and propose a novel cross-device federated learning framework that uses anonymous communication technology and ring signatures to protect the privacy of participants while reducing the computation overhead of the mobile devices participating in FL. In addition, our scheme implements a contribution-based incentive mechanism to encourage mobile users to participate in FL. We also give a case study on autonomous driving. Finally, we present a performance evaluation of the proposed scheme and discuss some open issues in federated learning. (This paper has been accepted by IEEE Wireless Communications.)
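    For context, the aggregation step at the heart of most cross-device FL schemes is federated averaging (FedAvg). The sketch below shows only that weighted averaging; the paper's ring-signature, anonymous-communication and incentive layers are not represented, and the function and variable names are illustrative.

        # FedAvg sketch: average client model parameters, weighted by
        # the size of each client's local dataset.
        import numpy as np

        def fed_avg(client_weights, client_sizes):
            total = sum(client_sizes)
            return [
                sum(w[layer] * (n / total)
                    for w, n in zip(client_weights, client_sizes))
                for layer in range(len(client_weights[0]))
            ]

        # Two clients, each holding a two-layer model (as flat arrays).
        w1 = [np.ones(4), np.zeros(2)]
        w2 = [np.zeros(4), np.ones(2)]
        global_model = fed_avg([w1, w2], client_sizes=[100, 300])
        print(global_model)  # layer 0 -> 0.25s, layer 1 -> 0.75s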

    A survey of machine and deep learning methods for privacy protection in the Internet of things

    Recent advances in hardware and information technology have accelerated the proliferation of smart and interconnected devices, facilitating the rapid development of the Internet of Things (IoT). IoT applications and services are widely adopted in environments such as smart cities, smart industry, autonomous vehicles, and eHealth. As such, IoT devices are ubiquitously connected and transfer sensitive and personal data without requiring human interaction. Consequently, it is crucial to preserve data privacy. This paper presents a comprehensive survey of recent Machine Learning (ML)- and Deep Learning (DL)-based solutions for privacy in the IoT. First, we present an in-depth analysis of current privacy threats and attacks. Then, for each ML architecture proposed, we present the implementation details and the published results. Finally, we identify the most effective solutions for the different threats and attacks. This work is partially supported by the Generalitat de Catalunya under grant 2017 SGR 962 and the HORIZON-GPHOENIX (101070586) and HORIZON-EUVITAMIN-V (101093062) projects.

    The High-Level Practical Overview of Open-Source Privacy-Preserving Machine Learning Solutions

    This paper provides a high-level overview of practical approaches to machine learning that respect the privacy and confidentiality of customer information, known as Privacy-Preserving Machine Learning (PPML). First, the security approaches in offline-learning privacy methods are assessed. These focus on modern cryptographic methods, such as Homomorphic Encryption and Secure Multi-Party Computation, as well as on dedicated combined hardware and software platforms such as the Trusted Execution Environment, Intel® Software Guard Extensions (Intel® SGX). Combining these security approaches with different machine learning architectures leads to our Proof of Concept (PoC), in which the accuracy and speed of the security solutions are examined. The next step was to explore and compare open-source Python-based solutions for PPML. Four solutions were selected from almost 40 separate, state-of-the-art systems: SyMPC, TF-Encrypted, TenSEAL, and Gramine. Three different neural network architectures were designed to show the capabilities of the different libraries. The PoC solves an image classification problem based on the MNIST dataset. The computational results show that the accuracy of all the secure approaches considered is similar: the maximum difference between the non-secure and secure flows does not exceed 1.2%. In terms of secure computation, the most efficient Privacy-Preserving Machine Learning library is the one based on the Trusted Execution Environment, followed by Secure Multi-Party Computation and Homomorphic Encryption. However, most of these are at least 1000 times slower than the non-secure evaluation, which is not acceptable for real-world scenarios. Future work could combine different security approaches, explore other new and existing state-of-the-art libraries, or implement support for hardware-accelerated secure computation.
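    Since TenSEAL is among the libraries the paper evaluates, a minimal sketch of the kind of homomorphic computation it enables is shown below: encrypting vectors under the CKKS scheme and taking their dot product without decrypting. The parameters follow TenSEAL's introductory examples and are illustrative, not the paper's benchmark configuration.

        # CKKS dot product on encrypted vectors with TenSEAL.
        import tenseal as ts

        ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                         coeff_mod_bit_sizes=[60, 40, 40, 60])
        ctx.global_scale = 2 ** 40
        ctx.generate_galois_keys()  # needed for rotations inside dot()

        enc_a = ts.ckks_vector(ctx, [1.0, 2.0, 3.0])
        enc_b = ts.ckks_vector(ctx, [4.0, 5.0, 6.0])
        enc_dot = enc_a.dot(enc_b)  # computed entirely on ciphertexts
        print(enc_dot.decrypt())    # ~[32.0], up to CKKS noise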

    Advancements in privacy enhancing technologies for machine learning

    The field of privacy-preserving machine learning is still in its infancy and has been growing in popularity since 2019. Privacy enhancing technologies in the context of machine learning comprise a set of core techniques relating to cryptography, distributed computation (federated learning), differential privacy, and methods for managing distributed identity. In addition, the notion of contextual integrity exists to quantify the appropriate flow of information. The aim of this work is to advance a vision of a privacy-compatible infrastructure in which web 3.0, as a decentralised infrastructure, enshrines the user's right to privacy and consent over information concerning them on the Internet.

    This thesis contains a set of experiments relating to privacy enhancing technologies in the context of machine learning. A number of privacy enhancing methods are advanced in these experiments, and a novel privacy-preserving flow is created. This includes the establishment of an open-source framework for vertically distributed federated learning and the advancement of a novel privacy-preserving machine learning framework which accommodates a core set of privacy enhancing technologies. Along with this, the work advances a novel means of describing privacy-preserving information flows which extends the definition of contextual integrity.

    This thesis establishes a range of contributions to the advancement of privacy enhancing technologies for privacy-preserving machine learning. A case study is evaluated, and a novel heterogeneous stack classifier is built which predicts the presence of insider threat, demonstrating the efficacy of machine learning in solving problems in this domain given access to real data; conclusions are also drawn about the applicability of federated learning to this use case. A novel framework is introduced that facilitates vertically distributed machine learning on data relating to the same subjects held by different hosts; researchers can use it to achieve vertically federated learning in practice. The weaknesses in the security of the Split Neural Network (SplitNN) technique are discussed, and appropriate defences are explored in detail; these defences harden SplitNN against inversion attacks. A novel distributed trust framework is established which facilitates peer-to-peer access control without the need for a third party, putting forward a solution for fully privacy-preserving access control when interacting with privacy-preserving machine learning infrastructure. Finally, a novel framework for the implementation of structured transparency is given. This provides a cohesive way to manage information flows in the privacy-preserving machine learning and analytics space, offering a well-stocked toolkit for implementing structured transparency with the aforementioned technologies; it also exhibits homomorphically encrypted inference, which fully hardens the SplitNN methodology against model inversion attacks.

    The most significant outcome of this work is the production of an information flow that combines split neural networks, homomorphic encryption, zero-knowledge access control, and elements of differential privacy. This flow facilitates homomorphic inference through split neural networks, advancing the state of the art in privacy-preserving machine learning.
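    As background for the SplitNN results described above, the following is a minimal sketch of the split-learning idea: the client runs only the lower layers and ships the intermediate ("smashed") activations to the server, which completes the forward and backward passes. PyTorch is assumed, the layer sizes are arbitrary, and the HE hardening and defences from the thesis are not shown.

        # SplitNN sketch: the raw input never leaves the client; only the
        # intermediate activation tensor crosses the split.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        client_half = nn.Sequential(nn.Linear(784, 128), nn.ReLU())  # on device
        server_half = nn.Sequential(nn.Linear(128, 10))              # remote

        x = torch.randn(32, 784)       # a client batch, e.g. flattened images
        smashed = client_half(x)       # only this tensor is sent to the server
        logits = server_half(smashed)  # server finishes the forward pass
        loss = F.cross_entropy(logits, torch.randint(0, 10, (32,)))
        loss.backward()                # gradients flow back across the split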

    Towards Private Deep Learning-based Side-Channel Analysis using Homomorphic Encryption

    Side-channel analysis certification is a process designed to certify the resilience of cryptographic hardware and software implementations against side-channel attacks. In certain cases, third-party evaluations by external companies or departments are necessary due to limited budget, time, or expertise, at the cost of a significant exchange of sensitive information during the evaluation process. In this work, we investigate the potential of Homomorphic Encryption (HE) for performing side-channel analysis (SCA) on HE-encrypted measurements. With HE applied to SCA, a third party can perform SCA on encrypted measurement data and provide the outcome of the analysis without gaining insight into the actual cryptographic implementation under test. To this end, we evaluate its feasibility by analyzing the impact of AI-based side-channel analysis using HE (private SCA) on accuracy and execution time, and compare the results with an ordinary AI-based side-channel analysis (plain SCA). Our work suggests that both unprotected and protected cryptographic implementations can already be attacked successfully today with standard server equipment and modern HE protocols and libraries, while the traces remain HE-encrypted.
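    To illustrate the private-SCA workflow, here is a hedged sketch of its core step: the evaluator applies a plaintext model layer to a CKKS-encrypted trace, so the measurements stay encrypted throughout the analysis. TenSEAL is assumed as the HE library (the paper does not mandate it here), and the trace values and weights are made up for illustration.

        # Private-SCA sketch: homomorphic scoring of an encrypted trace.
        import tenseal as ts

        ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                         coeff_mod_bit_sizes=[60, 40, 40, 60])
        ctx.global_scale = 2 ** 40
        ctx.generate_galois_keys()

        trace = [0.12, -0.07, 0.33, 0.05]       # one (tiny) power trace
        enc_trace = ts.ckks_vector(ctx, trace)  # encrypted by the device owner

        weights = [0.5, -1.0, 0.8, 0.2]         # evaluator's plaintext layer
        enc_score = enc_trace.dot(weights)      # inference on ciphertext only
        print(enc_score.decrypt())              # decrypted by the key holder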

    Machine learning for prognosis of oral cancer: What are the ethical challenges?

    Background: Machine learning models have shown high performance, particularly in the diagnosis and prognosis of oral cancer. However, the use of these models for diagnosis and prognosis in everyday clinical practice remains limited, because they raise several ethical and morally laden dilemmas. Purpose: This study aims to provide a systematic state-of-the-art review of the ethical and social implications of machine learning models in oral cancer management. Methods: We searched the Ovid MEDLINE, PubMed, Scopus, Web of Science and Institute of Electrical and Electronics Engineers (IEEE) databases for articles examining the ethical issues of machine learning or artificial intelligence in medicine, healthcare or care providers. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed in the searching and screening processes. Findings: A total of 33 studies examined the ethical challenges of machine learning models or artificial intelligence in medicine, healthcare or diagnostic analytics. The ethical concerns raised included data privacy and confidentiality; peer disagreement (contradictory diagnostic or prognostic opinion between the model and the clinician); possible violation of the patient's liberty to decide which treatment to follow; changes to the patient-clinician relationship; and the need for ethical and legal frameworks. Conclusion: Government, ethicists, clinicians, legal experts, patients' representatives, data scientists and machine learning experts need to be involved in the development of internationally standardised and structured ethical review guidelines for machine learning models to be beneficial in daily clinical practice. Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).