
    Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning

    Deep Learning has recently become hugely popular in machine learning, providing significant improvements in classification accuracy in the presence of highly-structured and large databases. Researchers have also considered the privacy implications of deep learning. Models are typically trained in a centralized manner, with all the data being processed by the same training algorithm. If the data is a collection of users' private data, including habits, personal pictures, geographical positions, interests, and more, the centralized server will have access to sensitive information that could potentially be mishandled. To tackle this problem, collaborative deep learning models have recently been proposed in which parties locally train their deep learning structures and share only a subset of the parameters in an attempt to keep their respective training sets private. Parameters can also be obfuscated via differential privacy (DP) to make information extraction even more challenging, as proposed by Shokri and Shmatikov at CCS'15. Unfortunately, we show that any privacy-preserving collaborative deep learning is susceptible to a powerful attack that we devise in this paper. In particular, we show that a distributed, federated, or decentralized deep learning approach is fundamentally broken and does not protect the training sets of honest participants. The attack we developed exploits the real-time nature of the learning process, which allows the adversary to train a Generative Adversarial Network (GAN) that generates prototypical samples of the targeted training set that was meant to be private (the samples generated by the GAN are intended to come from the same distribution as the training data). Interestingly, we show that record-level DP applied to the shared parameters of the model, as suggested in previous work, is ineffective (i.e., record-level DP is not designed to address our attack).
    Comment: ACM CCS'17, 16 pages, 18 figures
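    For intuition, here is a minimal, hypothetical PyTorch sketch of the attack idea described above: the adversary treats the shared, collaboratively trained classifier as a fixed discriminator and trains a local generator to produce samples that the classifier assigns to a victim class. Model sizes, the victim class index, and training settings are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the GAN-based inference idea: reuse the shared model
# as a discriminator and train a local generator toward a victim class.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM, NUM_CLASSES, VICTIM_CLASS = 100, 28 * 28, 10, 3

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh())

    def forward(self, z):
        return self.net(z)

# Stand-in for the model parameters received in a collaborative training round.
shared_classifier = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.ReLU(),
    nn.Linear(256, NUM_CLASSES))
for p in shared_classifier.parameters():   # freeze the shared model
    p.requires_grad_(False)

generator = Generator()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(1000):
    z = torch.randn(64, LATENT_DIM)
    fake = generator(z)
    # Push generated samples toward the victim class as judged by the shared model.
    logits = shared_classifier(fake)
    target = torch.full((64,), VICTIM_CLASS, dtype=torch.long)
    loss = loss_fn(logits, target)
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()

# In the paper's setting the adversary also injects mislabeled generated samples
# back into the collaborative training, forcing the victim to reveal finer features.
```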

    On the security of NoSQL cloud database services

    Processing the vast volume of data generated by web, mobile, and Internet-enabled devices necessitates a scalable and flexible data management system. Database-as-a-Service (DBaaS) is a new cloud computing paradigm promising cost-effective, scalable, fully managed database functionality that meets the requirements of online data processing. Although DBaaS offers many benefits, it also introduces new threats and vulnerabilities. While many traditional data processing threats remain, DBaaS introduces new challenges such as confidentiality violation and information leakage in the presence of privileged malicious insiders, and adds a new dimension to data security. We address the problem of building a secure DBaaS for a public cloud infrastructure where the Cloud Service Provider (CSP) is not completely trusted by the data owner. We present a high-level description of several architectures combining modern cryptographic primitives to achieve this goal. A novel searchable security scheme is proposed to enable secure query processing in the presence of a malicious cloud insider without disclosing sensitive information. A holistic database security scheme comprising data confidentiality and information leakage prevention is proposed in this dissertation. The main contributions of our work are: (i) a searchable security scheme for non-relational databases in cloud DBaaS; and (ii) leakage minimization in the untrusted cloud. The analysis of experiments that employ a set of established cryptographic techniques to protect databases and minimize information leakage shows that the performance of the proposed solution is bounded by communication cost rather than by cryptographic computational effort.
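    As a rough illustration of how a searchable scheme over an untrusted store can work, the Python sketch below uses deterministic HMAC keyword tokens plus client-side encryption; it is a simplified stand-in rather than the dissertation's construction, and the keys, record contents, and field names are assumptions for the example.

```python
# Simplified searchable-encryption sketch: the client derives opaque keyword
# tokens and encrypts record bodies; the untrusted server matches tokens only.
import hmac, hashlib
from cryptography.fernet import Fernet

index_key = b"client-held secret for search tokens"  # illustrative key material
fernet = Fernet(Fernet.generate_key())               # client-side record encryption

def search_token(keyword: str) -> str:
    return hmac.new(index_key, keyword.lower().encode(), hashlib.sha256).hexdigest()

def encrypt_record(doc: str, keywords: list[str]) -> dict:
    # Client side: encrypt the document and attach tokens for its searchable keywords.
    return {"blob": fernet.encrypt(doc.encode()),
            "tokens": {search_token(k) for k in keywords}}

# Server side: stores only ciphertext blobs and opaque tokens.
store = [encrypt_record("patient 42: diagnosis ...", ["patient", "diagnosis"]),
         encrypt_record("invoice 7: amount ...", ["invoice"])]

def server_search(token: str) -> list:
    return [r["blob"] for r in store if token in r["tokens"]]

hits = server_search(search_token("diagnosis"))
print([fernet.decrypt(b).decode() for b in hits])    # the client decrypts matches
```

    Note that deterministic tokens of this kind reveal repeated query and access patterns to the server, one example of the residual information leakage that such schemes must account for.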

    Mitigating Insider Threat Risks in Cyber-physical Manufacturing Systems

    A Cyber-Physical Manufacturing System (CPMS), a next-generation manufacturing system, seamlessly integrates digital and physical domains via the internet or computer networks. It will enable drastic improvements in production flexibility, capacity, and cost-efficiency. However, the enlarged connectivity and accessibility that come with this integration raise unintended security concerns. The major concern arises from cyber-physical attacks, which can cause damage in the physical domain while originating in the digital domain. In particular, such attacks can be carried out easily, and with more severe consequences, by insiders: insider threats. Insiders can be defined as anyone who is or has been affiliated with a system. Insiders have knowledge of, and access credentials to, the system's properties and can therefore perform more serious attacks than outsiders. Furthermore, it is hard to detect or prevent insider threats in CPMS in a timely manner, since insiders can easily bypass or incapacitate general defensive mechanisms of the system by exploiting their physical access, security clearance, and knowledge of system vulnerabilities. This thesis seeks to address the above issues by developing an insider-threat-tolerant CPMS enhanced by a service-oriented blockchain augmentation, and by conducting experiments and analysis. The aim of the research is to identify insider threat vulnerabilities and improve the security of CPMS. Blockchain's distributed-system approach is adopted to mitigate insider threat risks in CPMS. However, blockchain limits system performance due to arbitrary block generation times and block occurrence frequency. The service-oriented blockchain augmentation provides physical and digital entities with the blockchain communication protocol through a service layer. In this way, multiple entities are integrated by the service layer, which delivers services with fewer arbitrary delays while retaining the strong security of the blockchain. Also, multiple independent service applications in the service layer can ensure the flexibility and productivity of the CPMS. To study the effectiveness of the blockchain augmentation against insider threats, two example models of the proposed system have been developed: a Layer Image Auditing System (LIAS) and a Secure Programmable Logic Controller (SPLC). Four case studies based on the two models are designed, presented, and evaluated with an Insider Attack Scenario Assessment Framework. The framework investigates the system's security vulnerabilities and practically evaluates the insider attack scenarios. The research contributes to the understanding of insider threats and blockchain implementations in CPMS by addressing key issues that have been identified in the literature. The issues are addressed through an EBIS (Establish, Build, Identify, Simulation) validation process with numerical experiments, whose results are in turn used towards mitigating insider threat risks in CPMS.
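    To make the service-layer idea concrete, the sketch below is a hypothetical simplification (not the thesis implementation): a service acknowledges manufacturing commands immediately and commits them in batches to an append-only hash chain standing in for the blockchain, so shop-floor latency is decoupled from block generation time. Entity names and batch sizes are illustrative.

```python
# Hypothetical service-layer sketch: immediate acknowledgement for entities,
# batched commits to a tamper-evident hash chain for later insider-attack audits.
import hashlib, json, time

class HashChainLedger:
    def __init__(self):
        self.blocks = [{"index": 0, "prev": "0" * 64, "events": [], "ts": time.time()}]

    def append_block(self, events):
        prev_hash = hashlib.sha256(
            json.dumps(self.blocks[-1], sort_keys=True).encode()).hexdigest()
        self.blocks.append({"index": len(self.blocks), "prev": prev_hash,
                            "events": events, "ts": time.time()})

class ManufacturingService:
    def __init__(self, ledger, batch_size=3):
        self.ledger, self.batch_size, self.pending = ledger, batch_size, []

    def submit(self, entity_id, action):
        # Acknowledge immediately; auditing happens asynchronously in batches.
        self.pending.append({"entity": entity_id, "action": action, "ts": time.time()})
        if len(self.pending) >= self.batch_size:
            self.ledger.append_block(self.pending)
            self.pending = []
        return "accepted"

ledger = HashChainLedger()
svc = ManufacturingService(ledger)
for cmd in ["load G-code", "start print layer 1", "modify layer 2"]:
    svc.submit("PLC-7", cmd)
print(len(ledger.blocks), "blocks; last prev hash:", ledger.blocks[-1]["prev"][:16])
# An insider who alters a recorded command afterwards breaks the hash chain on audit.
```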

    Predicting likelihood of legitimate data loss in email DLP

    The volume and variety of data collected by modern organizations have increased significantly over the last decade, necessitating the detection and prevention of sensitive data disclosure. Data loss prevention is an embedded process used to protect against disclosure of sensitive data to external, uncontrolled environments. A typical Data Loss Prevention (DLP) system uses custom policies to identify and prevent accidental and malicious data leakage, producing a large number of security alerts that include a significant volume of false positives. Consequently, identifying legitimate data loss can be very challenging, as each incident comprises different characteristics, often requiring extensive intervention by a domain expert to review alerts individually. This limits the ability to detect data loss alerts in real time, making organisations vulnerable to financial and reputational damage. The aim of this research is to strengthen the data loss detection capabilities of a DLP system by implementing a machine learning model to predict the likelihood of legitimate data loss. We conducted extensive experimentation using Decision Tree and Random Forest algorithms with historical email incident data collected by a globally established telecommunication enterprise. The final model, produced with the Random Forest algorithm, was identified as the most effective, as it accurately predicted approximately 95% of data loss incidents with an average true positive value of 90%. Furthermore, the proposed solution enables identification of legitimate data loss in email DLP whilst facilitating prioritisation of real data loss through a human-understandable explanation of the decision, thereby improving the efficiency of the process.
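    A minimal sketch of the modelling step is shown below, using scikit-learn's RandomForestClassifier on synthetic stand-in features; the feature names and data are illustrative assumptions, since the enterprise incident dataset used in the study is not reproduced here.

```python
# Illustrative Random Forest pipeline for classifying email DLP alerts as
# legitimate data loss vs. false positive; features and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(0, 50, n),    # number of attachments
    rng.integers(0, 500, n),   # matched policy keywords
    rng.integers(0, 2, n),     # external recipient flag
    rng.integers(0, 24, n),    # hour of day sent
])
y = rng.integers(0, 2, n)      # 1 = legitimate data loss, 0 = false positive

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))

# Feature importances offer one route to the human-understandable explanations
# mentioned above for prioritising alerts.
print(dict(zip(["attachments", "keywords", "external", "hour"],
               model.feature_importances_.round(3))))
```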

    A comprehensive survey of V2X cybersecurity mechanisms and future research paths

    Recent advancements in vehicle-to-everything (V2X) communication have notably improved existing transport systems by enabling increased connectivity and driving autonomy levels. The remarkable benefits of V2X connectivity inevitably come with challenges involving security vulnerabilities and breaches. Addressing security concerns is essential for the seamless and safe operation of mission-critical V2X use cases. This paper surveys current literature on V2X security and provides a systematic and comprehensive review of the most relevant security enhancements to date. An in-depth classification of V2X attacks is first performed according to key security and privacy requirements. Our methodology continues with a taxonomy of security mechanisms based on their proactive/reactive defensive approach, which helps identify strengths and limitations of state-of-the-art countermeasures for V2X attacks. In addition, this paper delves into the potential of emerging security approaches leveraging artificial intelligence tools to meet security objectives. Promising data-driven solutions tailored to tackle security, privacy and trust issues are thoroughly discussed, along with the new threat vectors these enablers inevitably introduce. The lessons learned from the detailed review of existing works are also compiled and highlighted. We conclude this survey with a structured synthesis of open challenges and future research directions to foster contributions in this prominent field.
    This work is supported by the H2020-INSPIRE-5Gplus project (under Grant agreement No. 871808), the "Ministerio de Asuntos Económicos y Transformacion Digital" and the European Union-NextGenerationEU in the frameworks of the "Plan de Recuperación, Transformación y Resiliencia" and of the "Mecanismo de Recuperación y Resiliencia" under references TSI-063000-2021-39/40/41, and the CHIST-ERA-17-BDSI-003 FIREMAN project funded by the Spanish National Foundation (Grant PCI2019-103780).
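    As one concrete example of the proactive mechanisms such surveys cover, the hedged Python sketch below signs a basic-safety-style message with ECDSA so that a receiver can check integrity and origin; it is an illustration only, not a standards-compliant V2X (e.g., IEEE 1609.2) implementation, and the message fields are invented.

```python
# Illustrative V2X message authentication: sign with ECDSA, verify on receipt.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())  # held by the sending vehicle
public_key = private_key.public_key()                  # distributed via a PKI in practice

message = json.dumps({"id": "veh-42", "speed_kmh": 87, "lat": 41.39,
                      "lon": 2.17, "ts": 1700000000}).encode()
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Receiver side: verification raises InvalidSignature if the message was tampered with.
try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("message accepted")
except InvalidSignature:
    print("message rejected")
```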

    Mitigating the Risk of Knowledge Leakage in Knowledge Intensive Organizations: a Mobile Device Perspective

    In the current knowledge economy, knowledge represents the most strategically significant resource of organizations. Knowledge-intensive activities advance innovation and create and sustain economic rent and competitive advantage. In order to sustain competitive advantage, organizations must protect knowledge from leakage to third parties, particularly competitors. However, the number and scale of leakage incidents reported in news media as well as in industry whitepapers suggest that modern organizations struggle with the protection of sensitive data and organizational knowledge. The increasing use of mobile devices and technologies by knowledge workers across the organizational perimeter has dramatically increased the attack surface of organizations and the corresponding level of risk exposure. While much of the literature has focused on technology risks that lead to information leakage, human risks that lead to knowledge leakage are relatively understudied. Further, not much is known about strategies to mitigate the risk of knowledge leakage through mobile devices, especially considering the human aspect. Specifically, this research study identified three gaps in the current literature: (1) a lack of in-depth studies that provide specific strategies for knowledge-intensive organizations based on their varied risk levels, as most of the analysed studies offer high-level strategies presented in a generalised manner and fail to identify specific strategies for different organizations and risk levels; (2) a lack of research into the management of knowledge in the context of mobile devices; and (3) a lack of research into the tacit dimension of knowledge, as the majority of the literature focuses on formal and informal strategies to protect explicit (codified) knowledge.
    Comment: The University of Melbourne PhD Thesis