20 research outputs found

    Extracting Secrets from Encrypted Virtual Machines

    AMD SEV is a hardware extension for main memory encryption on multi-tenant systems. SEV uses an on-chip coprocessor, the AMD Secure Processor, to transparently encrypt virtual machine memory with individual, ephemeral keys that never leave the coprocessor. The goal is to protect the confidentiality of the tenants' memory from a malicious or compromised hypervisor and from memory attacks, for instance via cold boot or DMA. The SEVered attack has shown that it is nevertheless possible for a hypervisor to extract memory in plaintext from SEV-encrypted virtual machines without access to their encryption keys. However, the encryption prevents traditional virtual machine introspection techniques from locating secrets in memory prior to extraction. This can require the extraction of large amounts of memory to retrieve specific secrets and thus result in a time-consuming, obvious attack. We present an approach that allows a malicious hypervisor to quickly identify and steal secrets, such as TLS, SSH or FDE keys, from encrypted virtual machines on current SEV hardware. We first observe the activities of a virtual machine from within the hypervisor in order to infer the memory regions most likely to contain the secrets. Then, we systematically extract those memory regions and analyze their contents on the fly. This allows for the efficient retrieval of targeted secrets, strongly increasing the chances of a fast, robust and stealthy theft. Comment: Accepted for publication at CODASPY 201
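
    As a rough, hypothetical illustration of the on-the-fly content analysis described above (not the authors' actual tooling), the following Python sketch scans an extracted guest memory region for common private-key markers; the patterns, addresses and page contents are assumptions made for the example.

        # Illustrative sketch: scan an extracted guest memory region for
        # plaintext private-key markers (PEM headers, RSA DER prefix).
        import re

        PEM_KEY = re.compile(rb"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----")
        DER_RSA_PREFIX = re.compile(rb"\x30\x82..\x02\x01\x00\x02\x82", re.DOTALL)

        def scan_region(region: bytes, base_addr: int = 0):
            """Return (guest_address, kind) pairs for likely key material."""
            hits = []
            for kind, pattern in (("pem", PEM_KEY), ("der-rsa", DER_RSA_PREFIX)):
                for match in pattern.finditer(region):
                    hits.append((base_addr + match.start(), kind))
            return hits

        # Fake extracted 4 KiB page containing a PEM header, for demonstration.
        page = b"\x00" * 100 + b"-----BEGIN RSA PRIVATE KEY-----" + b"\x00" * 3965
        print(scan_region(page, base_addr=0x7f0000000000))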

    Verifiable Round-Robin Scheme for Smart Homes

    Advances in sensing, networking, and actuation technologies have resulted in the IoT wave that is expected to revolutionize all aspects of modern society. This paper focuses on the new privacy challenges that arise in IoT in the context of smart homes. Specifically, the paper focuses on preventing inferences about the user that can be drawn from channel and in-home device activities. We propose a method for securely scheduling the devices while decoupling device and channel activities. The proposed solution avoids any attacks that may reveal the coordinated schedule of the devices and hence also ensures that inferences that may compromise an individual's privacy cannot be drawn from device- and channel-level activities. Our experiments validate the proposed approach: an adversary cannot infer device and channel activities by merely observing the network traffic. Comment: Accepted in ACM Conference on Data and Application Security and Privacy (CODASPY), 2019. 12 pages
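
    As a minimal sketch of the idea (not the paper's verifiable protocol), the Python snippet below gives every device a fixed slot in each round and pads idle slots with dummy traffic of the same size, so observable channel activity is decoupled from real device activity; the slot size and device names are assumptions for the example.

        # Toy round-robin scheduler: each device transmits a fixed-size frame
        # every round, padded with random bytes when it has nothing to send.
        import secrets

        SLOT_BYTES = 256  # hypothetical fixed slot payload size

        def round_robin_round(devices, pending):
            """devices: ordered device ids; pending: dict id -> bytes to send."""
            frames = []
            for dev in devices:                      # same order, every round
                payload = pending.get(dev, b"")[:SLOT_BYTES]
                padded = payload + secrets.token_bytes(SLOT_BYTES - len(payload))
                frames.append((dev, padded))         # every slot is SLOT_BYTES long
            return frames

        # An observer sees one fixed-size frame per device per round, regardless
        # of which devices were actually active.
        frames = round_robin_round(["lock", "camera", "thermostat"],
                                   {"camera": b"motion-event"})
        print([(dev, len(frame)) for dev, frame in frames])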

    QuanShield: Protecting against Side-Channels Attacks using Self-Destructing Enclaves

    Trusted Execution Environments (TEEs) allow user processes to create enclaves that protect security-sensitive computation against access from the OS kernel and the hypervisor. Recent work has shown that TEEs are vulnerable to side-channel attacks that allow an adversary to learn secrets shielded in enclaves. The majority of such attacks trigger exceptions or interrupts to trace the control or data flow of enclave execution. We propose QuanShield, a system that protects enclaves from side-channel attacks that interrupt enclave execution. The main idea behind QuanShield is to strengthen resource isolation by creating an interrupt-free environment on a dedicated CPU core for running enclaves, in which enclaves terminate when interrupts occur. QuanShield avoids interrupts by exploiting the tickless scheduling mode supported by recent OS kernels. QuanShield then uses the save area (SA) of the enclave, which is used by the hardware to support interrupt handling, as a second stack. Through an LLVM-based compiler pass, QuanShield modifies enclave instructions to store/load memory references, such as function frame base addresses, to/from the SA. When an interrupt occurs, the hardware overwrites the data in the SA with CPU state, thus ensuring that enclave execution fails. Our evaluation shows that QuanShield significantly raises the bar for interrupt-based attacks with practical overhead. Comment: 15 pages, 5 figures, 5 tables
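
    The hardware and compiler details are specific to SGX and LLVM, but the self-destruct mechanism can be caricatured in a few lines. The Python toy model below (an assumption-laden analogy, not QuanShield's implementation) keeps frame references in a region that an interrupt would overwrite with CPU state, so interrupted execution fails instead of continuing under observation.

        # Toy model of the "save area as second stack" idea.
        class SaveArea:
            def __init__(self, size):
                self.slots = [None] * size
            def clobber(self):            # models the hardware dumping CPU state
                self.slots = ["CPU_STATE"] * len(self.slots)

        def enclave_call(sa, depth=0):
            sa.slots[depth] = ("frame_base", depth)       # spill reference to the SA
            if depth == 2:
                sa.clobber()                              # an interrupt fires here
            if sa.slots[depth] != ("frame_base", depth):  # value was clobbered
                raise RuntimeError("interrupted: enclave self-destructs")
            if depth + 1 < len(sa.slots):
                enclave_call(sa, depth + 1)

        try:
            enclave_call(SaveArea(4))
        except RuntimeError as exc:
            print(exc)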

    An Automated Vulnerability Detection Framework for Smart Contracts

    With the increasing adoption of blockchain technology for providing decentralized solutions to various problems, smart contracts have become so popular that billions of US dollars are currently exchanged through them every day. Meanwhile, various vulnerabilities in smart contracts have been exploited by attackers to steal cryptocurrencies worth millions of dollars. The automatic detection of smart contract vulnerabilities is therefore an essential research problem. Existing solutions to this problem rely primarily on human experts to define features or rules for detecting vulnerabilities. However, this often causes many vulnerabilities to be missed, and such approaches are inefficient at detecting new vulnerabilities. In this study, to overcome these challenges, we propose a framework to automatically detect vulnerabilities in smart contracts on the blockchain. More specifically, we first generate novel feature vectors from smart contract bytecode, since the source code of smart contracts is rarely publicly available. Next, the collected vectors are fed into our novel metric learning-based deep neural network (DNN) to obtain the detection result. We conduct comprehensive experiments on large-scale benchmarks, and the quantitative results demonstrate the effectiveness and efficiency of our approach.
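
    As a highly simplified stand-in for the pipeline described above (the opcode vocabulary, centroids and labels are invented for illustration, and a nearest-centroid rule replaces the metric-learning DNN), the following Python sketch shows the shape of bytecode-derived feature vectors and distance-based classification.

        # Opcode-frequency feature vectors plus nearest-centroid classification,
        # a crude analogue of learning a distance metric over contract features.
        from collections import Counter

        VOCAB = ["CALL", "DELEGATECALL", "SSTORE", "SLOAD", "JUMP", "CALLVALUE"]

        def features(opcodes):
            counts = Counter(opcodes)
            total = max(len(opcodes), 1)
            return [counts[op] / total for op in VOCAB]   # normalized frequencies

        def nearest_centroid(vec, centroids):
            def dist(a, b):
                return sum((x - y) ** 2 for x, y in zip(a, b))
            return min(centroids, key=lambda label: dist(vec, centroids[label]))

        # Hypothetical centroids that would come from labeled training contracts.
        centroids = {
            "vulnerable-like": features(["CALL", "SSTORE", "CALL", "SLOAD"]),
            "benign-like":     features(["SLOAD", "JUMP", "CALLVALUE", "JUMP"]),
        }
        sample = ["CALL", "CALL", "SSTORE", "SLOAD", "JUMP"]
        print(nearest_centroid(features(sample), centroids))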

    HyPHEN: A Hybrid Packing Method and Optimizations for Homomorphic Encryption-Based Neural Networks

    Convolutional neural network (CNN) inference using fully homomorphic encryption (FHE) is a promising private inference (PI) solution, as FHE enables offloading the whole computation process to the server while protecting the privacy of sensitive user data. Prior FHE-based CNN (HCNN) work has demonstrated the feasibility of constructing deep neural network architectures such as ResNet using FHE. Despite these advancements, HCNN still faces significant practicality challenges due to its high computational and memory overhead. To overcome these limitations, we present HyPHEN, a deep HCNN construction that incorporates novel convolution algorithms (RAConv and CAConv), data packing methods (2D gap packing and the PRCR scheme), and optimization techniques tailored to HCNN construction. These enhancements enable HyPHEN to substantially reduce the memory footprint and the number of expensive homomorphic operations, such as ciphertext rotation and bootstrapping. As a result, HyPHEN brings the latency of HCNN CIFAR-10 inference down to a practical level of 1.4 seconds (ResNet-20) and demonstrates HCNN ImageNet inference for the first time, at 14.7 seconds (ResNet-18). Comment: 15 pages, 12 figures
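
    To make the cost model concrete, the plaintext mock-up below (no actual FHE library; the packing layout and kernel are assumptions for the example) computes a 1D convolution over a packed slot vector as rotate-multiply-accumulate and counts the rotations, the kind of expensive operation whose number HyPHEN's packing methods aim to reduce.

        # Plaintext mock-up of convolution over a packed ciphertext:
        # one slot-vector rotation per kernel tap, then multiply-accumulate.
        def rotate(slots, k):
            return slots[k:] + slots[:k]        # stands in for a ciphertext rotation

        def conv1d_packed(slots, kernel):
            acc = [0.0] * len(slots)
            rotations = 0
            for k, w in enumerate(kernel):      # one rotation per kernel tap
                rot = rotate(slots, k)
                rotations += 1
                acc = [a + w * r for a, r in zip(acc, rot)]
            return acc, rotations

        slots = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]  # pixels packed into one "ciphertext"
        out, nrot = conv1d_packed(slots, [0.25, 0.5, 0.25])
        print(out, "rotations:", nrot)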

    Impact and key challenges of insider threats on organizations and critical businesses

    The insider threat has consistently been identified as a key threat to organizations and governments. Understanding the nature of insider threats and the related threat landscape can help in forming mitigation strategies, including non-technical means. In this paper, we survey and highlight challenges associated with the identification and detection of insider threats in both public and private sector organizations, especially those that are part of a nation’s critical infrastructure. We explore the utility of the cyber kill chain for understanding insider threats, as well as the underpinning human behavior and psychological factors. Existing defense techniques are discussed and critically analyzed, and improvements are suggested in line with current state-of-the-art cyber security requirements. Finally, open problems related to the insider threat are identified and future research directions are discussed.

    Privacy Policies Across the Ages: Content of Privacy Policies 1996-2021

    It is well-known that most users do not read privacy policies, but almost always tick the box to agree with them. While the length and readability of privacy policies have been well studied, and many approaches for policy analysis based on natural language processing have been proposed, existing studies are limited in their depth and scope, often focusing on a small number of data practices at a single point in time. In this paper, we fill this gap by analyzing the 25-year history of privacy policies using machine learning and natural language processing and presenting a comprehensive analysis of policy contents. Specifically, we collect a large-scale longitudinal corpus of privacy policies from 1996 to 2021 and analyze their content in terms of the data practices they describe, the rights they grant to users, and the rights they reserve for their organizations. We pay particular attention to changes in response to recent privacy regulations such as the GDPR and CCPA. We observe some positive changes, such as reductions in data collection post-GDPR, but also a range of concerning data practices, such as widespread implicit data collection for which users have no meaningful choices or access rights. Our work is an important step towards making privacy policies machine-readable on the user side, which would help users match their privacy preferences against the policies offered by web services.
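
    As a toy illustration of content analysis over policy text (keyword rules standing in for the paper's machine-learning and NLP pipeline; the patterns and labels are assumptions), the Python sketch below tags a policy paragraph with coarse data-practice labels.

        # Keyword-rule tagging of privacy-policy text with coarse practice labels.
        import re

        PRACTICES = {
            "data_collection":     r"\b(collect|obtain|gather)\b.*\b(information|data)\b",
            "third_party_sharing": r"\b(share|disclose|sell)\b.*\b(third[- ]part(y|ies)|partners)\b",
            "user_access_rights":  r"\b(access|delete|correct)\b.*\byour (information|data)\b",
        }

        def tag_paragraph(text):
            text = text.lower()
            return [label for label, pattern in PRACTICES.items()
                    if re.search(pattern, text)]

        sample = ("We collect information about your device and may share it "
                  "with advertising partners. You may request to delete your data.")
        print(tag_paragraph(sample))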