617 research outputs found

    Architecture for performing secure computation on encrypted data

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 97-101).
    This thesis considers encrypted computation, where the user supplies encrypted inputs to an untrusted batch program controlled by an untrusted server. In batch computation, all data that the program might need is known at program start time. Encrypted computation on untrusted batch programs can be realized through fully homomorphic encryption (FHE) techniques, but FHE's current overheads limit its applicability. Secure processors (e.g., Aegis), coprocessors (e.g., TPM), and hardware extensions (e.g., TXT) typically require trust in the entire processor, the host operating system, and the program that computes on the inputs. In this thesis, we design a secure processor architecture, called Ascend, that guarantees privacy of data given untrusted batch programs. The key idea behind Ascend's privacy guarantee is parameterizable, obfuscated program execution. From the perspective of the Ascend chip's input/output and power pins, an untrusted server cannot learn anything about private user data, regardless of the program being run. Ascend uses Oblivious RAM (ORAM) techniques to hide memory access patterns and differential-power-analysis (DPA) resistance techniques to hide data-dependent power draw. For each of the input/output and power channels, an Ascend chip exposes a set of public knobs that fully specify the observable behavior of the chip given any batch program and any input to that batch program. These knobs (e.g., specifying strict intervals at which external memory should be accessed) are controlled by the server and can be tuned, based on the server's a priori knowledge of the program, to trade off performance and power without impacting security. Experimental results when running Ascend on SPEC benchmarks show average performance/power overheads of 3.6x/6.6x and 5.2x/4.7x, when hiding both memory access pattern and power draw, using two schemes that capture the server's a priori knowledge in different ways. Furthermore, when hiding the memory access pattern only, the performance/power overheads drop to just 2.6x/2.2x. These surprising results suggest that trusting only hardware, and not software, is viable for some security-conscious applications.
    by Christopher W. Fletcher. S.M.
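
    To make the "strict interval" knob concrete, the sketch below is a minimal illustration in that spirit, not Ascend's actual design; the FakeORAM backend and all names are hypothetical. It shows a scheduler that issues exactly one oblivious memory access per interval, padding with dummy accesses so the off-chip timing reveals nothing about the program's real demand.

        # Minimal sketch of a strict-interval memory scheduler in the spirit of
        # Ascend's public knobs. FakeORAM is a hypothetical stand-in backend.
        from collections import deque

        class FakeORAM:
            def access(self, addr):        # a real chip would use ORAM, so *which*
                return ("real", addr)      # address is touched is also hidden
            def access_dummy(self):
                return ("dummy", None)

        class StrictIntervalScheduler:
            def __init__(self, oram, interval_cycles):
                self.oram = oram
                self.interval = interval_cycles   # public knob, fixed by the server a priori
                self.pending = deque()            # real requests queued by the core

            def request(self, addr):
                self.pending.append(addr)         # deferred to the next interval boundary

            def tick(self, cycle):
                if cycle % self.interval != 0:
                    return None                   # nothing observable leaves the chip
                if self.pending:
                    return self.oram.access(self.pending.popleft())
                return self.oram.access_dummy()   # indistinguishable dummy access

        # The off-chip trace shows one access at cycles 0, 100, 200, ... regardless
        # of what the (secret-dependent) program actually requested.
        sched = StrictIntervalScheduler(FakeORAM(), interval_cycles=100)
        sched.request(0x1000)
        trace = [sched.tick(c) for c in range(300)]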

    ZeroTrace: Oblivious Memory Primitives from Intel SGX

    We are witnessing a confluence between applied cryptography and secure hardware systems in enabling secure cloud computing. On one hand, work in applied cryptography has enabled efficient, oblivious data structures and memory primitives. On the other, secure hardware and the emergence of Intel SGX have enabled a low-overhead, mass-market mechanism for isolated execution. By themselves, these technologies have their disadvantages. Oblivious memory primitives carry high performance overheads, especially when run non-interactively. Intel SGX, while more efficient, suffers from numerous software-based side-channel attacks, high context-switching costs, and bounded memory size. In this work we build a new library of oblivious memory primitives, which we call ZeroTrace. ZeroTrace is designed to carefully combine state-of-the-art oblivious RAM techniques and SGX while mitigating the individual disadvantages of these technologies. To the best of our knowledge, ZeroTrace represents the first oblivious memory primitives running on a real secure hardware platform. ZeroTrace simultaneously enables a dramatic speed-up over pure cryptography and protection from software-based side-channel attacks. The core of our design is an efficient and flexible block-level memory controller that provides oblivious execution against any active software adversary, and across asynchronous SGX enclave terminations. Performance-wise, the memory controller can service requests for 4 B blocks in 1.2 ms and 1 KB blocks in 3.4 ms (given a 10 GB dataset). On top of our memory controller, we evaluate Set/Dictionary/List interfaces, all of which can perform basic operations (e.g., get/put/insert).
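
    The essential property such primitives provide is that the sequence of memory locations touched is independent of secret inputs. The toy function below is a didactic stand-in only (ZeroTrace's actual controller is ORAM-based and runs inside an SGX enclave); it shows the simplest form of the idea, a linear-scan oblivious read.

        def oblivious_read(array, secret_index):
            # Touch every slot and blend with arithmetic instead of a secret-dependent
            # branch, so the access trace does not depend on secret_index. Python is
            # not a constant-time substrate; only the structure is illustrative.
            result = 0
            for i, value in enumerate(array):
                match = int(i == secret_index)
                result = match * value + (1 - match) * result
            return result

        assert oblivious_read([10, 20, 30, 40], 2) == 30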

    TeAAL: A Declarative Framework for Modeling Sparse Tensor Accelerators

    Over the past few years, the explosion in sparse tensor algebra workloads has led to a corresponding rise in domain-specific accelerators to service them. Due to the irregularity present in sparse tensors, these accelerators employ a wide variety of novel solutions to achieve good performance. At the same time, prior work on design-flexible sparse accelerator modeling does not express this full range of design features, making it difficult to understand the impact of each design choice and to compare or extend the state of the art. To address this, we propose TeAAL: a language and compiler for the concise and precise specification and evaluation of sparse tensor algebra architectures. We use TeAAL to represent and evaluate four disparate state-of-the-art accelerators--ExTensor, Gamma, OuterSPACE, and SIGMA--and verify that it reproduces their performance with high accuracy. Finally, we demonstrate the potential of TeAAL as a tool for designing new accelerators by showing how it can be used to speed up Graphicionado, by 38x on BFS and 4.3x on SSSP.
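
    For intuition, a declarative specification of this kind might describe the computation as an Einsum plus per-rank formats and a mapping. The sketch below is a hypothetical, simplified rendering in Python; it does not reproduce TeAAL's actual syntax or schema.

        # Hypothetical, simplified sketch of a declarative accelerator spec; the
        # keys and values below are illustrative, not TeAAL's real schema.
        spmspm_spec = {
            # The computation, written as an Einsum over (possibly sparse) tensors.
            "einsum": "Z[m, n] = A[m, k] * B[k, n]",
            # Per-tensor rank order and per-rank storage format.
            "formats": {
                "A": {"rank_order": ["M", "K"], "K": "compressed"},
                "B": {"rank_order": ["K", "N"], "N": "compressed"},
                "Z": {"rank_order": ["M", "N"], "N": "compressed"},
            },
            # Dataflow: loop order and which rank the hardware parallelizes over.
            "mapping": {"loop_order": ["M", "K", "N"], "parallelize": ["K"]},
        }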

    Small-molecule CaVα1⋅CaVβ antagonist suppresses neuronal voltage-gated calcium-channel trafficking

    Extracellular calcium flow through neuronal voltage-gated CaV2.2 calcium channels converts action potential-encoded information to the release of pronociceptive neurotransmitters in the dorsal horn of the spinal cord, culminating in excitation of the postsynaptic central nociceptive neurons. The CaV2.2 channel is composed of a pore-forming α1 subunit (CaVα1) that is engaged in protein-protein interactions with auxiliary α2/δ and β subunits. The high-affinity CaV2.2α1⋅CaVβ3 protein-protein interaction is essential for proper trafficking of CaV2.2 channels to the plasma membrane. Here, structure-based computational screening led to small molecules that disrupt the CaV2.2α1⋅CaVβ3 protein-protein interaction. The binding mode of these compounds reveals that three substituents closely mimic the side chains of hot-spot residues located on the α-helix of CaV2.2α1. Site-directed mutagenesis confirmed the critical nature of a salt-bridge interaction between the compounds and CaVβ3 Arg-307. In cells, the compounds decreased trafficking of CaV2.2 channels to the plasma membrane and modulated the functions of the channel. In a rodent neuropathic pain model, the compounds suppressed pain responses. Small-molecule α-helical mimetics targeting ion channel protein-protein interactions may represent a strategy for developing nonopioid analgesia and for treating other neurological disorders associated with calcium-channel trafficking.

    Data Oblivious ISA Extensions for Side Channel-Resistant and High Performance Computing

    Blocking microarchitectural (digital) side channels is one of the most pressing challenges in hardware security today. Recently, there has been a surge of effort that attempts to block these leakages by writing programs data obliviously. In this model, programs are written to avoid placing sensitive data-dependent pressure on shared resources. Despite recent efforts, however, running data oblivious programs on modern machines today is both insecure and slow. First, writing programs obliviously assumes that certain instructions in today's ISAs will not leak privacy, whereas today's ISAs and hardware provide no such guarantees. Second, writing programs to avoid data-dependent behavior inherently incurs high performance overhead. This paper tackles both the security and performance aspects of this problem by proposing a Data Oblivious ISA extension (OISA). On the security side, we present ISA design principles to block microarchitectural side channels, and embody these ideas in a concrete ISA capable of safely executing existing data oblivious programs. On the performance side, we design the OISA with support for efficient memory oblivious computation, and with safety features that allow modern hardware optimizations, e.g., out-of-order speculative execution, to remain enabled in the common case. We provide a complete hardware prototype of our ideas, built on top of the RISC-V out-of-order, speculative BOOM processor, and prove that the OISA can provide the advertised security through a formal analysis of an abstract BOOM-style machine. We evaluate the area overhead of the hardware mechanisms needed to support our prototype, and provide performance experiments showing how the OISA speeds up a variety of existing data oblivious codes (including "constant time" cryptography and memory oblivious data structures), in addition to improving their security and portability.
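
    A staple of such data oblivious code is the branch-free conditional select, which an OISA can expose as an instruction whose operands are guaranteed not to leak. The snippet below is only a sketch of that programming idiom, not the paper's actual ISA encoding.

        def oblivious_select(cond_bit, a, b, width=64):
            # Return a if cond_bit == 1 else b, without branching on the secret bit.
            # Python integers are not a constant-time substrate; only the branch-free
            # structure is illustrative.
            mask = (-cond_bit) & ((1 << width) - 1)              # all ones or all zeros
            return (a & mask) | (b & ~mask & ((1 << width) - 1))

        x, y = 7, 12
        lt = int(x < y)   # in real oblivious code this comparison must itself be branch-free
        assert oblivious_select(lt, x, y) == 7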

    Design space exploration and optimization of path oblivious RAM in secure processors

    Keeping user data private is a major problem in both cloud computing and computation outsourcing. One paradigm to achieve data privacy is to use tamper-resistant processors, inside which users' private data is decrypted and computed upon. These processors need to interact with untrusted external memory. Even if we encrypt all data that leaves the trusted processor, however, the address sequence that goes off-chip may still leak information. To prevent this address leakage, the security community has proposed ORAM (Oblivious RAM). ORAM has mainly been explored in server/file settings, which assume a vastly different computation model than secure processors. Not surprisingly, naïvely applying ORAM to a secure processor setting incurs large performance overheads. In this paper, a recent proposal called Path ORAM is studied. We demonstrate techniques to make Path ORAM practical in a secure processor setting. We introduce background eviction schemes to prevent Path ORAM failure and allow for a performance-driven design space exploration. We propose a concept called super blocks to further improve Path ORAM's performance, and also show an efficient integrity verification scheme for Path ORAM. With our optimizations, Path ORAM overhead drops by 41.8%, and SPEC benchmark execution time improves by 52.4% relative to a baseline configuration. Our work can be used to improve the security level of previous secure processors.
    National Science Foundation (U.S.). Graduate Research Fellowship Program (Grant 1122374)
    American Society for Engineering Education. National Defense Science and Engineering Graduate Fellowship
    United States. Defense Advanced Research Projects Agency (Clean-slate design of Resilient, Adaptive, Secure Hosts, Contract N66001-10-2-4089)
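
    For readers unfamiliar with the underlying protocol, the sketch below is a minimal, didactic Path ORAM assuming nothing beyond the standard scheme: blocks are mapped to random tree leaves, every access reads one root-to-leaf path into a stash, remaps the block, and writes blocks back as deep as their leaf assignments allow. The paper's contributions (background eviction, super blocks, integrity verification) are deliberately omitted.

        import random

        class PathORAM:
            # Didactic Path ORAM sketch, not the paper's optimized secure-processor design.
            def __init__(self, levels=4, bucket_size=4):
                self.leaves = 2 ** (levels - 1)
                self.bucket_size = bucket_size
                self.buckets = [[] for _ in range(2 ** levels - 1)]  # binary tree, heap layout
                self.position = {}                                   # block id -> assigned leaf
                self.stash = {}                                      # on-chip overflow storage

            def _path(self, leaf):
                # Bucket indices from the root (index 0) down to the given leaf.
                node, path = self.leaves - 1 + leaf, []
                while True:
                    path.append(node)
                    if node == 0:
                        return path[::-1]
                    node = (node - 1) // 2

            def access(self, block_id, new_value=None):
                leaf = self.position.get(block_id, random.randrange(self.leaves))
                self.position[block_id] = random.randrange(self.leaves)  # remap on every access
                for node in self._path(leaf):              # read the whole path; this hides
                    for bid, val in self.buckets[node]:    # which block on it was wanted
                        self.stash[bid] = val
                    self.buckets[node] = []
                if new_value is not None:
                    self.stash[block_id] = new_value
                value = self.stash.get(block_id)
                for node in reversed(self._path(leaf)):    # write back, deepest buckets first
                    for bid in list(self.stash):
                        if (len(self.buckets[node]) < self.bucket_size
                                and node in self._path(self.position[bid])):
                            self.buckets[node].append((bid, self.stash.pop(bid)))
                return value

        oram = PathORAM()
        oram.access("x", 42)
        assert oram.access("x") == 42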