DR.SGX: Hardening SGX Enclaves against Cache Attacks with Data Location Randomization
Recent research has demonstrated that Intel's SGX is vulnerable to various
software-based side-channel attacks. In particular, attacks that monitor CPU
caches shared between the victim enclave and untrusted software enable accurate
leakage of secret enclave data. Known defenses assume developer assistance,
require hardware changes, impose high overhead, or prevent only some of the
known attacks. In this paper we propose data location randomization as a novel
defensive approach to address the threat of side-channel attacks. Our main goal
is to break the link between the cache observations by the privileged adversary
and the actual data accesses by the victim. We design and implement a
compiler-based tool called DR.SGX that instruments enclave code such that data
locations are permuted at the granularity of cache lines. We realize the permutation with the CPU's cryptographic hardware-acceleration units, which provide secure randomization. To prevent correlation of repeated memory accesses, we continuously re-randomize all enclave data during execution. Our solution
effectively protects many (but not all) enclaves from cache attacks and
provides a complementary enclave hardening technique that is especially useful
against unpredictable information leakage.
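To illustrate the idea of data location randomization (a sketch only, not the DR.SGX compiler instrumentation; the line size, class name, and re-randomization trigger are assumptions made for this example), consider translating every buffer access through a secret permutation of cache-line indices:

import secrets
import random

LINE_SIZE = 64  # bytes per cache line (assumption for this sketch)

class PermutedBuffer:
    """Toy model: logical offsets are mapped to storage through a secret
    permutation of cache-line indices, so the line an adversary observes
    being touched does not reveal which logical data was accessed."""
    def __init__(self, size: int):
        self.n_lines = (size + LINE_SIZE - 1) // LINE_SIZE
        self.storage = bytearray(self.n_lines * LINE_SIZE)
        self.perm = list(range(self.n_lines))
        self._rekey()

    def _rekey(self):
        # DR.SGX derives its permutation from the CPU's AES hardware;
        # here a secretly seeded shuffle stands in for that keyed permutation.
        random.Random(secrets.randbits(128)).shuffle(self.perm)

    def _translate(self, offset: int) -> int:
        line, within = divmod(offset, LINE_SIZE)
        return self.perm[line] * LINE_SIZE + within

    def read(self, offset: int) -> int:
        return self.storage[self._translate(offset)]

    def write(self, offset: int, value: int) -> None:
        self.storage[self._translate(offset)] = value

    def rerandomize(self) -> None:
        # Periodically move all data under a fresh permutation so repeated
        # accesses cannot be correlated across cache observations.
        old_perm, old_storage = list(self.perm), bytes(self.storage)
        self._rekey()
        for line in range(self.n_lines):
            src, dst = old_perm[line] * LINE_SIZE, self.perm[line] * LINE_SIZE
            self.storage[dst:dst + LINE_SIZE] = old_storage[src:src + LINE_SIZE]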
Survey and Benchmark of Block Ciphers for Wireless Sensor Networks
Cryptographic algorithms play an important role in the security architecture of wireless sensor networks (WSNs). Choosing the most storage- and energy-efficient block cipher is essential, because these networks are meant to operate without human intervention for long periods of time with little energy supply, and because available storage is scarce on sensor nodes. However, to our knowledge, no systematic work has been done in this area so far. We construct an evaluation framework in which we first identify the candidate block ciphers suitable for WSNs, based on existing literature and authoritative recommendations. For evaluating and assessing these candidates, we consider not only their security properties but also their storage and energy efficiency. Finally, based on the evaluation results, we select the most suitable ciphers for WSNs, namely Skipjack, MISTY1, and Rijndael, depending on the combination of available memory and required security (energy efficiency being implicit). In terms of operation mode, we recommend Output Feedback Mode for pairwise links but Cipher Block Chaining for group communications.
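As a concrete illustration of the two recommended operation modes (a minimal sketch using AES, i.e. Rijndael, via the Python cryptography package; the key, IV, and message are placeholders, and a real sensor node would use a constrained implementation rather than this desktop library):

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)                            # 128-bit key shared with the peer or group
iv = os.urandom(16)                             # fresh IV per message
message = b"sensor reading: 21.5 C".ljust(32)   # padded to a 16-byte multiple for CBC

# Output Feedback (OFB), a stream mode, recommended here for pairwise links.
ofb = Cipher(algorithms.AES(key), modes.OFB(iv)).encryptor()
ct_ofb = ofb.update(message) + ofb.finalize()

# Cipher Block Chaining (CBC), recommended here for group communications.
cbc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ct_cbc = cbc.update(message) + cbc.finalize()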
Survey and future trends of efficient cryptographic function implementations on GPGPUs
Many standard cryptographic functions are designed to benefit from hardware-specific implementations. As a result, there have been a large number of highly efficient ASIC- and FPGA-based hardware implementations of standard cryptographic functions. Previously, hardware-accelerated devices were available only to a limited set of users. General Purpose Graphics Processing Units (GPGPUs) have become a standard consumer item and have demonstrated orders-of-magnitude performance improvements for general-purpose computation, including cryptographic functions. This paper reviews current and future trends in GPU technology and examines their potential impact on current cryptographic practice.
Regular and almost universal hashing: an efficient implementation
Random hashing can provide guarantees regarding the performance of data structures such as hash tables, even in an adversarial setting. Many existing families of hash functions are universal: for any two distinct data objects, the probability that they have the same hash value is low when the hash function is picked at random. However, universality fails to ensure that all hash functions are well behaved. We further require regularity: when data objects are picked at random, they should have a low probability of having the same hash value, for any fixed hash function. We present an efficient implementation of
a family of non-cryptographic hash functions (PM+) offering good running times,
good memory usage as well as distinguishing theoretical guarantees: almost
universality and component-wise regularity. On a variety of platforms, our
implementations are comparable to the state of the art in performance. On
recent Intel processors, PM+ achieves a speed of 4.7 bytes per cycle for 32-bit
outputs and 3.3 bytes per cycle for 64-bit outputs. We review vectorization
through SIMD instructions (e.g., AVX2) and optimizations for superscalar
execution.
Comment: accepted for publication in Software: Practice and Experience in September 201
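In the notation used above, the two guarantees can be stated as follows (generic bounds \varepsilon and \varepsilon' for illustration; the paper's precise constants for PM+ may differ):

\[ \text{(almost universality)} \quad \Pr_{h \in \mathcal{H}}\left[ h(x) = h(y) \right] \le \varepsilon \quad \text{for every fixed pair } x \ne y, \]
\[ \text{(regularity)} \quad \Pr_{x, y}\left[ h(x) = h(y) \right] \le \varepsilon' \quad \text{for every fixed } h \in \mathcal{H}, \]

where the first probability is over the random choice of h and the second is over distinct data objects x and y drawn at random.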
Optimization and Regulation of Performance for Computing Systems
The current demands of computing applications, the advent of technological advances in hardware and software, the contractual relationship between users and cloud service providers, and current ecological demands require the refinement of performance regulation on computing systems. Powerful mathematical tools such as control systems theory, discrete event systems (DES), and randomized algorithms (RAs) have offered improvements in efficiency and performance in computing scenarios where the traditional approach has been the application of well-founded common sense and heuristics. The comprehensive concept of a computing system applies equally to a microprocessor unit, a set of microprocessor units in a server, a set of servers interconnected in a data center, or even a network of data centers forming a cloud of virtual resources. In this dissertation, we explore theoretical approaches to optimize and regulate performance measures in different computing systems. In several cases, such as cloud services, this optimization would allow the fair negotiation of service level agreements (SLAs) between a user and a cloud service provider, which may be objectively measured for the benefit of both negotiators. Although DES are known to be suitable for modeling computing systems, we still find that traditional control theory approaches, such as passivity analysis, may offer solutions that are worth exploring. Moreover, as the size of the problem increases, so does its complexity. RAs offer good alternatives for making design decisions in such complex problems based on given values of confidence and accuracy. In this dissertation, we propose the development of: a) a methodology to optimize performance on a many-core processor system, b) a methodology to optimize and regulate performance on a multitier server, c) some corrections to a previously proposed passivity analysis of a market-oriented cloud model, and d) a decentralized methodology to optimize cloud performance. In all the aforementioned systems, we are interested in developing optimization methods strongly supported by DES theory, specifically Infinitesimal Perturbation Analysis (IPA), and by RAs based on sample complexity, to guarantee that these computing systems satisfy the required optimal performance on average.
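For reference, the sample-complexity guarantees alluded to above are typically of the Chernoff/Hoeffding type (shown here only as a generic illustration, not as the dissertation's specific bounds): to estimate an expected performance measure to within accuracy \varepsilon with confidence 1 - \delta, it suffices to draw

\[ N \ \ge\ \frac{1}{2\varepsilon^{2}} \ln\frac{2}{\delta} \]

independent samples.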