
    Explicit Optimal Binary Pebbling for One-Way Hash Chain Reversal

    We present explicit optimal binary pebbling algorithms for reversing one-way hash chains. For a hash chain of length 2^k, the number of hashes performed in each output round does not exceed ⌈k/2⌉, whereas the number of hash values stored (pebbles) throughout is at most k. This is optimal for binary pebbling algorithms characterized by the property that the midpoint of the hash chain is computed just once and stored until it is output, and that this property applies recursively to both halves of the hash chain. We introduce a framework for rigorous comparison of explicit binary pebbling algorithms, including simple speed-1 binary pebbling, Jakobsson's speed-2 binary pebbling, and our optimal binary pebbling algorithm. Explicit schedules describe for each pebble exactly how many hashes need to be performed in each round. The optimal schedule turns out to be essentially unique and exhibits a nice recursive structure, which allows for fully optimized implementations that can readily be deployed. In particular, we develop the first in-place implementations with minimal storage overhead (essentially, storing only hash values), and fast implementations with minimal computational overhead. Moreover, we show that our approach is not limited to hash chains of length n = 2^k, but accommodates hash chains of arbitrary length n ≥ 1 without incurring any overhead. Finally, we show how to run a cascade of pebbling algorithms along with a bootstrapping technique, facilitating sequential reversal of an unlimited number of hash chains growing in length up to a given bound.
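    To make the recursive structure concrete, here is a minimal Python sketch (with invented names) of plain recursive binary pebbling, not the paper's optimal schedule: the midpoint of each (sub)chain is computed once, stored as a pebble until its half is output, and the recursion applies to both halves, so at most O(k) hash values are stored at any time.

        import hashlib

        def H(x):
            # one step of the one-way hash chain
            return hashlib.sha256(x).digest()

        def advance(x, n):
            # compute H^n(x) by iterated hashing
            for _ in range(n):
                x = H(x)
            return x

        def reverse_chain(seed, k):
            # Yield x_n, x_{n-1}, ..., x_1 for n = 2^k, where x_i = H^i(seed).
            def rec(x, n):
                if n == 1:
                    yield H(x)
                    return
                mid = advance(x, n // 2)         # pebble at the midpoint
                yield from rec(mid, n - n // 2)  # output the upper half first
                yield from rec(x, n // 2)        # then the lower half, from x
            yield from rec(seed, 1 << k)

    For k = 3 this yields x_8, x_7, ..., x_1 from a single stored seed. The paper's contribution is the explicit schedule that additionally bounds the work per output round by ⌈k/2⌉ hashes, which this naive recursion does not achieve.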

    High-Performance In-Memory OLTP via Coroutine-to-Transaction

    Data stalls are a major overhead in main-memory database engines due to the use of pointer-rich data structures. Lightweight coroutines ease the implementation of software prefetching, which hides data stalls by overlapping computation with asynchronous data prefetching. Prior solutions, however, mainly focused on (1) individual components and operations and (2) intra-transaction batching, which requires interface changes that break backward compatibility. It was not clear how such techniques apply to a full database engine and how much end-to-end benefit they bring under various workloads. This thesis presents CoroBase, a main-memory database engine that tackles these challenges with a new coroutine-to-transaction paradigm. Coroutine-to-transaction models transactions as coroutines and thus enables inter-transaction batching, avoiding application changes while retaining the benefits of prefetching. We show that on a 48-core server, CoroBase can perform close to 2× better for read-intensive workloads and remains competitive for workloads that inherently do not benefit from software prefetching.
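    As a rough illustration of the coroutine-to-transaction idea (a Python sketch with hypothetical names, not CoroBase's C++ implementation): each transaction is a coroutine that suspends wherever the engine would issue a software prefetch, and a scheduler interleaves a whole batch of independent transactions so that one transaction's computation overlaps another's outstanding memory accesses.

        from collections import deque

        def transaction(tx_id, keys, index):
            # One transaction as a coroutine. Each yield marks a point where
            # a real engine would issue prefetch(node) and suspend instead
            # of stalling on a cache miss.
            total = 0
            for key in keys:
                value = index.get(key)   # a pointer chase in a real engine
                yield                    # suspension point
                total += value
            return (tx_id, total)

        def run_batch(batch):
            # Round-robin scheduler: inter-transaction batching. While one
            # transaction "waits on memory", the others make progress.
            ready, results = deque(batch), []
            while ready:
                tx = ready.popleft()
                try:
                    next(tx)
                    ready.append(tx)     # suspended; resume later
                except StopIteration as done:
                    results.append(done.value)
            return results

        index = {k: k * 10 for k in range(100)}
        print(run_batch([transaction(i, [i, i + 1], index) for i in range(8)]))

    Because the batching happens across transactions, the application still submits ordinary one-shot transactions; no multi-key interface changes are needed.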

    Practical Forward Secure Signatures using Minimal Security Assumptions

    Digital signatures are one of the most important cryptographic primitives in practice. They are an enabling technology for eCommerce and eGovernment applications, and they are used to distribute software updates over the Internet in a secure way. In this work we introduce two new digital signature schemes: XMSS and its extension XMSS^MT. We present security proofs for both schemes in the standard model, analyze their performance, and discuss parameter selection. Both our schemes have certain properties that make them favorable compared to today's signature schemes. Our schemes are forward secure, meaning that even in case of a key compromise, previously generated signatures can still be trusted. This is an important property whenever a signature has to be verifiable in the mid- or long term. Moreover, our signature schemes are generic constructions that can be instantiated using any hash function; if a hash function in use becomes insecure for some reason, we can simply replace it by a secure one to obtain a new secure instantiation. The properties we require the hash function to provide are minimal. This implies that as long as there exists any complexity-based cryptography, there exists a secure instantiation for our schemes. In addition, our schemes are secure against quantum computer aided attacks, as long as the used hash functions are. We analyze the performance of our schemes from a theoretical and a practical point of view. On the one hand, we show that given an efficient hash function, we can obtain an efficient instantiation for our schemes. On the other hand, we provide experimental data showing that the performance of our schemes is comparable to that of today's signature schemes. Besides, we show how to select optimal parameters for a given use case that provably reach a given level of security. On the way to constructing XMSS and XMSS^MT, we introduce two new one-time signature schemes (OTS): WOTS+ and WOTS$. One-time signature schemes are signature schemes where a key pair may only be used once. WOTS+ is currently the most efficient hash-based OTS, and WOTS$ the most efficient hash-based OTS with minimal security assumptions. One-time signature schemes have many more applications besides constructing full-fledged signature schemes, including authentication in sensor networks and the construction of chosen-ciphertext secure encryption schemes. Hence, WOTS+ and WOTS$ are contributions in their own right. Altogether, this work shows the practicality and usability of forward secure signatures on the one hand and hash-based signatures on the other.
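    For intuition about hash-based one-time signatures, here is a minimal Lamport-style OTS in Python (an illustrative sketch, not WOTS+ itself; Winternitz-type schemes such as WOTS+ shrink keys and signatures by signing several bits at a time with chains of hash applications):

        import hashlib, os

        def H(x):
            return hashlib.sha256(x).digest()

        def keygen():
            # Two random secrets per digest bit; the public key is their hashes.
            sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
            pk = [(H(a), H(b)) for a, b in sk]
            return sk, pk

        def bits(msg):
            d = int.from_bytes(H(msg), "big")
            return [(d >> i) & 1 for i in range(256)]

        def sign(sk, msg):
            # Reveal one secret per digest bit. Signing twice with the same
            # key reveals secrets for both bit values -- strictly one-time!
            return [sk[i][b] for i, b in enumerate(bits(msg))]

        def verify(pk, msg, sig):
            return all(H(s) == pk[i][b]
                       for (i, b), s in zip(enumerate(bits(msg)), sig))

        sk, pk = keygen()
        sig = sign(sk, b"software update v1.2")
        assert verify(pk, b"software update v1.2", sig)

    Many-time schemes such as XMSS then authenticate a large number of such one-time keys under a single public key via a Merkle hash tree.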

    An innovative approach of blending security features in energy-efficient routing for a crowded network of wireless sensors

    Wireless sensor networks (WSN) are emerging both as an important new tier in the IT (information technology) ecosystem and as a rich domain of active research involving hardware and system design, networking, distributed algorithms, programming models, data management, security, and social factors [1,2]. The basic idea of a sensor network is to disperse tiny sensing devices over a specific target area. These devices are capable of sensing certain changes of incidents or parameters and of communicating with other devices. WSNs can be very useful for specific purposes such as target tracking, surveillance, and environmental monitoring. Today's sensors can monitor temperature, pressure, humidity, soil makeup, vehicular movement, noise levels, lighting conditions, the presence or absence of certain kinds of objects or substances, mechanical stress levels on attached objects, and other properties. As such networks are composed of resource-constrained tiny sensor nodes, much research has focused on efficient use of the sensors' available resources. Energy, in fact, is one of the most critical factors in determining how long the network remains active and operable. Energy efficiency is crucial in these networks because the power sources of the inexpensive sensors are in most cases not replaceable after deployment. If any intermediate node between two communicating nodes runs out of battery power, the link between the end nodes is eventually broken. Any protocol should therefore use the sensors' energy judiciously so that adequate network connectivity can be maintained throughout the network's operating time; energy efficiency is likewise necessary to maximize the lifetime of the network.
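    As a toy illustration of energy-aware next-hop selection (a hypothetical sketch, not the scheme proposed in this work): a node can trade off a neighbor's residual battery energy against its distance to the sink, so that no single relay is drained prematurely and links stay alive longer.

        def choose_next_hop(neighbors):
            # neighbors: dicts with residual 'energy' (J) and 'dist_to_sink'
            # (m). The 0.5 weight is an arbitrary illustrative trade-off
            # between sparing batteries and making routing progress.
            def score(n):
                return n["energy"] - 0.5 * n["dist_to_sink"]
            alive = [n for n in neighbors if n["energy"] > 0]
            return max(alive, key=score) if alive else None  # None: no route

        neighbors = [
            {"id": "a", "energy": 8.0, "dist_to_sink": 12.0},
            {"id": "b", "energy": 3.5, "dist_to_sink": 4.0},
        ]
        print(choose_next_hop(neighbors)["id"])  # 'a': the well-charged relay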

    Accelerators for Data Processing

    The explosive growth in digital data and its growing role in real-time analytics motivate the design of high-performance database management systems (DBMSs). Meanwhile, the slowdown in supply voltage scaling has stymied improvements in core performance and ushered in an era of power-limited chips. These developments motivate the design of software and hardware DBMS accelerators that (1) maximize utility by accelerating the dominant operations, and (2) provide flexibility in the choice of DBMS, data layout, and data types. In this thesis, we identify pointer-intensive data structure operations as a key performance and efficiency bottleneck in data analytics workloads. We observe that data analytics tasks include a large number of independent data structure lookups, each of which is characterized by dependent long-latency memory accesses due to pointer chasing. Unfortunately, exploiting such inter-lookup parallelism to overlap memory accesses from different lookups is not possible within the limited instruction window of modern out-of-order cores. Similarly, software prefetching techniques attempt to exploit inter-lookup parallelism by statically staging independent lookups, and hence break down in the face of irregularity across lookup stages. Based on these observations, we provide a dynamic software acceleration scheme that exploits inter-lookup parallelism to hide memory access latency despite the irregularities across lookups. Furthermore, we propose a programmable hardware accelerator to maximize the efficiency of data structure lookups. As a result, through flexible hardware and software techniques, we eliminate a key efficiency and performance bottleneck in data analytics operations.
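    A rough sketch of the software side of this idea (Python with invented names; real implementations issue prefetch instructions from C/C++): each lookup is a coroutine that suspends at every dependent pointer dereference, and a scheduler keeps a window of lookups in flight so their memory accesses overlap even when lookups take different numbers of steps.

        from collections import deque

        def lookup(table, key):
            # Hash-table probe with chaining; nodes are (key, value, next)
            # triples. Each yield marks a dependent pointer dereference where
            # a prefetch would be issued; different lookups suspend a
            # different number of times (the irregularity across stages).
            node = table[key % len(table)]
            yield
            while node is not None:
                if node[0] == key:
                    return node[1]
                node = node[2]   # pointer chase to the next chain node
                yield
            return None

        def run_window(lookups):
            # Dynamic scheduling: resume whichever lookup is ready next,
            # instead of statically staging all lookups in lockstep.
            inflight, results = deque(lookups), []
            while inflight:
                g = inflight.popleft()
                try:
                    next(g)
                    inflight.append(g)
                except StopIteration as done:
                    results.append(done.value)
            return results

        table = [None] * 4
        for k in range(10):
            table[k % 4] = (k, k * k, table[k % 4])
        print(run_window([lookup(table, k) for k in (3, 7, 42)]))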

    How To Touch a Running System

    The increasing importance of distributed and decentralized software architectures entails more and more attention for adaptive software. Obtaining adaptiveness, however, is a difficult task, as the software design needs to foresee and cope with a variety of situations. Using reconfiguration of components facilitates this task, as the adaptation is conducted at the architecture level instead of directly in the code. This results in a separation of concerns: the appropriate reconfiguration can be devised at a coarse level, while the implementation of the components can remain largely unaware of reconfiguration scenarios. We study reconfiguration in component frameworks based on formal theory. We first discuss programming with components, exemplified by the development of the cmc model checker. This highly efficient model checker is made of C++ components and serves as an example of component-based software development practice in general; it also provides insights into the principles of adaptivity. However, its component model focuses on high performance and is not geared towards using the structuring principle of components for controlled reconfiguration. We thus complement this highly optimized model with a message passing-based component model which takes reconfigurability as its central principle. Supporting reconfiguration in a framework is about relieving the programmer of as many of its peculiarities as possible. We utilize the formal description of the component model to provide a reconfiguration algorithm that retains as much flexibility as possible while avoiding most problems that arise due to concurrency. This algorithm is embedded in a general four-stage adaptivity model inspired by physical control loops. The reconfiguration is devised to work with stateful components, retaining their data and unprocessed messages. Reconfiguration plans, which are given a formal semantics, form the input of the reconfiguration algorithm. We show that the algorithm achieves perceived atomicity of the reconfiguration process for an important class of plans, i.e., the whole process of reconfiguration is perceived as one atomic step, while minimizing blocking of components. We illustrate the applicability of our approach with several examples, such as fault tolerance and automated resource control.
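    To give a flavor of state-retaining reconfiguration (a much-simplified hypothetical Python sketch, not the framework's algorithm): the affected component is briefly blocked, its state and unprocessed messages are transplanted into the replacement, and processing resumes, so from the outside the swap appears as one atomic step.

        from collections import deque

        class Component:
            def __init__(self, state=None):
                self.state = state if state is not None else {}
                self.inbox = deque()        # unprocessed messages
                self.blocked = False

            def send(self, msg):
                self.inbox.append(msg)

            def step(self):
                if self.blocked or not self.inbox:
                    return
                self.inbox.popleft()
                self.state["handled"] = self.state.get("handled", 0) + 1

        def reconfigure(old, new_cls):
            # Perceived-atomic swap: block the old component, carry over its
            # state and queued messages, then resume in the new component.
            old.blocked = True
            new = new_cls(state=old.state)
            new.inbox.extend(old.inbox)     # no message is lost
            old.inbox.clear()
            return new

        c = Component()
        c.send("req-1"); c.step()
        c = reconfigure(c, Component)       # e.g., swap in a patched version
        c.send("req-2"); c.step()
        print(c.state)                      # {'handled': 2}: state retained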

    Assorted algorithms and protocols for secure computation
