
    Static-Memory-Hard Functions, and Modeling the Cost of Space vs. Time

    A series of recent works, starting with Alwen and Serbinenko (STOC 2015), has deepened our understanding of memory-hardness in cryptography (a useful property of hash functions for deterring large-scale password-cracking attacks) and has shown memory-hardness to have intricate connections with the theory of graph pebbling. Definitions in this somewhat nascent field are not yet unified, however, and the guarantees proven to date are with respect to a range of proposed definitions. In this paper, we identify two significant and practical considerations that existing models of memory-hardness do not analyze, and propose new models to capture them, accompanied by constructions based on new hard-to-pebble graphs. Our contribution is two-fold, as follows.

    First, existing measures of memory-hardness account only for dynamic memory usage (i.e., memory read or written at runtime) and do not consider static memory usage (e.g., memory on disk). Among other things, this means that the memory requirements considered by prior models are inherently upper-bounded by a hash function's runtime; counting static memory, in contrast, would allow quantification of much larger memory requirements, decoupled from runtime. We propose a new definition of a static-memory-hard function (SHF) which takes static memory into account: we model static memory usage by oracle access to a large preprocessed string, which may be considered part of the hash function description. Static memory requirements are complementary to dynamic memory requirements: neither can replace the other, and to deter large-scale password-cracking attacks, a hash function benefits from being both dynamic-memory-hard and static-memory-hard. We give two SHF constructions based on pebbling. To prove static-memory-hardness, we define a new pebble game (the "black-magic pebble game") and new graph constructions with optimal complexity under our proposed measure. Moreover, we provide a prototype implementation of our first SHF construction (based on pebbling of a simple "cylinder" graph), giving an initial demonstration of practical feasibility for a limited range of parameter settings.

    Second, existing memory-hardness models implicitly assume that the costs of space and time are roughly on par: they consider only linear ratios between the costs of time and space. We propose a new model to capture nonlinear time-space trade-offs: for example, how is the adversary affected when space is quadratically more expensive than time? We prove that nonlinear trade-offs can in fact cause adversaries to employ strategies different from those under linear trade-offs.

    Finally, as an additional contribution of independent interest, we present an asymptotically tight graph construction that achieves the best possible space complexity up to log log n factors for an existing memory-hardness measure called cumulative complexity in the sequential pebbling model.
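    To illustrate the static-memory idea at a high level, the following Python sketch shows a hash evaluation whose honest computation probes a large preprocessed table, which plays the role of the preprocessed string accessed via oracle queries. This is only an illustrative toy, not the paper's SHF construction; the table size, probe count, and function names are hypothetical.

        import hashlib

        # Illustrative toy (not the paper's construction): a hash whose honest
        # evaluation probes a large preprocessed static table, so the table
        # (static memory) must be kept available, independent of the runtime
        # of a single evaluation.

        TABLE_ENTRIES = 1 << 16   # kept small here; a real table would be far larger
        PROBES = 8                # table lookups per evaluation (illustrative)

        def preprocess(seed: bytes) -> list[bytes]:
            """Build the static table once; it acts as part of the function description."""
            table, state = [], seed
            for _ in range(TABLE_ENTRIES):
                state = hashlib.sha256(state).digest()
                table.append(state)
            return table

        def shf_eval(table: list[bytes], x: bytes) -> bytes:
            """Evaluate the function via oracle-like access to the static table."""
            h = hashlib.sha256(x).digest()
            acc = hashlib.sha256()
            for i in range(PROBES):
                probe = hashlib.sha256(h + i.to_bytes(4, "big")).digest()
                idx = int.from_bytes(probe, "big") % TABLE_ENTRIES
                acc.update(table[idx])   # each probe touches the static string
            acc.update(x)
            return acc.digest()

    The point of such a design is that the memory an evaluator must keep (or repeatedly re-derive) is tied to the size of the static table rather than to the running time of one evaluation.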

    Inferring Energy Bounds via Static Program Analysis and Evolutionary Modeling of Basic Blocks

    The ever increasing number and complexity of energy-bound devices (such as those used in Internet of Things applications, smart phones, and mission-critical systems) pose an important challenge for techniques to optimize their energy consumption and to verify that they will perform their function within the available energy budget. In this work we address this challenge from the software point of view and propose a novel parametric approach to estimating tight bounds on the energy consumed by program executions that are practical for application to energy verification and optimization. Our approach divides a program into basic (branchless) blocks and estimates the maximal and minimal energy consumption of each block using an evolutionary algorithm. It then combines the obtained values according to the program's control flow, using static analysis, to infer functions that give both upper and lower bounds on the energy consumption of the whole program and its procedures as functions of input data sizes. We have tested our approach on (C-like) embedded programs running on the XMOS hardware platform; however, the method is general enough to be applied to other microprocessor architectures and programming languages. The bounds obtained by our prototype implementation can be tight while remaining on the safe side of budgets in practice, as shown by our experimental evaluation.

    Comment: Pre-proceedings paper presented at the 27th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2017), Namur, Belgium, 10-12 October 2017 (arXiv:1708.07854). Improved version of the one presented at the HIP3ES 2016 workshop (v1): more experimental results (added benchmark to Table 1, added figure for new benchmark, added Table 3), improved Fig. 1, added Fig.
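    As a rough illustration of how per-block bounds can be composed along control flow, the sketch below combines hypothetical per-basic-block energy intervals for a program made of an entry block, a loop body executed n times, and an exit block. The block names and energy values are invented for illustration and are not taken from the paper.

        # Illustrative sketch: combine per-basic-block energy intervals
        # (e.g., as estimated by an evolutionary search over inputs for each
        # branchless block) along a simple control-flow structure into
        # lower/upper energy bound functions of the input size n.

        # Hypothetical per-block (lower, upper) bounds in nanojoules.
        BLOCK_BOUNDS = {
            "entry": (12.0, 15.0),
            "loop_body": (30.0, 42.5),
            "exit": (8.0, 9.5),
        }

        def energy_bounds(n: int) -> tuple[float, float]:
            """Return (lower, upper) energy bounds for an input of size n."""
            lo = BLOCK_BOUNDS["entry"][0] + n * BLOCK_BOUNDS["loop_body"][0] + BLOCK_BOUNDS["exit"][0]
            hi = BLOCK_BOUNDS["entry"][1] + n * BLOCK_BOUNDS["loop_body"][1] + BLOCK_BOUNDS["exit"][1]
            return lo, hi

        print(energy_bounds(100))   # (3020.0, 4274.5) for this toy example

    Real programs of course have branching and nested loops, which is where the static analysis of the control-flow graph comes in; the essential idea is still interval composition of per-block bounds parameterized by input data sizes.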

    Simulation of Rapidly-Exploring Random Trees in Membrane Computing with P-Lingua and Automatic Programming

    Methods based on Rapidly-exploring Random Trees (RRTs) have been widely used in robotics to solve motion planning problems. In the membrane computing framework, on the other hand, models based on Enzymatic Numerical P systems (ENPS) have been applied to robot controllers, but there is as yet a lack of planning algorithms based on membrane computing for robotics. With this motivation, we provide a variant of ENPS called Random Enzymatic Numerical P systems with Proteins and Shared Memory (RENPSM), addressed to implement RRT algorithms, and we illustrate it by simulating the bidirectional RRT algorithm. This paper is an extension of [21]. The software presented in [21] was an ad-hoc simulator, i.e., a tool for simulating computations of one and only one model that has been hard-coded. The main contribution of this paper with respect to [21] is the introduction of a novel solution for membrane computing simulators based on automatic programming. First, we have extended the P-Lingua syntax (a language for defining membrane computing models) to write RENPSM models. Second, we have implemented a new parser based on Flex and Bison to read RENPSM models and produce C source code for multicore processors with OpenMP. Finally, additional experiments are presented.

    Acknowledgment: Ministerio de Economía, Industria y Competitividad TIN2017-89842-
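    For readers unfamiliar with RRTs, the following is a minimal, self-contained sketch of a basic single-tree RRT for a 2D point robot. It is not the RENPSM simulation or the bidirectional variant discussed in the paper, and all parameters, names, and the trivial collision check are illustrative.

        import random, math

        # Illustrative single-tree RRT for a 2D point robot (toy parameters).
        STEP = 0.5          # extension step length
        GOAL_RADIUS = 0.5   # distance at which the goal counts as reached

        def rrt(start, goal, is_free, iters=5000, bounds=(0.0, 10.0)):
            nodes = [start]
            parent = {0: None}
            for _ in range(iters):
                # Sample a random point, with a small bias toward the goal.
                q = goal if random.random() < 0.05 else (
                    random.uniform(*bounds), random.uniform(*bounds))
                # Find the nearest tree node and extend one step toward the sample.
                i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], q))
                nx, ny = nodes[i]
                d = math.dist((nx, ny), q)
                if d == 0:
                    continue
                new = (nx + STEP * (q[0] - nx) / d, ny + STEP * (q[1] - ny) / d)
                if not is_free(new):
                    continue
                nodes.append(new)
                parent[len(nodes) - 1] = i
                if math.dist(new, goal) < GOAL_RADIUS:
                    # Reconstruct the path from the goal back to the start.
                    path, k = [], len(nodes) - 1
                    while k is not None:
                        path.append(nodes[k])
                        k = parent[k]
                    return path[::-1]
            return None

        path = rrt((1.0, 1.0), (9.0, 9.0), is_free=lambda p: True)

    The bidirectional variant simulated in the paper grows one tree from the start and one from the goal and tries to connect them, which typically converges faster than the single-tree version sketched here.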

    Transparently Mixing Undo Logs and Software Reversibility for State Recovery in Optimistic PDES

    The rollback operation is a fundamental building block to support the correct execution of speculative Time Warp-based Parallel Discrete Event Simulation (PDES). Several solutions for reducing the execution cost of this operation have been proposed in the literature, based either on checkpointing previous simulation state images or on executing negative copies of simulation events that undo the updates to the state. In this paper, we explore the practical design and implementation of a state recoverability technique which makes it possible to restore a previous simulation state either by relying on checkpointing or by reverse execution of the state updates that occurred while processing events in forward mode. Unlike other proposals, we address the issue of executing backward updates in a fully transparent and event-granularity-independent way by relying on static software instrumentation (targeting the x86 architecture and Linux systems) to generate reverse update code blocks at runtime (not to be confused with the reverse events proper of the reverse computing approach). These blocks undo the effects of a forward execution while minimizing the cost of the undo operation. We also present experimental results related to our implementation, which is released as free software and fully integrated into the open-source ROOT-Sim (ROme OpTimistic Simulator) package. The experimental data support the viability and effectiveness of our proposal.
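    The sketch below illustrates the generic undo-log idea in Python: the old value is logged before each in-place update, and a rollback replays the log in reverse to restore an earlier state image. The paper's approach instead generates reverse update code blocks via static instrumentation of x86 binaries; this toy only conveys the logical rollback semantics, and the class and method names are hypothetical.

        # Illustrative undo-log-based state recovery (not the paper's mechanism).
        class LoggedState:
            def __init__(self):
                self.vars = {}
                self.undo_log = []   # list of (key, old_value) records

            def write(self, key, value):
                # Record the previous value (None as a sentinel for a new key)
                # before performing the in-place update.
                self.undo_log.append((key, self.vars.get(key, None)))
                self.vars[key] = value

            def rollback(self, mark):
                # Undo all updates performed after the given log mark, newest first.
                while len(self.undo_log) > mark:
                    key, old = self.undo_log.pop()
                    if old is None:
                        self.vars.pop(key, None)
                    else:
                        self.vars[key] = old

        s = LoggedState()
        s.write("x", 1)
        mark = len(s.undo_log)        # mark taken before speculative event processing
        s.write("x", 2); s.write("y", 7)
        s.rollback(mark)              # state image at the mark is restored: {"x": 1}

    In a Time Warp engine the mark would correspond to the logical virtual time of the event being rolled back, and the trade-off against checkpointing depends on how many updates must be undone versus how large the state image is.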