
    Foundations, Properties, and Security Applications of Puzzles: A Survey

    Cryptographic algorithms have been used not only to create robust ciphertexts but also to generate cryptograms that, contrary to the classic goal of cryptography, are meant to be broken. These cryptograms, generally called puzzles, require a certain amount of resources to be solved, hence introducing a cost that is often regarded as a time delay, though it could involve other metrics as well, such as bandwidth. These powerful features have made puzzles the core of many security protocols, giving them increasing importance in the IT security landscape. The concept of a puzzle has subsequently been extended to other types of schemes that do not use cryptographic functions, such as CAPTCHAs, which are used to discriminate humans from machines. Overall, puzzles have experienced renewed interest with the advent of Bitcoin, which uses a CPU-intensive puzzle as proof of work. In this paper, we provide a comprehensive study of the most important puzzle construction schemes available in the literature, categorizing them according to several attributes, such as resource type, verification type, and applications. We have redefined the term puzzle by collecting and integrating the scattered notions used in different works, to cover all the existing applications. Moreover, we provide an overview of the possible applications, identifying key requirements and different design approaches. Finally, we highlight the features and limitations of each approach, providing a useful guide for the future development of new puzzle schemes. (Comment: This article has been accepted for publication in ACM Computing Surveys.)
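The abstract's notion of a cost-imposing cryptogram can be illustrated with a minimal hashcash-style proof-of-work sketch: the solver must search for a nonce by brute force (the resource cost), while the verifier needs only a single hash evaluation. This is a generic illustration of the puzzle concept, not a construction taken from the survey itself.

```python
import hashlib

def solve_puzzle(data: bytes, difficulty: int) -> int:
    """Find a nonce such that SHA-256(data || nonce), read as an integer,
    falls below a target with `difficulty` leading zero bits.
    Expected work: ~2**difficulty hash evaluations (the puzzle's cost)."""
    target = 1 << (256 - difficulty)
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify_puzzle(data: bytes, nonce: int, difficulty: int) -> bool:
    """Verification is cheap: a single hash, regardless of difficulty."""
    digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty))
```

The asymmetry between `solve_puzzle` (expensive) and `verify_puzzle` (cheap) is exactly the property that makes such schemes useful for rate limiting and for Bitcoin-style proof of work.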

    Computer Architectures to Close the Loop in Real-time Optimization

    © 2015 IEEE. Many modern control, automation, signal processing, and machine learning applications rely on solving a sequence of optimization problems that are updated with measurements of a real system evolving in time. The solution of each optimization problem is used to make a decision, which may change some parameters of the physical system, thereby resulting in a feedback loop between the computation and the physical system. Real-time optimization is not the same as fast optimization, because the computation is affected by an uncertain system that evolves in time. The suitability of a design should therefore be judged not from the optimality of a single optimization problem, but from the evolution of the entire cyber-physical system. The algorithms and hardware used for solving a single optimization problem in the office might therefore be far from ideal when solving a sequence of real-time optimization problems. Instead of a single optimal design, one has to trade off a number of objectives, including performance, robustness, energy usage, size, and cost. We therefore provide here a tutorial introduction to some of the questions and implementation issues that arise in real-time optimization applications. We concentrate on some of the decisions that have to be made when designing the computing architecture and algorithm, and argue that the choice of one informs the other.
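The feedback loop described above, where each optimization problem is refined with fresh measurements and a partial solution is fed back to the plant, can be sketched as follows. This is a hypothetical illustration with a least-squares cost and a fixed gradient-iteration budget; the function names, the warm-start strategy, and the problem form are assumptions, not the paper's specific setup.

```python
import numpy as np

def realtime_opt_loop(measure, apply, A, steps=50, iters=5, lr=0.1):
    """Closed-loop real-time optimization sketch: at each sampling step a
    measurement updates the problem data b_t, a fixed budget of gradient
    iterations refines the previous solution (warm start), and the partial
    solution is applied back to the physical system. The solve is truncated
    on purpose: tracking the drifting problem matters more than solving any
    single instance exactly."""
    x = np.zeros(A.shape[1])          # warm-started decision variable
    for _ in range(steps):
        b_t = measure()               # problem data drifts with the system
        for _ in range(iters):        # truncated solve of min ||A x - b_t||^2
            grad = A.T @ (A @ x - b_t)
            x -= lr * grad
        apply(x)                      # feed the decision back to the plant
    return x
```

When the measured data is stationary, the warm-started iterates converge to the optimizer across sampling steps even though each individual solve is incomplete, which is the point the abstract makes about judging the design by the evolution of the whole loop.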

    DeSyRe: on-Demand System Reliability

    The DeSyRe project builds on-demand adaptive and reliable Systems-on-Chips (SoCs). As fabrication technology scales down, chips are becoming less reliable, thereby incurring increased power and performance costs for fault tolerance. To make matters worse, power density is becoming a significant limiting factor in SoC design in general. In the face of such changes in the technological landscape, current solutions for fault tolerance are expected to introduce excessive overheads in future systems. Moreover, attempting to design and manufacture a totally defect- and fault-free system would heavily, even prohibitively, impact design, manufacturing, and testing costs, as well as system performance and power consumption. In this context, DeSyRe delivers a new generation of systems that are reliable by design at well-balanced power, performance, and design costs. To reduce the overheads of fault tolerance, only a small fraction of the chip is built to be fault-free. This fault-free part is then employed to manage the remaining fault-prone resources of the SoC. The DeSyRe framework is applied to two medical systems with high safety requirements (measured using the IEC 61508 functional safety standard) and tight power and performance constraints.

    Plug-and-play and coordinated control for bus-connected AC islanded microgrids

    This paper presents a distributed control architecture for voltage and frequency stabilization in AC islanded microgrids. In the primary control layer, each generation unit is equipped with a local controller acting on the corresponding voltage-source converter. Following the plug-and-play design approach previously proposed by some of the authors, whenever the addition or removal of a distributed generation unit is required, feasibility of the operation is automatically checked by designing local controllers through convex optimization. The update of the voltage-control layer when units plug in or out is therefore automated, and stability of the microgrid is always preserved. Moreover, local control design is based only on knowledge of the power-line parameters and does not require storing a global microgrid model. In this work, we focus on bus-connected microgrid topologies and enhance the primary plug-and-play layer with local virtual impedance loops and secondary coordinated controllers ensuring bus voltage tracking and reactive power sharing. In particular, the secondary control architecture is distributed, hence mirroring the modularity of the primary control layer. We validate primary and secondary controllers by performing experiments with balanced, unbalanced, and nonlinear loads on a setup composed of three bus-connected distributed generation units. Most importantly, the stability of the microgrid after the addition and removal of distributed generation units is assessed. Overall, the experimental results show the feasibility of the proposed modular control design framework, where generation units can be added or removed on the fly, thus enabling the deployment of virtual power plants that can be resized over time.
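The plug-and-play admission logic, where a unit is allowed to join only if the updated system remains stable, can be sketched with a toy feasibility test. The paper performs this check by designing local controllers through convex optimization; here a simple Hurwitz eigenvalue test on a block-diagonal candidate matrix stands in for that check, and the function names and decoupled-dynamics structure are illustrative assumptions, not the paper's model.

```python
import numpy as np

def try_plug_in(local_dynamics, A_new):
    """Toy plug-and-play admission test: when a new generation unit asks to
    join, assemble the candidate network matrix (block-diagonal, i.e. the
    units are treated as decoupled for simplicity) and accept the plug-in
    only if the candidate stays Hurwitz: every eigenvalue has a negative
    real part, so the linearized dynamics remain stable."""
    n_old = sum(A.shape[0] for A in local_dynamics)
    n_new = A_new.shape[0]
    candidate = np.zeros((n_old + n_new, n_old + n_new))
    i = 0
    for A in local_dynamics:            # place existing unit blocks
        k = A.shape[0]
        candidate[i:i+k, i:i+k] = A
        i += k
    candidate[i:, i:] = A_new           # tentatively add the new unit
    stable = bool(np.all(np.linalg.eigvals(candidate).real < 0))
    if stable:
        local_dynamics.append(A_new)    # plug-in accepted
    return stable                       # otherwise the request is denied
```

The key property mirrored here is that the decision is made automatically at plug-in time, so units can be added or removed on the fly without re-deriving a global model by hand.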