4,903 research outputs found

    Sequential Circuit Design for Embedded Cryptographic Applications Resilient to Adversarial Faults

    In the relatively young field of fault-tolerant cryptography, the main research effort has focused exclusively on the protection of the data path of cryptographic circuits. To date, however, we have not found any work that aims at protecting the control logic of these circuits against fault attacks, which thus remains the proverbial Achilles’ heel. Motivated by a hypothetical yet realistic fault analysis attack that, in principle, could be mounted against any modular exponentiation engine, even one with appropriate data path protection, we set out to close this remaining gap. In this paper, we present guidelines for the design of multifault-resilient sequential control logic based on standard Error-Detecting Codes (EDCs) with large minimum distance. We introduce a metric that measures the effectiveness of the error detection technique in terms of the effort the attacker has to make in relation to the area overhead spent in implementing the EDC. Our comparison shows that the proposed EDC-based technique outperforms regular N-modular redundancy techniques. Furthermore, our technique scales well and does not affect the critical path delay.
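    The abstract specifies no particular code; purely as a hypothetical Python sketch (the state names, codewords, and helper functions below are invented for illustration, not taken from the paper), the idea of an EDC-protected state register can be pictured as encoding each controller state with a codeword of large pairwise Hamming distance and flagging any register value that falls outside the code:

        # Hypothetical sketch: protect an FSM state register by encoding each
        # state with a codeword of large minimum Hamming distance (the 8-bit
        # codewords below are illustrative, not the paper's construction).
        STATE_CODEWORDS = {
            "IDLE":   0b00000000,
            "LOAD":   0b00111101,
            "SQUARE": 0b11010110,
            "MULT":   0b11101011,
        }
        VALID = set(STATE_CODEWORDS.values())

        def hamming(a: int, b: int) -> int:
            """Number of differing bits between two state words."""
            return bin(a ^ b).count("1")

        def min_distance(codewords) -> int:
            """Minimum pairwise Hamming distance of the code: a fault must flip
            at least this many bits to turn one valid state into another."""
            cw = list(codewords)
            return min(hamming(a, b) for i, a in enumerate(cw) for b in cw[i + 1:])

        def check_state(register_value: int) -> bool:
            """Concurrent error-detection check: any word outside the code is a fault."""
            return register_value in VALID

        print("minimum distance:", min_distance(VALID))      # 4 for this toy code
        faulty = STATE_CODEWORDS["LOAD"] ^ 0b00000110        # inject a 2-bit fault
        print("fault detected:", not check_state(faulty))    # True

    Because the minimum distance of this toy code is 4, every fault that flips one, two, or three bits of the state register is detected; the paper's metric relates such attacker effort to the area overhead of implementing the EDC.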

    Resiliency in numerical algorithm design for extreme scale simulations

    This work is based on the seminar titled ‘Resiliency in Numerical Algorithm Design for Extreme Scale Simulations’, held March 1–6, 2020, at Schloss Dagstuhl and attended by all the authors. Advanced supercomputing is characterized by very high computation speeds that come at the cost of an enormous amount of resources. A typical large-scale computation running for 48 h on a system consuming 20 MW, as predicted for exascale systems, would consume a million kWh, corresponding to about 100k Euro in energy cost for executing 10^23 floating-point operations. It is clearly unacceptable to lose the whole computation if any of the several million parallel processes fails during the execution. Moreover, if a single operation suffers from a bit-flip error, should the whole computation be declared invalid? What about the notion of reproducibility itself: should this core paradigm of science be revised and refined for results that are obtained by large-scale simulation? Naive versions of conventional resilience techniques will not scale to the exascale regime: with a main memory footprint of tens of Petabytes, synchronously writing checkpoint data all the way to background storage at frequent intervals will create intolerable overheads in runtime and energy consumption. Forecasts show that the mean time between failures could be lower than the time to recover from such a checkpoint, so that large calculations at scale might not make any progress if robust alternatives are not investigated. More advanced resilience techniques must be devised. The key may lie in exploiting both advanced system features and specific application knowledge. Research will face two essential questions: (1) what are the reliability requirements for a particular computation and (2) how do we best design the algorithms and software to meet these requirements? While the analysis of use cases can help understand the particular reliability requirements, the construction of remedies is currently wide open. One avenue would be to refine and improve on system- or application-level checkpointing and rollback strategies in case an error is detected. Developers might use fault notification interfaces and flexible runtime systems to respond to node failures in an application-dependent fashion. Novel numerical algorithms or more stochastic computational approaches may be required to meet accuracy requirements in the face of undetectable soft errors. These ideas constituted an essential topic of the seminar. The goal of this Dagstuhl Seminar was to bring together a diverse group of scientists with expertise in exascale computing to discuss novel ways to make applications resilient against detected and undetected faults. In particular, participants explored the role that algorithms and applications play in the holistic approach needed to tackle this challenge. This article gathers a broad range of perspectives on the role of algorithms, applications and systems in achieving resilience for extreme scale simulations. The ultimate goal is to spark novel ideas and encourage the development of concrete solutions for achieving such resilience holistically.
    Peer reviewed. Article signed by 36 authors: Emmanuel Agullo, Mirco Altenbernd, Hartwig Anzt, Leonardo Bautista-Gomez, Tommaso Benacchio, Luca Bonaventura, Hans-Joachim Bungartz, Sanjay Chatterjee, Florina M. Ciorba, Nathan DeBardeleben, Daniel Drzisga, Sebastian Eibl, Christian Engelmann, Wilfried N. Gansterer, Luc Giraud, Dominik Göddeke, Marco Heisig, Fabienne Jezequel, Nils Kohl, Xiaoye Sherry Li, Romain Lion, Miriam Mehl, Paul Mycek, Michael Obersteiner, Enrique S. Quintana-Ortiz, Francesco Rizzi, Ulrich Rude, Martin Schulz, Fred Fung, Robert Speck, Linda Stals, Keita Teranishi, Samuel Thibault, Dominik Thonnes, Andreas Wagner and Barbara Wohlmuth. Postprint (author's final draft).
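    The quoted figures can be checked with a one-line calculation; the 0.10 Euro/kWh electricity price below is an assumption chosen to match the roughly 100k Euro cost stated in the abstract:

        # Check of the energy figures quoted in the abstract.  The electricity
        # price is an assumption implied by the quoted ~100k Euro total.
        power_mw = 20                 # exascale power draw stated in the abstract
        runtime_h = 48                # duration of the large-scale computation
        price_eur_per_kwh = 0.10      # assumed tariff consistent with the quoted cost

        energy_kwh = power_mw * 1_000 * runtime_h      # 960,000 kWh ~ a million kWh
        cost_eur = energy_kwh * price_eur_per_kwh      # ~96,000 EUR ~ 100k Euro
        print(f"energy: {energy_kwh:,} kWh, cost: {cost_eur:,.0f} EUR")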

    Efficient variational quantum simulator incorporating active error minimisation

    One of the key applications for quantum computers will be the simulation of other quantum systems that arise in chemistry, materials science, etc., in order to accelerate the process of discovery. It is important to ask: Can this be achieved using near-future quantum processors, of modest size and under imperfect control, or must it await the more distant era of large-scale fault-tolerant quantum computing? Here we propose a variational method involving closely integrated classical and quantum coprocessors. We presume that all operations in the quantum coprocessor are prone to error. The impact of such errors is minimised by boosting them artificially and then extrapolating to the zero-error case. In comparison to a more conventional optimised Trotterisation technique, we find that our protocol is efficient and appears to be fundamentally more robust against error accumulation. Comment: 13 pages, 5 figures; typos fixed and small update.
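    The abstract does not spell out the extrapolation procedure; as a rough, hypothetical sketch of the error-boosting idea (the boost factors and measurement values below are invented, and a simple linear fit stands in for whatever extrapolation the authors actually use), one could proceed as follows:

        import numpy as np

        # Hypothetical sketch of extrapolating to the zero-error case: run the
        # same circuit with the error rate deliberately boosted by known factors,
        # then fit the measured expectation values and read off the intercept.
        # The measurement values are invented for illustration.
        boost_factors = np.array([1.0, 1.5, 2.0, 3.0])      # error-rate multipliers
        measured = np.array([0.82, 0.74, 0.66, 0.51])       # noisy <O> at each boost

        slope, intercept = np.polyfit(boost_factors, measured, deg=1)
        print(f"extrapolated zero-error estimate: {intercept:.3f}")

    In practice the choice of fit model (linear, polynomial, exponential) determines how well the extrapolation suppresses the residual error.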

    A survey on gain-scheduled control and filtering for parameter-varying systems

    Copyright © 2014 Guoliang Wei et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This paper presents an overview of recent developments in gain-scheduled control and filtering problems for parameter-varying systems. First, we recall several important algorithms suitable for the gain-scheduling method, including gain-scheduled proportional-integral-derivative (PID) control, H₂, H∞, and mixed H₂/H∞ gain-scheduling methods, as well as fuzzy gain-scheduling techniques. Second, various important parameter-varying system models are reviewed, for which gain-scheduled control and filtering issues are usually dealt with. In particular, in view of randomly occurring phenomena with time-varying probability distributions, some results of our recent work based on probability-dependent gain-scheduling methods are reviewed. Furthermore, some of the latest progress in this area is discussed. Finally, conclusions are drawn and several potential future research directions are outlined. Funding: the National Natural Science Foundation of China under Grants 61074016, 61374039, 61304010, and 61329301; the Natural Science Foundation of Jiangsu Province of China under Grant BK20130766; the Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning; the Program for New Century Excellent Talents in University under Grant NCET-11-1051; the Leverhulme Trust of the U.K.; and the Alexander von Humboldt Foundation of Germany.
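    As a generic, hypothetical illustration of the gain-scheduling idea surveyed here (the scheduling grid and all gain values below are placeholders, not results from the paper), a gain-scheduled PID controller can be sketched as a lookup table of gains interpolated over a measured scheduling variable:

        import numpy as np

        # Hypothetical gain-scheduled PID sketch: the controller gains are not
        # fixed but interpolated from a table indexed by a measured scheduling
        # variable rho (e.g. speed or load).  All gain values are placeholders.
        RHO_GRID = np.array([0.0, 0.5, 1.0])       # operating points
        KP_TABLE = np.array([2.0, 1.4, 0.9])       # proportional gain at each point
        KI_TABLE = np.array([0.8, 0.5, 0.3])       # integral gain at each point
        KD_TABLE = np.array([0.10, 0.05, 0.02])    # derivative gain at each point

        def scheduled_gains(rho: float):
            """Interpolate (Kp, Ki, Kd) for the current scheduling-variable value."""
            return (np.interp(rho, RHO_GRID, KP_TABLE),
                    np.interp(rho, RHO_GRID, KI_TABLE),
                    np.interp(rho, RHO_GRID, KD_TABLE))

        def pid_step(error, integral, prev_error, rho, dt):
            """One discrete PID update using the gains scheduled at rho."""
            kp, ki, kd = scheduled_gains(rho)
            integral += error * dt
            derivative = (error - prev_error) / dt
            return kp * error + ki * integral + kd * derivative, integral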