2,349 research outputs found

    Swiper and Dora: efficient solutions to weighted distributed problems

    The majority of fault-tolerant distributed algorithms are designed assuming a nominal corruption model, in which at most a fraction $f_n$ of parties can be corrupted by the adversary. However, due to the infamous Sybil attack, nominal models are not sufficient to express the trust assumptions in open (i.e., permissionless) settings. Instead, permissionless systems typically operate in a weighted model, where each participant is associated with a weight and the adversary can corrupt a set of parties holding at most a fraction $f_w$ of the total weight. In this paper, we suggest a simple way to transform a large class of protocols designed for the nominal model into the weighted model. To this end, we formalize and solve three novel optimization problems, which we collectively call the weight reduction problems, that allow us to map large real weights into small integer weights while preserving the properties necessary for the correctness of the protocols. In all cases, we manage to keep the sum of the integer weights at most linear in the number of parties, resulting in extremely efficient protocols for the weighted model. Moreover, we demonstrate that, on weight distributions that emerge in practice, the sum of the integer weights tends to be far from the theoretical worst case and is often even smaller than the number of participants. While, for some protocols, our transformation requires an arbitrarily small reduction in resilience (i.e., $f_w = f_n - \epsilon$), surprisingly, for many important problems we manage to obtain weighted solutions with the same resilience ($f_w = f_n$) as nominal ones. Notable examples include asynchronous consensus, verifiable secret sharing, erasure-coded distributed storage, and broadcast protocols.
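    The abstract does not spell out the reduction itself, so the sketch below illustrates only the problem shape: quantizing real stake into a small integer budget via largest-remainder rounding, together with the soundness property any valid reduction must preserve. The rounding rule, names, and parameters are illustrative assumptions, not the paper's Swiper/Dora algorithms; indeed, naive rounding like this can violate the property for some inputs, which is why the paper casts weight reduction as optimization problems.

```python
from math import floor

def reduce_weights(weights, budget):
    """Map real weights to small integers summing to `budget` via
    largest-remainder rounding (an illustrative baseline, not the
    paper's algorithm)."""
    total = sum(weights)
    scaled = [w * budget / total for w in weights]
    ints = [floor(s) for s in scaled]
    leftover = budget - sum(ints)
    # Hand the remaining units to the largest fractional parts.
    order = sorted(range(len(weights)), key=lambda i: scaled[i] - ints[i],
                   reverse=True)
    for i in order[:leftover]:
        ints[i] += 1
    return ints

def threshold_preserved(weights, ints, corrupted, f_w, f_n):
    """Soundness check for one candidate corrupted set: holding at most
    an f_w fraction of real weight must imply at most an f_n fraction
    of integer weight. A correct reduction guarantees this for *every*
    such set."""
    if sum(weights[i] for i in corrupted) > f_w * sum(weights):
        return True                  # set exceeds the real threshold anyway
    return sum(ints[i] for i in corrupted) <= f_n * sum(ints)

weights = [12.7, 3.1, 50.0, 9.9, 24.3]
ints = reduce_weights(weights, budget=2 * len(weights))   # sum linear in n
print(ints, threshold_preserved(weights, ints, {1, 3}, 1/3, 1/3))
```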

    Autonomous Recovery Of Reconfigurable Logic Devices Using Priority Escalation Of Slack

    Field Programmable Gate Array (FPGA) devices offer a suitable platform for survivable hardware architectures in mission-critical systems. In this dissertation, active dynamic redundancy-based fault-handling techniques are proposed which exploit the dynamic partial reconfiguration capability of SRAM-based FPGAs. Self-adaptation is realized by employing reconfiguration in the detection, diagnosis, and recovery phases. To extend these concepts to semiconductor aging and process variation in the deep-submicron era, resilient adaptable processing systems are sought to maintain quality and throughput requirements despite the vulnerabilities of the underlying computational devices. A new approach to autonomous fault-handling which addresses these goals is developed using only a uniplex hardware arrangement. It operates by observing a health metric to achieve Fault Demotion using Reconfigurable Slack (FaDReS). Here, an autonomous fault-isolation scheme is employed which neither requires test vectors nor suspends computational throughput, but instead observes the value of a health metric based on runtime input. The deterministic flow of the fault-isolation scheme guarantees success in a bounded number of reconfigurations of the FPGA fabric. FaDReS is then extended to the Priority Using Resource Escalation (PURE) online redundancy scheme, which considers fault-isolation latency and throughput trade-offs under a dynamic spare arrangement. While deep-submicron designs introduce new challenges, the use of adaptive techniques is seen to provide several promising avenues for improving resilience. The scheme developed is demonstrated by hardware design of various signal processing circuits and their implementation on a Xilinx Virtex-4 FPGA device. These include a Discrete Cosine Transform (DCT) core, a Motion Estimation (ME) engine, a Finite Impulse Response (FIR) filter, a Support Vector Machine (SVM), and Advanced Encryption Standard (AES) blocks, in addition to MCNC benchmark circuits. A significant reduction in power consumption is achieved, ranging from 83% for low-motion-activity scenes to 12.5% for high-motion-activity video scenes in a novel ME engine configuration. For a typical benchmark video sequence, PURE is shown to maintain a PSNR baseline near 32 dB. The diagnosability, reconfiguration latency, and resource overhead of each approach are analyzed. Compared to previous alternatives, PURE maintains a PSNR within 4.02 dB to 6.67 dB of the fault-free baseline by escalating healthy resources to higher-priority signal processing functions. The results indicate the benefits of priority-aware resiliency over conventional redundancy approaches in terms of fault recovery, power consumption, and resource-area requirements. Together, these provide a broad range of strategies to achieve autonomous recovery of reconfigurable logic devices under a variety of constraints, operating conditions, and optimization criteria.
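    The claim of fault isolation in a bounded number of reconfigurations can be made concrete with a toy divide-and-conquer search over reconfigurable regions. The `healthy` and `reconfigure` hooks below stand in for the runtime health metric and partial reconfiguration; both, like the binary-search flow itself, are illustrative assumptions rather than FaDReS's actual isolation procedure.

```python
def isolate_faulty_region(regions, healthy, reconfigure):
    """Toy deterministic fault isolation: binary search over suspect
    reconfigurable regions, one partial reconfiguration per step, so a
    single faulty region is found in O(log n) reconfigurations.

    healthy(active)     -- stand-in for the runtime health metric on the
                           current placement (no test vectors needed).
    reconfigure(active) -- stand-in for relocating the design onto the
                           given regions (assumes enough slack exists).
    """
    suspects = list(regions)
    steps = 0
    while len(suspects) > 1:
        half = suspects[: len(suspects) // 2]
        reconfigure(half)            # one partial reconfiguration
        steps += 1
        if healthy(half):            # fault must lie in the other half
            suspects = suspects[len(suspects) // 2:]
        else:
            suspects = half
    return suspects[0], steps        # isolated region, reconfiguration count

# Example: region 5 of 8 is faulty.
faulty = 5
region, steps = isolate_faulty_region(
    range(8),
    healthy=lambda active: faulty not in active,
    reconfigure=lambda active: None)
print(region, steps)                 # -> 5, 3
```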

    Active Learning in Physics: From 101, to Progress, and Perspective

    Active Learning (AL) is a family of machine learning (ML) algorithms that predates the current era of artificial intelligence. Unlike traditional approaches that require labeled samples for training, AL iteratively selects unlabeled samples to be annotated by an expert. This protocol aims to prioritize the most informative samples, leading to improved model performance compared to training with all labeled samples. In recent years, AL has gained increasing attention, particularly in the field of physics. This paper presents a comprehensive and accessible introduction to the theory of AL, reviewing the latest advancements across various domains. Additionally, we explore the potential integration of AL with quantum ML, envisioning a synergistic fusion of these two fields rather than viewing AL as a mere extension of classical ML into the quantum realm.
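    The iterative select-annotate-retrain protocol described above is easy to see in code. The sketch below is a minimal pool-based loop with least-confident uncertainty sampling, one of the standard query strategies; the toy dataset, model choice, and annotation budget are placeholders, not anything from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def least_confident(model, X_pool):
    """Query the pool sample the model is least confident about."""
    probs = model.predict_proba(X_pool)
    return int(np.argmin(probs.max(axis=1)))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # toy oracle stands in for the expert

labeled = [int(np.argmax(y == 0)), int(np.argmax(y == 1))]  # one seed label per class
pool = [i for i in range(500) if i not in labeled]

model = LogisticRegression()
for _ in range(20):                            # 20 annotation rounds
    model.fit(X[labeled], y[labeled])
    pick = least_confident(model, X[pool])     # most informative sample
    labeled.append(pool.pop(pick))             # "expert" annotates it
print("accuracy:", model.score(X, y))
```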

    Distributed Protocols with Threshold and General Trust Assumptions

    Distributed systems today power almost all online applications. Consequently, a wide range of distributed protocols, such as consensus, and distributed cryptographic primitives are being researched and deployed in practice. This thesis addresses multiple aspects of distributed protocols and cryptographic schemes, enhancing their resilience, efficiency, and scalability. Fundamental to every secure distributed protocol are its trust assumptions. These assumptions not only measure a protocol's resilience but also determine its scope of application, as well as, in some sense, the expressiveness and freedom of the participating parties. So far, the threshold setting dominates in practice: at most some f out of the n parties may fail in any execution. However, in this setting all parties are viewed as identical, making correlations among failures inexpressible. These constraints can be surpassed with general trust assumptions, which allow arbitrary sets of parties to fail in an execution. Despite significant theoretical efforts, relevant practical aspects of this setting are yet to be addressed. Our work fills this gap. We show how general trust assumptions can be efficiently specified, encoded, and used in distributed protocols and cryptographic schemes. Additionally, we investigate a consensus protocol and distributed cryptographic schemes with general trust assumptions. Moreover, we show how the general trust assumptions of different systems, with intersecting or disjoint sets of participants, can be composed into a unified system. When it comes to decentralized systems, such as blockchains, efficiency and scalability are often compromised due to the total ordering of all user transactions. Guerraoui et al. (Distributed Computing, 2022) contradicted the common design of major blockchains by proving that consensus is not required to prevent double-spending in a cryptocurrency. Modern blockchains support a variety of distributed applications beyond cryptocurrencies, which let users execute arbitrary code in a distributed and decentralized fashion. In this work we explore the synchronization requirements of a family of Ethereum smart contracts and formally establish the subsets of participants that need to synchronize their transactions. Moreover, a common requirement of all asynchronous consensus protocols is randomness. A simple and efficient approach is to employ threshold cryptography for this. However, in practice this necessitates a distributed setup protocol, often leading to performance bottlenecks. Blum et al. (TCC 2020) propose a solution that bypasses this requirement but is practically inefficient due to its use of fully homomorphic encryption. Recognizing that randomness for consensus does not need to be perfect (that is, always unpredictable and agreed upon), we propose a practical and concretely efficient protocol for randomness generation. Lastly, this thesis addresses the issue of deniability in distributed systems. The problem arises from the fact that a digital signature authenticates a message for an indefinite period. We introduce a scheme that allows recipients to verify signatures while preserving plausible deniability for signers. This scheme transforms a polynomial commitment scheme into a digital signature scheme.
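    As a concrete illustration of what a general trust assumption looks like, the sketch below encodes one as a list of maximal fail-prone sets and tests the classical Q³ feasibility condition (no three fail-prone sets may jointly cover all parties), which generalizes the threshold rule n > 3f. This is textbook adversary-structure machinery, not code from the thesis.

```python
from itertools import combinations_with_replacement

def q3(parties, fail_prone):
    """Q^3 condition for Byzantine protocols under a general adversary
    structure: no three (not necessarily distinct) fail-prone sets may
    together cover the whole party set. Under a plain threshold
    assumption this degenerates to the familiar n > 3f."""
    return all(set(parties) - (a | b | c)
               for a, b, c in combinations_with_replacement(fail_prone, 3))

parties = {"p1", "p2", "p3", "p4"}
threshold = [{p} for p in parties]      # any single party may fail (f = 1)
print(q3(parties, threshold))           # True: 4 > 3 * 1
# A correlated assumption: p1 and p2 share a provider and may fail together.
correlated = threshold + [{"p1", "p2"}]
print(q3(parties, correlated))          # False: {p1,p2} + {p3} + {p4} cover all
```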

    Malware-Resistant Protocols for Real-World Systems

    Cryptographic protocols are widely used to protect real-world systems from attacks. Paying for goods in a shop, withdrawing money, or browsing the Web: all these activities are backed by cryptographic protocols. However, in recent years a potent threat has become apparent. Malware is increasingly used in attacks to bypass existing security mechanisms. Many cryptographic protocols that are used in real-world systems today have been found to be susceptible to malware attacks. One reason for this is that most of these protocols were designed with respect to the Dolev-Yao attack model, which assumes an attacker who controls the network between computer systems but not the systems themselves. Furthermore, most real-world protocols do not provide a formal proof of security and thus lack a precise definition of the security goals the designers tried to achieve. This work tackles the design of cryptographic protocols that are resilient to malware attacks, applicable to real-world systems, and provably secure. In this regard, we investigate three real-world use cases: electronic payment, web authentication, and data aggregation. We analyze the security of existing protocols and confirm results from prior work that most protocols are not resilient to malware. Furthermore, we provide guidelines for the design of malware-resistant protocols and propose such protocols. In addition, we formalize security notions for malware resistance and use formal proofs of security to verify the security guarantees of our protocols. In this work we show that designing malware-resistant protocols for real-world systems is possible. We present a new security notion for electronic payment and web authentication, called one-out-of-two security, that does not require a single device to be trusted and ensures that a protocol stays secure as long as one of two devices is not compromised. Furthermore, we propose L-Pay, a cryptographic protocol for paying at the point of sale (POS) or withdrawing money at an automated teller machine (ATM) satisfying one-out-of-two security; FIDO2 With Two Displays (FIDO2D), a cryptographic protocol to secure transactions on the Web with one-out-of-two security; and Secure Aggregation Grouped by Multiple Attributes (SAGMA), a cryptographic protocol for secure data aggregation in encrypted databases. In this work, we take important steps towards the use of malware-resistant protocols in real-world systems. Our guidelines and protocols can serve as templates to design new cryptographic protocols and improve security in further use cases.
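    The one-out-of-two notion can be pictured as a toy two-device confirmation flow: a transaction initiated on device A is accepted only once the user confirms the same details shown independently on device B, so malware on a single device cannot silently alter it. The flow, names, and matching rule below are illustrative assumptions and deliberately omit the cryptography (signatures, authenticated channels) that L-Pay and FIDO2D would actually use.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transaction:
    payee: str
    amount_cents: int

def authorize(submitted: Transaction, confirm_on_b) -> bool:
    """Toy one-out-of-two check: the verifier accepts only if the
    transaction submitted via device A matches what the user saw and
    approved on device B. Malware controlling just one of the two
    devices can tamper with its own view, but then the two views
    disagree and the transaction is rejected."""
    seen_on_b = confirm_on_b(submitted)   # shown on B's display, user approves
    return seen_on_b == submitted

tx = Transaction(payee="ACME", amount_cents=499)
honest_b = lambda t: t                    # B faithfully shows the details
tampered = Transaction(payee="Mallory", amount_cents=99900)  # malware on A
print(authorize(tx, honest_b))            # True: both views agree
print(authorize(tampered, lambda t: tx))  # False: user on B approves only the intended tx
```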

    Consensual Resilient Control: Stateless Recovery of Stateful Controllers

    Safety-critical systems have to absorb accidental and malicious faults to obtain high mean times to failure (MTTFs). Traditionally, this is achieved through re-execution or replication. However, both techniques come with significant overheads, in particular when cold-start effects are considered. Such effects occur after replicas resume from checkpoints or from their initial state. This work aims to improve the performance of control-task replication by leveraging the inherent stability of many plants to tolerate occasional control-task deadline misses, and it suggests masking faults with just a detection quorum. To make this possible, we have to eliminate cold-start effects so that replicas can rejuvenate during each control cycle. We do so by systematically turning stateful controllers into instances that can be recovered in a stateless manner. We highlight the mechanisms behind this transformation and how it achieves consensual resilient control, and we demonstrate, using the example of an inverted pendulum, how accidental and maliciously-induced faults can be absorbed, even if control tasks run in less predictable environments.
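    One way to picture the stateless-recovery idea is a PI controller that, instead of carrying its integral term as hidden internal state, recomputes it every cycle from a short replicated log of recent errors; a freshly rejuvenated replica then rebuilds identical state with no checkpoint or warm-up, and a detection quorum of two replicas suppresses actuation on disagreement. The gains, log length, and PI structure are illustrative assumptions, not the paper's inverted-pendulum controller.

```python
from collections import deque

class StatelessPI:
    """PI controller whose integral state is rebuilt each cycle from a
    shared, replicated error log instead of being carried internally,
    so a rejuvenated replica needs no checkpoint or warm-up."""

    def __init__(self, log, kp=1.2, ki=0.4, dt=0.01):
        self.log, self.kp, self.ki, self.dt = log, kp, ki, dt

    def step(self, error):
        integral = sum(self.log) * self.dt    # state recomputed, not carried
        return self.kp * error + self.ki * integral

def control_cycle(replicas, log, error, tol=1e-9):
    """Detection quorum of two: actuate only if both replica outputs
    agree; on disagreement, emit nothing this cycle (the stable plant
    absorbs the miss) and let the replicas rejuvenate from the log."""
    log.append(error)
    a, b = (r.step(error) for r in replicas)
    return a if abs(a - b) <= tol else None   # None = tolerated deadline miss

log = deque(maxlen=50)                        # bounded, replicated error window
replicas = [StatelessPI(log), StatelessPI(log)]
print(control_cycle(replicas, log, error=0.3))
```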