10 research outputs found

    Auditable Restoration of Distributed Programs

    Full text link
    We focus on a protocol for auditable restoration of distributed systems. The need for such a protocol arises from conflicting requirements (e.g., access to the system should be restricted, but emergency access should be provided). One can design such systems with a tamper-detection approach (based on the "break the glass door" intuition). However, in a distributed system, such tampering, denoted as an auditable event, is visible only to a single node. This is unacceptable, since the actions processes take in these situations can differ from those in the normal mode. Moreover, the auditable event eventually needs to be cleared so that the system resumes normal operation. With this motivation, in this paper we present a protocol for auditable restoration, where any process can potentially identify an auditable event. Whenever a new auditable event occurs, the system must reach an "auditable state" in which every process is aware of the auditable event. Only after the system reaches an auditable state can it begin the operation of restoration. Although any process can observe an auditable event, we require that only "authorized" processes can begin the task of restoration, and only when the system is in an auditable state. Our protocol is self-stabilizing and has bounded state space. It can effectively handle the case where faults or auditable events occur during the restoration protocol, and it can be used to provide auditable restoration to other distributed protocols.
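    The mode transitions described in the abstract can be sketched as a per-process state machine. This is a hypothetical simplification of the protocol, not the paper's algorithm; the names `Mode`, `observe_auditable_event`, and `begin_restoration` are illustrative:

```python
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()     # ordinary, restricted operation
    AUDITABLE = auto()  # every process must learn of the auditable event
    RESTORING = auto()  # only authorized processes may drive restoration

class Process:
    def __init__(self, authorized=False):
        self.mode = Mode.NORMAL
        self.authorized = authorized

    def observe_auditable_event(self):
        # Any process can detect tampering and enter the auditable mode.
        self.mode = Mode.AUDITABLE

    def begin_restoration(self, system_in_auditable_state):
        # Restoration may start only from an auditable state, and only
        # an "authorized" process may initiate it.
        if self.authorized and system_in_auditable_state:
            self.mode = Mode.RESTORING
            return True
        return False

    def complete_restoration(self):
        # Clearing the auditable event returns the process to normal mode.
        if self.mode is Mode.RESTORING:
            self.mode = Mode.NORMAL
```

    An unauthorized process that observes tampering stays in the auditable mode until an authorized one completes restoration system-wide.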

    Graceful Degradation and Related Fields

    Full text link
    When machine learning models encounter data that is out of the distribution on which they were trained, they tend to behave poorly, most prominently showing over-confidence in erroneous predictions. Such behaviour can have disastrous effects on real-world machine learning systems. In this field, graceful degradation refers to the optimisation of model performance as it encounters out-of-distribution data. This work presents a definition and discussion of graceful degradation and where it can be applied in deployed visual systems. Following this, a survey of relevant areas is undertaken, novelly splitting the graceful degradation problem into active and passive approaches. In passive approaches, graceful degradation is handled and achieved by the model in a self-contained manner; in active approaches, the model is updated upon encountering epistemic uncertainties. This work communicates the importance of the problem and aims to prompt the development of machine learning strategies that are aware of graceful degradation.
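    A minimal sketch of a passive approach, assuming a model that exposes class probabilities: rather than always committing to the argmax, the system abstains when confidence is low, which is one self-contained way to degrade gracefully on out-of-distribution inputs. The threshold and abstain behaviour are illustrative, not from the survey:

```python
def degrade_gracefully(probs, threshold=0.8):
    """Return the predicted class index, or None (abstain) when the
    model is not confident enough -- e.g. on out-of-distribution
    inputs, where over-confident errors are most harmful."""
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return None  # abstain rather than emit an over-confident error
    return best
```

    An active approach would instead treat the abstained inputs as a signal to collect labels and update the model.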

    Fundamental concepts for fault tolerant systems

    Get PDF
    PhD Thesis. In order to think clearly about any subject, we need precise definitions of its basic terminology and concepts. If one reads the literature describing fault-tolerant computing, there is less agreement on fundamental models, concepts, and terminology than would perhaps be expected. There are well-established usages in particular subcommunities, and many other individual workers take care to use terms carefully. Unfortunately, there are also many papers in which terms are freely applied to concepts in an arbitrary and inconsistent way. This thesis attempts to bring together some of the concepts of fault-tolerant computing and place them in a formal framework. The approach taken is to develop formal models of system structure and behaviour, and to define the basic concepts and terminology in terms of those models. The model of system structure is based on directed graphs and the model of behaviour is based on trace theory.

    Building Secure and Reliable Deep Learning Systems from a Systems Security Perspective

    Get PDF
    As deep learning (DL) is becoming a key component in many business and safety-critical systems, such as self-driving cars or AI-assisted robotic surgery, adversaries have started placing them on their radar. To understand their potential threats, recent work studied the worst-case behaviors of deep neural networks (DNNs), such as mispredictions caused by adversarial examples or models altered by data poisoning attacks. However, most of the prior work narrowly considers DNNs as an isolated mathematical concept, and this perspective overlooks a holistic picture, leaving out the security threats that involve vulnerable interactions between DNNs and hardware or system-level components. In this dissertation, across three separate projects, I study how DL systems, owing to the computational properties of DNNs, become particularly vulnerable to existing well-studied attacks. First, I study how over-parameterization hurts a system's resilience to fault-injection attacks. Even with a single bit-flip, when chosen carefully, an attacker can inflict an accuracy drop of up to 100%, and half of a DNN's parameters have at least one bit that degrades its accuracy by over 10%. An adversary who wields Rowhammer, a fault attack that flips random or targeted bits in physical memory (DRAM), can exploit this graceless degradation in practice. Second, I study how computational regularities compromise the confidentiality of a system. Leveraging the information leaked by a DNN processing a single sample, an adversary can steal the DNN's often proprietary architecture. An attacker armed with Flush+Reload, a remote side-channel attack, can accurately perform this reconstruction against a DNN deployed in the cloud. Third, I show how input-adaptive DNNs, e.g., multi-exit networks, fail to deliver computational efficiency in an adversarial setting. By adding imperceptible input perturbations, an attacker can significantly increase the computation a multi-exit network spends to produce a prediction on an input. This vulnerability also invites exploitation in resource-constrained settings such as IoT scenarios, where input-adaptive networks are gaining traction. Finally, building on the lessons learned from my projects, I conclude my dissertation by outlining future research directions for designing secure and reliable DL systems.
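    The multi-exit slowdown can be illustrated with a toy early-exit loop: a benign input is confident at the first exit and stops there, while an adversarially perturbed input that keeps every exit below the confidence threshold forces the full network to run. The threshold, layer count, and function name are illustrative, not the dissertation's implementation:

```python
def multi_exit_inference(exit_confidences, threshold=0.9):
    """Run layers in order until some intermediate exit is confident.
    Returns (index of the exit used, number of layers computed)."""
    for i, conf in enumerate(exit_confidences):
        if conf >= threshold:
            return i, i + 1  # early exit: later layers are skipped
    # No exit fired: the entire network had to be evaluated.
    return len(exit_confidences) - 1, len(exit_confidences)
```

    A benign input like `[0.95, 0.99, 0.99]` costs one layer; a perturbed input like `[0.5, 0.6, 0.7]` costs all three, which is exactly the efficiency promise the attack breaks.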

    A Customer- and Lifecycle-Oriented Product Family Validation for the Automotive Industry

    Get PDF
    This thesis develops a validation strategy suited to the automotive domain that, for the first time, can systematically consider and empirically evaluate the complete variant space of a mass-produced embedded system, including the versioning of its individual components over the life cycle. Statistically, realising the strategy offers advantages in particular through an increase in customer-perceived quality, resulting from an optimised validation of system variants.

    Establishing Properties of Interaction Systems

    Full text link
    We exhibit sufficient conditions for generic properties of component-based systems. The model we use to describe component-based systems is the formalism of interaction systems. Because the state space explosion problem is encountered in interaction systems (i.e., an exploration of the state space becomes infeasible for a large number of components), we follow the guideline that these conditions have to be checkable efficiently (i.e., in time polynomial in the number of components). Further, the conditions are designed in such a way that the information gathered is reusable if a condition is not satisfied. Concretely, we consider deadlock-freedom and progress in interaction systems. We state a sufficient condition for deadlock-freedom that is based on an architectural constraint: we define what it means for an interaction system to be tree-like, and we derive a sufficient condition for deadlock-freedom of such systems. Considering progress, we first present a characterization of this property. Then we state a sufficient condition for progress which is based on a directed graph. We combine this condition with the characterization to point out one possibility to proceed if the graph criterion does not yield progress. Both sufficient conditions can be checked efficiently because they only require the investigation of certain subsystems. Finally, we consider the effect that failure of some parts of the system has on deadlock-freedom and progress. We define robustness of deadlock-freedom and of progress under failure, and we explain how the sufficient conditions above have to be adapted in order to remain applicable in this new situation.
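    The "tree-like" architectural constraint is the kind of condition that is cheap to check. As a sketch, assuming the interaction structure is abstracted to an undirected graph over components (the paper's actual definition is richer), a graph on n nodes is a tree iff it has exactly n - 1 edges and no cycle:

```python
def is_tree_like(n, edges):
    """Check whether an undirected interaction graph on components
    0..n-1 is a tree: exactly n - 1 edges and acyclic (hence connected).
    Runs in near-linear time, i.e. polynomial in the number of components."""
    if len(edges) != n - 1:
        return False
    # Union-find with path halving to detect cycles.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # edge closes a cycle: not tree-like
        parent[ru] = rv
    return True
```

    A star of components around a coordinator passes the check; adding any interaction between two leaves introduces a cycle and fails it.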

    Specifying graceful degradation in distributed systems

    No full text
    Distributed programs must often display graceful degradation, reacting adaptively to changes in the environment. Under ideal circumstances, the program's behavior satisfies a set of application-dependent constraints. In the presence of failures, timing anomalies, or synchronization conflicts, however, certain constraints may become difficult or impossible to satisfy, and the application designer may choose to relax them as long as the resulting behavior is sufficiently "close" to the preferred behavior. This paper describes the relaxation lattice method, a new approach to specifying graceful degradation for a large class of highly concurrent fault-tolerant distributed programs. A relaxation lattice is a lattice of specifications parameterized by a set of constraints, where the stronger the set of constraints, the more restrictive the specification. While a program is able to satisfy its strongest set of constraints, it satisfies its preferred specification, but if changes to the environment force it to satisfy a weaker set, then it will permit additional "weakly consistent" computations which are undesired but tolerated. The use of relaxation lattices is illustrated by specifications for programs that tolerate (1) faults, such as site crashes and network partitions, (2) timing anomalies, such as attempting to read a value "too soon" after it was written, and (3) synchronization conflicts, such as choosing the oldest "unlocked" item from a queue. 1. Overview Distributed programs typically display more complex behavior than their single-site counterparts because they must perform efficiently and correctly in the presence of concurrency and failures. Often, such programs must display graceful degradation, reacting adaptively to changes in the environment. Under ideal circumstances, the program's behavior satisfies a set of application-dependent preferred constraints. Each constraint typically preserves a certain level of consistency, an
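    The relaxation-lattice idea can be sketched with constraint sets ordered by inclusion: the program advertises the strongest constraint set its current behaviour satisfies, and weaker sets admit more "weakly consistent" computations. The constraint predicates and the staleness example below are hypothetical, chosen only to mirror the paper's "too soon to read" timing anomaly:

```python
# Each specification is a set of constraint predicates over a behaviour;
# a superset of constraints yields a more restrictive specification.
def satisfies(behaviour, constraints):
    return all(c(behaviour) for c in constraints)

def strongest_satisfiable(behaviour, lattice):
    """Among candidate constraint sets (listed strongest-first), return
    the first one the current behaviour satisfies; the empty set is the
    bottom of the lattice, tolerating any behaviour."""
    for constraints in lattice:
        if satisfies(behaviour, constraints):
            return constraints
    return frozenset()

# Illustrative constraints on a replicated read:
fresh = lambda b: b["staleness"] == 0    # read reflects the latest write
bounded = lambda b: b["staleness"] <= 5  # read reflects a recent write

lattice = [{fresh, bounded}, {bounded}]  # strongest set first
```

    Under a partition, a read with staleness 3 drops from the preferred specification to the weaker `{bounded}` one: undesired but tolerated, which is the lattice's notion of graceful degradation.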