
    Fault-tolerant supervisory control of discrete-event systems

    In this dissertation, I introduce my study of fault-tolerant supervisory control of discrete-event systems. Given a plant possessing both faulty and nonfaulty behavior, and a submodel for just the nonfaulty part, the goal of fault-tolerant supervisory control is to enforce a certain specification for the nonfaulty plant and another (perhaps more liberal) specification for the overall plant, and further to ensure that the plant recovers from any fault within a bounded delay, so that following the recovery the system state is equivalent to a nonfaulty state (as if no fault ever happened). My research includes the formulation of the notions and the problem, existence conditions, synthesis algorithms, and applications.
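
    As a minimal sketch of this setting, assuming a toy automaton and hypothetical event names (none of which come from the dissertation), the supervisor can be viewed as a feedback map that restricts controllable events and, after a fault, steers the plant back toward a nonfaulty-equivalent state within a bounded delay:

        # Toy plant: labeled transitions; 'fault' is an uncontrollable event.
        TRANSITIONS = {
            ("idle", "start"): "busy",
            ("busy", "done"): "idle",
            ("busy", "fault"): "faulted",     # uncontrollable fault event
            ("faulted", "repair"): "idle",    # recovery transition
        }
        CONTROLLABLE = {"start", "repair"}    # events the supervisor may disable

        def supervisor(state, steps_since_fault, bound=2):
            """Return the set of events enabled by the supervisor in `state`.

            Nonfaulty states follow the stricter specification; after a fault,
            only recovery-directed events are enabled so the plant reaches a
            state equivalent to a nonfaulty one within `bound` steps.
            """
            enabled = {e for (s, e) in TRANSITIONS if s == state}
            uncontrollable = enabled - CONTROLLABLE   # can never be disabled
            if state != "faulted":
                return enabled                        # nonfaulty specification
            if steps_since_fault < bound:
                return uncontrollable | {"repair"}    # bounded-delay recovery
            return uncontrollable                     # bound exhausted

    Note that from state "busy" the supervisor cannot disable "fault", reflecting the controllability constraint central to supervisory control.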

    Optimistic Adaptation of Decentralised Role-based Software Systems

    The complexity of computer networks has been rising over the last decades. Increasing interconnectivity between multiple devices, growing complexity of performed tasks, and strong collaboration between nodes are drivers of this phenomenon. An example is Internet-of-Things devices, whose relevance has risen in recent years. The increasing number of devices requiring updates and supervision makes maintenance more difficult; human intervention, in this case, is costly and requires a lot of time. To overcome this, self-adaptive software systems (SAS) can be used. SAS are a subset of autonomous systems that can monitor themselves and their environment to adapt to changes without human interaction. In the literature, different approaches for engineering SAS have been proposed, including techniques for executing adaptations on multiple devices based on generated plans for reacting to changes. Among those solutions, decentralised approaches can also be found. To the best of our knowledge, no approach for engineering a SAS exists that tolerates errors during the execution of adaptations in a decentralised setting. While some approaches for role-based execution reset the application in case of a single failure during the adaptation process, others make no assumptions about errors or do not consider an erroneous environment. In a real-world environment, errors are likely to occur at run-time, and the adaptation process could be disturbed. This work aims to perform adaptations in a decentralised way on role-based systems with a relaxed consistency constraint, i.e., errors during the adaptation phase are tolerated. This increases the availability of nodes, since no rollbacks are required in case of a failure. Moreover, a subset of applications, such as drone swarms, would benefit from an approach with a relaxed consistency model, since parts of the system that adapted successfully can already operate in the adapted configuration instead of waiting for other peers to apply the changes in a later iteration. Furthermore, eliminating the need for atomic adaptation execution makes asynchronous execution of adaptations possible: the adaptation process can be supervised over a long period, ensuring that every peer takes the planned actions as soon as its internal task execution allows. To allow for this relaxed-consistency style of adaptation execution, we develop a decentralised adaptation execution protocol that supports the notion of eventual consistency. As soon as devices reconnect after network congestion, or restore their internal state after local failures, our protocol can coordinate the recovery process among multiple devices to attempt recovery of a globally consistent state after errors occur. Because no central instance is needed, every peer that receives information about failing peers can start the recovery process. The developed approach can restore a consistent global configuration even if almost all peers fail. Moreover, the approach supports asynchronous adaptations, i.e., peers can execute planned adaptations as soon as they are ready, which increases overall availability in case of delayed adaptation of single nodes.

    The developed protocol is evaluated with the help of a proof-of-concept implementation, run in five different experiments with thousands of iterations to show its applicability and reliability. The execution time of the protocol and the number of exchanged messages were measured to compare the protocol across different error cases and system sizes, and to show the scalability of the approach. The developed solution was also compared with a blocking approach to demonstrate its feasibility relative to an atomic alternative. Applicability in a real-world scenario is demonstrated in an empirical study using the example of a fire-extinguishing drone swarm. The results show that an optimistic approach to adaptation is suitable and that specific scenarios benefit from the improved availability, since no rollbacks are required. Systems can continue their work regardless of failures of participating nodes in large-scale systems.
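
    A minimal sketch of the optimistic, eventually consistent execution step is shown below, assuming peers gossip small version messages; the class and field names are illustrative, not identifiers from the protocol:

        from dataclasses import dataclass, field

        @dataclass
        class Peer:
            peer_id: str
            version: int = 0                            # currently applied configuration
            known: dict = field(default_factory=dict)   # peer_id -> last advertised version

            def apply_adaptation(self, target: int) -> None:
                """Optimistically apply the planned adaptation as soon as local
                task execution allows it; no global lock or rollback is used."""
                self.version = target
                self.known[self.peer_id] = target

            def on_status(self, sender: str, version: int) -> None:
                """Merge a gossiped status message from another peer."""
                self.known[sender] = max(version, self.known.get(sender, 0))

            def laggards(self, target: int) -> list:
                """Peers that have not reached the target configuration; any
                peer that observes them may start the recovery process."""
                return [p for p, v in self.known.items() if v < target]

        # Example: two of three peers adapt; either may trigger recovery for "c".
        a = Peer("a")
        b = Peer("b")
        a.apply_adaptation(1)
        b.apply_adaptation(1)
        a.on_status("b", 1)
        a.on_status("c", 0)
        assert a.laggards(target=1) == ["c"]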

    Habits Over Routines: Remarks on Control Room Practices and Control Room Studies

    The evolution of computer tools has had profound impacts on many aspects of control rooms and control room studies. In this paper, we discuss some key assumptions underpinning these studies based on a new case study of electricity distribution control rooms, where the reliability of the electricity infrastructure is managed by a combination of planning and real-time maintenance. Some of these practices have changed remarkably little, partly because they have been considered ‘digitalized’ since the 1950s and have continued to amass digital solutions from different periods. Hence, the gradual transformation of control room work demands nuanced attention, both conceptual and empirical. To outline a framework for this work, we provide a conceptualization of organizational routines, habits, and reflectivity, and synthesize the existing CSCW and control room literature. We then present an empirical study that demonstrates our concepts and shows how they can be applied to study cooperative work. By addressing these aims, the paper complements and advances the important topics recognized in this special theme issue and hence develops new research openings in CSCW. We address the need to avoid implicit determinism when analyzing new digital support tools and suggest focusing on how working habits mediate social changes, distribution, and decentralization in representing the power distribution in control rooms.

    Robust health stream processing

    As the cost of personal health sensors decreases, along with improvements in battery life and connectivity, it becomes more feasible to allow patients to leave full-time care environments sooner. Such devices could lead to greater independence for the elderly, as well as for others who would normally require full-time care. They would also allow surgery patients to spend less time in the hospital, both pre- and post-operation, as all data could be gathered via remote sensors in the patient's home. While sensor technology is rapidly approaching the point where this is a feasible option, we still lack processing frameworks that would make such a leap not only feasible but safe. This work focuses on developing a framework that is robust both to failures of processing elements and to interference from other computations processing health sensor data. We work with three disparate data streams and accompanying computations: electroencephalogram (EEG) data gathered for a brain-computer interface (BCI) application, electrocardiogram (ECG) data gathered for arrhythmia detection, and thorax data gathered for monitoring patient sleep status.
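
    A minimal sketch of the kind of robustness such a framework targets, assuming a generic stream operator with per-sample fault isolation and periodic checkpointing (the function names are illustrative, not the framework's API):

        def process_stream(samples, compute, checkpoint_every=100):
            """Run `compute` over a health-sensor stream (e.g., EEG or ECG
            samples), isolating per-sample failures and recording periodic
            checkpoints so a restarted processing element can resume."""
            state = {"processed": 0, "checkpoint": None}
            for index, sample in enumerate(samples):
                try:
                    compute(sample, state)
                except Exception:
                    continue    # one bad sample must not stall arrhythmia detection
                state["processed"] += 1
                if state["processed"] % checkpoint_every == 0:
                    state["checkpoint"] = index  # persisted externally in a real deployment
            return state

        # Example: a computation that fails on a malformed reading.
        ecg = [0.8, 0.9, None, 1.1]
        result = process_stream(ecg, lambda s, st: float(s), checkpoint_every=2)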

    Automated highway systems: platoons of vehicles viewed as a multiagent system

    Collaborative driving is a growing domain of Intelligent Transportation Systems (ITS) that makes use of communications to autonomously guide cooperative vehicles on an Automated Highway System (AHS). Over the past decade, different architectures of automated vehicles have been proposed, but most of them did not address, or barely addressed, the inter-vehicle communication problem. In this thesis, we address the collaborative driving problem by using a platoon of cars driven by more or less autonomous software agents interacting in a Multiagent System (MAS) environment: the automated highway. To achieve this, we propose a hierarchical driving-agent architecture based on three layers (a guidance layer, a management layer, and a traffic-control layer). This architecture can be used to develop centralized platoons, where the driving agent of the head vehicle coordinates the other driving agents by applying strict rules, and decentralized platoons, where the platoon is considered a team of driving agents with a similar degree of autonomy trying to maintain a stable platoon.
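
    A minimal sketch of the three-layer driving-agent architecture, with hypothetical class and method names (the thesis does not prescribe this code):

        class TrafficControlLayer:
            """Top layer: highway-level coordination, e.g., assigning
            vehicles to platoons on the automated highway."""
            def assign_platoon(self, vehicle_id: str) -> str:
                return "platoon-1"              # placeholder coordination decision

        class ManagementLayer:
            """Middle layer: inter-vehicle communication and platoon policy,
            either a head vehicle coordinating with strict rules (centralized)
            or a team of equally autonomous peers (decentralized)."""
            def __init__(self, is_leader: bool):
                self.is_leader = is_leader
            def desired_gap_m(self) -> float:
                return 5.0                      # placeholder spacing rule

        class GuidanceLayer:
            """Bottom layer: turns management decisions into speed setpoints
            that keep the platoon stable."""
            def speed_setpoint(self, gap_m: float, desired_gap_m: float,
                               lead_speed_mps: float) -> float:
                # Simple proportional rule toward the desired gap (illustrative).
                return lead_speed_mps + 0.5 * (gap_m - desired_gap_m)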

    Data Mining in Smart Grids

    Effective smart grid operation requires rapid decisions in a data-rich, but information-limited, environment. In this context, grid sensor data-streaming cannot provide the system operators with the information necessary to act within the time frames required to minimize the impact of disturbances. Even if there are fast models that can convert the data into information, the smart grid operator must deal with the challenge of not having a full understanding of the context of the information, and, therefore, the information content cannot be used with any high degree of confidence. To address this issue, data mining has been recognized as the most promising enabling technology for improving decision-making processes, providing the right information at the right moment to the right decision-maker. This Special Issue is focused on emerging methodologies for data mining in smart grids. In this area, it addresses many relevant topics, ranging from methods for uncertainty management to advanced dispatching. This Special Issue not only focuses on methodological breakthroughs and roadmaps for implementing the methodology, but also presents the much-needed sharing of best practices. Topics include, but are not limited to, the following: fuzziness in smart grid computing; emerging techniques for renewable energy forecasting; robust and proactive solutions for optimal smart grid operation; fuzzy-based smart grid monitoring and control frameworks; granular computing for uncertainty management in smart grids; and self-organizing and decentralized paradigms for information processing.

    Driverless Finance

    While safety concerns are at the forefront of the debate about driverless cars, such concerns seem less salient when it comes to the increasingly sophisticated algorithms driving the financial system. This Article argues, however, that a precautionary approach to sophisticated financial algorithms is justified by the potential enormity of the social costs of financial collapse. Using the algorithm-driven fintech business models of robo-investing, marketplace lending, high-frequency trading, and token offerings as case studies, this Article illustrates how increasingly sophisticated algorithms (particularly those capable of machine learning) can exponentially exacerbate complexity, speed, and correlation within the financial system, rendering the system more fragile. This Article also explores how such algorithms may undermine some of the regulatory reforms implemented in the wake of the Financial Crisis to make the financial system more robust. Through its analysis, this Article demonstrates that the algorithmic automation of finance (a phenomenon I refer to as “driverless finance”) deserves close attention from a financial stability perspective. This Article argues that regulators should become involved in the processes by which the relevant algorithms are created, and that such efforts should begin immediately, while the technology is still in its infancy and remains somewhat susceptible to regulatory influence.

    A Lebesgue Sampling based Diagnosis and Prognosis Methodology with Application to Lithium-ion Batteries

    Fault diagnosis and prognosis (FDP) plays an important role in modern complex industrial systems in maintaining their reliability, safety, and availability. Diagnosis aims to monitor the fault state of a component or system in real time. Prognosis refers to the generation of long-term predictions that describe the evolution of a fault and the estimation of the remaining useful life (RUL) of a failing component or subsystem. Traditional Riemann sampling-based FDP (RS-FDP) takes samples and executes algorithms at periodic time intervals and, in most cases, requires significant computational resources. This makes it difficult or even impossible to implement RS-FDP algorithms on hardware with very limited computational capabilities, such as the embedded systems that are widely used in industry. To overcome this bottleneck, this proposal develops a novel Lebesgue sampling-based FDP (LS-FDP), in which FDP algorithms are executed “as needed”. Different from RS-FDP, LS-FDP divides the state axis into a number of predefined states (also called Lebesgue states). The computation of LS-based diagnosis is triggered only when the value of a measurement changes from one Lebesgue state to another, i.e., it is “event-triggered”. This method significantly reduces computational demands by eliminating unnecessary computation. The LS-FDP design is generic and able to accommodate different algorithms, such as the Kalman filter and its variations, the particle filter, the relevance vector machine, etc. This proposal first develops a particle filtering-based LS-FDP for Li-ion battery applications. To improve the accuracy and precision of the diagnosis and prognosis results, the parameters in the models are treated as time-varying and adjusted online by a recursive least squares (RLS) method to accommodate changes in dynamics, operating conditions, and environment in real cases. Uncertainty management is studied in LS-FDP to handle the uncertainties arising from inaccurate model structure and parameters, measurement noise, process noise, and unknown future loading. The extended Kalman filter implemented in the framework of LS-FDP yields a more efficient LS-EKF algorithm. The proposed method takes full advantage of the EKF and Lebesgue sampling to alleviate computation requirements and make deployment possible on most distributed FDP systems. All the proposed methods are verified by a study of state-of-health estimation and RUL prediction for Lithium-ion batteries. Comparisons between traditional RS-FDP methods and LS-FDP show that LS-FDP has a much lower requirement on computational resources. The proposed parameter adaptation and uncertainty management methods produce more accurate and precise diagnostic and prognostic results. This research opens a new chapter for FDP methods and makes it easier to deploy FDP algorithms on complicated systems built from embedded subsystems and micro-controllers with limited computational resources and communication bandwidth.
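
    A minimal sketch of the event-triggered core of Lebesgue sampling, with a hypothetical grid of Lebesgue states and an illustrative state-of-health trace (the thresholds and values below are assumptions, not the proposal's data):

        def lebesgue_state(value, levels):
            """Map a measurement onto the index of its Lebesgue state."""
            for i, threshold in enumerate(levels):
                if value < threshold:
                    return i
            return len(levels)

        def run_ls_diagnosis(measurements, levels, diagnose):
            """Invoke the (expensive) `diagnose` step only when the measurement
            crosses into a different Lebesgue state, instead of at every
            periodic (Riemann) sample."""
            last, triggers = None, 0
            for m in measurements:
                s = lebesgue_state(m, levels)
                if s != last:        # event: entered another Lebesgue state
                    diagnose(m)      # e.g., one particle-filter or EKF update
                    triggers += 1
                    last = s
            return triggers

        # Example: updates fire only when the fading state-of-health trace
        # crosses a Lebesgue state boundary, not at every raw sample.
        soh = [1.00, 0.995, 0.99, 0.97, 0.96, 0.955, 0.94, 0.92]
        levels = [0.93, 0.95, 0.97, 0.99]   # hypothetical Lebesgue states
        updates = run_ls_diagnosis(soh, levels, diagnose=lambda m: None)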
    • 

    corecore