16,557 research outputs found

    Fault-Tolerant Aggregation: Flow-Updating Meets Mass-Distribution

    Flow-Updating (FU) is a fault-tolerant technique that has proved to be efficient in practice for the distributed computation of aggregate functions in communication networks where individual processors do not have access to global information. Previous distributed aggregation protocols, based on repeated sharing of input values (or mass) among processors, sometimes called Mass-Distribution (MD) protocols, are not resilient to communication failures (or message loss), because such failures yield a loss of mass. In this paper, we present a protocol which we call Mass-Distribution with Flow-Updating (MDFU). We obtain MDFU by applying FU techniques to classic MD. We analyze the convergence time of MDFU, showing that stochastic message loss produces only low overhead. This is the first convergence proof of an FU-based algorithm. We evaluate MDFU experimentally, comparing it with previous MD and FU protocols, and verifying the behavior predicted by the analysis. Finally, given that MDFU incurs a fixed deviation proportional to the message-loss rate, we adjust the accuracy of MDFU heuristically in a new protocol called MDFU with Linear Prediction (MDFU-LP). The evaluation shows that both MDFU and MDFU-LP behave very well in practice, even under high rates of message loss and even when the input values change dynamically.
    Comment: 18 pages, 5 figures, to appear in OPODIS 201
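
    To make the mass-distribution idea concrete, here is a minimal Python sketch of a push-sum-style MD round. It is a generic stand-in under assumed names and topology, not the MDFU protocol from the paper: each node keeps a (value, weight) pair, sends half of both to a random neighbour, and estimates the average as value/weight; a dropped message destroys mass, which is exactly the bias that Flow-Updating is designed to repair.

```python
import random

# Toy mass-distribution (push-sum style) averaging round -- a hypothetical
# illustration, not the paper's MDFU protocol. Each node holds a
# (value, weight) pair; every round it keeps half of both and sends the
# other half to a random neighbour. The local estimate value/weight
# converges to the global average of the inputs. A dropped message destroys
# mass, which is the bias that Flow-Updating repairs.

def mass_distribution_round(values, weights, neighbours, loss_rate=0.0):
    """One synchronous round; neighbours[i] lists the nodes i can talk to."""
    n = len(values)
    inbox_v = [0.0] * n
    inbox_w = [0.0] * n
    for i in range(n):
        target = random.choice(neighbours[i])
        values[i] /= 2.0                     # keep half ...
        weights[i] /= 2.0
        if random.random() >= loss_rate:     # ... and the other half arrives
            inbox_v[target] += values[i]
            inbox_w[target] += weights[i]
        # else: the sent half is lost and total mass is no longer conserved
    for i in range(n):
        values[i] += inbox_v[i]
        weights[i] += inbox_w[i]
    return [v / w for v, w in zip(values, weights)]

# Example: four fully connected nodes averaging the inputs [1, 2, 3, 4].
values, weights = [1.0, 2.0, 3.0, 4.0], [1.0] * 4
neighbours = [[j for j in range(4) if j != i] for i in range(4)]
for _ in range(50):
    estimates = mass_distribution_round(values, weights, neighbours)
print(estimates)  # all entries close to 2.5 when no messages are lost
```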

    A Common Protocol for Agent-Based Social Simulation

    Traditional (i.e. analytical) modelling practices in the social sciences rely on a very well established, although implicit, methodological protocol, both with respect to the way models are presented and to the kinds of analysis that are performed. Unfortunately, computer-simulated models often lack such a reference to an accepted methodological standard. This is one of the main reasons for the scepticism among mainstream social scientists that results in low acceptance of papers with agent-based methodology in the top journals. We identify some methodological pitfalls that, in our view, are common in papers employing agent-based simulations, and propose appropriate solutions. We discuss each issue with reference to a general characterization of dynamic micro models, which encompasses both analytical and simulation models. Along the way, we also clarify some confusing terminology. We then propose a three-stage process that could lead to the establishment of methodological standards in social and economic simulations.
    Keywords: Agent-Based, Simulations, Methodology, Calibration, Validation, Sensitivity Analysis

    Orcutt’s Vision, 50 years on

    Fifty years have passed since the seminal contribution of Guy Orcutt [Orcutt, 1957], which gave birth to the field of Microsimulation. We survey, from a methodological perspective, the literature that followed, highlighting its relevance, its pros and cons vis-à-vis other methodologies, and pointing out the main open issues.

    Toward Specification-Guided Active Mars Exploration for Cooperative Robot Teams

    As a step towards achieving autonomy in space exploration missions, we consider a cooperative robotics system consisting of a copter and a rover. The goal of the copter is to explore an unknown environment so as to maximize knowledge about a science mission, expressed in linear temporal logic, that is to be executed by the rover. We model environmental uncertainty as a belief-space Markov decision process and formulate the problem as a two-step stochastic dynamic program that we solve in a way that leverages the decomposed nature of the overall system. We demonstrate in simulations that the robot team makes intelligent decisions in the face of uncertainty.
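
    As an illustration of the kind of computation the abstract describes, the sketch below performs a two-step lookahead over a discretized belief space. The three-region environment, the perfect-sensor observation model, and the mission-reward function are invented placeholders, not the copter/rover model or the LTL mission from the paper.

```python
# Hypothetical two-step lookahead over a discretized belief space.
# The hidden environment state is whether each of three regions is
# traversable; the copter's belief is an independent probability per region.
# Observing a region reveals it exactly, and the rover's mission reward is
# taken to be the number of regions believed traversable with high
# confidence. These modelling choices are illustrative only -- they are not
# the copter/rover model or the LTL mission from the paper.

def mission_reward(belief, threshold=0.9):
    """Placeholder reward: regions the rover can confidently plan through."""
    return sum(1.0 for p in belief if p >= threshold)

def observe(belief, region, outcome):
    """Belief update for a perfect sensor: the observed region becomes known."""
    b = list(belief)
    b[region] = 1.0 if outcome else 0.0
    return tuple(b)

def lookahead(belief, depth=2):
    """Expected mission reward after up to `depth` more observations."""
    if depth == 0:
        return mission_reward(belief)
    best = mission_reward(belief)  # option: stop exploring and hand over to the rover
    for region, p in enumerate(belief):
        expected = (p * lookahead(observe(belief, region, True), depth - 1)
                    + (1 - p) * lookahead(observe(belief, region, False), depth - 1))
        best = max(best, expected)
    return best

# Example: with belief (0.6, 0.5, 0.8), what are the best two observations worth?
print(lookahead((0.6, 0.5, 0.8), depth=2))
```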

    Simultaneous state and input estimation with partial information on the inputs

    This paper investigates the problem of simultaneous state and input estimation for discrete-time linear stochastic systems when the information on the inputs is only partially available. To incorporate the partial information on the inputs, matrix manipulation is used to obtain an equivalent system with reduced-order inputs. Bayesian inference is then applied to obtain a recursive filter for both the state and input variables. The proposed filter is an extension of the recently developed state filter with partially observed inputs to the case where the input filter is also of interest, and an extension of Simultaneous State and Input Estimation (SSIE) to the case where the information on the inputs is partially available. A numerical example is given to illustrate the proposed method. It is shown that, because the additional information on the inputs is incorporated in the filter design, the performance of both state and input estimation is substantially improved in comparison with conventional SSIE without partial input information.
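
    A minimal sketch of one common way to estimate state and input jointly: the unknown input is appended to the state and modelled as a random walk, after which an ordinary Kalman filter estimates both. This is a generic baseline under assumed matrices and noise levels, not the reduced-order-input Bayesian filter derived in the paper.

```python
import numpy as np

# Hypothetical joint state-and-input estimation by state augmentation.
# The matrices and noise levels below are illustrative placeholders.

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition
B = np.array([[0.0], [0.1]])             # how the unknown input enters
C = np.array([[1.0, 0.0]])               # position measurement only
Q, R = 1e-4 * np.eye(3), np.array([[1e-2]])

# Augmented model: z_k = [x_k; d_k], with d_k a slowly varying unknown input.
Az = np.block([[A, B], [np.zeros((1, 2)), np.eye(1)]])
Cz = np.hstack([C, np.zeros((1, 1))])

def kalman_step(z, P, y):
    """One predict/update step; returns the state+input estimate and covariance."""
    z_pred = Az @ z
    P_pred = Az @ P @ Az.T + Q
    S = Cz @ P_pred @ Cz.T + R
    K = P_pred @ Cz.T @ np.linalg.inv(S)
    z_new = z_pred + K @ (y - Cz @ z_pred)
    P_new = (np.eye(3) - K @ Cz) @ P_pred
    return z_new, P_new

# Example: recover a constant unknown input d = 0.5 from noisy position data.
rng = np.random.default_rng(0)
x, d = np.zeros((2, 1)), 0.5
z_hat, P = np.zeros((3, 1)), np.eye(3)
for _ in range(200):
    x = A @ x + B * d
    y = C @ x + rng.normal(scale=0.1, size=(1, 1))
    z_hat, P = kalman_step(z_hat, P, y)
print(z_hat.ravel())   # the last entry approaches the unknown input 0.5
```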

    Fault estimation algorithms: design and verification

    The research in this thesis is motivated by the observation that modern systems are becoming more and more complex and safety-critical due to the increasing requirements on system smartness and autonomy, and as a result health monitoring systems need to be developed to meet the requirements on system safety and reliability. The state-of-the-art approaches to monitoring system status are model-based Fault Diagnosis (FD) systems, which can fuse the advantages of physical system modelling and sensor characteristics. A number of model-based FD approaches have been proposed. The conventional residual-based approaches, which monitor system output estimation errors, may however suffer from limitations such as complex diagnosis logic for fault isolation, low sensitivity to system faults, and high computational load. More importantly, little attention has been paid to the problem of fault diagnosis system verification, which answers the question of under what conditions (i.e., levels of uncertainty) a fault diagnosis system is valid. To this end, this thesis investigates the design and verification of fault diagnosis algorithms. It first highlights the differences between two popular FD approaches (i.e., residual based and fault estimation based) through a case study. On this basis, a set of uncertainty estimation algorithms is proposed to generate fault estimates according to different specifications, after interpreting the FD problem as an uncertainty estimation problem. FD algorithm verification and threshold selection are then investigated, considering that there are always mismatches between the real plant and the mathematical model used for FD observer design. Reachability analysis is used to evaluate the effect of uncertainties and faults, so that the conditions under which an FD algorithm is valid can be quantitatively verified. First, the proposed fault estimation algorithms, on the one hand, extend the existing approaches by pooling the available prior information so that performance can be enhanced, and on the other hand relax the existence conditions and reduce the computational load by exploiting a reduced-order observer structure. Second, the proposed framework for fault diagnosis system verification bridges the gap between academia and industry, since on the one hand the conditions under which a given FD algorithm is effective can be verified, and on the other hand different FD algorithms can be compared and selected for different application scenarios. It should be highlighted that although the algorithm design and verification are developed for fault diagnosis systems, they can also be applied to other systems, such as disturbance rejection control systems, among many others.
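
    The sketch below illustrates the residual-based FD style contrasted in the abstract: a Luenberger observer generates an output-estimation residual, and a fault is flagged when the residual exceeds a threshold chosen to cover bounded disturbances. The model, observer gain, and the crude geometric-series bound (standing in for the reachability analysis the thesis proposes) are all illustrative assumptions, not the thesis's algorithms.

```python
import numpy as np

# Hypothetical residual-based fault detection with a disturbance-aware
# threshold. All numbers are illustrative placeholders.

A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.2]])              # observer gain (chosen by hand)
w_bound, v_bound = 0.02, 0.05             # |disturbance|, |measurement noise|

# Crude residual bound: accumulate the worst-case disturbance/noise effect
# through the stable error dynamics A - L C, plus the direct noise term.
A_err = A - L @ C
bound = v_bound
M = np.eye(2)
for _ in range(200):
    bound += float(np.abs(C @ M).sum()) * (w_bound + float(np.abs(L).sum()) * v_bound)
    M = A_err @ M

def fd_step(x_hat, y):
    """One observer step: returns the updated estimate, residual, fault flag."""
    r = float(y - C @ x_hat)
    x_hat = A @ x_hat + L * r
    return x_hat, r, abs(r) > bound

# Example: inject an additive actuator fault halfway through the run.
rng = np.random.default_rng(1)
x, x_hat = np.zeros((2, 1)), np.zeros((2, 1))
for k in range(100):
    fault = 0.5 if k >= 50 else 0.0
    x = A @ x + np.array([[0.0], [fault]]) + rng.uniform(-w_bound, w_bound, (2, 1))
    y = C @ x + rng.uniform(-v_bound, v_bound, (1, 1))
    x_hat, r, alarm = fd_step(x_hat, y)
    if alarm:
        print(f"fault flagged at step {k}, residual {r:+.3f}")
```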