    On the Reliability Estimation of Stochastic Binary System

    A stochastic binary system is a multi-component on-off system subject to random independent failures of its components. After potential failures, the state of the system is determined by a logical function (called the structure function) that decides whether the system is operational or not. Stochastic binary systems (SBS) serve as a natural generalization of network reliability analysis, where the goal is to find the probability of correct operation of the system (in terms of connectivity, network diameter or other measures of success). A particular subclass of interest is stochastic monotone binary systems (SMBS), which are characterized by a non-decreasing structure function. We explore the combinatorics of SBS, which provide building blocks for system reliability estimation, looking at minimal non-operational subsystems, called mincuts. One key concept for understanding the underlying combinatorics of SBS is duality. As methods for exact evaluation take exponential time, we discuss the use of Monte Carlo algorithms. In particular, we discuss the F-Monte Carlo method for estimating the reliability polynomial of homogeneous SBS; the Recursive Variance Reduction (RVR) method for SMBS, which builds upon the efficient determination of mincuts; and three additional methods that combine, in different ways, the well-known techniques of Permutation Monte Carlo and Splitting. These last three methods are based on a stochastic process called the Creation Process, which endows the SBS, a static object by definition, with a temporal evolution. All the methods are compared on different topologies, showing large efficiency gains over the basic Monte Carlo scheme.
    Agencia Nacional de Investigación e Innovación; Math-AMSU
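
    The basic Monte Carlo scheme that these variance-reduction methods are measured against is easy to sketch. Below is a minimal crude Monte Carlo estimator of SBS reliability, assuming the structure function is available as a callable phi; all names are illustrative, not the paper's code.

        import random

        def crude_monte_carlo_reliability(phi, p, n_samples=100_000, seed=0):
            """Crude Monte Carlo estimate of SBS reliability.

            phi: structure function mapping a tuple of component states
                 (1 = operational, 0 = failed) to 1 (system up) or 0 (down).
            p:   independent operation probability of each component.
            """
            rng = random.Random(seed)
            hits = 0
            for _ in range(n_samples):
                state = tuple(1 if rng.random() < pi else 0 for pi in p)
                hits += phi(state)
            return hits / n_samples

        # Example: a 2-out-of-3 system, which is monotone (an SMBS).
        phi = lambda s: 1 if sum(s) >= 2 else 0
        print(crude_monte_carlo_reliability(phi, [0.9, 0.9, 0.9]))  # ~0.972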

    Static reliability and resilience in dynamic systems

    Two systems are modeled in this thesis. First, we consider a multi-component stochastic monotone binary system, or SMBS for short. The reliability of an SMBS is the probability of its correct operation. A statistical approximation of the system reliability is provided for these systems, inspired by Monte Carlo methods. We then focus on the diameter-constrained reliability (DCR) model, which was originally developed for delay-sensitive applications over the Internet infrastructure. The computational complexity of the DCR is analyzed, and families of networks admitting efficient (i.e., polynomial-time) DCR computation are presented, termed Weak graphs. Second, we model the effect of a dynamic epidemic propagation. Our first approach is an SIR-based simulation in which the unrealistic assumptions of the classical SIR model (an infinite, homogeneous, fully-mixed population) are discarded. Finally, we formalize a stochastic process that counts infected individuals, and further investigate node-immunization strategies subject to a budget constraint. A combinatorial optimization problem, called the Graph Fragmentation Problem, is introduced here: the impact of a highly virulent epidemic propagation is analyzed, and we mathematically prove that the Greedy heuristic is suboptimal.
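
    As a concrete illustration of the epidemic half of the thesis, the following sketch runs a discrete-time SIR process on an explicit finite contact graph, discarding the fully-mixed-population assumption as the abstract describes; the parameter values and all names are our own assumptions, not the thesis code.

        import random

        def sir_on_graph(adj, beta, gamma, initial_infected, max_steps=1000, seed=0):
            """Discrete-time SIR epidemic on an explicit finite contact graph.

            adj:   dict mapping each node to its list of neighbours.
            beta:  per-contact, per-step infection probability.
            gamma: per-step recovery probability.
            Returns the total number of individuals ever infected.
            """
            rng = random.Random(seed)
            state = {v: "S" for v in adj}
            for v in initial_infected:
                state[v] = "I"
            for _ in range(max_steps):
                infected = [v for v in adj if state[v] == "I"]
                if not infected:
                    break
                # Buffer new infections so they only transmit from the next step on.
                newly_infected = set()
                for v in infected:
                    for u in adj[v]:
                        if state[u] == "S" and rng.random() < beta:
                            newly_infected.add(u)
                for v in infected:
                    if rng.random() < gamma:
                        state[v] = "R"
                for u in newly_infected:
                    state[u] = "I"
            return sum(1 for s in state.values() if s != "S")

        # Example: an epidemic seeded at one node of a 10-node ring network.
        ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
        print(sir_on_graph(ring, beta=0.5, gamma=0.3, initial_infected=[0]))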

    A Decoding Approach to Fault Tolerant Control of Linear Systems with Quantized Disturbance Input

    The aim of this paper is to propose an alternative method to solve a Fault Tolerant Control problem. The model is a linear system affected by a disturbance term, which represents a large class of technological faulty processes. The goal is to make the system able to tolerate the undesired perturbation, i.e., to remove or at least reduce its negative effects; this task is performed in three steps: the detection of the fault, its identification, and the consequent process recovery. When the disturbance function is known to be quantized over a finite number of levels, the detection can be successfully executed by a recursive decoding algorithm, arising from Information and Coding Theory and suitably adapted to the control framework. This technique is analyzed and tested on a flight control problem; both theoretical considerations and simulations are reported.
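
    To make the detection-identification-recovery pipeline concrete, here is a minimal one-step sketch for x_{k+1} = A x_k + B u_k + E d_k: it estimates the disturbance from the model residual and snaps it to the nearest admissible level. This one-shot minimum-distance decoding is only a stand-in for the paper's recursive decoding algorithm, and every name below is illustrative.

        import numpy as np

        def detect_and_recover(A, B, E, levels, x, x_next, u):
            """One step of fault handling for x_{k+1} = A x_k + B u_k + E d_k,
            where the disturbance d_k is quantized over finitely many known levels."""
            # Identification: estimate the disturbance from the model residual.
            residual = x_next - A @ x - B @ u
            d_hat = np.linalg.lstsq(E, residual, rcond=None)[0].item()
            # Decoding: snap the estimate to the nearest admissible level.
            d_dec = min(levels, key=lambda lv: abs(lv - d_hat))
            # Recovery: a correction input cancelling the decoded disturbance
            # (assumes the columns of E lie in the range of B).
            u_corr = -np.linalg.lstsq(B, E, rcond=None)[0].flatten() * d_dec
            return d_dec, u_corr

        # Example: double integrator with an actuator fault of magnitude 2.
        A = np.array([[1.0, 0.1], [0.0, 1.0]])
        B = np.array([[0.0], [1.0]])
        E = np.array([[0.0], [1.0]])
        x, u = np.zeros(2), np.zeros(1)
        x_next = A @ x + B @ u + (E * 2.0).flatten()
        print(detect_and_recover(A, B, E, levels=[0.0, 1.0, 2.0, 3.0],
                                 x=x, x_next=x_next, u=u))  # -> (2.0, array([-2.]))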

    Active planning for underwater inspection and the benefit of adaptivity

    We discuss the problem of inspecting an underwater structure, such as a submerged ship hull, with an autonomous underwater vehicle (AUV). Unlike a large body of prior work, we focus on planning the views of the AUV to improve the quality of the inspection, rather than maximizing the accuracy of a given data stream. We formulate the inspection planning problem as an extension of Bayesian active learning, and we show connections to recent theoretical guarantees in this area. We rigorously analyze the benefit of adaptive re-planning for such problems, and we prove that the potential benefit of adaptivity can be reduced from an exponential to a constant factor by changing the problem from cost minimization with a constraint on information gain to variance reduction with a constraint on cost. This analysis allows the use of robust, non-adaptive planning algorithms that perform competitively with adaptive algorithms. Based on our analysis, we propose a method for constructing 3D meshes from sonar-derived point clouds, and we introduce uncertainty modeling through non-parametric Bayesian regression. Finally, we demonstrate the benefit of active inspection planning using sonar data from ship hull inspections with the Bluefin-MIT Hovering AUV.
    United States. Office of Naval Research (ONR Grant N00014-09-1-0700); United States. Office of Naval Research (ONR Grant N00014-07-1-00738); National Science Foundation (U.S.) (NSF grant 0831728); National Science Foundation (U.S.) (NSF grant CCR-0120778); National Science Foundation (U.S.) (NSF grant CNS-1035866)
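
    The reformulated objective (variance reduction subject to a cost constraint) admits simple non-adaptive planners. The sketch below is a hypothetical greedy benefit-per-cost view planner, assuming each candidate view has a known, additive marginal variance reduction; this additivity assumption and all names are ours, not the paper's algorithm.

        def greedy_view_plan(views, var_reduction, cost, budget):
            """Non-adaptive greedy planner: repeatedly pick the affordable
            inspection view with the best variance-reduction-per-cost ratio
            until the inspection budget is exhausted."""
            plan, spent = [], 0.0
            remaining = set(views)
            while remaining:
                feasible = [v for v in remaining if spent + cost[v] <= budget]
                if not feasible:
                    break
                best = max(feasible, key=lambda w: var_reduction[w] / cost[w])
                plan.append(best)
                spent += cost[best]
                remaining.remove(best)
            return plan

        # Example with three hypothetical hull regions and a travel budget.
        views = ["bow", "midship", "stern"]
        print(greedy_view_plan(views,
                               var_reduction={"bow": 3.0, "midship": 5.0, "stern": 2.0},
                               cost={"bow": 1.0, "midship": 2.0, "stern": 1.5},
                               budget=3.0))  # -> ['bow', 'midship']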

    A Hostile model for network reliability analysis

    In reliability analysis, the goal is to determine the probability of consistent operation of a system. We introduce the Hostile model, where the system under study is a network and all components may fail (both sites and links), except for a distinguished subset of sites, called terminals. The Hostile model includes the Classical Reliability model as a particular case. As a corollary, the exact reliability evaluation of a network in the Hostile model belongs to the list of NP-hard computational problems. Traditional methods for the classical reliability model, such as Crude Monte Carlo, Importance Sampling and Recursive Variance Reduction, are here adapted to the Hostile model. The performance of these methods is finally discussed using real-life networks.
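
    The Crude Monte Carlo adaptation to the Hostile model can be sketched directly: sample node and link failures (terminals are perfect by definition) and check that all terminals end up in one connected component of the surviving graph. The uniform failure probabilities and all names below are illustrative assumptions.

        import random

        def hostile_crude_mc(nodes, edges, terminals, p_node, p_edge,
                             n_samples=100_000, seed=0):
            """Crude Monte Carlo in the Hostile model: every site and link fails
            independently, except the terminals, which never fail. The network is
            up when all terminals lie in one connected surviving component."""
            rng = random.Random(seed)
            hits = 0
            for _ in range(n_samples):
                up_nodes = {v for v in nodes
                            if v in terminals or rng.random() < p_node}
                adj = {v: [] for v in up_nodes}
                for u, v in edges:
                    if u in up_nodes and v in up_nodes and rng.random() < p_edge:
                        adj[u].append(v)
                        adj[v].append(u)
                # DFS from one terminal; up iff it reaches every other terminal.
                start = next(iter(terminals))
                seen, stack = {start}, [start]
                while stack:
                    for w in adj[stack.pop()]:
                        if w not in seen:
                            seen.add(w)
                            stack.append(w)
                hits += terminals <= seen
            return hits / n_samples

        # Example: a 4-cycle whose opposite corners are the terminals.
        print(hostile_crude_mc({1, 2, 3, 4}, [(1, 2), (2, 3), (3, 4), (4, 1)],
                               terminals={1, 3}, p_node=0.9, p_edge=0.9,
                               n_samples=20_000))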

    Distributed Extra-gradient with Optimal Complexity and Communication Guarantees

    We consider monotone variational inequality (VI) problems in multi-GPU settings where multiple processors/workers/clients have access to local stochastic dual vectors. This setting covers a broad range of important problems, from distributed convex minimization to min-max problems and games. Extra-gradient, the de facto algorithm for monotone VI problems, was not designed to be communication-efficient. To this end, we propose a quantized generalized extra-gradient (Q-GenX), an unbiased and adaptive compression method tailored to solving VIs. We provide an adaptive step-size rule, which adapts to the respective noise profiles at hand and achieves a fast rate of O(1/T) under relative noise and an order-optimal O(1/√T) under absolute noise, and we show that distributed training accelerates convergence. Finally, we validate our theoretical results with real-world experiments, training generative adversarial networks on multiple GPUs.
    Comment: International Conference on Learning Representations (ICLR 2023)
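
    To fix ideas, here is a toy version of the two ingredients: an unbiased stochastic quantizer (a QSGD-style stand-in for Q-GenX's adaptive compressor) wrapped around the classical extra-gradient step. The operator, step size and level count are our assumptions, not the paper's method.

        import numpy as np

        rng = np.random.default_rng(0)

        def unbiased_quantize(g, num_levels=16):
            """Unbiased stochastic quantizer: E[Q(g)] = g, so each worker can
            compress its operator value before communication."""
            norm = np.linalg.norm(g)
            if norm == 0:
                return g
            scaled = np.abs(g) / norm * num_levels
            lower = np.floor(scaled)
            # Round up with probability equal to the fractional part.
            levels = lower + (rng.random(g.shape) < scaled - lower)
            return np.sign(g) * levels * norm / num_levels

        def extragradient_step(x, operator, eta, num_levels=16):
            """One extra-gradient step for a monotone VI with quantized operator
            feedback: extrapolate with Q(F(x)), then update with Q(F(x_half))."""
            x_half = x - eta * unbiased_quantize(operator(x), num_levels)
            return x - eta * unbiased_quantize(operator(x_half), num_levels)

        # Example: bilinear min-max f(u, v) = u * v, with F(x) = (v, -u); the
        # iterates spiral toward a neighbourhood of the saddle point at 0.
        F = lambda x: np.array([x[1], -x[0]])
        x = np.array([1.0, 1.0])
        for _ in range(200):
            x = extragradient_step(x, F, eta=0.1)
        print(x)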