
    Building Reliable Budget-Based Binary-State Networks

    Everyday life is driven by various networks, such as supply chains for distributing raw materials, semi-finished goods, and final products; the Internet of Things (IoT) for connecting devices and exchanging data; utility networks for transmitting fuel, water, electricity, and 4G/5G signals; and social networks for sharing information and connections. The binary-state network is a basic network model in which each component is either in the success state or the failure state. Network reliability plays an important role in evaluating the performance of network planning, design, and management. As more networks are deployed in the real world, the need to assess their reliability grows, and a reliable network must often be built within a limited budget. However, existing studies focus on the budget limit for each minimal path (MP) in a network without considering the total budget of the entire network. We propose a novel concept for building a more reliable binary-state network under a total budget limit, and an algorithm based on the binary-addition-tree algorithm (BAT) and stepwise vectors that solves the problem efficiently.
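    The BAT itself generates state vectors incrementally rather than via a library enumerator, and the paper's budget and stepwise-vector machinery is omitted here; as a minimal sketch of the underlying idea, the following enumerates every binary component-state vector of a small hypothetical network (edge names and success probabilities are invented for illustration), keeps the vectors in which the source still reaches the sink, and sums their probabilities.

```python
from itertools import product

# Hypothetical 4-edge network: edge -> independent success probability.
edges = {("s", "a"): 0.9, ("a", "t"): 0.9, ("s", "b"): 0.8, ("b", "t"): 0.8}

def connected(up_edges, source="s", sink="t"):
    """Search over edges currently in the 'up' state (treated as undirected)."""
    frontier, seen = [source], {source}
    while frontier:
        u = frontier.pop()
        if u == sink:
            return True
        for a, b in up_edges:
            nxt = b if a == u else a if b == u else None
            if nxt and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

def reliability(edges):
    names = list(edges)
    total = 0.0
    # Enumerate all binary state vectors, as a binary-addition tree would.
    for states in product((0, 1), repeat=len(names)):
        prob = 1.0
        for e, s in zip(names, states):
            prob *= edges[e] if s else (1.0 - edges[e])
        up = [e for e, s in zip(names, states) if s]
        if connected(up):
            total += prob
    return total

print(round(reliability(edges), 6))  # prints 0.9316
```

    For two independent parallel paths this matches the closed form 1 - (1 - 0.81)(1 - 0.64) = 0.9316; the exhaustive loop is exponential in the number of edges, which is exactly the cost the BAT's pruning is designed to reduce.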

    A hybrid load flow and event driven simulation approach to multi-state system reliability evaluation

    Structural complexity of systems, coupled with their multi-state characteristics, renders their reliability and availability evaluation difficult. Notwithstanding the emergence of various techniques dedicated to complex multi-state system analysis, simulation remains the only approach applicable to realistic systems. However, most simulation algorithms are either system specific or limited to simple systems, since they require enumerating all possible system states, defining the cut-sets associated with each state, and monitoring their occurrence. In addition to being extremely tedious for large complex systems, state enumeration and cut-set definition require a detailed understanding of the system's failure mechanism. In this paper, a simple and generally applicable simulation approach, enhanced for multi-state systems of any topology, is presented. Here, each component is defined as a semi-Markov stochastic process, and via discrete-event simulation, the operation of the system is mimicked. The principles of flow conservation are invoked to determine flow across the system for every performance level change of its components using the interior-point algorithm. This eliminates the need for cut-set definition and overcomes the limitations of existing techniques. The methodology can also be exploited to account for effects of transmission efficiency and loading restrictions of components on system reliability and performance. The principles and algorithms developed are applied to two numerical examples to demonstrate their applicability.
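    As a rough illustration of the approach's two ingredients, the sketch below pairs a discrete-event loop over component performance-level changes with a flow computation across the network at each change. It is a heavily simplified stand-in, not the paper's method: holding times are exponential rather than general semi-Markov, the network topology and state sets are invented, and the interior-point flow solver is replaced by a plain Edmonds-Karp max-flow.

```python
import heapq
import random
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow; stands in for the paper's interior-point solver."""
    flow = defaultdict(int)
    adj = defaultdict(set)
    for u, v in cap:
        adj[u].add(v)
        adj[v].add(u)   # reverse direction needed for residual edges
    def augmenting_path():
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap.get((u, v), 0) - flow[(u, v)] > 0:
                    parent[v] = u
                    if v == t:
                        return parent
                    q.append(v)
        return None
    total = 0
    while (parent := augmenting_path()) is not None:
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap.get((u, w), 0) - flow[(u, w)] for u, w in path)
        for u, w in path:
            flow[(u, w)] += push
            flow[(w, u)] -= push
        total += push
    return total

# Hypothetical multi-state components: each edge sits at one of its levels.
STATES = {("s", "a"): (5, 3, 0), ("a", "t"): (5, 3, 0),
          ("s", "b"): (4, 0), ("b", "t"): (4, 0)}

def mean_throughput(states, s="s", t="t", horizon=500.0, seed=1):
    """Event-driven simulation: recompute system flow only when some
    component changes performance level (exponential holding times here)."""
    rng = random.Random(seed)
    level = {e: lv[0] for e, lv in states.items()}
    events = [(rng.expovariate(1.0), e) for e in states]
    heapq.heapify(events)
    now, acc = 0.0, 0.0
    while now < horizon:
        t_next, e = heapq.heappop(events)
        t_next = min(t_next, horizon)
        acc += max_flow(dict(level), s, t) * (t_next - now)
        now = t_next
        level[e] = rng.choice(states[e])  # jump to a random performance level
        heapq.heappush(events, (now + rng.expovariate(1.0), e))
    return acc / horizon

print(round(mean_throughput(STATES), 2))
```

    The key property carried over from the paper is that no cut-set is ever enumerated: the flow solver alone decides the system's performance level after each component transition.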

    Decision Diagram Based Symbolic Algorithm for Evaluating the Reliability of a Multistate Flow Network

    Evaluating the reliability of a Multistate Flow Network (MFN) is an NP-hard problem. Ordered binary decision diagrams (OBDDs), or variants thereof such as the multivalued decision diagram (MDD), are compact and efficient data structures suitable for large-scale problems. Two symbolic algorithms for evaluating the reliability of an MFN, MFN_OBDD and MFN_MDD, are proposed in this paper. In the algorithms, several operating functions are defined to prune the generated decision diagrams, so that the state space of capacity combinations is further compressed and the operational complexity of the decision diagrams is further reduced. Related theoretical proofs and a complexity analysis are also provided. Experimental results show the following: (1) compared to an existing decomposition algorithm, the proposed algorithms use less memory and fewer loops; (2) the number of nodes and variables of the MDD generated by the MFN_MDD algorithm is much smaller than that of the OBDD built by the MFN_OBDD algorithm; (3) in two cases with the same number of arcs, the proposed algorithms are more suitable for computing the reliability of sparse networks.
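    The full MFN_OBDD/MFN_MDD constructions are beyond a short example, but the core idea a decision diagram exploits (deciding one component at a time and sharing identical sub-problems) can be sketched for the simpler binary-state case. The bridge network and probabilities below are illustrative, and `lru_cache` plays the role of node sharing in the diagram.

```python
import functools

# Hypothetical bridge network: (u, v, edge reliability).
EDGES = (("s", "a", 0.9), ("s", "b", 0.8), ("a", "b", 0.7),
         ("a", "t", 0.8), ("b", "t", 0.9))

def reliability(edges=EDGES, s="s", t="t"):
    @functools.lru_cache(maxsize=None)   # shares repeated sub-diagrams
    def rec(i, comps):
        # comps: frozenset of frozensets, the connectivity induced by the
        # edges already decided to be 'up'.
        if any(s in c and t in c for c in comps):
            return 1.0        # s and t already joined: success regardless
        if i == len(edges):
            return 0.0
        u, v, p = edges[i]
        down = rec(i + 1, comps)                       # edge i fails
        cu = next((c for c in comps if u in c), frozenset([u]))
        cv = next((c for c in comps if v in c), frozenset([v]))
        merged = (comps - {cu, cv}) | {cu | cv}        # edge i works
        up = rec(i + 1, frozenset(merged))
        return p * up + (1 - p) * down
    return rec(0, frozenset())

print(round(reliability(), 5))  # prints 0.94876
```

    Conditioning by hand on the bridge edge a-b gives 0.7 x 0.9604 + 0.3 x 0.9216 = 0.94876, matching the recursion; the memoization is what keeps the number of distinct sub-problems far below the 2^n raw state vectors.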

    Cyber risk : an analysis of self-protection and the prediction of claims

    For a set of Brazilian companies, we study the occurrence of cyber risk claims by analyzing the impact of self-protection and predicting their occurrence. We bring a new perspective to the study of cyber risk by estimating the probability of acquiring protection against this type of risk with propensity scores. We consider whether acquiring cyber protection improves network security, using a matching method that allows a fair comparison among companies with similar characteristics. Our analysis of the Brazilian data shows that, despite informal arguments favoring self-protection against cyber risks as a tool to improve network security, the incidence of claims is higher in the presence of self-protection than without it. For predicting the occurrence of a claim, a feedforward multilayer perceptron neural network was built and its performance measured. Our results show that, when applied to the relevant information of the companies under study, the network performs very well, reaching an overall classification accuracy above 85%. Neural networks can therefore be an opportune aid in solving the problem presented.
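    The paper's company data are not public, so the sketch below only illustrates the two ingredients it combines, a propensity score from a logistic model and nearest-neighbour matching, on synthetic firm data. Every variable, coefficient, and sample size here is invented; the point is the mechanics of matching, not the paper's estimates.

```python
import math
import random

rng = random.Random(0)

# Synthetic firms: x is a confounder (e.g. firm size) that raises both the
# chance of buying protection (treatment) and the baseline claim level.
data = []
for _ in range(400):
    x = rng.gauss(0.0, 1.0)
    p_treat = 1 / (1 + math.exp(-1.5 * x))
    treated = int(rng.random() < p_treat)
    claims = 0.5 * x + (0.3 if treated else 0.0) + rng.gauss(0, 0.1)
    data.append((x, treated, claims))

def fit_propensity(data, lr=0.1, epochs=200):
    """One-feature logistic regression fitted by batch gradient descent."""
    w = b = 0.0
    n = len(data)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, t, _ in data:
            p = 1 / (1 + math.exp(-(w * x + b)))
            gw += (p - t) * x
            gb += (p - t)
        w -= lr * gw / n
        b -= lr * gb / n
    return lambda x: 1 / (1 + math.exp(-(w * x + b)))

score = fit_propensity(data)
treated = [(score(x), c) for x, t, c in data if t]
control = [(score(x), c) for x, t, c in data if not t]

# Match each treated firm to the control firm with the nearest propensity
# score, then average the outcome differences (a crude ATT estimate).
att = sum(c - min(control, key=lambda sc: abs(sc[0] - s))[1]
          for s, c in treated) / len(treated)
print(f"matched treatment effect on claims: {att:.2f}")
```

    Because treatment here was planted with a true effect of 0.3, the matched estimate should land in that vicinity; a naive unmatched difference of means would be inflated by the confounder, which is exactly the bias matching on the propensity score is meant to remove.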

    Efficient availability assessment of reconfigurable complex multi-state systems with interdependencies

    Complex topology, multi-state behaviour, component interdependencies and interactions with external phenomena are prominent attributes of many realistic systems. Analytical reliability evaluation techniques have limited applicability to such systems and efficient simulation models are therefore required. In this paper, we present a simulation framework to simplify the availability assessment of these systems. It allows tracking of changes in performance levels of components, from which system performance is deduced by solving a set of flow equations. This framework is adapted to the availability modelling of an offshore plant with interdependencies, operated in the presence of limited maintenance teams and operational loops. The underlying principles of the approach are based on an extension of the load-flow simulation presented recently by the current authors (George-Williams & Patelli 2016).
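    The offshore-plant model itself is not reproduced here; as a toy illustration of one interdependency the framework handles, limited maintenance teams, the sketch below runs a discrete-event availability simulation in which failed units must queue for one of a few repair crews. All parameters (failure and repair rates, the k-out-of-n success criterion, unit counts) are invented for the example.

```python
import heapq
import random

def availability(n=5, k=3, teams=1, mttf=100.0, mttr=10.0,
                 horizon=100000.0, seed=42):
    """Fraction of time at least k of n identical components work when only
    `teams` repair crews are available (failed units queue for a crew)."""
    rng = random.Random(seed)
    up, busy, queued = n, 0, 0
    events = [(rng.expovariate(1 / mttf), "fail", i) for i in range(n)]
    heapq.heapify(events)
    now = last = up_time = 0.0
    while now < horizon:
        now, kind, i = heapq.heappop(events)
        now = min(now, horizon)
        if up >= k:                  # account for the interval just ended
            up_time += now - last
        last = now
        if kind == "fail":
            up -= 1
            if busy < teams:         # a crew is free: repair starts at once
                busy += 1
                heapq.heappush(events,
                               (now + rng.expovariate(1 / mttr), "repair", i))
            else:                    # all crews busy: wait in the queue
                queued += 1
        else:                        # repair done (unit ids interchangeable)
            up += 1
            heapq.heappush(events,
                           (now + rng.expovariate(1 / mttf), "fail", i))
            if queued:
                queued -= 1
                heapq.heappush(events,
                               (now + rng.expovariate(1 / mttr), "repair", i))
            else:
                busy -= 1
    return up_time / horizon

print(round(availability(), 4))
```

    Re-running with more crews (`availability(teams=2)`) shows how the repair-team constraint, rather than component reliability alone, can govern system availability, which is the kind of operational interdependency the framework is built to capture.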

    Stochastic pump effect and geometric phases in dissipative and stochastic systems

    The success of Berry phases in quantum mechanics stimulated the study of similar phenomena in other areas of physics, including the theory of living cell locomotion and the motion of patterns in nonlinear media. More recently, geometric phases have been applied to systems operating in a strongly stochastic environment, such as molecular motors. We discuss such geometric effects in purely classical dissipative stochastic systems and their role in the theory of the stochastic pump effect (SPE). (Review, 35 pages; J. Phys. A: Math. Theor., in press.)

    How important are activation functions in regression and classification? A survey, performance comparison, and future directions

    Inspired by biological neurons, activation functions play an essential part in the learning process of any artificial neural network applied to real-world problems. Various activation functions have been proposed in the literature for classification as well as regression tasks. In this work, we survey the activation functions that have been employed in the past as well as the current state of the art. In particular, we present various developments in activation functions over the years and the advantages as well as disadvantages or limitations of these activation functions. We also discuss classical (fixed) activation functions, including rectifier units, and adaptive activation functions. In addition to a taxonomy of activation functions based on characterization, a taxonomy based on applications is presented. To this end, a systematic comparison of various fixed and adaptive activation functions is performed for classification data sets such as MNIST, CIFAR-10, and CIFAR-100. In recent years, a physics-informed machine learning framework has emerged for solving problems related to scientific computation, so we also discuss the requirements on activation functions used in that framework. Furthermore, various comparisons are made among different fixed and adaptive activation functions using machine learning libraries such as TensorFlow, PyTorch, and JAX. (28 pages, 15 figures.)
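    As a small concrete contrast between the fixed and adaptive families the survey discusses, the snippet below implements ReLU and the sigmoid alongside an adaptive swish whose slope parameter would, in practice, be learned together with the network's weights. The parameter values shown are illustrative, not fitted.

```python
import math

def relu(x):
    """Fixed rectifier unit: max(0, x)."""
    return max(0.0, x)

def sigmoid(x):
    """Fixed logistic activation."""
    return 1.0 / (1.0 + math.exp(-x))

def swish(x, beta=1.0):
    """Adaptive swish: x * sigmoid(beta * x). The trainable slope `beta`
    interpolates between a linear map (beta = 0 gives x/2) and a
    ReLU-like shape (large beta)."""
    return x * sigmoid(beta * x)

# How the trainable parameter reshapes the function at x = 1:
for beta in (0.0, 1.0, 10.0):
    print(beta, round(swish(1.0, beta), 4))
```

    The single scalar `beta` is the whole difference between the two families: a fixed activation commits to one shape for every layer and task, while an adaptive one lets gradient descent pick the shape per layer, which is what the survey's comparisons on MNIST and CIFAR quantify.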

    In Silico Sequence Optimization for the Reproducible Generation of DNA Structures

    Biologically, deoxyribonucleic acid (DNA) molecules have been used for information storage for more than 3 billion years. Today, modern synthesis tools have made it possible to use synthetic DNA molecules as a material for engineering nanoscale structures. These self-assembling structures are capable of both resolution as fine as 4 angstroms and programmed dynamic behavior. Numerous approaches for creating structures from DNA have been proposed and validated; however, it remains commonplace for engineered systems to exhibit unexpected behaviors such as low formation yields, poor performance, or total failure. It is plausible that at least some of these behaviors arise from the formation of non-target structures, but how to quantify and avoid these interfering structures remains a critical question. To evaluate the impacts of non-target structures on system behavior, three co-dependent scientific developments were necessary. First, three new optimization criteria for quantifying system quality were proposed and studied. This led to the discovery that relatively small intramolecular structures lead to surprisingly large deviations in system behavior, such as reaction kinetics. Second, a new heuristic algorithm for generating high-quality systems was developed. This algorithm enabled the experimental characterization of newly generated systems, validating the optimization criteria and confirming the finding that almost all kinetic variation can be explained by non-target intramolecular structures. Finally, these studies necessitated the creation of two new software tools: one for analyzing existing DNA systems (the “Device Profiler” software) and another for generating fit DNA systems (the “Sequence Evolver” software). In order to enable these tools to handle the size and complexity of state-of-the-art systems, it was necessary to invent efficient software implementations of the metrics and algorithm. The performance of the software was benchmarked against several alternative tools in use by the DNA nanotechnology community, with the results indicating a marked improvement in system quality over current state-of-the-art methods. Ultimately, the new optimization criteria, heuristic algorithm, and software cooperatively enabled an improved method for generating DNA systems with kinetically uniform behaviors.
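    The "Sequence Evolver" is described only at a high level above; as a toy illustration of heuristic sequence optimization against unwanted intramolecular structure, the sketch below hill-climbs on a crude hairpin proxy (reverse-complementary self-matches). Every detail here, the stem length, the scoring function, and the mutation scheme, is invented for the example and is far simpler than the real metrics.

```python
import random

COMP = {"A": "T", "T": "A", "C": "G", "G": "C"}

def revcomp(seq):
    return "".join(COMP[b] for b in reversed(seq))

def hairpin_score(seq, stem=4):
    """Proxy for intramolecular structure: count positions whose length-`stem`
    window can pair with a reverse complement later in the strand."""
    hits = 0
    for i in range(len(seq) - stem + 1):
        probe = revcomp(seq[i:i + stem])
        if probe in seq[i + stem:]:
            hits += 1
    return hits

def evolve(length=40, iters=2000, seed=0):
    """Hill climbing: mutate one base at a time, keep the mutation whenever
    the hairpin proxy does not get worse."""
    rng = random.Random(seed)
    seq = "".join(rng.choice("ATCG") for _ in range(length))
    best = hairpin_score(seq)
    for _ in range(iters):
        i = rng.randrange(length)
        cand = seq[:i] + rng.choice("ATCG") + seq[i + 1:]
        s = hairpin_score(cand)
        if s <= best:
            seq, best = cand, s
    return seq, best

seq, score = evolve()
print(score, seq)
```

    Even this crude loop usually drives the proxy near zero for short strands; the dissertation's contribution lies in metrics that actually predict kinetic behavior and in implementations efficient enough for systems of realistic size, neither of which this sketch attempts.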