
    Bi-objective modeling approach for repairing multiple feature infrastructure systems

    A bi-objective decision aid model for planning long-term maintenance of infrastructure systems is presented, oriented to interventions on their constituent elements, with two upgrade levels possible for each element (partial or full repair). The model aims at maximizing benefits and minimizing costs; its novelty lies in considering, and combining, the system/element structure, volume discounts, and socioeconomic factors. The model is tested with field data from 229 sidewalks (systems) and compared to two simpler repair policies that allow only partial or only full repairs. Results show that the efficiency gains are greatest in the lower mid-range budget region. The proposed modeling approach is an innovative tool to optimize costs and benefits for the various repair options and to analyze the respective trade-offs.
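    The core of such a bi-objective model can be illustrated with a brute-force sketch (hypothetical element data, not the paper's formulation): each element offers a no-repair, partial-repair, or full-repair option with a (cost, benefit) pair, and the non-dominated combinations form the cost/benefit trade-off frontier.

    ```python
    from itertools import product

    # Hypothetical data: for each element, three options
    # (no repair, partial repair, full repair) as (cost, benefit) pairs.
    elements = [
        [(0, 0), (4, 3), (9, 7)],   # element 1
        [(0, 0), (5, 4), (11, 9)],  # element 2
        [(0, 0), (3, 2), (7, 6)],   # element 3
    ]

    def pareto_front(elements):
        """Enumerate all option combinations and keep the non-dominated
        (cost, benefit) points: minimize cost, maximize benefit."""
        points = set()
        for combo in product(*elements):
            cost = sum(c for c, _ in combo)
            benefit = sum(b for _, b in combo)
            points.add((cost, benefit))
        return sorted(
            p for p in points
            if not any(q[0] <= p[0] and q[1] >= p[1] and q != p
                       for q in points)
        )

    print(pareto_front(elements))
    ```

    Real instances (229 systems, volume discounts, budget constraints) need integer programming rather than enumeration, but the trade-off structure being optimized is the same.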

    A novel planning approach for the water, sanitation and hygiene (WaSH) sector: the use of object-oriented Bayesian networks

    Conventional approaches to design and plan water, sanitation, and hygiene (WaSH) interventions are not suitable for capturing the increasing complexity of the context in which these services are delivered. Multidimensional tools are needed to unravel the links between access to basic services and the socio-economic drivers of poverty. This paper applies an object-oriented Bayesian network to reflect the main issues that determine access to WaSH services. A national Program in Kenya has been analyzed as an initial case study. The main findings suggest that the proposed approach is able to accommodate local conditions and to provide an accurate reflection of the complexities of WaSH issues, incorporating the uncertainty intrinsic to service delivery processes. Results indicate those areas in which policy makers should prioritize efforts and resources. Similarly, the study shows the effects of sector interventions, as well as the foreseen impact of various scenarios related to the national Program.
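    The basic inference step behind any such Bayesian network can be sketched with a two-node fragment and enumeration (the variables and probabilities below are illustrative assumptions, not values from the study):

    ```python
    # Hypothetical fragment: water-supply condition influences WaSH access.
    p_water = {"good": 0.6, "poor": 0.4}             # prior P(water)
    p_access_given_water = {"good": 0.9, "poor": 0.3}  # P(access | water)

    # Marginal probability of access, by enumeration over the parent.
    p_access = sum(p_water[w] * p_access_given_water[w] for w in p_water)

    # Diagnostic query via Bayes' rule: given that access is observed,
    # how likely is the water supply to be in good condition?
    p_good_given_access = (p_water["good"] * p_access_given_water["good"]
                           / p_access)
    ```

    An object-oriented network repeats fragments like this as reusable sub-models (one per service dimension), which is what lets the approach scale to the full WaSH context.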

    Techniques for the Fast Simulation of Models of Highly Dependable Systems

    With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of dependability measures of these models requires that the simulation frequently observes system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
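    The idea of importance sampling can be shown on a standard toy problem: estimating a small tail probability by sampling from a shifted distribution and reweighting by the likelihood ratio, so the rare event becomes common. (This is a generic sketch of the technique, not one of the paper's specific estimators.)

    ```python
    import math
    import random

    def is_tail_prob(threshold, n=100_000, seed=0):
        """Importance-sampling estimate of P(Z > threshold) for Z ~ N(0,1),
        drawing samples from N(threshold, 1) so exceedances are frequent."""
        rng = random.Random(seed)
        mu = threshold  # mean shift of the sampling distribution
        total = 0.0
        for _ in range(n):
            x = rng.gauss(mu, 1.0)
            if x > threshold:
                # Likelihood ratio phi(x) / phi(x - mu) for the mean shift.
                total += math.exp(-mu * x + mu * mu / 2)
        return total / n

    est = is_tail_prob(4.0)  # true value is about 3.17e-5
    ```

    Naive simulation would need on the order of millions of samples to see this event even a handful of times; the shifted sampler sees it in roughly half of all draws, which is the run-length reduction the abstract refers to.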

    Abridged Petri Nets

    A new graphical framework, Abridged Petri Nets (APNs), is introduced for bottom-up modeling of complex stochastic systems. APNs are similar to Stochastic Petri Nets (SPNs) inasmuch as they both rely on component-based representation of system state space, in contrast to Markov chains that explicitly model the states of an entire system. In both frameworks, so-called tokens (denoted as small circles) represent individual entities comprising the system; however, SPN graphs contain two distinct types of nodes (called places and transitions), with transitions serving the purpose of routing tokens among places. As a result, a pair of place nodes in SPNs can be linked to each other only via a transient stop, a transition node. In contrast, APN graphs link place nodes directly by arcs (transitions), similar to state space diagrams for Markov chains, and separate transition nodes are not needed. Tokens in APNs are distinct and have labels that can assume both discrete values ("colors") and continuous values ("ages"), both of which can change during simulation. Component interactions are modeled in APNs using triggers, which are either inhibitors or enablers (the inhibitors' opposites). Hierarchical construction of APNs relies on using stacks (layers) of submodels with automatically matching color policies. As a result, APNs provide at least the same modeling power as SPNs, but, as demonstrated by means of several examples, the resulting models are often more compact and transparent, therefore facilitating more efficient performance evaluation of complex systems.
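    The place-to-place arc structure described above can be sketched as a tiny discrete-event simulation (an illustrative approximation, not the paper's formalism): places are linked directly by rate-labeled arcs, and a token's "age" advances as exponential firing delays race against each other.

    ```python
    import random

    def simulate(arcs, start, horizon, seed=0):
        """Simulate one token on an APN-style graph.

        arcs: {place: [(next_place, rate), ...]} -- places linked directly
        by arcs, with no intermediate transition nodes.
        Returns the token's final place and age at the horizon."""
        rng = random.Random(seed)
        place, age = start, 0.0
        while age < horizon and arcs.get(place):
            # Race the outgoing arcs: the smallest exponential delay wins.
            delays = [(rng.expovariate(rate), nxt)
                      for nxt, rate in arcs[place]]
            dt, nxt = min(delays)
            if age + dt > horizon:
                return place, horizon
            age += dt
            place = nxt
        return place, age

    # A two-place failure/repair component: fails at rate 0.1,
    # is repaired at rate 1.0.
    arcs = {"working": [("failed", 0.1)], "failed": [("working", 1.0)]}
    ```

    Triggers (inhibitors/enablers) would enter this sketch as predicates that remove arcs from the race when another component's token is in a given place.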

    Supervisory Control for Behavior Composition

    We relate behavior composition, a synthesis task studied in AI, to supervisory control theory from the discrete event systems field. In particular, we show that realizing (i.e., implementing) a target behavior module (e.g., a house surveillance system) by suitably coordinating a collection of available behaviors (e.g., automatic blinds, doors, lights, cameras, etc.) amounts to imposing a supervisor onto a special discrete event system. Such a link allows us to leverage the solid foundations and extensive body of work on discrete event systems, including borrowing tools and ideas from that field. As evidence, we show how simple it is to introduce preferences in the mapped framework.
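    The supervisory-control idea can be illustrated in miniature (the automaton, states, and events below are invented for illustration, not the paper's construction): a supervisor observes the plant's state and disables those controllable events whose successor state would violate the specification.

    ```python
    # Hypothetical plant: a fragment of a house-surveillance scenario,
    # modeled as a finite automaton {(state, event): next_state}.
    plant = {
        ("s0", "open_blinds"): "s1",
        ("s0", "turn_on_camera"): "s2",
        ("s1", "turn_on_camera"): "s3",
    }
    bad_states = {"s2"}  # spec: camera must not start before blinds open
    controllable = {"open_blinds", "turn_on_camera"}

    def enabled(state):
        """Events the supervisor allows in `state`: uncontrollable events
        always pass; controllable events pass only if their successor
        satisfies the specification."""
        return sorted(
            e for (s, e), t in plant.items()
            if s == state and (e not in controllable or t not in bad_states)
        )

    print(enabled("s0"))
    ```

    Preferences, as mentioned in the abstract, would then rank the events that survive this filtering rather than merely permitting them.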

    Reactive point processes: A new approach to predicting power failures in underground electrical systems

    Reactive point processes (RPPs) are a new statistical model designed for predicting discrete events in time based on past history. RPPs were developed to handle an important problem within the domain of electrical grid reliability: short-term prediction of electrical grid failures ("manhole events"), including outages, fires, explosions and smoking manholes, which can cause threats to public safety and reliability of electrical service in cities. RPPs incorporate self-exciting, self-regulating and saturating components. The self-excitement occurs as a result of a past event, which causes a temporary rise in vulnerability to future events. The self-regulation occurs as a result of an external inspection which temporarily lowers vulnerability to future events. RPPs can saturate when too many events or inspections occur close together, which ensures that the probability of an event stays within a realistic range. Two of the operational challenges for power companies are (i) making continuous-time failure predictions, and (ii) cost/benefit analysis for decision making and proactive maintenance. RPPs are naturally suited for handling both of these challenges. We use the model to predict power-grid failures in Manhattan over a short-term horizon, and to provide a cost/benefit analysis of different proactive maintenance programs. Published at http://dx.doi.org/10.1214/14-AOAS789 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org/).
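    The three ingredients named in the abstract can be sketched as a toy intensity function (the kernels and saturation transform below are assumed functional forms for illustration, not the paper's exact model): past events push vulnerability up, past inspections push it down, and both effects saturate so the intensity stays in a realistic range.

    ```python
    import math

    def intensity(t, events, inspections, lam0=1.0, beta=0.5):
        """Toy RPP-style conditional intensity at time t.

        events:      times of past failures (self-excitation)
        inspections: times of past inspections (self-regulation)"""
        # Exponentially decaying influence of each past event/inspection.
        excite = sum(math.exp(-beta * (t - te)) for te in events if te < t)
        regulate = sum(math.exp(-beta * (t - ti))
                       for ti in inspections if ti < t)
        # tanh saturates each effect in [0, 1), bounding the intensity.
        return lam0 * (1 + math.tanh(excite) - math.tanh(regulate))
    ```

    With this shape, a burst of recent failures raises the intensity at most to 2·lam0 no matter how many occur, which is the saturation property the abstract describes.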

    An approximate approach for the joint problem of level of repair analysis and spare parts stocking

    For the spare parts stocking problem, generally METRIC type methods are used in the context of capital goods. A decision is assumed on which components to discard and which to repair upon failure, and where to perform repairs. In the military world, this decision is taken explicitly using the level of repair analysis (LORA). Since the LORA does not consider the availability of the capital goods, solving the LORA and spare parts stocking problems sequentially may lead to suboptimal solutions. Therefore, we propose an iterative algorithm. We compare its performance with that of the sequential approach and a recently proposed, so-called integrated algorithm that finds optimal solutions for two-echelon, single-indenture problems. On a set of such problems, the iterative algorithm turns out to be close to optimal. On a set of multi-echelon, multi-indenture problems, the iterative approach achieves a cost reduction of 3% on average (35% at maximum) as compared to the sequential approach. Its costs are only 0.6% more than those of the integrated algorithm on average (5% at maximum). Considering that the integrated algorithm may take a long time without guaranteeing optimality, we believe that the iterative algorithm is a good approach. This result is further strengthened in a case study, which has convinced Thales Nederland to start using the principles behind our algorithm.
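    The METRIC-style stocking step that such an algorithm iterates with the LORA can be sketched as marginal analysis (the item data are hypothetical and the sketch is single-site, ignoring the echelon/indenture structure): spares are bought one at a time, each time for the item offering the largest expected-backorder reduction per unit cost under Poisson pipeline demand.

    ```python
    import math

    def ebo(mean, s, kmax=100):
        """Expected backorders with s spares on hand and Poisson
        pipeline demand with the given mean: E[(X - s)^+]."""
        return sum((k - s) * math.exp(-mean) * mean**k / math.factorial(k)
                   for k in range(s + 1, kmax))

    def greedy_stock(items, budget):
        """items: {name: (demand_mean, unit_cost)}. Greedily buy the
        spare with the best backorder reduction per unit of cost."""
        stock = {name: 0 for name in items}
        spent = 0.0
        while True:
            best = None
            for name, (m, cost) in items.items():
                if spent + cost > budget:
                    continue
                gain = (ebo(m, stock[name]) - ebo(m, stock[name] + 1)) / cost
                if best is None or gain > best[0]:
                    best = (gain, name, cost)
            if best is None:
                break  # nothing affordable remains
            _, name, cost = best
            stock[name] += 1
            spent += cost
        return stock
    ```

    The iterative algorithm of the abstract would alternate a step like this with a LORA step, feeding the resulting holding costs back into the repair/discard decision until the two stabilize.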