4,031 research outputs found

    On green routing and scheduling problem

    The vehicle routing and scheduling problem has been studied with much interest within the last four decades. In this paper, some of the existing literature dealing with routing and scheduling problems with environmental issues is reviewed, and a description is provided of the problems that have been investigated and how they are treated using combinatorial optimization tools.
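    The models surveyed typically attach an emissions term to the routing objective. As a rough illustration (not taken from any specific paper in the review), the sketch below estimates the CO2 of a single route as a function of distance and remaining load, the kind of cost a green routing model would minimize; the coefficients and data are placeholder values.

```python
# Illustrative only: placeholder emission coefficients and a toy distance matrix.
def route_emissions(route, dist, demand,
                    empty_kg_per_km=0.75, kg_per_km_per_ton=0.05):
    """Approximate CO2 (kg) for serving `route` (customer ids) from depot 0."""
    load = sum(demand[c] for c in route)      # tons on board when leaving the depot
    co2, prev = 0.0, 0
    for c in route + [0]:                     # visit customers, then return to depot
        co2 += dist[prev][c] * (empty_kg_per_km + kg_per_km_per_ton * load)
        load -= demand.get(c, 0)              # deliver at each stop
        prev = c
    return co2

dist = {0: {1: 10, 2: 15}, 1: {0: 10, 2: 7}, 2: {0: 15, 1: 7}}
demand = {1: 2.0, 2: 1.5}
print(route_emissions([1, 2], dist, demand))  # emissions of depot -> 1 -> 2 -> depot
```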

    Integration of cost-risk assessment of denial of service within an intelligent maintenance system

    As organisations become richer in data, the asset management function will increasingly have to use intelligent systems to control condition monitoring systems and organise maintenance. In the future, the UK rail industry anticipates having to optimize capacity by running trains closer to each other. In this situation maintenance becomes extremely problematic: within such a high-performance network, a relatively minor fault will affect more trains and passengers, and such denial of service causes reputational damage for the industry and fines levied against the infrastructure owner, Network Rail. Intelligent systems used to control condition monitoring will need to optimize for several factors; minimizing denial of service will be one of them. With schedules anticipated to become increasingly complicated, detailed estimation methods will be extremely difficult to implement. Cost prediction of maintenance activities tends to be expert-driven and to require extensive detail, making automation of such an activity difficult. A stochastic approach is therefore needed to predict the denial of service arising from any required maintenance, and good uncertainty modelling will help to increase confidence in the estimates. This paper details the challenges the UK railway industry faces with regard to cost modelling of maintenance activities and outlines an example of a suitable cost model for quantifying cost uncertainty. The proposed uncertainty quantification is based on historical cost data and interpretation of its statistical distributions. These estimates are then integrated in a cost model to obtain accurate uncertainty measurements of outputs through Monte-Carlo simulation. An additional criterion was that the model be suitable for integration into an existing prototype integrated intelligent maintenance system. It is anticipated that applying an integrated maintenance management system will exert significant downward pressure on maintenance budgets and reduce denial of service, so accurate cost estimation is of great importance if the anticipated cost efficiencies are to be achieved. While the rail industry has been the focus of this work, other industries have been considered and the approach is expected to be applicable to many other organisations across several asset-management-intensive industries.
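    As an illustration of the kind of uncertainty quantification described above, the following sketch assumes (my assumption, not the paper's) that a maintenance job's cost is labour plus parts plus a denial-of-service penalty, each drawn from a distribution that would in practice be fitted to historical cost records, and propagates them through the cost model by Monte-Carlo sampling.

```python
# Minimal Monte-Carlo cost-uncertainty sketch. The distribution families, parameters
# and the 2,000 GBP/hour penalty are illustrative assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                                            # number of Monte-Carlo samples

labour  = rng.lognormal(mean=7.0, sigma=0.4, size=N)   # GBP; would be fitted to history
parts   = rng.gamma(shape=2.0, scale=500.0, size=N)    # GBP
delay_h = rng.exponential(scale=1.5, size=N)           # hours of denial of service
penalty = 2_000.0 * delay_h                            # GBP per hour of delay (assumed)

total = labour + parts + penalty
lo, hi = np.percentile(total, [2.5, 97.5])
print(f"mean cost: {total.mean():,.0f} GBP, 95% interval: {lo:,.0f} - {hi:,.0f} GBP")
```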

    Survivable algorithms and redundancy management in NASA's distributed computing systems

    The design of survivable algorithms requires a solid foundation for executing them. While hardware techniques for fault-tolerant computing are relatively well understood, fault-tolerant operating systems, as well as fault-tolerant applications (survivable algorithms), are, by contrast, little understood, and much more work in this field is required. We outline some of our work that contributes to the foundation of ultrareliable operating systems and fault-tolerant algorithm design. We introduce our consensus-based framework for fault-tolerant system design. This is followed by a description of a hierarchical partitioning method for efficient consensus. A scheduler for redundancy management is introduced, and application-specific fault tolerance is described. We give an overview of our hybrid algorithm technique, which is an alternative to the formal approach given.
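    One simple instance of application-level redundancy management of the kind described above is N-modular redundancy with majority voting; the toy sketch below (my illustration, not one of the paper's algorithms) runs a task on several replicas and accepts the majority output, masking a minority of faulty replicas.

```python
# Toy N-modular redundancy: majority voting over replica outputs.
from collections import Counter

def vote(results):
    """Return the majority value among replica results, or None if there is no majority."""
    value, count = Counter(results).most_common(1)[0]
    return value if count > len(results) // 2 else None

def run_replicated(task, data, replicas):
    """Execute `task` on every replica and vote on the collected outputs."""
    return vote([replica(task, data) for replica in replicas])

ok     = lambda f, x: f(x)        # healthy replica
faulty = lambda f, x: f(x) + 1    # replica with a value fault
print(run_replicated(sum, [1, 2, 3], [ok, ok, faulty]))   # -> 6, the fault is masked
```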

    Lazy Model Expansion: Interleaving Grounding with Search

    Finding satisfying assignments for the variables involved in a set of constraints can be cast as a (bounded) model generation problem: search for (bounded) models of a theory in some logic. The state-of-the-art approach to bounded model generation for rich knowledge representation languages, like ASP, FO(.) and Zinc, is ground-and-solve: reduce the theory to a ground or propositional one and apply a search algorithm to the resulting theory. An important bottleneck is the blowup in the size of the theory caused by the reduction phase. Lazily grounding the theory during search is a way to overcome this bottleneck. We present a theoretical framework and an implementation in the context of the FO(.) knowledge representation language. Instead of grounding all parts of a theory, justifications are derived for some parts of it. Given a partial assignment for the grounded part of the theory and valid justifications for the formulas of the non-grounded part, the justifications provide a recipe to construct a complete assignment that satisfies the non-grounded part. When a justification for a particular formula becomes invalid during search, a new one is derived; if that fails, the formula is split into a part to be grounded and a part that can be justified. The theoretical framework captures existing approaches for tackling the grounding bottleneck, such as lazy clause generation and grounding-on-the-fly, and presents a generalization of the 2-watched literal scheme. We present an algorithm for lazy model expansion and integrate it in a model generator for FO(ID), a language extending first-order logic with inductive definitions. The algorithm is implemented as part of the state-of-the-art FO(ID) Knowledge-Base System IDP. Experimental results illustrate the power and generality of the approach.
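    The core loop can be pictured roughly as follows; this is a simplified paraphrase of lazy model expansion, not the IDP implementation, and the `solve`, `justify` and `ground` callables are hypothetical stand-ins for the solver, justification and grounding machinery.

```python
# Simplified sketch of the lazy model expansion loop described above.
def lazy_model_expansion(ground_theory, lazy_formulas, solve, justify, ground):
    """ground_theory: already-ground formulas; lazy_formulas: still non-ground ones."""
    while True:
        assignment = solve(ground_theory)            # search on the ground part only
        if assignment is None:
            return None                              # ground part is unsatisfiable
        pending = [f for f in lazy_formulas
                   if not justify(f, assignment)]    # try to re-derive justifications
        if not pending:
            return assignment                        # justifications extend it to a model
        for f in pending:                            # justification failed: split the
            lazy_formulas.remove(f)                  # formula and ground (part of) it
            ground_theory.extend(ground(f))
```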

    Extensible Automated Constraint Modelling

    In constraint solving, a critical bottleneck is the formulation of an effective constraint model of a given problem. The CONJURE system described in this paper, a substantial step forward over the prototype versions of CONJURE previously reported, makes a valuable contribution to the automation of constraint modelling by automatically producing constraint models from their specifications in the abstract constraint specification language ESSENCE. A set of rules is used to refine an abstract specification into a concrete constraint model. We demonstrate that this set of rules is readily extensible to increase the space of possible constraint models CONJURE can produce. Our empirical results confirm that CONJURE can successfully reproduce the kernels of the constraint models of 32 benchmark problems found in the literature.
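    Refinement rules of this kind map an abstract ESSENCE construct to one of several concrete representations; the toy below (my illustration, not CONJURE's actual rule language) rewrites an abstract set-of-int variable either as an occurrence vector of 0/1 variables or as an explicit list of members, showing how adding rules enlarges the space of models that can be produced.

```python
# Toy "refinement rules": two alternative concretisations of an abstract set variable.
def refine_occurrence(name, n):
    """Set of int(1..n) as n 0/1 occurrence variables."""
    variables = [f"{name}_occ[{i}]" for i in range(1, n + 1)]
    constraints = [f"0 <= {v} <= 1" for v in variables]
    return variables, constraints

def refine_explicit(name, n, max_card):
    """Set of int(1..n) as an ordered list of at most max_card members."""
    variables = [f"{name}_elem[{i}]" for i in range(1, max_card + 1)]
    constraints = [f"1 <= {v} <= {n}" for v in variables]
    constraints += [f"{variables[i]} < {variables[i + 1]}" for i in range(max_card - 1)]
    return variables, constraints

print(refine_occurrence("S", 4))   # occurrence representation of a set S of int(1..4)
print(refine_explicit("S", 4, 2))  # explicit representation with at most 2 members
```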