972 research outputs found

    On the vehicle routing problem with time windows

    Budget-constrained cut problems

    The minimum and maximum cuts of an undirected edge-weighted graph are classic problems in graph theory. While the Min-Cut Problem can be solved in polynomial time, the Max-Cut Problem is NP-Complete. Exact and heuristic methods have been developed for solving them. For both problems, we introduce a natural extension in which cutting an edge induces a cost. Our goal is to find a cut that minimizes the sum of the cut weights while restricting its total cut cost to a given budget. We prove that both restricted problems are NP-Complete and we also study some of their properties. Finally, we develop exact algorithms to solve both, as well as a non-exact algorithm for the min-cut case based on a Lagrangian relaxation that generally provides optimal solutions. Their performance is reported through extensive computational experiments. Comment: 21 pages, 6 figures, 11 tables.
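    As a rough illustration of the budgeted min-cut variant described above, the sketch below applies a Lagrangian-relaxation heuristic: the cut cost is priced into the edge weights via a multiplier, and an ordinary global min-cut is solved for each multiplier value. This is only a hedged sketch, not the authors' algorithm; the NetworkX Stoer-Wagner routine, the `weight`/`cost` edge attributes, and the fixed multiplier grid are assumptions made for the example.

```python
# Hedged sketch of a Lagrangian-relaxation heuristic for budget-constrained
# min-cut (illustrative only, not the paper's algorithm). For a multiplier
# lam >= 0 the cut cost is priced into the edge weights, an ordinary
# (unconstrained) global min-cut is computed with Stoer-Wagner, and the best
# budget-feasible cut found over the scanned multipliers is kept.
import networkx as nx

def budgeted_min_cut(G, budget, lambdas=(0.0, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0)):
    """G: connected undirected graph with 'weight' and 'cost' on every edge."""
    best = None  # (cut_weight, (S, T))
    for lam in lambdas:  # a subgradient or bisection search on lam would be sharper
        for _, _, d in G.edges(data=True):
            d["combined"] = d["weight"] + lam * d["cost"]
        _, (S, T) = nx.stoer_wagner(G, weight="combined")
        S = set(S)
        cut = [d for u, v, d in G.edges(data=True) if (u in S) != (v in S)]
        cut_cost = sum(d["cost"] for d in cut)
        cut_weight = sum(d["weight"] for d in cut)
        if cut_cost <= budget and (best is None or cut_weight < best[0]):
            best = (cut_weight, (S, set(T)))
    return best  # None if no scanned multiplier yielded a budget-feasible cut
```

    A proper implementation would update the multiplier by subgradient or bisection steps rather than scanning a fixed grid.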

    Path Optimization for the Resource-Constrained Searcher

    Naval Research Logistics
    We formulate and solve a discrete-time path-optimization problem where a single searcher, operating in a discretized 3-dimensional airspace, looks for a moving target in a finite set of cells. The searcher is constrained by maximum limits on the consumption of several resources such as time, fuel, and risk along any path. We develop a specialized branch-and-bound algorithm for this problem that utilizes several network reduction procedures as well as a new bounding technique based on Lagrangian relaxation and network expansion. The resulting algorithm outperforms a state-of-the-art algorithm for solving time-constrained problems and is also the first algorithm to solve multi-constrained problems.
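    The multi-resource feasibility pruning at the heart of such a search can be illustrated with a toy depth-first enumeration, shown below. This is a hedged sketch under assumed names (`adj`, `limits`, a generic per-edge `reward`), not the specialized branch-and-bound from the paper, which additionally uses network reductions and a Lagrangian bounding step to prune by value.

```python
# Hedged sketch: depth-first enumeration of simple paths with pruning on
# several resource limits (e.g. time, fuel, risk). The paper's specialized
# branch-and-bound also prunes by a Lagrangian bound; here only
# resource-feasibility pruning and a simple incumbent update are used.
def best_constrained_path(adj, source, target, limits):
    """adj[u] -> list of (v, reward, resources); limits: per-resource caps.
    Returns (best_reward, best_path) over simple source->target paths whose
    accumulated resource consumption stays within the limits."""
    best = [float("-inf"), None]

    def extend(u, path, reward, used):
        if u == target:
            if reward > best[0]:
                best[0], best[1] = reward, list(path)
            return
        for v, r, res in adj.get(u, []):
            if v in path:                                    # keep paths simple
                continue
            nxt = tuple(a + b for a, b in zip(used, res))
            if any(a > cap for a, cap in zip(nxt, limits)):  # resource limit exceeded
                continue
            path.append(v)
            extend(v, path, reward + r, nxt)
            path.pop()

    extend(source, [source], 0.0, tuple(0.0 for _ in limits))
    return best[0], best[1]
```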

    A Local Search Modeling for Constrained Optimum Paths Problems (Extended Abstract)

    Constrained Optimum Path (COP) problems appear in many real-life applications, especially in communication networks. Some of these problems have been considered and solved by problem-specific techniques which are usually difficult to extend. In this paper, we introduce a novel modeling for solving some COPs by local search. The modeling features compositionality, modularity, and reuse, and strengthens the benefits of Constraint-Based Local Search. We also apply the modeling to the edge-disjoint paths problem (EDP). We show that side constraints can easily be added to the model. Computational results show the significance of the approach.

    Optimisation of large scale network problems

    The Constrained Shortest Path Problem (CSPP) consists of finding the shortest path in a graph or network that satisfies one or more resource constraints. Without these constraints, the shortest path problem can be solved in polynomial time; with them, the CSPP is NP-hard and no polynomial-time algorithm is known for solving it optimally. The problem arises in a number of practical situations. In the case of vehicle path planning, the vehicle may be an aircraft flying through a region with obstacles such as mountains or radar detectors, with an upper bound on fuel consumption, travel time, or risk of attack. The vehicle may be a submarine travelling through a region with sonar detectors, with a time or risk budget. These problems all involve a network which is a discrete model of the physical domain. Another example is the routing of voice and data in a communications network such as a mobile phone network, where the constraints may include maximum call delays or relay node capacities. This is a problem of current economic importance, and one for which time-sensitive solutions are not always available, especially if the networks are large.

    We consider the simplest form of the problem, large grid networks with a single side constraint, which have been studied in the literature. This thesis explores the application of Constraint Programming combined with Lagrange Relaxation to achieve optimal or near-optimal solutions of the CSPP. The following is a brief outline of the contribution of this thesis. Lagrange Relaxation may or may not achieve optimal or near-optimal results on its own, and large duality gaps are often present. We make a simple modification to Dijkstra's algorithm, involving no additional computational work, in order to generate an estimate of path time at every node. We then use this information to constrain the network along a bisecting meridian. The combination of Lagrange Relaxation (LR) and a heuristic for filtering along the meridian provides an aggressive method for finding near-optimal solutions in a short time.

    Two network problems are studied in this work. The first is a Submarine Transit Path problem in which the transit field contains four sonar detectors at known locations, each with the same detection profile. The side constraint is the total transit time, with the submarine capable of 2 speeds. For the single-speed case, the initial LR duality gap may be as high as 30%. The first hybrid method uses a single centre meridian to constrain the network based on the unused time resource, and is able to produce solutions that are generally within 1% of optimal and always below 3%. Using the computation time for the initial Lagrange Relaxation as a baseline, the average computation time for the first hybrid method is about 30% to 50% higher, and the worst-case CPU times are 2 to 4 times higher.

    The second problem is a randomly valued network from the literature. Edge costs, times, and lengths are integers generated uniformly at random in a given range. Since the values given in the literature problems do not yield problems with a high duality gap, the values are varied, and from a population of approximately 100,000 problems only the worst 200 from each set are chosen for study. These problems have an initial LR duality gap as high as 40%. A second hybrid method is developed, using values for the unused time resource and the lower-bound values computed by Dijkstra's algorithm as part of the LR method. The computed values are then used to position multiple constraining meridians in order to allow LR to find better solutions. This second hybrid method is able to produce solutions that are generally within 0.1% of optimal, with computation times that are on average 2 times the initial Lagrange Relaxation time, and in the worst case only about 5 times higher.

    The best method for solving the Constrained Shortest Path Problem reported in the literature thus far is the LRE-A method of Carlyle et al. (2007), which uses Lagrange Relaxation for preprocessing followed by a bounded search using aggregate constraints. We replace Lagrange Relaxation with the second hybrid method and show that optimal solutions are produced for both network problems with computation times that are between one and two orders of magnitude faster than LRE-A. In addition, these hybrid methods combined with the bounded search are up to 2 orders of magnitude faster than the commercial CPlex package using a straightforward MILP formulation of the problem. Finally, the second hybrid method is used as a preprocessing step on both network problems, prior to running CPlex. This preprocessing reduces the network size sufficiently to allow CPlex to solve all cases to optimality up to 3 orders of magnitude faster than without this preprocessing, and up to an order of magnitude faster than using Lagrange Relaxation for preprocessing.

    Chapter 1 provides a review of the thesis and some terminology used. Chapter 2 reviews previous approaches to the CSPP, in particular the two current best methods. Chapter 3 applies Lagrange Relaxation to the Submarine Transit Path problem with 2 speeds, to provide a baseline for comparison. The problem is reduced to a single speed, which demonstrates the large duality gap possible with Lagrange Relaxation, and the first hybrid method is introduced. Chapter 4 examines a grid network problem using randomly generated edge costs and weights, and introduces the second hybrid method. Chapter 5 then applies the second hybrid method to both network problems as a preprocessing step, using both CPlex and a bounded search method from the literature to solve to optimality. The conclusion of this thesis and directions for future work are discussed in Chapter 6.
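    The core Lagrangian step described above can be sketched as a Dijkstra run on a combined edge weight cost + lam * time that also records the time of the chosen path to every settled node, which is the extra per-node information the thesis exploits. This is a hedged illustration under assumed names (`adj`, `lam`), not the thesis implementation; the multiplier search and the meridian filtering are omitted.

```python
# Hedged sketch of the Lagrangian-relaxation step for the CSPP: Dijkstra on the
# combined weight cost + lam * time, additionally recording the cost and time
# of the chosen path to every settled node (the per-node time estimate used to
# place the constraining meridians). Not the thesis implementation.
import heapq

def lr_dijkstra(adj, source, lam):
    """adj[u] -> list of (v, cost, time). Returns per-node combined label,
    path cost, and path time for every reachable node."""
    label = {source: 0.0}        # combined label: cost + lam * time
    path_cost = {source: 0.0}
    path_time = {source: 0.0}    # path-time estimate at every node
    settled = set()
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in settled:
            continue
        settled.add(u)
        for v, c, t in adj.get(u, []):
            nd = d + c + lam * t
            if v not in label or nd < label[v]:
                label[v] = nd
                path_cost[v] = path_cost[u] + c
                path_time[v] = path_time[u] + t
                heapq.heappush(heap, (nd, v))
    return label, path_cost, path_time
```

    An outer search on lam (for example, bisection) would then raise the multiplier while the returned path exceeds the time limit and lower it otherwise, closing as much of the duality gap as the relaxation allows.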

    Higher-Order Regularization in Computer Vision

    At the core of many computer vision models lies the minimization of an objective function consisting of a sum of functions with few arguments. The order of the objective function is defined as the highest number of arguments of any summand. To reduce ambiguity and noise in the solution, regularization terms are included in the objective function, enforcing different properties of the solution. The most commonly used regularization is penalization of boundary length, which requires a second-order objective function. Most of this thesis is devoted to introducing higher-order regularization terms and presenting efficient minimization schemes. One of the topics of the thesis covers a reformulation of a large class of discrete functions into an equivalent form. The reformulation is shown, both in theory and in practical experiments, to be advantageous for higher-order regularization models based on curvature and second-order derivatives. Another topic is the parametric max-flow problem. An analysis is given, showing its inherent limitations for large-scale problems, which are common in computer vision. The thesis also introduces a segmentation approach for finding thin and elongated structures in 3D volumes. Using a line-graph formulation, it is shown how to efficiently regularize with respect to higher-order differential geometric properties such as curvature and torsion. Furthermore, an efficient optimization approach for a multi-region model is presented which, in addition to standard regularization, is able to enforce geometric constraints such as inclusion or exclusion of different regions. The final part of the thesis deals with dense stereo estimation. A new regularization model is introduced, penalizing the second-order derivatives of a depth or disparity map. Compared to previous second-order approaches to dense stereo estimation, the new regularization model is shown to be more easily optimized.
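    To make the notion of order concrete under assumed notation (not taken from the thesis): a sum of unary data terms and pairwise boundary-length terms is a second-order objective, while a regularizer coupling three variables raises the order to three.

```latex
% Illustrative only; D_i, V, W and the neighbourhood sets are assumed notation.
% Second-order objective: unary data terms plus pairwise (boundary-length) terms.
E_2(x) = \sum_i D_i(x_i) \;+\; \lambda \sum_{(i,j) \in \mathcal{N}} V(x_i, x_j)
% Third-order objective: summands with three arguments, e.g. curvature-like
% terms defined over triples of neighbouring variables.
E_3(x) = \sum_i D_i(x_i) \;+\; \mu \sum_{(i,j,k) \in \mathcal{T}} W(x_i, x_j, x_k)
```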