
    Median problems in networks

    The P-median problem is a classical location model “par excellence”. In this paper, we first examine the early origins of the problem, formulated independently by Louis Hakimi and Charles ReVelle, two of the fathers of the burgeoning multidisciplinary field of research known today as Facility Location Theory and Modelling. We then examine some of the traditional heuristic and exact methods developed to solve the problem. In the third section we analyze the impact of the model in the field. We end the paper by proposing new lines of research related to this classical problem.
    Keywords: P-median, location modelling
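
    For readers unfamiliar with the model, the P-median problem is commonly stated as the following integer program; the notation here is illustrative (w_i is the demand at node i, d_{ij} the distance from demand node i to candidate site j, y_j = 1 if a facility is opened at j, and x_{ij} = 1 if demand node i is assigned to j):

        \begin{aligned}
        \min \quad & \sum_{i \in I} \sum_{j \in J} w_i\, d_{ij}\, x_{ij} \\
        \text{s.t.} \quad & \sum_{j \in J} x_{ij} = 1 \qquad \forall i \in I \\
        & x_{ij} \le y_j \qquad \forall i \in I,\ j \in J \\
        & \sum_{j \in J} y_j = p \\
        & x_{ij},\, y_j \in \{0,1\}
        \end{aligned}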

    Solving a "Hard" Problem to Approximate an "Easy" One: Heuristics for Maximum Matchings and Maximum Traveling Salesman Problems

    We consider geometric instances of the Maximum Weighted Matching Problem (MWMP) and the Maximum Traveling Salesman Problem (MTSP) with up to 3,000,000 vertices. Making use of a geometric duality relationship between the MWMP, the MTSP, and the Fermat-Weber problem (FWP), we develop a heuristic approach that yields solutions as well as upper bounds in near-linear time. Using various computational tools, we get solutions within considerably less than 1% of the optimum. An interesting feature of our approach is that, even though the FWP is hard to compute in theory and Edmonds' algorithm for maximum weighted matching yields a polynomial-time solution for the MWMP, the practical behavior is just the opposite, and we can solve the FWP with high accuracy in order to find a good heuristic solution for the MWMP.
    Comment: 20 pages, 14 figures, LaTeX; to appear in Journal of Experimental Algorithms, 200
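
    As an illustration of the Fermat-Weber side of this duality, below is a minimal sketch of the classical Weiszfeld iteration for computing a (weighted) Fermat-Weber point; this is not the authors' implementation, and the function name, tolerance, and iteration limit are placeholder choices.

        import numpy as np

        def weiszfeld(points, weights=None, tol=1e-8, max_iter=1000):
            """Approximate the (weighted) Fermat-Weber point of a planar point set."""
            pts = np.asarray(points, dtype=float)
            w = np.ones(len(pts)) if weights is None else np.asarray(weights, dtype=float)
            x = np.average(pts, axis=0, weights=w)                 # start at the weighted centroid
            for _ in range(max_iter):
                d = np.maximum(np.linalg.norm(pts - x, axis=1), 1e-12)  # guard against division by zero
                coef = w / d
                x_new = (coef[:, None] * pts).sum(axis=0) / coef.sum()
                if np.linalg.norm(x_new - x) < tol:
                    return x_new
                x = x_new
            return x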

    Adiabatic Quantum Computing for Random Satisfiability Problems

    The discrete formulation of adiabatic quantum computing is compared with other search methods, classical and quantum, for random satisfiability (SAT) problems. With the number of steps growing only as the cube of the number of variables, the adiabatic method gives solution probabilities close to 1 for problem sizes feasible to evaluate via simulation on current computers. However, for these sizes the minimum energy gaps of most instances are fairly large, so the good performance scaling seen for small problems may not reflect asymptotic behavior, where costs are dominated by tiny gaps. Moreover, the resulting search costs are much higher than for other methods. Variants of the quantum algorithm that do not match the adiabatic limit give lower costs, on average, and slower growth than the conventional GSAT heuristic method.
    Comment: added discussion of discrete adiabatic method, and simulations with 30 bits; 8 pages, 8 figures
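
    For context, GSAT is a classical local-search heuristic for SAT: start from a random assignment and repeatedly flip the variable whose flip satisfies the most clauses, restarting when stuck. A minimal sketch is given below; clauses are lists of signed integers (e.g. -3 for the negation of variable 3), and the flip and restart limits are illustrative parameters, not values from the paper.

        import random

        def gsat(clauses, n_vars, max_flips=1000, max_restarts=10):
            """Greedy SAT local search: flip the variable that maximizes satisfied clauses."""
            def num_sat(assign):
                return sum(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

            for _ in range(max_restarts):
                assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}
                for _ in range(max_flips):
                    if num_sat(assign) == len(clauses):
                        return assign                        # satisfying assignment found
                    best_v, best_score = None, -1
                    for v in range(1, n_vars + 1):           # evaluate every possible flip
                        assign[v] = not assign[v]
                        score = num_sat(assign)
                        assign[v] = not assign[v]
                        if score > best_score:
                            best_v, best_score = v, score
                    assign[best_v] = not assign[best_v]      # commit the best flip
            return None                                      # no satisfying assignment found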

    Confidence driven TGV fusion

    We introduce a novel model for spatially varying variational data fusion, driven by point-wise confidence values. The proposed model allows for the joint estimation of the data and the confidence values based on the spatial coherence of the data. We discuss the main properties of the introduced model, as well as suitable algorithms for estimating the solution of the corresponding biconvex minimization problem and their convergence. The performance of the proposed model is evaluated considering the problem of depth image fusion, using both synthetic and real data from publicly available datasets.
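
    As a much-simplified illustration of the role of point-wise confidences (only the data-weighting idea, without the TGV regularizer or the joint estimation of confidences described in the paper), the sketch below fuses a stack of depth hypotheses by per-pixel confidence-weighted averaging; all names and array shapes are illustrative assumptions.

        import numpy as np

        def confidence_weighted_fusion(depths, confidences, eps=1e-8):
            """Fuse depth hypotheses by per-pixel confidence-weighted averaging.

            depths      : (k, H, W) array of depth hypotheses
            confidences : (k, H, W) array of non-negative per-pixel confidences
            """
            d = np.asarray(depths, dtype=float)
            c = np.asarray(confidences, dtype=float)
            return (c * d).sum(axis=0) / (c.sum(axis=0) + eps)   # eps avoids division by zero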

    Curse of dimensionality reduction in max-plus based approximation methods: theoretical estimates and improved pruning algorithms

    Max-plus based methods have been recently developed to approximate the value function of possibly high-dimensional optimal control problems. A critical step of these methods consists in approximating a function by a supremum of a small number of functions (max-plus "basis functions") taken from a prescribed dictionary. We study several variants of this approximation problem, which we show to be continuous versions of the facility location and k-center combinatorial optimization problems, in which the connection costs arise from a Bregman distance. We give theoretical error estimates, quantifying the number of basis functions needed to reach a prescribed accuracy. We derive from our approach a refinement of the curse-of-dimensionality-free method introduced previously by McEneaney, with a higher accuracy for a comparable computational cost.
    Comment: 8 pages, 5 figures
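
    To convey the combinatorial flavor of the basis-selection (pruning) step, here is a minimal greedy sketch, not the authors' improved algorithm: given the approximation gap of each candidate basis function at a set of sample points (assuming each basis function under-approximates the target), it keeps the k functions whose pointwise maximum has the smallest worst-case gap. The names and the input layout are illustrative assumptions.

        import numpy as np

        def greedy_maxplus_pruning(gaps, k):
            """Greedily select k basis functions minimizing the sup-norm approximation gap.

            gaps : (m, n) array; gaps[i, j] = target(x_j) - phi_i(x_j) >= 0,
                   the gap of candidate basis function i at sample point x_j
            k    : number of basis functions to keep
            """
            m, n = gaps.shape
            chosen = []
            current = np.full(n, np.inf)                 # gap of the max-plus approximation so far
            for _ in range(min(k, m)):
                best_i, best_err = None, np.inf
                for i in range(m):
                    if i in chosen:
                        continue
                    err = np.minimum(current, gaps[i]).max()   # worst-case gap if i is added
                    if err < best_err:
                        best_i, best_err = i, err
                chosen.append(best_i)
                current = np.minimum(current, gaps[best_i])
            return chosen, current.max()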