
    Exact algorithms for the order picking problem

    Order picking is the problem of collecting a set of products in a warehouse in a minimum amount of time. It is currently a major bottleneck in supply chains because of its cost in time and labor. This article presents two exact and effective algorithms for this problem. Firstly, a sparse mixed-integer programming formulation is strengthened by preprocessing and valid inequalities. Secondly, a dynamic programming approach generalizing known algorithms for two or three cross-aisles is proposed and evaluated experimentally. The performance of these algorithms is reported and compared with the Traveling Salesman Problem (TSP) solver Concorde.
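
    As a toy illustration of why order picking reduces to a small TSP (this is not the paper's MIP or dynamic program), the sketch below enumerates all pick sequences for a handful of items and returns the cheapest depot-to-depot tour. The grid coordinates and Manhattan distance are illustrative assumptions, not a faithful warehouse metric with aisles and racks.

```python
# Minimal sketch: exact order picking on a toy layout by brute-force enumeration.
from itertools import permutations

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def shortest_picking_tour(depot, picks):
    """Return the cheapest depot -> all picks -> depot tour (exact, O(k!))."""
    best_cost, best_order = float("inf"), None
    for order in permutations(picks):
        stops = (depot,) + order + (depot,)
        cost = sum(manhattan(u, v) for u, v in zip(stops, stops[1:]))
        if cost < best_cost:
            best_cost, best_order = cost, order
    return best_cost, best_order

# Depot at the front cross-aisle, picks given as (aisle, shelf) coordinates.
depot = (0, 0)
picks = [(2, 5), (4, 1), (1, 3), (3, 4)]
print(shortest_picking_tour(depot, picks))
```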

    A domination algorithm for {0,1}-instances of the travelling salesman problem

    We present an approximation algorithm for {0,1}-instances of the travelling salesman problem which performs well with respect to combinatorial dominance. More precisely, we give a polynomial-time algorithm which has domination ratio 1 - n^{-1/29}. In other words, given a {0,1}-edge-weighting of the complete graph K_n on n vertices, our algorithm outputs a Hamilton cycle H^* of K_n with the following property: the proportion of Hamilton cycles of K_n whose weight is smaller than that of H^* is at most n^{-1/29}. Our analysis is based on a martingale approach. Previously, the best result in this direction was a polynomial-time algorithm with domination ratio 1/2 - o(1) for arbitrary edge-weights. We also prove a hardness result showing that, if the Exponential Time Hypothesis holds, there exists a constant C such that n^{-1/29} cannot be replaced by exp(-(log n)^C) in the result above. Comment: 29 pages (final version to appear in Random Structures and Algorithms).
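
    The domination ratio is concrete enough to compute exactly on tiny instances. The sketch below is a brute-force check, not the paper's polynomial-time algorithm: it enumerates every Hamilton cycle of a small 0/1-weighted K_n and reports the fraction whose weight is at least that of a candidate cycle.

```python
# Toy illustration: domination ratio of a candidate Hamilton cycle by full enumeration.
from itertools import permutations
import random

def cycle_weight(cycle, w):
    return sum(w[frozenset((cycle[i], cycle[(i + 1) % len(cycle)]))]
               for i in range(len(cycle)))

def domination_ratio(candidate, w, n):
    """Fraction of all Hamilton cycles whose weight is >= that of `candidate`."""
    target = cycle_weight(candidate, w)
    dominated = total = 0
    for rest in permutations(range(1, n)):
        if rest[0] > rest[-1]:        # skip the reversed duplicate of each cycle
            continue
        total += 1
        if cycle_weight((0,) + rest, w) >= target:
            dominated += 1
    return dominated / total

# Example: n = 6, random 0/1 edge weights, candidate = the identity cycle.
n = 6
w = {frozenset((i, j)): random.randint(0, 1)
     for i in range(n) for j in range(i + 1, n)}
print(domination_ratio(tuple(range(n)), w, n))
```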

    Reordering Rows for Better Compression: Beyond the Lexicographic Order

    Sorting database tables before compressing them improves the compression rate. Can we do better than the lexicographical order? For minimizing the number of runs in a run-length encoding compression scheme, the best approaches to row-ordering are derived from traveling salesman heuristics, although there is a significant trade-off between running time and compression. A new heuristic, Multiple Lists, which is a variant on Nearest Neighbor that trades off compression for a major running-time speedup, is a good option for very large tables. However, for some compression schemes, it is more important to generate long runs than few runs. For this case, another novel heuristic, Vortex, is promising. We find that we can improve run-length encoding by up to a factor of 3, whereas we can improve prefix coding by up to 80%; these gains are on top of the gains due to lexicographically sorting the table. We prove that, in a few cases, the new row reordering is within 10% of optimal at minimizing the runs of identical values within columns. Comment: to appear in ACM TOD
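
    The quantity being minimized, the total number of runs per column, is easy to state in code. The sketch below counts column runs for a given row order and applies a plain Nearest Neighbor reordering by Hamming distance; it is a baseline illustration of the idea the paper builds on, not the Multiple Lists or Vortex heuristics themselves.

```python
# Count column runs for a row order, and greedily reorder rows by Hamming distance.
def column_runs(rows):
    """Total number of runs over all columns for the given row order."""
    runs = len(rows[0])                      # the first row starts one run per column
    for prev, cur in zip(rows, rows[1:]):
        runs += sum(a != b for a, b in zip(prev, cur))
    return runs

def nearest_neighbor_order(rows):
    """Greedy reordering: always append the unused row closest in Hamming distance."""
    remaining = list(rows)
    ordered = [remaining.pop(0)]
    while remaining:
        last = ordered[-1]
        nxt = min(remaining, key=lambda r: sum(a != b for a, b in zip(last, r)))
        remaining.remove(nxt)
        ordered.append(nxt)
    return ordered

table = [("b", 2), ("a", 1), ("b", 1), ("a", 2)]
print(column_runs(table),                              # original order
      column_runs(sorted(table)),                      # lexicographic sort
      column_runs(nearest_neighbor_order(sorted(table))))  # Nearest Neighbor on top
```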

    Improving the Asymmetric TSP by Considering Graph Structure

    Recent work on cost-based relaxations has improved Constraint Programming (CP) models for the Traveling Salesman Problem (TSP). We provide a short survey of solving the asymmetric TSP with CP. Then, we suggest new implied propagators based on general graph properties. We experimentally show that such implied propagators bring robustness on pathological instances and highlight the fact that graph structure can significantly improve the behavior of search heuristics. Finally, we show that our approach outperforms current state-of-the-art results. Comment: Technical report
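
    To give a flavor of what reasoning on graph structure can look like in this setting (an illustrative check only, not the implied propagators proposed in the paper), the sketch below rejects a partial ATSP model whenever the digraph of still-allowed arcs is not strongly connected, since no Hamiltonian circuit can exist in that case.

```python
# Simple structural filter: a Hamiltonian circuit requires strong connectivity.
def strongly_connected(n, arcs):
    """True iff the digraph on vertices 0..n-1 with the given arcs is strongly connected."""
    def reaches_all(adj):
        seen, stack = {0}, [0]
        while stack:
            u = stack.pop()
            for v in adj.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return len(seen) == n

    fwd, rev = {}, {}
    for u, v in arcs:
        fwd.setdefault(u, []).append(v)
        rev.setdefault(v, []).append(u)
    return reaches_all(fwd) and reaches_all(rev)

# A circuit through all of 0..3 is impossible here: vertex 3 has no outgoing arc.
print(strongly_connected(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))  # False
```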

    Deep Policy Dynamic Programming for Vehicle Routing Problems

    Routing problems are a class of combinatorial problems with many practical applications. Recently, end-to-end deep learning methods have been proposed to learn approximate solution heuristics for such problems. In contrast, classical dynamic programming (DP) algorithms guarantee optimal solutions, but scale badly with the problem size. We propose Deep Policy Dynamic Programming (DPDP), which aims to combine the strengths of learned neural heuristics with those of DP algorithms. DPDP prioritizes and restricts the DP state space using a policy derived from a deep neural network, which is trained to predict edges from example solutions. We evaluate our framework on the travelling salesman problem (TSP), the vehicle routing problem (VRP) and the TSP with time windows (TSPTW) and show that the neural policy improves the performance of (restricted) DP algorithms, making them competitive with strong alternatives such as LKH, while also outperforming most other 'neural approaches' for solving TSPs, VRPs and TSPTWs with 100 nodes. Comment: 21 pages
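
    The core mechanism, a dynamic program whose state space is pruned by a scoring policy, can be sketched compactly. Below is a beam-restricted Held-Karp DP for the TSP in which the score is simply the partial tour cost; in DPDP the score would instead come from a learned edge heat-map, so this is an illustration of the restriction idea, not the authors' implementation.

```python
# Held-Karp DP over (visited set, last city) states, keeping only the best `beam`
# states per layer. The scoring here (partial cost) is a placeholder for a policy.
def restricted_tsp_dp(dist, beam=100):
    n = len(dist)
    layer = {(frozenset([0]), 0): 0.0}            # start at city 0
    for _ in range(n - 1):
        nxt = {}
        for (visited, last), cost in layer.items():
            for city in range(n):
                if city in visited:
                    continue
                key = (visited | {city}, city)
                new_cost = cost + dist[last][city]
                if new_cost < nxt.get(key, float("inf")):
                    nxt[key] = new_cost
        # policy restriction: keep only the `beam` best-scoring states
        layer = dict(sorted(nxt.items(), key=lambda kv: kv[1])[:beam])
    return min(cost + dist[last][0] for (_, last), cost in layer.items())

dist = [[0, 2, 9, 10], [1, 0, 6, 4], [15, 7, 0, 8], [6, 3, 12, 0]]
print(restricted_tsp_dp(dist))
```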

    Cover-Encodings of Fitness Landscapes

    The traditional way of tackling discrete optimization problems is to use local search on suitably defined cost or fitness landscapes. Such approaches are, however, limited by the slowdown that occurs when local minima, a feature of the typically rugged landscapes encountered, arrest the progress of the search. Another way of tackling optimization problems is to use heuristic approximations to estimate a global cost minimum. Here we present a combination of these two approaches by using cover-encoding maps, which map processes from a larger search space to subsets of the original search space. The key idea is to construct cover-encoding maps with the help of suitable heuristics that single out near-optimal solutions and result in landscapes on the larger search space that no longer exhibit trapping local minima. We present cover-encoding maps for the traveling salesman, number partitioning, maximum matching and maximum clique problems; the practical feasibility of our method is demonstrated by simulations of adaptive walks on the corresponding encoded landscapes, which find the global minima for these problems. Comment: 15 pages, 4 figures
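
    The sketch below conveys only the general spirit of searching a larger space whose points decode, via a heuristic, into solutions of the original problem; it is not the paper's cover-encoding construction. For number partitioning, the larger space is taken to be orderings of the numbers, a greedy decoder turns each ordering into a partition, and an adaptive walk operates on the orderings.

```python
# Adaptive walk on orderings, decoded into partitions by a greedy heuristic.
import random

def decode(order, nums):
    """Greedy decoder: place each number, in the given order, into the lighter bin."""
    bins = [0, 0]
    for i in order:
        bins[bins[0] > bins[1]] += nums[i]
    return abs(bins[0] - bins[1])               # fitness: partition imbalance

def adaptive_walk(nums, steps=10_000):
    order = list(range(len(nums)))
    best = decode(order, nums)
    for _ in range(steps):
        i, j = random.sample(range(len(nums)), 2)
        order[i], order[j] = order[j], order[i]  # propose a swap of two positions
        cand = decode(order, nums)
        if cand <= best:
            best = cand                          # accept non-worsening moves
        else:
            order[i], order[j] = order[j], order[i]  # undo the swap
    return best

nums = [random.randint(1, 1000) for _ in range(30)]
print(adaptive_walk(nums))
```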

    Sparse experimental design: an effective and efficient way of discovering better genetic algorithm structures

    The focus of this paper is the demonstration that sparse experimental design is a useful strategy for developing Genetic Algorithms (GAs). It is increasingly apparent from a number of reports and papers across a variety of problem domains that the 'best' structure for a GA may depend on the application. The GA structure is defined as both the types of operators and the parameter settings used during operation. The differences observed may be linked to the nature of the problem, the type of fitness function, or the depth or breadth of the problem under investigation. This paper demonstrates that advanced experimental design may be adopted to increase the understanding of the relationships between the GA structure and the problem domain, facilitating the selection of improved structures with a minimum of effort.
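
    One standard way to spread a small number of GA trial runs evenly over a parameter space is a Latin hypercube design; the sketch below is a generic illustration of that idea, not the specific sparse design used in the paper, and the parameter ranges shown are assumptions.

```python
# Latin hypercube sampling of GA parameter settings: one stratified sample per
# parameter dimension, shuffled independently, then recombined into candidate runs.
import random

def latin_hypercube(param_ranges, n_samples):
    design = []
    for low, high in param_ranges:
        cut = [low + (high - low) * (i + random.random()) / n_samples
               for i in range(n_samples)]      # one point in each stratum
        random.shuffle(cut)
        design.append(cut)
    return list(zip(*design))                  # rows = candidate GA structures

# Illustrative parameters: population size, crossover rate, mutation rate.
params = [(20, 200), (0.5, 1.0), (0.001, 0.1)]
for popsize, pc, pm in latin_hypercube(params, 8):
    print(round(popsize), round(pc, 2), round(pm, 4))
```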