
    Cosolver2B: An Efficient Local Search Heuristic for the Travelling Thief Problem

    Real-world problems are often very difficult to optimize; nevertheless, many researchers keep solving benchmark problems that have been extensively investigated over the last decades, even though they have very few direct applications. The Travelling Thief Problem (TTP) is an NP-hard optimization problem that aims to provide a more realistic model. TTP particularly targets routing problems under packing/loading constraints, which arise in supply chain management and transportation. In this paper, TTP is presented and formulated mathematically. A combined local search algorithm is proposed and compared with Random Local Search (RLS) and an Evolutionary Algorithm (EA). The obtained results are quite promising, since new, better solutions were found.
    Comment: 12th ACS/IEEE International Conference on Computer Systems and Applications (AICCSA) 2015, November 17-20, 2015
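
    A minimal Python sketch of the commonly used TTP objective may help make the model concrete: a tour plus a packing plan is scored as the collected profit minus the renting cost of the travel time, where the thief slows down as the knapsack fills. The function name and data layout below are illustrative assumptions; this follows the standard TTP formulation, not necessarily the exact model or code of this paper.

        # Score a TTP solution: total item profit minus renting cost of travel time.
        # dist[i][j]: distance between cities; profit/weight: per-item values;
        # W: knapsack capacity; R: renting rate; v_max/v_min: speed bounds.
        def evaluate_ttp(tour, packing, dist, profit, weight, W, R, v_max, v_min):
            nu = (v_max - v_min) / W              # speed lost per unit of carried weight
            total_profit, total_time, load = 0.0, 0.0, 0.0
            n = len(tour)
            for i, city in enumerate(tour):
                for item in packing.get(city, []):  # items picked up in this city
                    load += weight[item]
                    total_profit += profit[item]
                if load > W:                        # packing plan exceeds capacity
                    return float("-inf")
                nxt = tour[(i + 1) % n]
                speed = v_max - nu * load           # current speed depends on the load
                total_time += dist[city][nxt] / speed
            return total_profit - R * total_time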

    Neural Architecture Search using Deep Neural Networks and Monte Carlo Tree Search

    Neural Architecture Search (NAS) has shown great success in automating the design of neural networks, but the prohibitive amount of computation behind current NAS methods requires further work on improving sample efficiency and network evaluation cost in order to get better results in a shorter time. In this paper, we present a novel scalable Monte Carlo Tree Search (MCTS) based NAS agent, named AlphaX, to tackle these two aspects. AlphaX improves search efficiency by adaptively balancing exploration and exploitation at the state level, and by using a Meta-Deep Neural Network (Meta-DNN) to predict network accuracies, biasing the search toward promising regions. To amortize the network evaluation cost, AlphaX accelerates MCTS rollouts with a distributed design and reduces the number of epochs needed to evaluate a network via transfer learning guided by the tree structure in MCTS. In 12 GPU days and 1000 samples, AlphaX found an architecture that reaches 97.84% top-1 accuracy on CIFAR-10 and 75.5% top-1 accuracy on ImageNet, exceeding SOTA NAS methods in both accuracy and sample efficiency. We also evaluate AlphaX on NASBench-101, a large-scale NAS dataset; AlphaX is 3x and 2.8x more sample efficient than Random Search and Regularized Evolution, respectively, in finding the global optimum. Finally, we show that the searched architecture improves a variety of vision applications, from Neural Style Transfer to Image Captioning and Object Detection.
    Comment: To appear in the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-2020)
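
    The search strategy the abstract describes can be illustrated with a short, hedged Python sketch of UCT-style selection in which a surrogate predictor biases under-explored children; Node, predict_accuracy, and the constants here are illustrative stand-ins, not AlphaX's actual implementation.

        import math

        # A tree node holding an architecture encoding plus MCTS statistics.
        class Node:
            def __init__(self, arch, parent=None):
                self.arch, self.parent = arch, parent
                self.children, self.visits, self.value_sum = [], 0, 0.0

        def uct_score(child, parent_visits, c=1.4, predict_accuracy=lambda a: 0.5):
            # Unvisited children are ranked by the surrogate's predicted accuracy,
            # which is how a meta-model can bias the search toward promising regions.
            if child.visits == 0:
                return predict_accuracy(child.arch) + c
            exploit = child.value_sum / child.visits
            explore = c * math.sqrt(math.log(parent_visits) / child.visits)
            return exploit + explore

        def select(root):
            # Walk down the tree, always following the child with the best UCT score.
            node = root
            while node.children:
                node = max(node.children, key=lambda ch: uct_score(ch, node.visits))
            return node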

    An Efficient Local Search for Partial Latin Square Extension Problem

    A partial Latin square (PLS) is a partial assignment of n symbols to an n x n grid such that, in each row and in each column, each symbol appears at most once. The partial Latin square extension problem is an NP-hard problem that asks for a largest extension of a given PLS. In this paper we propose an efficient local search for this problem. We focus on local search in which the neighborhood is defined by (p,q)-swap, i.e., removing exactly p symbols and then assigning symbols to at most q empty cells. For p in {1,2,3}, our neighborhood search algorithm finds an improved solution or concludes that no such solution exists in O(n^{p+1}) time. We also propose a novel swap operation, Trellis-swap, which is a generalization of (1,q)-swap and (2,q)-swap; the corresponding Trellis-neighborhood search algorithm finds an improved solution or concludes that none exists in O(n^{3.5}) time. Using these neighborhood search algorithms, we design a prototype iterated local search algorithm and show its effectiveness in comparison with state-of-the-art optimization solvers such as IBM ILOG CPLEX and LocalSolver.
    Comment: 17 pages, 2 figures
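
    The Latin property and the flavour of a (1,q)-swap move can be shown in a short Python sketch: remove one placed symbol, then greedily refill empty cells. This is only an illustration of the neighborhood's definition, with hypothetical function names and a naive scan, not the paper's O(n^{p+1}) neighborhood search.

        # grid is an n x n list of lists; None marks an empty cell, symbols are 0..n-1.
        def can_place(grid, r, c, s):
            # Symbol s fits in cell (r, c) if it appears in neither row r nor column c.
            n = len(grid)
            return all(grid[r][j] != s for j in range(n)) and \
                   all(grid[i][c] != s for i in range(n))

        def one_q_swap(grid, r, c):
            # Remove the symbol at (r, c), then greedily assign symbols to empty cells;
            # the move is an improvement if more than one cell ends up filled.
            n = len(grid)
            grid[r][c] = None
            filled = 0
            for i in range(n):
                for j in range(n):
                    if grid[i][j] is None:
                        for s in range(n):
                            if can_place(grid, i, j, s):
                                grid[i][j] = s
                                filled += 1
                                break
            return filled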

    Genetic Optimization Using Derivatives: The rgenoud Package for R

    genoud is an R function that combines evolutionary algorithm methods with a derivative-based (quasi-Newton) method to solve difficult optimization problems. genoud may also be used for optimization problems for which derivatives do not exist. genoud solves problems that are nonlinear or perhaps even discontinuous in the parameters of the function to be optimized. When the function to be optimized (for example, a log-likelihood) is nonlinear in the model's parameters, the function will generally not be globally concave and may have irregularities such as saddle points or discontinuities. Optimization methods that rely on derivatives of the objective function may be unable to find any optimum at all. Multiple local optima may exist, so that there is no guarantee that a derivative-based method will converge to the global optimum. On the other hand, algorithms that do not use derivative information (such as pure genetic algorithms) are, for many problems, needlessly poor at local hill climbing. Most statistical problems are regular in a neighborhood of the solution. Therefore, for some portion of the search space, derivative information is useful. The function supports parallel processing on multiple CPUs on a single machine or a cluster of computers.
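
    The hybrid strategy described above, population-based search with derivative-based polishing, can be sketched in a few lines of Python; this uses SciPy's BFGS only to illustrate the idea, and the function and parameter names are illustrative assumptions rather than rgenoud's API.

        import numpy as np
        from scipy.optimize import minimize

        def hybrid_minimize(f, bounds, pop_size=50, generations=30, seed=0):
            # Evolve a real-valued population and polish the best point with BFGS.
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
            for _ in range(generations):
                fitness = np.apply_along_axis(f, 1, pop)
                parents = pop[np.argsort(fitness)[: pop_size // 2]]        # truncation selection
                children = parents + rng.normal(0.0, 0.1, parents.shape)   # Gaussian mutation
                pop = np.vstack([parents, np.clip(children, lo, hi)])
                # Quasi-Newton refinement of the current best individual.
                best = pop[np.argmin(np.apply_along_axis(f, 1, pop))]
                pop[0] = np.clip(minimize(f, best, method="BFGS").x, lo, hi)
            return pop[np.argmin(np.apply_along_axis(f, 1, pop))]

    For example, hybrid_minimize(lambda x: float((x ** 2).sum()), [(-5, 5)] * 3) should return a point near the origin.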