    Evaluation of advanced optimisation methods for estimating Mixed Logit models

    The performance of different simulation-based estimation techniques for mixed logit modeling is evaluated. A quasi-Monte Carlo method (modified Latin hypercube sampling) is compared with a Monte Carlo algorithm with dynamic accuracy. The classic Broyden-Fletcher-Goldfarb-Shanno (BFGS) line-search approach is also compared with trust region methods, which have proved extremely powerful in nonlinear programming. Numerical tests are performed on two real data sets: stated preference data for parking type collected in the United Kingdom, and revealed preference data for mode choice collected as part of a German travel diary survey. Several criteria are used to evaluate the approximation quality of the log-likelihood function, the accuracy of the results, and the associated estimation runtime. Results suggest that the trust region approach outperforms the BFGS approach and that Monte Carlo methods remain competitive with quasi-Monte Carlo methods in high-dimensional problems, especially when an adaptive optimization algorithm is used.
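    The comparison described here is straightforward to reproduce in outline. Below is a minimal Python sketch, not the paper's code: a simulated log-likelihood for a binary mixed logit with one normally distributed coefficient, using modified-Latin-hypercube-style draws and comparing scipy's BFGS and trust-region solvers. The synthetic data, the number of draws, and the model itself are illustrative assumptions.

```python
# A minimal sketch, not the paper's code: simulated log-likelihood for a
# binary mixed logit with one normally distributed coefficient, estimated
# with scipy's BFGS and trust-region solvers. Data, draws, and model are
# illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import norm

rng = np.random.default_rng(0)
N, R = 500, 100                            # individuals, simulation draws

# Synthetic data: one attribute x and a binary choice y generated from a
# mixed logit whose coefficient is distributed beta ~ N(1.0, 0.5).
x = rng.normal(size=N)
beta_i = rng.normal(1.0, 0.5, size=N)
y = (rng.random(N) < expit(beta_i * x)).astype(float)

# Modified-Latin-hypercube-style draws: a randomly shifted stratified grid
# on (0, 1) per individual, mapped through the inverse normal CDF.
u = (np.arange(R) + rng.random((N, 1))) / R
draws = norm.ppf(u)                        # shape (N, R)

def neg_sim_loglik(theta):
    mu, sigma = theta[0], abs(theta[1])
    beta = mu + sigma * draws                        # coefficient draws (N, R)
    p = expit(beta * x[:, None])                     # choice probability per draw
    p_choice = np.where(y[:, None] == 1.0, p, 1.0 - p)
    sim_p = np.maximum(p_choice.mean(axis=1), 1e-300)
    return -np.log(sim_p).sum()                      # negative simulated log-likelihood

x0 = np.array([0.0, 1.0])
bfgs = minimize(neg_sim_loglik, x0, method="BFGS")
trust = minimize(neg_sim_loglik, x0, method="trust-constr")
print("BFGS:        ", bfgs.x, "in", bfgs.nfev, "evaluations")
print("trust region:", trust.x, "in", trust.nfev, "evaluations")
```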

    A Multi-Layer Line Search Method to Improve the Initialization of Optimization Algorithms (Preprint submitted to Optimization Online)

    We introduce a novel metaheuristic methodology to improve the initialization of a given deterministic or stochastic optimization algorithm. Our objective is to improve the performance of the considered algorithm, called the core optimization algorithm, by reducing its number of cost function evaluations, increasing its success rate, and boosting the precision of its results. In our approach, the core optimization is treated as a suboptimization problem for a multi-layer line search method. The approach is presented and implemented for various core optimization algorithms: Steepest Descent, Heavy-Ball, Genetic Algorithm, Differential Evolution and Controlled Random Search. We validate our methodology on a set of low- and high-dimensional benchmark problems (i.e., problems of dimension between 2 and 1000). The results are compared with those obtained with the core optimization algorithms alone and with two additional global optimization methods (Direct Tabu Search and Continuous Greedy Randomized Adaptive Search), which also aim to improve the initial condition for the core algorithms. The numerical results seem to indicate that our approach improves the performance of the core optimization algorithms and generates algorithms more efficient than the other optimization methods studied here. A Matlab optimization package called "Global Optimization Platform" (GOP), implementing the algorithms presented here, has been developed and can be downloaded at: http://www.mat.ucm.es/momat/software.ht
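    To make the idea concrete, here is a minimal Python sketch, not the GOP package: the core optimizer's final cost is viewed as a function of its starting point, and a simple single-layer line search over starting points picks the best run. The choice of core algorithm (plain steepest descent), the test function, and all budgets are illustrative assumptions.

```python
# A minimal single-layer sketch of the initialization idea; the core
# algorithm, test function, and budgets are illustrative assumptions.
import numpy as np

def steepest_descent(f, grad, x0, lr=0.002, iters=500):
    """Core optimization algorithm: fixed-step gradient descent."""
    x = x0.copy()
    for _ in range(iters):
        x -= lr * grad(x)
    return x, f(x)

def line_search_init(f, grad, x0, n_lines=5, n_steps=11, radius=5.0, seed=0):
    """Outer layer: for a few random directions d, run the core algorithm
    from the points x0 + t*d and keep the best final result."""
    rng = np.random.default_rng(seed)
    best_x, best_val = steepest_descent(f, grad, x0)
    for _ in range(n_lines):
        d = rng.normal(size=x0.shape)
        d /= np.linalg.norm(d)
        for t in np.linspace(-radius, radius, n_steps):
            xt, val = steepest_descent(f, grad, x0 + t * d)
            if val < best_val:
                best_x, best_val = xt, val
    return best_x, best_val

# Multimodal test function: the 2-D Rastrigin function and its gradient.
f = lambda x: 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
grad = lambda x: 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)

x0 = np.array([3.3, -2.7])
print("core algorithm alone: ", steepest_descent(f, grad, x0))
print("with line-search init:", line_search_init(f, grad, x0))
```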

    Learning to Race through Coordinate Descent Bayesian Optimisation

    In the automation of many kinds of processes, the observable outcome can often be described as the combined effect of an entire sequence of actions, or controls, applied throughout its execution. In these cases, strategies that optimise control policies for individual stages of the process might not be applicable, and instead the whole policy might have to be optimised at once. On the other hand, the cost of evaluating the policy's performance might also be high, so it is desirable that a solution be found with as few interactions as possible with the real system. We consider the problem of optimising control policies to allow a robot to complete a given race track in a minimum amount of time. We assume that the robot has no prior information about the track or its own dynamical model, just an initial valid driving example. Localisation is only applied to monitor the robot and to provide an indication of its position along the track's centre axis. We propose a method for finding a policy that minimises the time per lap while keeping the vehicle on the track, using a Bayesian optimisation (BO) approach over a reproducing kernel Hilbert space. We apply an algorithm to search more efficiently over high-dimensional policy-parameter spaces with BO, by iterating over each dimension individually in a sequential coordinate descent-like scheme. Experiments demonstrate the performance of the algorithm against other methods in a simulated car racing environment.

    Comment: Accepted as a conference paper for the 2018 IEEE International Conference on Robotics and Automation (ICRA).
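    A minimal Python sketch of the coordinate-descent idea follows (not the paper's implementation, which works in a reproducing kernel Hilbert space): rather than fitting a surrogate over the full policy space, a one-dimensional Gaussian process is fit per coordinate and expected improvement is maximised along that coordinate only. The toy objective, bounds, and budgets are illustrative assumptions.

```python
# A minimal sketch of coordinate-descent Bayesian optimisation; the toy
# objective, bounds, and budgets are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expected_improvement(mu, sigma, best):
    """EI for minimisation: expected reduction below the current best."""
    z = (best - mu) / np.maximum(sigma, 1e-9)
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def cd_bayes_opt(f, x0, bounds, sweeps=3, evals_per_dim=8):
    x, best = x0.copy(), f(x0)
    for _ in range(sweeps):
        for d in range(len(x)):                 # one coordinate at a time
            X, Y = [[x[d]]], [best]
            for _ in range(evals_per_dim):
                gp = GaussianProcessRegressor(kernel=RBF(1.0),
                                              alpha=1e-6, normalize_y=True)
                gp.fit(np.array(X), np.array(Y))
                cand = np.linspace(*bounds[d], 200)[:, None]
                mu, sigma = gp.predict(cand, return_std=True)
                x[d] = cand[np.argmax(expected_improvement(mu, sigma, min(Y)))][0]
                X.append([x[d]]); Y.append(f(x))    # evaluate the full policy
            x[d] = X[int(np.argmin(Y))][0]          # keep the best value found
            best = min(Y)
    return x, best

# Toy stand-in for lap time as a function of policy parameters.
f = lambda x: np.sum((x - 0.7) ** 2) + 1.0
print(cd_bayes_opt(f, np.zeros(4), [(-2.0, 2.0)] * 4))
```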

    Robots that can adapt like animals

    As robots leave the controlled environments of factories to autonomously function in more complex, natural environments, they will have to respond to the inevitable fact that they will become damaged. However, while animals can quickly adapt to a wide variety of injuries, current robots cannot "think outside the box" to find a compensatory behavior when damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes, without requiring self-diagnosis or pre-specified contingency plans. Before deployment, a robot exploits a novel algorithm to create a detailed map of the space of high-performing behaviors: this map represents the robot's intuitions about what behaviors it can perform and their value. If the robot is damaged, it uses these intuitions to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a compensatory behavior that works in spite of the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new technique will enable more robust, effective, autonomous robots, and suggests principles that animals may use to adapt to injury.
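    The "detailed map of the space of high-performing behaviors" is built with an evolutionary illumination algorithm (MAP-Elites). Below is a minimal Python sketch of that map-building step, not the authors' code: random variation fills a grid of behavior niches, keeping the highest-performing solution found for each niche. The toy genome, behavior descriptor, and fitness function are illustrative assumptions.

```python
# A minimal MAP-Elites-style map builder; the toy genome, behavior
# descriptor, and fitness function are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
GRID = 20                                   # bins per behavior dimension
archive = {}                                # niche index -> (genome, fitness)

def evaluate(genome):
    """Toy stand-in for a simulated gait: returns (behavior bin, fitness)."""
    behavior = np.clip((np.tanh(genome[:2]) + 1) / 2, 0, 0.999)  # in [0, 1)^2
    fitness = -np.sum((genome - 0.5) ** 2)                       # to maximise
    return tuple((behavior * GRID).astype(int)), fitness

for _ in range(20000):
    if archive and rng.random() < 0.9:      # usually mutate a random elite
        parent, _ = archive[list(archive)[rng.integers(len(archive))]]
        genome = parent + rng.normal(0, 0.1, size=parent.shape)
    else:                                   # otherwise sample a fresh genome
        genome = rng.normal(0, 1, size=6)
    niche, fit = evaluate(genome)
    if niche not in archive or fit > archive[niche][1]:
        archive[niche] = (genome, fit)      # keep the best genome per niche

print(f"map contains {len(archive)} of {GRID * GRID} niches")
```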