12,315 research outputs found

    Recursive Branching Simulated Annealing Algorithm

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration.
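
The conventional SA loop described above can be sketched as follows (a generic minimal sketch, not this innovation's implementation; the objective, neighbor function, and cooling parameters are illustrative assumptions):

```python
import math
import random

def simulated_annealing(objective, initial, neighbor, t0=1.0, cooling=0.95, steps=2000):
    """Minimize `objective` with a conventional SA loop (generic sketch)."""
    current = initial
    current_val = objective(current)
    best, best_val = current, current_val
    t = t0
    for _ in range(steps):
        candidate = neighbor(current, t)          # search region can shrink with t
        candidate_val = objective(candidate)
        delta = candidate_val - current_val
        # Accept improvements always; accept worse moves with probability exp(-delta/t)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current, current_val = candidate, candidate_val
            if current_val < best_val:
                best, best_val = current, current_val
        t *= cooling                              # lower the annealing temperature
    return best, best_val

# Usage: minimize f(x) = x^2, with a Gaussian neighbor whose width tracks the temperature
random.seed(0)
best_x, best_f = simulated_annealing(
    objective=lambda x: x * x,
    initial=10.0,
    neighbor=lambda x, t: x + random.gauss(0.0, max(t, 0.01)),
)
```
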
The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
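
The branching and trust-region ideas can be illustrated in one continuous dimension (a toy sketch only, not NASA's RBSA: for brevity each branch accepts moves greedily, whereas the real algorithm also uses a temperature-based acceptance rule and optional gradient fine-tuning; all parameter values are assumptions):

```python
import random

def rbsa_sketch(objective, bounds, branches=4, rounds=25, samples=8, shrink=0.7):
    """Toy branching search: several trajectories explore the space in
    parallel, each inside a trust region that shrinks every round."""
    lo, hi = bounds
    points = [random.uniform(lo, hi) for _ in range(branches)]  # one trajectory per branch
    values = [objective(p) for p in points]
    radius = hi - lo
    for _ in range(rounds):
        for i in range(branches):
            for _ in range(samples):
                # the trust region shrinks each round and enforces the bound constraints
                cand = min(hi, max(lo, points[i] + random.uniform(-radius, radius)))
                v = objective(cand)
                if v < values[i]:                 # greedy within a branch, for brevity
                    points[i], values[i] = cand, v
        radius *= shrink
    best = min(range(branches), key=lambda i: values[i])
    return points[best], values[best]

# Usage: minimize (x - 3)^2 over [-10, 10]
random.seed(1)
best_x, best_v = rbsa_sketch(lambda x: (x - 3.0) ** 2, bounds=(-10.0, 10.0))
```
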

    A non-revisiting simulated annealing algorithm

    In this article, a non-revisiting simulated annealing algorithm (NrSA) is proposed. NrSA is an integration of the non-revisiting scheme and standard simulated annealing (SA). It guarantees that every generated neighbor has not been visited before. This property reduces the computational cost of evaluating time-consuming and expensive objective functions, such as those arising in surface registration, optimized design, and the energy management of heating, ventilating and air conditioning systems. Meanwhile, the prevention of function re-evaluation also speeds up convergence. Furthermore, due to the nature of the non-revisiting scheme, the non-revisited solutions it returns can be treated as self-adaptive, so no parametric neighbor-picking scheme is involved in NrSA; NrSA can thus be regarded as a parameter-less SA. The simulation results show that NrSA is superior to adaptive SA (ASA) on both uni-modal and multi-modal functions with dimension up to 40. We also illustrate that the overhead and archive size of NrSA are insignificant, so it is practical for real-world applications. © 2008 IEEE.
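
The non-revisiting guarantee can be sketched on a discrete search space (a simplified illustration, not the paper's method: NrSA uses a binary-tree archive with adaptive neighbor generation, whereas this sketch uses a plain set and a fixed neighbor list):

```python
import math
import random

def nrsa_sketch(objective, initial, neighbors, t0=1.0, cooling=0.9, steps=200):
    """SA over a discrete space in which no solution is ever evaluated twice."""
    visited = {initial}                      # archive of solutions already evaluated
    current, current_val = initial, objective(initial)
    best, best_val = current, current_val
    t = t0
    for _ in range(steps):
        fresh = [n for n in neighbors(current) if n not in visited]
        if not fresh:                        # every neighbor already visited
            break
        candidate = random.choice(fresh)
        visited.add(candidate)               # guarantees no re-evaluation, ever
        cand_val = objective(candidate)
        delta = cand_val - current_val
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current, current_val = candidate, cand_val
        if cand_val < best_val:
            best, best_val = candidate, cand_val
        t *= cooling
    return best, best_val

# Usage: walk the integers toward the minimum of |x - 7|,
# recording every objective evaluation to verify non-revisiting
random.seed(2)
calls = []
def f(x):
    calls.append(x)                          # record each evaluation
    return abs(x - 7)
best, best_val = nrsa_sketch(f, initial=0, neighbors=lambda x: [x - 1, x + 1])
```

The archive is exactly what prevents wasted evaluations of an expensive objective: every element of `calls` is distinct by construction.
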

    Convergence of the simulated annealing algorithm

    Caption title. "August 1988." Includes bibliographical references. Work supported by the Army Research Office under DAAL03-86-K-0171. B. Delyon.

    Hypocoercivity in metastable settings and kinetic simulated annealing

    Combining classical arguments for the analysis of the simulated annealing algorithm with the more recent hypocoercive method of distorted entropy, we prove the large-time convergence of the kinetic Langevin annealing with a logarithmic cooling schedule.
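
For background (a standard fact about annealing schedules, not a detail taken from this abstract): the logarithmic schedule referred to here is, in the classical overdamped setting, of the form

```latex
% Classical logarithmic cooling schedule for simulated annealing:
T(t) = \frac{c}{\ln(2+t)},
% where convergence requires the constant c to be at least the critical
% depth E^{*} of the energy landscape (the largest barrier separating a
% local minimum from the global one); the paper's contribution is a
% convergence result of this flavor for the kinetic Langevin dynamics.
```
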

    An improved simulated annealing algorithm for standard cell placement

    Simulated annealing is a general purpose Monte Carlo optimization technique that was applied to the problem of placing standard logic cells in a VLSI chip so that the total interconnection wire length is minimized. An improved standard cell placement algorithm that takes advantage of the performance enhancements that appear to come from parallelizing the uniprocessor simulated annealing algorithm is presented. An outline of this algorithm is given.

    Simulated Annealing for Topological Solitons

    The search for solutions of field theories allowing for topological solitons requires that we find the field configuration with the lowest energy in a given sector of topological charge. The standard approach is based on the numerical solution of the static Euler-Lagrange differential equation following from the field energy. As an alternative, we propose to use a simulated annealing algorithm to minimize the energy functional directly. We have applied simulated annealing to several nonlinear classical field theories: the sine-Gordon model in one dimension, the baby Skyrme model in two dimensions and the nuclear Skyrme model in three dimensions. We describe in detail the implementation of the simulated annealing algorithm, present our results and get independent confirmation of the studies which have used standard minimization techniques. Comment: 31 pages, LaTeX, better quality pics at http://www.phy.umist.ac.uk/~weidig/Simulated_Annealing/, updated for publication

    The empirical analysis on the portfolio optimization’s effective border

    This paper discusses the portfolio optimization problem: efficient frontiers are drawn based on efficient-frontier portfolios, analyzed empirically using the mean-variance model and a simulated annealing algorithm. The conclusions show that drawing the efficient frontier with the simulated annealing algorithm is relatively accurate.
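
One point of an efficient frontier can be traced by annealing the mean-variance objective (a minimal sketch under assumed data and parameters, not the paper's model: it minimizes portfolio variance w'Σw over long-only weights summing to 1):

```python
import math
import random

def anneal_min_variance(cov, steps=4000, t0=0.01, cooling=0.998):
    """Sketch: minimize w' Σ w over weights w >= 0 with sum(w) = 1 via SA."""
    n = len(cov)
    w = [1.0 / n] * n                         # start from the equal-weight portfolio
    def variance(v):
        return sum(v[i] * cov[i][j] * v[j] for i in range(n) for j in range(n))
    cur = variance(w)
    best_w, best_var = list(w), cur
    t = t0
    for _ in range(steps):
        i, j = random.sample(range(n), 2)
        step = random.uniform(0.0, min(w[i], 0.05))
        cand = list(w)
        cand[i] -= step
        cand[j] += step                       # keeps sum(w) = 1 and w >= 0
        val = variance(cand)
        if val <= cur or random.random() < math.exp((cur - val) / t):
            w, cur = cand, val
            if cur < best_var:
                best_w, best_var = list(w), cur
        t *= cooling
    return best_w, best_var

# Usage: two uncorrelated assets with variances 0.04 and 0.01;
# the analytic minimum-variance weights are (0.2, 0.8)
random.seed(3)
weights, var = anneal_min_variance([[0.04, 0.0], [0.0, 0.01]])
```

Repeating this for a range of target returns (adding a return constraint or penalty) would trace out the frontier.
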

    A Simulated Annealing Method to Cover Dynamic Load Balancing in Grid Environment

    High-performance scheduling is critical to the achievement of application performance on the computational grid. New scheduling algorithms are in demand for addressing new concerns arising in the grid environment. One of the main phases of scheduling on a grid is related to the load balancing problem; therefore, having a high-performance method to deal with the load balancing problem is essential to obtain satisfactory high-performance scheduling. This paper presents SAGE, a new high-performance method to cover the dynamic load balancing problem by means of a simulated annealing algorithm. Even though this problem has been addressed with several different approaches, only one of those methods is based on a simulated annealing algorithm. Preliminary results show that SAGE not only finds a good solution to the problem (effectiveness) but also does so in a reasonable amount of time (efficiency).
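
A generic SA treatment of the load balancing step looks like the following (an illustrative sketch, not SAGE itself: it assigns tasks to nodes so the heaviest node's load, the makespan, is minimized; all names and parameters are assumptions):

```python
import math
import random

def anneal_balance(task_costs, n_nodes, steps=6000, cooling=0.997):
    """Sketch: SA over task-to-node assignments, minimizing the makespan."""
    assign = [random.randrange(n_nodes) for _ in task_costs]
    def makespan(a):
        loads = [0.0] * n_nodes
        for task, node in enumerate(a):
            loads[node] += task_costs[task]
        return max(loads)
    cur = makespan(assign)
    best, best_cost = list(assign), cur
    t = cur                                    # start the temperature at the initial makespan
    for _ in range(steps):
        k = random.randrange(len(task_costs))
        cand = list(assign)
        cand[k] = random.randrange(n_nodes)    # move one task to a random node
        val = makespan(cand)
        if val <= cur or random.random() < math.exp((cur - val) / t):
            assign, cur = cand, val
            if cur < best_cost:
                best, best_cost = list(assign), cur
        t *= cooling
    return best, best_cost

# Usage: eight tasks across four nodes; the ideal makespan is 24 / 4 = 6
random.seed(4)
plan, cost = anneal_balance([5, 5, 5, 5, 1, 1, 1, 1], n_nodes=4)
```

A dynamic version would rerun (or warm-start) this search as tasks arrive and node loads change.
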