
    Open system quantum annealing in mean field models with exponential degeneracy

    Real-life quantum computers are inevitably affected by intrinsic noise, resulting in dissipative, non-unitary dynamics of these devices. We consider an open-system quantum annealing algorithm optimized for a realistic analog quantum device, which takes advantage of noise-induced thermalization and relies on incoherent quantum tunneling at finite temperature. We analyze the performance of this algorithm on a p-spin model, which admits a mean-field quasiclassical solution while at the same time exhibiting a first-order phase transition and exponential degeneracy of states. We demonstrate that the finite-temperature effects introduced by the noise are particularly important for the dynamics in the presence of the exponential degeneracy of metastable states. We determine the optimal regime of the open-system quantum annealing algorithm for this model and find that it can outperform simulated annealing in a range of parameters. Comment: 11 pages, 5 figures
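
    For context, a standard form of the transverse-field ferromagnetic p-spin Hamiltonian used in quantum annealing studies of this kind is given below; this is an assumed textbook convention, and the degenerate variant analyzed in the paper adds further structure on top of it.

        H(s) = -s\,N\left(\frac{1}{N}\sum_{i=1}^{N}\sigma_i^z\right)^{p} \;-\; (1-s)\sum_{i=1}^{N}\sigma_i^x, \qquad 0 \le s \le 1

    For p >= 3 this model undergoes a first-order quantum phase transition as the annealing parameter s is swept, which is the regime referred to in the abstract.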

    A decision support methodology for process in the loop optimisation

    Experimental optimisation with hardware-in-the-loop is a common procedure in engineering, particularly in cases where accurate modelling is not possible. A common methodology to support experimental search is to use one of the many gradient descent methods. However, even sophisticated and proven methodologies such as Simulated Annealing (SA) can be significantly challenged in the presence of significant noise. This paper introduces a decision support methodology based upon Response Surfaces (RS), which supplements experimental management based on a variable neighbourhood search, and is shown to be highly effective in directing experiments in the presence of a significant signal-to-noise (S-N) ratio and complex combinatorial functions. The methodology is developed on a 3-dimensional surface with multiple local minima, a large basin of attraction, and a high S-N ratio. Finally, the method is applied to a real-life automotive experimental application.

    Two simulated annealing optimization schemas for rational Bézier curve fitting in the presence of noise

    Fitting curves to noisy data points is a difficult problem arising in many scientific and industrial domains. Although polynomial functions are usually applied to this task, there are many shapes that cannot be properly fitted by using this approach. In this paper, we tackle this issue by using rational Bézier curves. This is a very difficult problem that requires computing four different sets of unknowns (data parameters, poles, weights, and the curve degree) strongly related to each other in a highly nonlinear way. This leads to a difficult continuous nonlinear optimization problem. In this paper, we propose two simulated annealing schemas (the all-in-one schema and the sequential schema) to determine the data parameterization and the weights of the poles of the fitting curve. These schemas are combined with least-squares minimization and the Bayesian Information Criterion to calculate the poles and the optimal degree of the best-fitting rational Bézier curve, respectively. We apply our methods to a benchmark of three carefully chosen examples of 2D and 3D noisy data points. Our experimental results show that this methodology (particularly, the sequential schema) outperforms previous polynomial-based approaches for our data fitting problem, even in the presence of noise of low-to-medium intensity. This research has been kindly supported by the Computer Science National Program of the Spanish Ministry of Economy and Competitiveness, Project Ref. #TIN2012-30768, Toho University (Funabashi, Japan), and the University of Cantabria (Santander, Spain).
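
    The quantities the two schemas optimize can be made concrete with a short sketch: evaluating a rational Bézier curve from its poles and weights, and scoring a fit with a Gaussian-error BIC. This is an illustrative reconstruction, not the authors' code; the function names and the exact BIC convention are assumptions.

        import numpy as np
        from scipy.special import comb

        def rational_bezier(t, poles, weights):
            """Evaluate a rational Bezier curve at parameter values t in [0, 1].
            poles: (n+1, d) array of control points; weights: (n+1,) positive weights."""
            t = np.atleast_1d(np.asarray(t, dtype=float))
            n = len(poles) - 1
            i = np.arange(n + 1)
            # Bernstein basis functions B_{i,n}(t), shape (len(t), n+1)
            B = comb(n, i) * np.power.outer(t, i) * np.power.outer(1.0 - t, n - i)
            W = B * np.asarray(weights, dtype=float)
            return (W @ np.asarray(poles, dtype=float)) / W.sum(axis=1, keepdims=True)

        def bic_score(points, fitted, n_free_params):
            """BIC under Gaussian residuals (one common convention; the paper's may differ)."""
            m = len(points)
            rss = float(np.sum((np.asarray(points) - np.asarray(fitted)) ** 2))
            return m * np.log(rss / m) + n_free_params * np.log(m)

    In a scheme like the one described, the simulated annealing state would carry the data parameters (and weights), the poles would come from a linear least-squares solve at fixed parameters, and the BIC would compare candidate degrees.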

    Estimation of Noisy Cost Functions by Conventional and Adjusted Simulated Annealing Techniques

    The Simulated Annealing (SA) algorithm is extensively used in the optimization community for solving various kinds of problems, discrete and continuous. This thesis aims to analyze SA in both deterministic and stochastic environments for discrete problems. Its precise objectives are to classify key problems and to offer suggestions and recommendations for using SA and Simulated Annealing Under Noise (SAUN). More specifically, problems appear in optimization because of the noise present when evaluating the objective function, and the question is how to control this noise. We propose a method, called Noisy Simulated Annealing (NSA), based on the modification of the Metropolis-Hastings algorithm presented by Ceperley and Dewing, that outperforms analogous SA techniques, delivering similar numerical solutions at a reduced cost. We review the main approaches that handle noise in the SA setting in order to extract their distinctive attributes and make the comparison more relevant. We then assess the numerical performance of the approach on traveling salesman problem instances. The outcomes of our tests show a clear advantage for NSA in obtaining high-quality solutions in the presence of noise.
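
    A minimal sketch of how a noise-penalized Metropolis acceptance of the Ceperley-Dewing type can be dropped into an SA loop is shown below. The thesis's actual NSA algorithm, noise model, and cooling schedule may differ; propose_neighbor, noisy_cost, and sigma are assumed ingredients.

        import math, random

        def nsa_step(x, T, noisy_cost, propose_neighbor, sigma, n_rep=1):
            """One step of a noise-penalized Metropolis/SA acceptance in the spirit of
            Ceperley and Dewing's penalty method (illustrative sketch only).
            sigma: assumed standard deviation of a single noisy estimate of the cost difference."""
            y = propose_neighbor(x)
            # Average n_rep noisy evaluations of the cost difference to reduce its variance.
            delta = sum(noisy_cost(y) - noisy_cost(x) for _ in range(n_rep)) / n_rep
            d = delta / T                            # cost difference in units of temperature
            s2 = sigma ** 2 / (n_rep * T ** 2)       # variance of d under a Gaussian noise model
            log_accept = -d - s2 / 2.0               # the -s2/2 penalty corrects for the noise
            if log_accept >= 0.0 or random.random() < math.exp(log_accept):
                return y
            return x

    Compared with plain SA on noisy costs, the penalty term makes the chain sample the intended Boltzmann distribution under Gaussian noise, at the price of a slightly lower acceptance rate.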

    Memetic simulated annealing for data approximation with local-support curves

    This paper introduces a new memetic optimization algorithm called MeSA (Memetic Simulated Annealing) to address the data fitting problem with local-support free-form curves. The proposed method hybridizes simulated annealing with the COBYLA local search optimization method. This approach is further combined with the centripetal parameterization and the Bayesian information criterion to compute all free variables of the curve reconstruction problem with B-splines. The performance of our approach is evaluated by its application to four different shapes with local deformations and different degrees of noise and density of data points. The MeSA method has also been compared to the non-memetic version of SA. Our results show that MeSA is able to reconstruct the underlying shape of the data even in the presence of noise and low-density point clouds. It also outperforms SA for all the examples in this paper. This work has been supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under grants TEC2013-47141-C4-R (RACHEL) and #TIN2012-30768 (Computer Science National Program) and Toho University (Funabashi, Japan).
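
    A rough illustration of the hybridization described above: a plain SA loop whose accepted states are periodically refined by SciPy's COBYLA local search. The cooling schedule, step size, and refinement frequency are placeholders; the paper's MeSA is more elaborate.

        import math, random
        import numpy as np
        from scipy.optimize import minimize

        def mesa(cost, x0, step=0.1, T0=1.0, alpha=0.95, n_iter=2000, refine_every=100):
            """Memetic SA sketch: simulated annealing plus COBYLA local refinement."""
            x = np.asarray(x0, dtype=float)
            fx = cost(x)
            best_x, best_f = x.copy(), fx
            T = T0
            for k in range(1, n_iter + 1):
                y = x + step * np.random.randn(*x.shape)        # random perturbation
                fy = cost(y)
                if fy < fx or random.random() < math.exp(-(fy - fx) / T):
                    x, fx = y, fy
                if k % refine_every == 0:                        # memetic step: local search
                    res = minimize(cost, x, method="COBYLA")
                    if res.fun < fx:
                        x, fx = res.x, res.fun
                if fx < best_f:
                    best_x, best_f = x.copy(), fx
                T *= alpha                                       # geometric cooling
            return best_x, best_f

    COBYLA is derivative-free, which is what makes this kind of hybrid attractive when the objective (here, a fitting error over curve parameters) has no convenient gradient.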

    Improved decision support for engine-in-the-loop experimental design optimization

    Experimental optimization with hardware in the loop is a common procedure in engineering and has been the subject of intense development, particularly when it is applied to relatively complex combinatorial systems that are not completely understood, or where accurate modelling is not possible owing to the dimensions of the search space. A common source of difficulty arises because of the level of noise associated with experimental measurements, a combination of limited instrument precision and extraneous factors. When a series of experiments is conducted to search for a combination of input parameters that results in a minimum or maximum response, under the imposition of noise the underlying shape of the function being optimized can become very difficult to discern or can even be lost. A common methodology to support experimental search for optimal or suboptimal values is to use one of the many gradient descent methods. However, even sophisticated and proven methodologies, such as simulated annealing, can be significantly challenged in the presence of noise, since approximating the gradient at any point becomes highly unreliable. Often, experiments that should be rejected are accepted as a result of random noise, and vice versa. This is also true for other sampling techniques, including tabu search and evolutionary algorithms. After the general introduction, this paper is divided into two main sections (sections 2 and 3), which are followed by the conclusion. Section 2 introduces a decision support methodology based upon response surfaces, which supplements experimental management based on a variable neighbourhood search and is shown to be highly effective in directing experiments in the presence of a significant signal-to-noise ratio and complex combinatorial functions. The methodology is developed on a three-dimensional surface with multiple local minima, a large basin of attraction, and a high signal-to-noise ratio. In section 3, the methodology is applied to an automotive combinatorial search in the laboratory, on a real-time engine-in-the-loop application. In this application, it is desired to find the maximum power output of an experimental single-cylinder spark ignition engine operating under a quasi-constant-volume operating regime. Under this regime, the piston is slowed at top dead centre to achieve combustion in close to constant-volume conditions. As part of the further development of the engine to incorporate a linear generator to investigate free-piston operation, it is necessary to perform a series of experiments with combinatorial parameters. The objective is to identify the maximum power point in the least number of experiments in order to minimize costs. This test programme provides peak power data in order to achieve an optimal electrical machine design. The decision support methodology is combined with standard optimization and search methods – namely gradient descent and simulated annealing – in order to study the reductions possible in experimental iterations. It is shown that the decision support methodology significantly reduces the number of experiments necessary to find the maximum power solution and thus offers a potentially significant cost saving to hardware-in-the-loop experimentation.
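
    A simplified sketch of the response-surface idea behind this kind of decision support: fit a quadratic surface to the noisy experimental points gathered so far and query the surface, rather than a single noisy reading, when judging whether a candidate point improves on the incumbent. The fitting details, neighbourhood logic, and thresholds below are assumptions, not the authors' implementation.

        import numpy as np

        def quad_features(X):
            """Quadratic response-surface features for 2-D inputs: [1, x1, x2, x1^2, x2^2, x1*x2]."""
            X = np.atleast_2d(np.asarray(X, dtype=float))
            x1, x2 = X[:, 0], X[:, 1]
            return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

        def fit_response_surface(X, y):
            """Least-squares fit of a quadratic surface to noisy measurements y at points X."""
            beta, *_ = np.linalg.lstsq(quad_features(X), np.asarray(y, dtype=float), rcond=None)
            return beta

        def surface_predict(beta, X):
            return quad_features(X) @ beta

        def accept_candidate(beta, x_candidate, x_incumbent, margin=0.0):
            """Accept a candidate experiment only if the fitted surface, not the raw noisy
            reading, predicts an improvement (minimization here; flip the inequality or
            negate the response for a maximization task such as peak engine power)."""
            return surface_predict(beta, x_candidate)[0] < surface_predict(beta, x_incumbent)[0] - margin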

    Simulated annealing method in the classic Boltzmann machines

    The classical Boltzmann machine is understood as the neural network proposed by Hinton and his colleagues in 1985. They added noise to the Hopfield model and called the resulting network a Boltzmann machine, drawing an analogy between its behaviour and that of physical systems in the presence of noise. This study explains the definitions of “simulated annealing” and “thermal equilibrium” using the example of a partial network. A technique for calculating the probabilities of state transitions at different temperatures using Markov chains is described, and an example application of SA, the travelling salesman problem, is given. The Boltzmann machine is used for pattern recognition and in classification problems. Its slow learning algorithm is mentioned as a disadvantage, although it makes it possible to escape local minima. The main purpose of this article is to show the capabilities of the simulated annealing algorithm in solving practical tasks.
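
    To make the travelling salesman example above concrete, here is a bare-bones sketch of simulated annealing with 2-opt moves, Metropolis (Boltzmann) acceptance, and geometric cooling. It illustrates the general technique rather than any implementation from the article, and the schedule parameters are arbitrary.

        import math, random

        def tour_length(tour, dist):
            return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

        def sa_tsp(dist, T0=10.0, alpha=0.999, n_iter=50000, seed=0):
            """Simulated annealing for the TSP with 2-opt moves (illustrative sketch)."""
            rng = random.Random(seed)
            n = len(dist)
            tour = list(range(n))
            rng.shuffle(tour)
            cost = tour_length(tour, dist)
            T = T0
            for _ in range(n_iter):
                i, j = sorted(rng.sample(range(n), 2))
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt: reverse a segment
                delta = tour_length(cand, dist) - cost
                # Boltzmann acceptance: always take improvements, sometimes take uphill moves
                if delta < 0 or rng.random() < math.exp(-delta / T):
                    tour, cost = cand, cost + delta
                T = max(T * alpha, 1e-6)                               # geometric cooling with a floor
            return tour, cost

    The acceptance rule exp(-delta/T) is exactly the Boltzmann transition probability the article discusses; lowering T over the Markov chain is what turns random search into annealing.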

    Quantum Optimization of Fully-Connected Spin Glasses

    The Sherrington-Kirkpatrick model with random ±1 couplings is programmed on the D-Wave Two annealer featuring 509 qubits interacting on a Chimera-type graph. The performance of the optimizer is compared and correlated with that of simulated annealing. When considering the effect of the static noise, which degrades the performance of the annealer, one can estimate an improvement in the comparative scaling of the two methods in favor of the D-Wave machine. The optimal choice of parameters of the embedding on the Chimera graph is shown to be associated with the emergence of the spin-glass critical temperature of the embedded problem. Comment: includes supplemental material
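
    For reference, the cost function being programmed on the annealer is the Sherrington-Kirkpatrick energy, written here in the usual zero-field convention (assumed, since the abstract does not spell it out):

        H(s_1, \dots, s_N) = \sum_{1 \le i < j \le N} J_{ij}\, s_i s_j, \qquad J_{ij} \in \{+1, -1\}, \quad s_i \in \{+1, -1\}

    Because the problem is fully connected while the Chimera hardware graph is sparse, each logical spin is represented by a chain of ferromagnetically coupled physical qubits, and the embedding parameters mentioned in the abstract govern how strongly those chains are bound.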