System calibration method for Fourier ptychographic microscopy
Fourier ptychographic microscopy (FPM) is a recently proposed quantitative phase imaging technique with high resolution and a wide field-of-view (FOV). In current FPM imaging platforms, systematic errors arise from aberrations, LED intensity fluctuations, parameter imperfections and noise, which severely corrupt the reconstruction results with artifacts. Although these problems have been studied and dedicated methods have been proposed for each of them, no single method addresses them all, and in practice the systematic error is a mixture of several sources that are difficult to tell apart because they produce similar artifacts. To this end, we report a system calibration procedure, termed SC-FPM, based on the simulated annealing (SA) algorithm, LED intensity correction and an adaptive step-size strategy, which evaluates an error metric at each iteration step and then re-estimates accurate parameters. Strong performance is achieved in both simulation and experiment. The reported system calibration scheme improves the robustness of FPM and relaxes the experimental conditions, making FPM more practical.
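To make the calibration idea concrete, here is a minimal sketch of a simulated annealing search with an adaptive step size, in the spirit the abstract describes; the error function, settings and toy target are hypothetical stand-ins, not the authors' actual FPM error metric.

```python
import math
import random

def simulated_annealing(error_metric, x0, step0=0.1, t0=1.0, cooling=0.97,
                        iters=2000):
    """Generic SA search with an adaptive step size: evaluate an error metric
    each iteration, then re-estimate the parameters (cf. the SC-FPM loop)."""
    x, e = list(x0), error_metric(x0)
    best_x, best_e, t, step = list(x0), e, t0, step0
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in x]  # perturb params
        e_cand = error_metric(cand)
        # Metropolis rule: always accept improvements, sometimes accept
        # uphill moves to escape local minima of the error landscape.
        if e_cand < e or random.random() < math.exp(-(e_cand - e) / t):
            x, e = cand, e_cand
            step *= 1.05                     # accepted: widen the search step
        else:
            step *= 0.95                     # rejected: shrink the step
        if e < best_e:
            best_x, best_e = list(x), e
        t *= cooling                         # geometric cooling schedule
    return best_x, best_e

# Toy usage: recover two offsets (hypothetical stand-ins for FPM parameters).
err = lambda p: (p[0] - 0.3) ** 2 + (p[1] + 0.7) ** 2
print(simulated_annealing(err, [0.0, 0.0]))
```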
Population annealing: Theory and application in spin glasses
Population annealing is an efficient sequential Monte Carlo algorithm for
simulating equilibrium states of systems with rough free energy landscapes. The
theory of population annealing is presented, and systematic and statistical
errors are discussed. The behavior of the algorithm is studied in the context
of large-scale simulations of the three-dimensional Ising spin glass and the
performance of the algorithm is compared to parallel tempering. The two algorithms are found to be comparably efficient, though with different strengths and weaknesses.
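As a rough illustration of the algorithm's structure (reweight, resample, equilibrate), here is a minimal population annealing sketch for a small Ising system; the couplings, population size and annealing schedule are arbitrary toy choices, not the paper's large-scale spin-glass setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(s, J):
    # Ising energy E = -1/2 * s^T J s (dense couplings, zero diagonal).
    return -0.5 * s @ J @ s

def population_annealing(J, n_pop=100, betas=np.linspace(0.0, 3.0, 21), sweeps=2):
    n = J.shape[0]
    pop = rng.choice([-1, 1], size=(n_pop, n))            # beta = 0: random spins
    for b_prev, b in zip(betas[:-1], betas[1:]):
        E = np.array([energy(s, J) for s in pop])
        # Reweight by exp(-(b - b_prev) * E), then resample the population.
        w = np.exp(-(b - b_prev) * (E - E.min()))         # shift for stability
        pop = pop[rng.choice(n_pop, size=n_pop, p=w / w.sum())]
        # Equilibrate with Metropolis sweeps at the new inverse temperature.
        for s in pop:
            for _ in range(sweeps * n):
                i = rng.integers(n)
                dE = 2.0 * s[i] * (J[i] @ s)              # flip cost (diag is 0)
                if dE <= 0 or rng.random() < np.exp(-b * dE):
                    s[i] = -s[i]
    return pop

# Toy usage on a random 16-spin instance (hypothetical couplings).
n = 16
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)
print(min(energy(s, J) for s in population_annealing(J)))
```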
Particle algorithms for optimization on binary spaces
We discuss a unified approach to stochastic optimization of pseudo-Boolean objective functions based on particle methods, which includes the cross-entropy method and simulated annealing as special cases. We point out the need for auxiliary sampling distributions, that is, parametric families on binary spaces able to reproduce complex dependency structures, and illustrate their usefulness in our numerical experiments. We provide numerical evidence that particle-driven optimization algorithms based on such parametric families yield superior results on strongly multi-modal optimization problems, while local search heuristics outperform them on easier problems.
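The following sketch shows the simplest instance of such a particle-driven scheme: the cross-entropy method with an independent-Bernoulli sampling family on {0,1}^n. The paper's point is precisely that richer families capturing dependencies do better on hard problems; the objective and parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def cross_entropy_binary(objective, n, n_samples=200, elite_frac=0.1,
                         smoothing=0.7, iters=50):
    """Cross-entropy maximization over {0,1}^n with an independent-Bernoulli
    sampling family, the simplest parametric family on a binary space."""
    p = np.full(n, 0.5)                                   # sampling distribution
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(iters):
        x = (rng.random((n_samples, n)) < p).astype(int)  # draw candidates
        scores = np.array([objective(xi) for xi in x])
        elite = x[np.argsort(scores)[-n_elite:]]          # best samples
        # Move the parameters toward the elite mean; smoothing guards
        # against premature convergence of the sampling distribution.
        p = smoothing * p + (1 - smoothing) * elite.mean(axis=0)
    return (p > 0.5).astype(int)

# Toy usage: OneMax (hypothetical easy objective; the paper's point is that
# dependency-aware families matter on much harder, multi-modal problems).
print(cross_entropy_binary(lambda x: x.sum(), n=30))
```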
A comparison of granular trajectory-based algorithms for the location-routing problem with a heterogeneous fleet (LRPH)
We consider the Location-Routing Problem with Heterogeneous Fleet (LRPH), in which the goal is to determine the depots to be opened, the customers to be assigned to each open depot, and the corresponding routes that fulfill customer demand with a heterogeneous fleet. We compare granular approaches for the LRPH: a granular Simulated Annealing (GSA), a granular Variable Neighborhood Search (GVNS) and a probabilistic granular Tabu Search (pGTS). The proposed approaches consider a subset of the search space in which non-favorable moves are discarded according to a granularity factor. The algorithms are compared experimentally on instances adapted from the literature, taking into account CPU time and the quality of the solutions obtained. The computational results show that the GSA algorithm is able to obtain high-quality solutions within short CPU times, improving on the results obtained by the other proposed approaches.
https://revistas.unal.edu.co/index.php/dyna/article/view/55533/5896
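A minimal sketch of the shared "granular" ingredient follows: moves are restricted to arcs whose cost falls below a threshold controlled by a granularity factor. The threshold rule and instance below are illustrative assumptions; the paper's exact granularity criterion may differ.

```python
import numpy as np

def granular_arcs(dist, beta=1.2):
    """Granular filtering: keep only arcs whose cost is below a threshold
    tied to a granularity factor beta; local-search moves are then restricted
    to these promising arcs, shrinking the search space."""
    n = dist.shape[0]
    threshold = beta * dist[np.triu_indices(n, k=1)].mean()
    return {(i, j) for i in range(n) for j in range(n)
            if i != j and dist[i, j] <= threshold}

# Toy usage on random planar points (hypothetical LRPH-like instance).
rng = np.random.default_rng(2)
pts = rng.random((20, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(f"{len(granular_arcs(dist))} of {20 * 19} arcs kept")
```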
Efficiency of quantum versus classical annealing in non-convex learning problems
Quantum annealers aim at solving non-convex optimization problems by exploiting cooperative tunneling effects to escape local minima. The underlying idea is to design a classical energy function whose ground states are the sought optimal solutions of the original optimization problem, and to add a controllable quantum transverse field that generates tunneling processes. A key challenge is to identify classes of non-convex optimization problems for which quantum annealing remains efficient while thermal annealing fails. We show that this happens for a wide class of problems that are central to machine learning. Their energy landscapes are dominated by local minima that cause exponential slowdown of classical thermal annealers, while simulated quantum annealing converges efficiently to rare dense regions of optimal solutions.
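For reference, simulated quantum annealing is commonly implemented as path-integral Monte Carlo over Trotter replicas; the sketch below follows that standard construction (not the paper's code), with arbitrary toy couplings and schedule.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulated_quantum_annealing(J, P=20, T=0.1, gamma0=3.0, steps=500):
    """Path-integral Monte Carlo: P Trotter replicas of the classical system,
    ferromagnetically coupled along imaginary time, with the transverse
    field gamma annealed toward zero (Suzuki-Trotter mapping)."""
    n = J.shape[0]
    spins = rng.choice([-1, 1], size=(P, n))
    for step in range(steps):
        gamma = gamma0 * (1 - step / steps)
        # Inter-replica coupling strength grows as gamma shrinks.
        j_perp = -0.5 * T * np.log(np.tanh(gamma / (P * T)))
        for _ in range(P * n):                            # one sweep
            k, i = rng.integers(P), rng.integers(n)
            # Classical part (scaled by 1/P) plus coupling to the replicas
            # at k-1 and k+1 (periodic in the Trotter direction).
            dE = (2.0 * spins[k, i] * (J[i] @ spins[k]) / P
                  + 2.0 * j_perp * spins[k, i]
                    * (spins[(k - 1) % P, i] + spins[(k + 1) % P, i]))
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[k, i] *= -1
    return spins

# Toy usage: random couplings (hypothetical instance); best replica energy.
n = 12
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)
print(min(-0.5 * s @ J @ s for s in simulated_quantum_annealing(J)))
```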
Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operational research, algorithm theory and computational complexity theory. It sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. Interest in it keeps growing because a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems have polynomial-time ("efficient") algorithms, while most of them are NP-hard, i.e. no polynomial-time algorithm is known for them. In practice, this means that one cannot guarantee that an exact solution will be found and must settle for an approximate solution with known performance guarantees. Indeed, the goal of approximate methods is to find "quickly" (in reasonable run-times), with "high" probability, provably "good" solutions (with low error from the true optimum). In the last 20 years, a new class of algorithms commonly called metaheuristics has emerged, which combine heuristics in high-level frameworks aimed at efficiently and effectively exploring the search space. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two significant forces of intensification and diversification, which mainly determine the behavior of a metaheuristic, are pointed out. The report concludes by exploring the importance of hybridization and integration methods.
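As a concrete illustration of intensification and diversification, here is a skeleton of iterated local search, one of the simplest metaheuristics: greedy local search intensifies within a basin, and a random perturbation diversifies out of it. The objective and neighborhood below are hypothetical toys.

```python
import random

def iterated_local_search(objective, neighbors, perturb, x0, iters=100):
    """Skeleton of a simple metaheuristic: local search (intensification)
    alternating with random perturbation (diversification)."""
    def local_search(x):                       # intensification
        improved = True
        while improved:
            improved = False
            for y in neighbors(x):
                if objective(y) < objective(x):
                    x, improved = y, True
                    break
        return x

    best = local_search(x0)
    for _ in range(iters):
        x = local_search(perturb(best))        # diversify, then intensify
        if objective(x) < objective(best):
            best = x                           # accept only improvements
    return best

# Toy usage: a small multimodal pseudo-Boolean objective (hypothetical).
n = 16
obj = lambda x: abs(sum(x) - 5) * (1 + sum(x) % 3)
flip = lambda x, i: x[:i] + [1 - x[i]] + x[i + 1:]
best = iterated_local_search(
    obj,
    lambda x: (flip(x, i) for i in range(n)),
    lambda x: [b if random.random() > 0.2 else 1 - b for b in x],
    [0] * n)
print(obj(best), best)
```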
Computational Methods for Parameter Estimation in Climate Models
Intensive computational methods have been used by Earth scientists for a wide range of problems in data inversion and uncertainty quantification, such as earthquake epicenter location and climate projections. Quantifying the uncertainties that result from a range of plausible model configurations requires estimating a multidimensional probability distribution. For geoscience applications, estimating these distributions with traditional methods such as Metropolis/Gibbs algorithms is impractical, because simulation costs limit the number of experiments that can reasonably be run. Several alternative sampling strategies have been proposed that could improve sampling efficiency, including Multiple Very Fast Simulated Annealing (MVFSA) and Adaptive Metropolis algorithms. The performance of these sampling strategies is evaluated with a surrogate climate model that approximates the noise and response behavior of a realistic atmospheric general circulation model (AGCM). The surrogate model is fast enough that its evaluation can be embedded in these Monte Carlo algorithms. We show that adaptive methods can be superior to MVFSA in approximating the known posterior distribution with fewer forward evaluations. However, the adaptive methods can also be limited by inadequate sample mixing. The Single Component and Delayed Rejection Adaptive Metropolis algorithms were found to resolve these limitations, although challenges remain in approximating multi-modal distributions. The results show that these advanced methods of statistical inference can provide practical solutions to the climate model calibration problem and to the challenge of quantifying climate projection uncertainties. The computational methods would also be useful for problems outside climate prediction, particularly those where sampling is limited by the availability of computational resources.
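To illustrate the kind of adaptivity discussed, here is a minimal Adaptive Metropolis sketch in the spirit of Haario et al., with a toy two-parameter Gaussian "posterior" standing in for the surrogate climate model; all names and settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def adaptive_metropolis(log_post, x0, n_steps=5000, adapt_start=500, eps=1e-8):
    """Adaptive Metropolis: the Gaussian proposal covariance is periodically
    re-tuned from the chain's own history."""
    d = len(x0)
    sd = 2.4 ** 2 / d                                 # standard AM scale factor
    chain = np.empty((n_steps, d))
    x, lp = np.asarray(x0, float), log_post(x0)
    cov = np.eye(d)
    for t in range(n_steps):
        if t > adapt_start and t % 100 == 0:          # periodic re-adaptation
            cov = sd * (np.cov(chain[:t].T) + eps * np.eye(d))
        prop = rng.multivariate_normal(x, cov)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:       # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[t] = x
    return chain

# Toy usage: a correlated 2-D Gaussian "posterior" standing in for the
# surrogate climate model's parameter posterior (hypothetical).
Ci = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
log_post = lambda x: -0.5 * np.asarray(x) @ Ci @ np.asarray(x)
samples = adaptive_metropolis(log_post, [3.0, -3.0])
print(samples[2500:].mean(axis=0))
```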