
    System calibration method for Fourier ptychographic microscopy

    Fourier ptychographic microscopy (FPM) is a recently proposed quantitative phase imaging technique with high resolution and a wide field of view (FOV). In current FPM imaging platforms, systematic errors arise from aberrations, LED intensity fluctuations, parameter imperfections, and noise, all of which corrupt the reconstruction with artifacts. Although each of these problems has been studied and dedicated methods have been proposed for them individually, no single method addresses them all; in practice the systematic error is a mixture of several sources that produce similar artifacts and are therefore hard to tell apart. To this end, we report a system calibration procedure, termed SC-FPM, based on the simulated annealing (SA) algorithm, LED intensity correction, and an adaptive step-size strategy, which evaluates an error metric at each iteration and then re-estimates the system parameters. Strong performance is achieved in both simulation and experiment. The reported calibration scheme improves the robustness of FPM and relaxes the experimental conditions, making FPM more practical. Comment: 18 pages, 9 figures.
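    The calibration loop described above pairs a simulated-annealing search with an adaptive step size. Below is a minimal, generic sketch of that pattern in Python; `error_metric` and the parameter dictionary are placeholders for the imaging-system quantities, not the actual SC-FPM implementation.

```python
import math
import random

def calibrate_sa(error_metric, params, n_iters=200, step=0.1,
                 t_start=1.0, t_end=1e-3, shrink=0.5, grow=1.5):
    """Toy simulated-annealing search over a dict of float parameters.

    error_metric(params) -> float is a placeholder for whatever
    reconstruction-error measure the imaging pipeline exposes; the
    step-size adaptation mimics the shrink/grow strategy described above.
    """
    best = dict(params)
    best_err = error_metric(best)
    cur, cur_err = dict(best), best_err
    for k in range(n_iters):
        # Geometric cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (k / max(n_iters - 1, 1))
        cand = {name: v + step * random.uniform(-1.0, 1.0)
                for name, v in cur.items()}
        cand_err = error_metric(cand)
        # Metropolis acceptance: always take improvements, sometimes accept worse.
        if cand_err < cur_err or random.random() < math.exp((cur_err - cand_err) / t):
            cur, cur_err = cand, cand_err
            step *= grow          # widen the search after an accepted move
            if cur_err < best_err:
                best, best_err = dict(cur), cur_err
        else:
            step *= shrink        # tighten the search after a rejection
    return best, best_err
```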

    Population annealing: Theory and application in spin glasses

    Population annealing is an efficient sequential Monte Carlo algorithm for simulating equilibrium states of systems with rough free energy landscapes. The theory of population annealing is presented, and systematic and statistical errors are discussed. The behavior of the algorithm is studied in the context of large-scale simulations of the three-dimensional Ising spin glass, and its performance is compared to parallel tempering. It is found that the two algorithms are similar in efficiency, though with different strengths and weaknesses. Comment: 16 pages, 10 figures, 4 tables.
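    The resample-then-equilibrate structure of population annealing can be sketched in a few lines. The following assumes user-supplied `energy`, `propose`, and `init` functions and a simple linear inverse-temperature schedule; it illustrates the algorithm's skeleton, not an optimized spin-glass code.

```python
import math
import random

def population_annealing(energy, propose, init, pop_size=100,
                         betas=None, sweeps_per_step=10):
    """Minimal population-annealing sketch for a generic energy function.

    energy(x) -> float, init() -> x, and propose(x) -> x' (a single
    Metropolis move returning a NEW state, not mutating x) are supplied
    by the caller; `betas` is an increasing inverse-temperature schedule.
    """
    if betas is None:
        betas = [i * 0.1 for i in range(1, 31)]   # beta = 0.1 .. 3.0
    pop = [init() for _ in range(pop_size)]
    beta_prev = 0.0
    for beta in betas:
        # Reweight and resample: low-energy replicas are duplicated,
        # high-energy replicas tend to be eliminated.
        dbeta = beta - beta_prev
        e_min = min(energy(x) for x in pop)          # shift to avoid overflow
        weights = [math.exp(-dbeta * (energy(x) - e_min)) for x in pop]
        pop = random.choices(pop, weights=weights, k=pop_size)
        # Equilibrate each replica with a few Metropolis moves at the new beta.
        new_pop = []
        for x in pop:
            for _ in range(sweeps_per_step):
                y = propose(x)
                dE = energy(y) - energy(x)
                if dE <= 0 or random.random() < math.exp(-beta * dE):
                    x = y
            new_pop.append(x)
        pop = new_pop
        beta_prev = beta
    return pop
```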

    Particle algorithms for optimization on binary spaces

    We discuss a unified approach to stochastic optimization of pseudo-Boolean objective functions based on particle methods, including the cross-entropy method and simulated annealing as special cases. We point out the need for auxiliary sampling distributions, that is, parametric families on binary spaces that can reproduce complex dependency structures, and illustrate their usefulness in our numerical experiments. We provide numerical evidence that particle-driven optimization algorithms based on parametric families yield superior results on strongly multi-modal optimization problems, while local search heuristics outperform them on easier problems.
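    As a concrete instance of the particle/cross-entropy family, the sketch below runs the cross-entropy method on {0,1}^n with the simplest parametric family, independent Bernoulli marginals. The names and parameters are illustrative; richer families that capture dependency structure, as the abstract argues for, would replace the product-Bernoulli sampler.

```python
import random

def cross_entropy_binary(objective, n_bits, pop_size=200, elite_frac=0.1,
                         n_iters=50, smoothing=0.7):
    """Cross-entropy method on {0,1}^n with an independent-Bernoulli family.

    objective(bits) -> float is maximized; `smoothing` damps the update of
    the Bernoulli marginals toward the elite-sample frequencies.
    """
    p = [0.5] * n_bits                      # marginal probability of each bit
    n_elite = max(1, int(elite_frac * pop_size))
    best, best_val = None, float("-inf")
    for _ in range(n_iters):
        samples = [[1 if random.random() < p[i] else 0 for i in range(n_bits)]
                   for _ in range(pop_size)]
        samples.sort(key=objective, reverse=True)
        if objective(samples[0]) > best_val:
            best, best_val = samples[0], objective(samples[0])
        elite = samples[:n_elite]
        # Move each marginal toward the elite frequency, with smoothing.
        for i in range(n_bits):
            freq = sum(x[i] for x in elite) / n_elite
            p[i] = smoothing * freq + (1 - smoothing) * p[i]
    return best, best_val

# Example: maximize the number of ones ("one-max") over 40 bits.
if __name__ == "__main__":
    print(cross_entropy_binary(sum, n_bits=40))
```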

    A comparison of granular trajectory-based algorithms for the location-routing problem with a heterogeneous fleet (LRPH)

    We consider the Location-Routing Problem with a Heterogeneous Fleet (LRPH), in which the goal is to determine which depots to open, assign customers to each open depot, and construct the corresponding routes that fulfill customer demand using a heterogeneous fleet. We compare granular approaches based on Simulated Annealing (GSA), Variable Neighborhood Search (GVNS), and a probabilistic Tabu Search (pGTS) for the LRPH. The proposed approaches restrict the search to a subset of the search space in which non-promising moves are discarded according to a granularity factor. The algorithms are compared experimentally on instances adapted from the literature, taking into account CPU time and the quality of the solutions obtained. The computational results show that GSA obtains high-quality solutions within short CPU times, improving on the results of the other proposed approaches.
    https://revistas.unal.edu.co/index.php/dyna/article/view/55533/5896
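    The "granular" idea, discarding non-promising moves according to a granularity factor before the metaheuristic ever evaluates them, can be illustrated with a small sketch. The helpers below (`granular_arc_filter`, `neighbors`, `objective`) are hypothetical placeholders for the problem-specific LRPH machinery, not the GSA implementation from the paper.

```python
import math
import random

def granular_arc_filter(cost, beta=1.2):
    """Keep only 'short' arcs: cost at most beta times the mean arc cost.

    `cost` maps (i, j) arcs to travel costs; `beta` plays the role of the
    granularity factor. Moves that would introduce a discarded arc are
    never evaluated, which is what makes the search granular.
    """
    mean_cost = sum(cost.values()) / len(cost)
    return {arc for arc, c in cost.items() if c <= beta * mean_cost}

def granular_sa(solution, neighbors, objective, allowed_arcs,
                t_start=10.0, t_end=0.01, alpha=0.95, moves_per_temp=50):
    """Simulated annealing restricted to moves whose new arcs are allowed.

    neighbors(solution, allowed_arcs) must yield candidate solutions that
    only use arcs from `allowed_arcs`; objective(solution) -> float is
    minimized (e.g. routing plus depot-opening cost).
    """
    best = current = solution
    t = t_start
    while t > t_end:
        for _ in range(moves_per_temp):
            cand = random.choice(list(neighbors(current, allowed_arcs)))
            delta = objective(cand) - objective(current)
            if delta < 0 or random.random() < math.exp(-delta / t):
                current = cand
                if objective(current) < objective(best):
                    best = current
        t *= alpha   # geometric cooling
    return best
```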

    Efficiency of quantum versus classical annealing in non-convex learning problems

    Quantum annealers aim to solve non-convex optimization problems by exploiting cooperative tunneling effects to escape local minima. The underlying idea is to design a classical energy function whose ground states are the optimal solutions of the original optimization problem and to add a controllable transverse quantum field that generates tunneling processes. A key challenge is to identify classes of non-convex optimization problems for which quantum annealing remains efficient while thermal annealing fails. We show that this happens for a wide class of problems that are central to machine learning: their energy landscapes are dominated by local minima that cause exponential slowdown of classical thermal annealers, while simulated quantum annealing converges efficiently to rare dense regions of optimal solutions. Comment: 31 pages, 10 figures.
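    For reference, the transverse-field construction the abstract describes is conventionally written as follows (generic textbook form, not the paper's specific encoding of learning problems); the transverse field Γ(t) is ramped from a large initial value down to zero during the anneal.

```latex
% Transverse-field annealing Hamiltonian: classical cost plus a tunneling term.
\begin{equation}
  H(t) \;=\;
  \underbrace{-\sum_{i<j} J_{ij}\,\sigma^{z}_{i}\sigma^{z}_{j}
              \;-\;\sum_{i} h_{i}\,\sigma^{z}_{i}}_{\text{classical cost}}
  \;-\;\Gamma(t)\sum_{i}\sigma^{x}_{i},
  \qquad \Gamma(0)\ \text{large},\quad \Gamma(t_{\mathrm{final}})=0 .
\end{equation}
```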