
    Time Evolution of an Infinite Projected Entangled Pair State: an Efficient Algorithm

    An infinite projected entangled pair state (iPEPS) is a tensor network ansatz to represent a quantum state on an infinite 2D lattice, whose accuracy is controlled by the bond dimension $D$. Its real, Lindbladian, or imaginary time evolution can be split into small time steps. Every time step generates a new iPEPS with an enlarged bond dimension $D' > D$, which is approximated by an iPEPS with the original $D$. In Phys. Rev. B 98, 045110 (2018) an algorithm was introduced to optimize the approximate iPEPS by directly maximizing its fidelity to the one with the enlarged bond dimension $D'$. In this work we implement a more efficient optimization employing a local estimator of the fidelity. For imaginary time evolution of a thermal state's purification, we also consider using unitary disentangling gates acting on ancillas to reduce the required $D$. We test the algorithm by simulating Lindbladian evolution and unitary evolution after a sudden quench of the transverse field $h_x$ in the 2D quantum Ising model. Furthermore, we simulate thermal states of this model and estimate the critical temperature with good accuracy: $0.1\%$ for $h_x = 2.5$ and $0.5\%$ for the more challenging case of $h_x = 2.9$, close to the quantum critical point at $h_x = 3.04438(2)$.
    Comment: published version, presentation improved
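    The abstract describes the generic pattern of such time-evolution schemes: apply a small-time-step gate, which enlarges a bond dimension, then truncate back to $D$. The sketch below (plain numpy, a single bond between two tensors) shows only the textbook SVD truncation that the paper's fidelity-based optimization refines; it is not the iPEPS algorithm of Phys. Rev. B 98, 045110 (2018), and the function name and index conventions are illustrative assumptions.

```python
import numpy as np

def apply_gate_and_truncate(A, B, gate, D):
    """
    A: (d, Dl, D0)   physical index, left bond, shared bond
    B: (d, D0, Dr)   physical index, shared bond, right bond
    gate: (d, d, d, d)  two-site gate with indices (i', j', i, j)
    Returns (A_new, B_new) with the shared bond truncated back to at most D.
    """
    d, Dl, _ = A.shape
    _, _, Dr = B.shape
    # Contract A and B over the shared bond: theta[i, j, l, r].
    theta = np.einsum('ils,jsr->ijlr', A, B)
    # Apply the two-site gate on the physical indices (enlarges the effective bond).
    theta = np.einsum('IJij,ijlr->IJlr', gate, theta)
    # Group (physical-left, left bond) vs (physical-right, right bond) and SVD.
    M = theta.transpose(0, 2, 1, 3).reshape(d * Dl, d * Dr)
    U, S, Vh = np.linalg.svd(M, full_matrices=False)
    # Keep only the D largest singular values: the truncation the paper optimizes.
    k = min(D, len(S))
    sqrtS = np.sqrt(S[:k])
    A_new = (U[:, :k] * sqrtS).reshape(d, Dl, k)
    B_new = (sqrtS[:, None] * Vh[:k]).reshape(k, d, Dr).transpose(1, 0, 2)
    return A_new, B_new
```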

    Population Synthesis via k-Nearest Neighbor Crossover Kernel

    The recent development of multi-agent simulations brings about a need for population synthesis: the task of reconstructing the entire population from a sampling survey of limited size (1% or so), supplying the initial conditions from which simulations begin. This paper presents a new kernel density estimator for this task. Our method is an analogue of the classical Breiman-Meisel-Purcell estimator, but employs novel techniques that harness the huge degrees of freedom required to model high-dimensional, nonlinearly correlated datasets: the crossover kernel, the k-nearest-neighbor restriction of the kernel construction set, and the bagging of kernels. The performance as a statistical estimator is examined through real and synthetic datasets. We provide an "optimization-free" parameter selection rule for our method, a theory of how our method works, and a computational cost analysis. To demonstrate its usefulness as a population synthesizer, our method is applied to a household synthesis task for an urban micro-simulator.
    Comment: 10 pages, 4 figures, IEEE International Conference on Data Mining (ICDM) 201
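    As a rough illustration of the k-nearest-neighbor crossover idea (not the paper's estimator: no bagging and no parameter selection rule), a synthetic record can be built by drawing a seed from the survey, restricting to its k nearest neighbors, and mixing attribute values within that neighborhood. The function name and defaults below are assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def synthesize(sample, n_new, k=10, seed=None):
    """sample: (n, d) array of surveyed records; returns (n_new, d) synthetic records."""
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k).fit(sample)
    # Draw seed records with replacement and look up their k nearest neighbors.
    seeds = sample[rng.integers(len(sample), size=n_new)]
    _, idx = nn.kneighbors(seeds)                  # (n_new, k) neighbor indices
    out = np.empty((n_new, sample.shape[1]))
    for a in range(sample.shape[1]):               # uniform crossover, one donor per attribute
        donors = idx[np.arange(n_new), rng.integers(k, size=n_new)]
        out[:, a] = sample[donors, a]
    return out
```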

    A Latent Source Model for Patch-Based Image Segmentation

    Despite the popularity and empirical success of patch-based nearest-neighbor and weighted majority voting approaches to medical image segmentation, there has been no theoretical development on when, why, and how well these nonparametric methods work. We bridge this gap by providing a theoretical performance guarantee for nearest-neighbor and weighted majority voting segmentation under a new probabilistic model for patch-based image segmentation. Our analysis relies on a new local property for how similar nearby patches are, and fuses existing lines of work on modeling natural imagery patches and theory for nonparametric classification. We use the model to derive a new patch-based segmentation algorithm that iterates between inferring local label patches and merging these local segmentations to produce a globally consistent image segmentation. Many existing patch-based algorithms arise as special cases of the new algorithm.
    Comment: International Conference on Medical Image Computing and Computer Assisted Interventions 201
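    For concreteness, the baseline voting rule analyzed in the abstract can be sketched as follows for a single pixel/voxel: compare the query patch against training patches with known center labels and weight each vote by a Gaussian similarity. This is only the standard weighted-voting step, not the paper's iterative algorithm; the function name and the binary-label assumption are illustrative.

```python
import numpy as np

def weighted_vote(query_patch, train_patches, train_labels, sigma=1.0):
    """query_patch: (p,), train_patches: (n, p), train_labels: (n,) with values in {0, 1}."""
    d2 = np.sum((train_patches - query_patch) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian similarity weights
    score = np.bincount(train_labels, weights=w, minlength=2)
    return int(np.argmax(score))                  # label with the largest weighted vote
```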

    The random link approximation for the Euclidean traveling salesman problem

    The traveling salesman problem (TSP) consists of finding the length of the shortest closed tour visiting $N$ "cities". We consider the Euclidean TSP where the cities are distributed randomly and independently in a $d$-dimensional unit hypercube. Working with periodic boundary conditions, and inspired by a remarkable universality in the $k$th-nearest-neighbor distribution, we find that the average optimum tour length scales as $\beta_E(d)\, N^{1-1/d}\, [1 + O(1/N)]$, with $\beta_E(2) = 0.7120 \pm 0.0002$ and $\beta_E(3) = 0.6979 \pm 0.0002$. We then derive analytical predictions for these quantities using the random link approximation, where the lengths between cities are taken as independent random variables. From the "cavity" equations developed by Krauth, Mezard and Parisi, we calculate the associated random link values $\beta_{RL}(d)$. For $d = 1, 2, 3$, numerical results show that the random link approximation is a good one, with a discrepancy of less than $2.1\%$ between $\beta_E(d)$ and $\beta_{RL}(d)$. For large $d$, we argue that the approximation is exact up to $O(1/d^2)$ and give a conjecture for $\beta_E(d)$, in terms of a power series in $1/d$, specifying both leading and subleading coefficients.
    Comment: 29 pages, 6 figures; formatting and typos corrected
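    The $N^{1-1/d}$ scaling in $d = 2$ can be checked numerically with the rough sketch below: random cities on the unit torus (periodic boundary conditions) and a simple nearest-neighbor tour. The heuristic only upper-bounds the optimum, so the rescaled length exceeds $\beta_E(2) = 0.7120$; the code is illustrative, not the paper's computation.

```python
import numpy as np

def torus_dist(a, b):
    """Euclidean distance on the unit torus (periodic boundary conditions)."""
    delta = np.abs(a - b)
    delta = np.minimum(delta, 1.0 - delta)
    return np.sqrt((delta ** 2).sum(axis=-1))

def nn_tour_length(cities):
    """Length of a greedy nearest-neighbor tour through all cities, closed into a loop."""
    unvisited = list(range(1, len(cities)))
    tour, length = [0], 0.0
    while unvisited:
        d = torus_dist(cities[tour[-1]], cities[unvisited])
        j = int(np.argmin(d))
        length += d[j]
        tour.append(unvisited.pop(j))
    return length + torus_dist(cities[tour[-1]], cities[0])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N = 1000
    cities = rng.random((N, 2))
    L = nn_tour_length(cities)
    print(f"L = {L:.2f},  L / N^(1/2) = {L / N**0.5:.4f}")  # heuristic, so above 0.7120
```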

    A note on the evaluation of generative models

    Probabilistic generative models can be used for compression, denoising, inpainting, texture synthesis, semi-supervised learning, unsupervised feature learning, and other tasks. Given this wide range of applications, it is not surprising that a lot of heterogeneity exists in the way these models are formulated, trained, and evaluated. As a consequence, direct comparison between models is often difficult. This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models, with a focus on image models. In particular, we show that three of the currently most commonly used criteria---average log-likelihood, Parzen window estimates, and visual fidelity of samples---are largely independent of each other when the data is high-dimensional. Good performance with respect to one criterion therefore need not imply good performance with respect to the other criteria. Our results show that extrapolation from one criterion to another is not warranted, and generative models need to be evaluated directly with respect to the application(s) they were intended for. In addition, we provide examples demonstrating that Parzen window estimates should generally be avoided.
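    For reference, the quantity the abstract cautions against is the Parzen-window log-likelihood estimate: fit an isotropic Gaussian kernel on samples drawn from the model and score held-out data with it. A minimal sketch is below; the function name and bandwidth parameter are illustrative, and in high dimensions this number can be far from the true log-likelihood, which is the paper's point.

```python
import numpy as np
from scipy.special import logsumexp

def parzen_log_likelihood(model_samples, test_data, sigma):
    """Mean log-density of test_data under a Gaussian Parzen window built on model_samples."""
    n, d = model_samples.shape
    # Pairwise squared distances between test points and model samples.
    d2 = ((test_data[:, None, :] - model_samples[None, :, :]) ** 2).sum(-1)
    log_kernel = -d2 / (2 * sigma ** 2) - 0.5 * d * np.log(2 * np.pi * sigma ** 2)
    return float(np.mean(logsumexp(log_kernel, axis=1) - np.log(n)))
```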