
    Strongly Universal Quantum Turing Machines and Invariance of Kolmogorov Complexity

    We show that there exists a universal quantum Turing machine (UQTM) that can simulate every other QTM until the other QTM has halted and then halt itself with probability one. This extends work by Bernstein and Vazirani, who have shown that there is a UQTM that can simulate every other QTM for an arbitrary, but preassigned, number of time steps. As a corollary to this result, we give a rigorous proof that quantum Kolmogorov complexity as defined by Berthiaume et al. is invariant, i.e. depends on the choice of the UQTM only up to an additive constant. Our proof is based on a new mathematical framework for QTMs, including a thorough analysis of their halting behaviour. We introduce the notion of mutually orthogonal halting spaces and show that the information encoded in an input qubit string can always be effectively decomposed into a classical and a quantum part.
    Comment: 18 pages, 1 figure. The operation R is now really a quantum operation (it was not before); corrected some typos, III.B more readable, Conjecture 3.15 is now a theorem.
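    The invariance statement in the corollary can be written out explicitly. A minimal sketch in LaTeX, where $QC_U$ denotes quantum Kolmogorov complexity in the sense of Berthiaume et al. relative to a UQTM $U$; the symbol names are ours, not taken from the paper:

    ```latex
    % Invariance up to an additive constant (informal restatement):
    % for any two universal QTMs U and V there is a constant c_{U,V} >= 0,
    % independent of the input, such that
    \[
      \bigl| QC_U(\rho) - QC_V(\rho) \bigr| \;\le\; c_{U,V}
      \quad \text{for every qubit string } \rho .
    \]
    ```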

    Sliced Wasserstein Distance for Learning Gaussian Mixture Models

    Gaussian mixture models (GMMs) are powerful parametric tools with many applications in machine learning and computer vision. Expectation maximization (EM) is the most popular algorithm for estimating the GMM parameters. However, EM guarantees only convergence to a stationary point of the log-likelihood function, which could be arbitrarily worse than the optimal solution. Inspired by the relationship between the negative log-likelihood function and the Kullback-Leibler (KL) divergence, we propose an alternative formulation for estimating the GMM parameters using the sliced Wasserstein distance, which gives rise to a new algorithm. Specifically, we propose minimizing the sliced Wasserstein distance between the mixture model and the data distribution with respect to the GMM parameters. In contrast to the KL divergence, the energy landscape of the sliced Wasserstein distance is better behaved and therefore more suitable for a stochastic gradient descent scheme to obtain the optimal GMM parameters. We show that our formulation results in parameter estimates that are more robust to random initializations and demonstrate that it can estimate high-dimensional data distributions more faithfully than the EM algorithm.
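    To make the idea concrete, here is a minimal, hedged sketch (not the authors' reference implementation) of fitting a diagonal-covariance GMM by stochastic gradient descent on a Monte Carlo estimate of the sliced Wasserstein distance, written in PyTorch. For simplicity the mixture weights are held uniform; the paper also estimates the weights, which needs an additional relaxation or update step. All function and variable names here are our own.

    ```python
    import torch

    def sliced_wasserstein(x, y, n_projections=64):
        """Average 1D 2-Wasserstein distance over random projections (x, y: [n, d])."""
        d = x.shape[1]
        theta = torch.randn(d, n_projections)
        theta = theta / theta.norm(dim=0, keepdim=True)   # random unit directions
        x_proj = torch.sort(x @ theta, dim=0).values      # sorted 1D projections
        y_proj = torch.sort(y @ theta, dim=0).values
        return ((x_proj - y_proj) ** 2).mean()

    def fit_gmm_sw(data, n_components=3, n_steps=2000, lr=1e-2):
        """Fit GMM means and scales (uniform weights assumed) by SGD on sliced Wasserstein."""
        n, d = data.shape
        means = torch.randn(n_components, d, requires_grad=True)
        log_stds = torch.zeros(n_components, d, requires_grad=True)
        opt = torch.optim.Adam([means, log_stds], lr=lr)
        for _ in range(n_steps):
            # Reparameterized samples from the (uniform-weight) mixture.
            comps = torch.randint(0, n_components, (n,))
            eps = torch.randn(n, d)
            samples = means[comps] + log_stds.exp()[comps] * eps
            loss = sliced_wasserstein(samples, data)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return means.detach(), log_stds.exp().detach()
    ```

    Usage would look like `means, stds = fit_gmm_sw(torch.tensor(X, dtype=torch.float32))` for a data matrix `X`; because the loss compares sorted projections of model samples and data, the gradient signal avoids some of the poor local optima that the log-likelihood landscape exhibits.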