4,480 research outputs found

    Classical computing, quantum computing, and Shor's factoring algorithm

    Get PDF
    This is an expository talk written for the Bourbaki Seminar. After a brief introduction, Section 1 discusses, in categorical language, the structure of classical deterministic computations. Basic notions of complexity, including the P/NP problem, are reviewed. Section 2 introduces the notion of quantum parallelism and explains the main issues of quantum computing. Section 3 is devoted to four quantum subroutines: initialization, quantum computing of classical Boolean functions, the quantum Fourier transform, and Grover's search algorithm. The central Section 4 explains Shor's factoring algorithm. Section 5 relates Kolmogorov complexity to the spectral properties of computable functions. The Appendix contributes to the prehistory of quantum computing. Comment: 27 pp., no figures.
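    As a hedged aside: the quantum part of Shor's algorithm (period finding via the quantum Fourier transform) plugs into a purely classical number-theoretic reduction from factoring to order finding. The Python sketch below illustrates only that reduction; the brute-force `multiplicative_order` stands in for the quantum subroutine and is an assumption for illustration, not the paper's construction.

    ```python
    import math
    import random

    def multiplicative_order(a, n):
        """Smallest r > 0 with a**r == 1 (mod n); brute force stands in
        for the quantum period-finding subroutine."""
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def factor_via_order_finding(n):
        """Classical reduction: extract a nontrivial factor of an odd
        composite n from an order-finding oracle."""
        while True:
            a = random.randrange(2, n)
            g = math.gcd(a, n)
            if g > 1:
                return g                  # lucky draw: a already shares a factor
            r = multiplicative_order(a, n)
            if r % 2:
                continue                  # need an even order
            y = pow(a, r // 2, n)
            if y == n - 1:
                continue                  # a**(r/2) == -1 (mod n) yields nothing
            return math.gcd(y - 1, n)     # y**2 == 1 with y != +-1: factor found

    print(factor_via_order_finding(15))   # prints 3 or 5
    ```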

    Numerically optimized Markovian coupling and mixing in one-dimensional maps

    Get PDF
    Algorithms are introduced that produce optimal Markovian couplings for large finite-state-space discrete-time Markov chains with sparse transition matrices; these algorithms are applied to some toy models motivated by fluid-dynamical mixing problems at high Péclet number. An alternative definition of the time-scale of a mixing process is suggested. Finally, these algorithms are applied to the problem of coupling diffusion processes in an acute-angled triangle, and some of the simplifications that occur in continuum coupling problems are discussed.
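    To make the notion concrete: a Markovian coupling runs two copies of the same chain jointly so that, once they meet, they stay together, and the meeting time bounds the mixing time. The Python sketch below uses the simplest such coupling (independent moves until the copies coincide) on a lazy random walk on a cycle; it is an illustrative assumption, not the paper's numerically optimized couplings.

    ```python
    import random

    def lazy_step(x, n):
        """One step of a lazy random walk on the cycle {0, ..., n-1}."""
        return (x + random.choice((-1, 0, 1))) % n

    def coupling_time(n, x0, y0):
        """Run two copies independently until they first coincide; after
        meeting they would move together, so this is the coupling time."""
        x, y, t = x0, y0, 0
        while x != y:
            x, y, t = lazy_step(x, n), lazy_step(y, n), t + 1
        return t

    # Monte Carlo estimate of the mean coupling time from opposite points;
    # by the coupling inequality this bounds the chain's mixing time-scale.
    samples = [coupling_time(16, 0, 8) for _ in range(2000)]
    print(sum(samples) / len(samples))
    ```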

    Measuring sets in infinite groups

    Full text link
    We are now witnessing a rapid growth of a new part of group theory which has become known as "statistical group theory". A typical result in this area would say something like "a random element (or a tuple of elements) of a group G has a property P with probability p". The validity of a statement like that does, of course, depend heavily on how one defines probability on groups or, equivalently, how one measures sets in a group (in particular, in a free group). We hope that the new approaches to defining probabilities on groups outlined in this paper create, among other things, an appropriate framework for the study of the "average-case" complexity of algorithms on groups. Comment: 22 pages.
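    As a hedged illustration of the "random element with property P" theme (not the measures constructed in the paper): one common convention samples a uniform reduced word of length n in the free group F_2 and estimates the probability of a property empirically. The property checked below, being cyclically reduced, is an arbitrary example.

    ```python
    import random

    LETTERS = ["a", "A", "b", "B"]        # generators and their inverses
    INVERSE = {"a": "A", "A": "a", "b": "B", "B": "b"}

    def random_reduced_word(n):
        """Uniform reduced word of length n in F_2: never follow a
        letter by its inverse."""
        word = [random.choice(LETTERS)]
        for _ in range(n - 1):
            word.append(random.choice(
                [l for l in LETTERS if l != INVERSE[word[-1]]]))
        return word

    def is_cyclically_reduced(word):
        return word[0] != INVERSE[word[-1]]

    trials = 50_000
    hits = sum(is_cyclically_reduced(random_reduced_word(12))
               for _ in range(trials))
    print(hits / trials)   # empirical Pr(P) under this particular measure
    ```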

    Instability, complexity and evolution

    No full text

    Kolmogorov Complexity in perspective. Part I: Information Theory and Randomness

    Get PDF
    We survey diverse approaches to the notion of information, from Shannon entropy to Kolmogorov complexity. Two of the main applications of Kolmogorov complexity are presented: randomness and classification. The survey is divided into two parts in the same volume. Part I is dedicated to information theory and the mathematical formalization of randomness based on Kolmogorov complexity. This last application goes back to the 1960s and 1970s with the work of Martin-Löf, Schnorr, Chaitin, and Levin, and has gained new impetus in recent years. Comment: 40 pages.
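    Kolmogorov complexity itself is uncomputable, but compressed length is the standard computable stand-in used in classification applications. The sketch below (an illustration, not the survey's formalism) contrasts a highly regular string with random bytes via zlib.

    ```python
    import os
    import zlib

    def compressed_length(data: bytes) -> int:
        """Length of zlib-compressed data: a crude computable
        upper-bound proxy for the Kolmogorov complexity K(data)."""
        return len(zlib.compress(data, level=9))

    structured = b"01" * 5000        # highly regular: short description exists
    random_ish = os.urandom(10000)   # incompressible with high probability

    print(compressed_length(structured))   # small: the pattern is captured
    print(compressed_length(random_ish))   # near 10000: nothing to exploit
    ```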

    How Many Subpopulations is Too Many? Exponential Lower Bounds for Inferring Population Histories

    Full text link
    Reconstruction of population histories is a central problem in population genetics. Existing coalescent-based methods, like the seminal work of Li and Durbin (Nature, 2011), attempt to solve this problem using sequence data but have no rigorous guarantees. Determining the amount of data needed to correctly reconstruct population histories is a major challenge. Using a variety of tools from information theory, the theory of extremal polynomials, and approximation theory, we prove new sharp information-theoretic lower bounds on the problem of reconstructing population structure -- the history of multiple subpopulations that merge, split, and change sizes over time. Our lower bounds are exponential in the number of subpopulations, even when reconstructing recent histories. We demonstrate the sharpness of our lower bounds by providing algorithms for distinguishing and learning population histories with matching dependence on the number of subpopulations. Along the way, and of independent interest, we essentially determine the optimal number of samples needed to learn an exponential mixture distribution information-theoretically, proving the upper bound by analyzing natural (and efficient) algorithms for this problem. Comment: 38 pages. Appeared in RECOMB 201
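    The mixture-learning bottleneck can be felt in a toy experiment. The sketch below (rates, weights, and sample sizes are arbitrary assumptions, not the paper's setup) distinguishes a two-component exponential mixture from the single exponential with the same mean using the second moment -- the moment-matching phenomenon that drives the exponential lower bounds.

    ```python
    import random

    def sample_mixture(n, rates=(0.5, 2.0)):
        """n draws from an equal-weight mixture: pick a rate uniformly,
        then draw Exp(rate)."""
        return [random.expovariate(random.choice(rates)) for _ in range(n)]

    n = 200_000
    mix = sample_mixture(n)
    mean_mix = sum(mix) / n
    single = [random.expovariate(1.0 / mean_mix) for _ in range(n)]  # same mean

    def second_moment(xs):
        return sum(x * x for x in xs) / len(xs)

    # By convexity the mixture's second moment is strictly larger; mixtures
    # that match more moments need far more samples to tell apart.
    print(second_moment(mix), second_moment(single))
    ```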