    Two Compact Incremental Prime Sieves

    A prime sieve is an algorithm that finds the primes up to a bound $n$. We say that a prime sieve is incremental if it can quickly determine whether $n+1$ is prime after having found all primes up to $n$. We say a sieve is compact if it uses roughly $\sqrt{n}$ space or less. In this paper we present two new results: (1) we describe the rolling sieve, a practical, incremental prime sieve that takes $O(n\log\log n)$ time and $O(\sqrt{n}\log n)$ bits of space, and (2) we show how to modify the sieve of Atkin and Bernstein (2004) to obtain a sieve that is simultaneously sublinear, compact, and incremental. The second result solves an open problem posed by Paul Pritchard in 1994.
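
    The rolling sieve itself is not reproduced here, but the incremental interface the abstract describes can be illustrated with the classic dictionary-based incremental Sieve of Eratosthenes (O'Neill's "genuine" sieve). A minimal Python sketch, which uses more space than the paper's $O(\sqrt{n}\log n)$ bits and is not the paper's algorithm:

        from itertools import count, takewhile

        def incremental_primes():
            # Dictionary-based incremental sieve (NOT the paper's rolling
            # sieve): each prime p waits in `witnesses` at the next composite
            # it must cross off, so deciding whether n is prime uses only
            # work already done for the numbers below n.
            witnesses = {}
            for n in count(2):
                if n not in witnesses:
                    yield n                     # no witness recorded: n is prime
                    witnesses[n * n] = [n]      # p first matters at p*p
                else:
                    for p in witnesses.pop(n):  # slide each witness p forward
                        witnesses.setdefault(n + p, []).append(p)

        # Example: the primes up to 30, produced one at a time.
        print(list(takewhile(lambda p: p <= 30, incremental_primes())))
        # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]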

    Sieving for pseudosquares and pseudocubes in parallel using doubly-focused enumeration and wheel datastructures

    We extend the known tables of pseudosquares and pseudocubes, discuss the implications of these new data for the conjectured distribution of pseudosquares and pseudocubes, and present the details of the algorithm used to do this work. Our algorithm is based on the space-saving wheel data structure combined with doubly-focused enumeration, run in parallel on a cluster supercomputer.
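
    The paper's space-saving wheel and its doubly-focused enumeration are not reproduced here; as a rough illustration of the wheel idea alone, the following Python sketch enumerates only candidates coprime to a small wheel modulus and skips the rest outright (the mod-30 wheel and all names are illustrative choices, not the paper's):

        from math import gcd

        def wheel_candidates(wheel_primes=(2, 3, 5)):
            # Toy wheel: precompute the residues ("spokes") coprime to the
            # wheel modulus (here 2*3*5 = 30), then visit only those spokes
            # in every block of 30, skipping 22 of every 30 integers.
            m = 1
            for p in wheel_primes:
                m *= p
            spokes = [r for r in range(1, m + 1) if gcd(r, m) == 1]
            base = 0
            while True:
                for r in spokes:
                    yield base + r
                base += m

        # Example: the first 15 survivors of the mod-30 wheel.
        gen = wheel_candidates()
        print([next(gen) for _ in range(15)])
        # [1, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 49, 53]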

    University Scholar Series: Jonathan Roth

    Roman Warfare. On April 13, 2011, Jonathan Roth spoke in the University Scholar Series hosted by Provost Gerry Selter at the Dr. Martin Luther King, Jr. Library. Jonathan Roth is a Professor in the History Department at SJSU. In this seminar, he examines the evolution of Roman warfare over its thousand-year history. He highlights the changing arms and equipment of the soldiers, unit organization and command structure, and the wars and battles of each era.

    Sampling arbitrary photon-added or photon-subtracted squeezed states is in the same complexity class as boson sampling

    Boson sampling is a simple model for non-universal linear optics quantum computing that uses far fewer physical resources than universal schemes. An input state comprising vacuum and single-photon states is fed through a Haar-random linear optics network and sampled at the output using coincidence photodetection. This problem is strongly believed to be classically hard to simulate. We show that an analogous procedure implements the same problem using photon-added or -subtracted squeezed vacuum states (with arbitrary squeezing), where sampling at the output is performed via parity measurements. The equivalence is exact and independent of the squeezing parameter, and hence provides an entire class of new quantum states of light in the same complexity class as boson sampling.
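
    The classical hardness rests on the output probabilities being permanents of submatrices of the network unitary. Below is a brute-force Python sketch of the standard rule $P(S) = |\mathrm{Perm}(U_{S,T})|^2$ for collision-free outcomes, using a Haar-random unitary and Ryser's formula; it runs in exponential time, is for illustration only, and is not the photon-added/subtracted protocol of the paper:

        import numpy as np

        def haar_unitary(m, rng):
            # Haar-random m x m unitary via QR of a complex Gaussian matrix,
            # with the phases of R's diagonal fixed.
            z = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
            q, r = np.linalg.qr(z)
            d = np.diag(r)
            return q * (d / np.abs(d))

        def permanent(a):
            # Ryser's formula, O(2^n poly(n)): the classically hard core.
            n = a.shape[0]
            total = 0j
            for subset in range(1, 1 << n):
                cols = [j for j in range(n) if (subset >> j) & 1]
                total += (-1) ** len(cols) * np.prod(a[:, cols].sum(axis=1))
            return (-1) ** n * total

        # Photons enter modes `ins`; the probability of detecting one photon
        # in each mode of `outs` is |Perm of the (outs, ins) submatrix|^2.
        rng = np.random.default_rng(0)
        m, ins, outs = 6, [0, 1, 2], [1, 3, 5]
        U = haar_unitary(m, rng)
        print(abs(permanent(U[np.ix_(outs, ins)])) ** 2)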

    Boson sampling with displaced single-photon Fock states versus single-photon-added coherent states---The quantum-classical divide and computational-complexity transitions in linear optics

    Boson sampling is a specific quantum computation that is likely hard to implement efficiently on a classical computer. The task is to sample the output photon-number distribution of a linear optical interferometric network fed with single-photon Fock-state inputs. A natural question is whether the sampling problems associated with other input quantum states of light (other than Fock states) and suitable output detection strategies are of similar computational complexity to boson sampling. We consider states that differ from Fock states by a displacement operation, namely displaced Fock states and photon-added coherent states. It is easy to show that the sampling problem associated with displaced single-photon Fock states and a displaced photon-number detection scheme is in the same complexity class as boson sampling for all values of displacement. On the other hand, we show that the sampling problem associated with single-photon-added coherent states and the same displaced photon-number detection scheme exhibits a computational-complexity transition: the problem is just as hard as boson sampling when the input coherent amplitudes are sufficiently small, but becomes classically simulatable in the limit of large coherent amplitudes.
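
    The transition has a simple heuristic explanation via a standard displacement identity from textbook quantum optics (not spelled out in the abstract): a single-photon-added coherent state is a displaced superposition of one photon and vacuum, with the vacuum weight growing with the coherent amplitude. In LaTeX:

        % Standard identity D^\dagger(\alpha)\, a^\dagger D(\alpha) = a^\dagger + \alpha^*
        % (textbook quantum optics, not taken from the abstract):
        a^\dagger \lvert \alpha \rangle
          = a^\dagger D(\alpha) \lvert 0 \rangle
          = D(\alpha) \bigl( a^\dagger + \alpha^* \bigr) \lvert 0 \rangle
          = D(\alpha) \bigl( \lvert 1 \rangle + \alpha^* \lvert 0 \rangle \bigr),
        \qquad
        \lVert a^\dagger \lvert \alpha \rangle \rVert = \sqrt{1 + |\alpha|^2}.

    For small $|\alpha|$ the state is dominated by the displaced single photon $D(\alpha)\lvert 1\rangle$ (hard, as above); for large $|\alpha|$ it approaches the coherent state $\lvert\alpha\rangle$, which is classically simulatable, consistent with the transition described.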