
    Analysis of circuit imperfections in BosonSampling

    BosonSampling is a problem where a quantum computer offers a provable speedup over classical computers. Its main feature is that it can be solved with current linear-optics technology, without the need for a full quantum computer. In this work, we investigate whether an experimentally realistic BosonSampler can really solve BosonSampling without any fault-tolerance mechanism. More precisely, we study how the unavoidable errors linked to an imperfect calibration of the optical elements affect the final result of the computation. We show that the fidelity of each optical element must be at least 1 - O(1/n^2), where n refers to the number of single photons in the scheme. Such a requirement seems to be achievable with state-of-the-art equipment.
    Comment: 20 pages, 7 figures, v2: new title, to appear in QI
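    To see where a 1 - O(1/n^2) per-element requirement can come from, here is a back-of-envelope sketch (not taken from the paper, whose analysis is more careful): assume the interferometer contains roughly n^2 optical elements with independent errors, so a per-element fidelity of 1 - eps gives a total circuit fidelity of about (1 - eps)^(n^2). Keeping that product above a constant forces eps = O(1/n^2).

    ```python
    def required_element_fidelity(n: int, target_total_fidelity: float = 0.9) -> float:
        """Per-element fidelity needed so that ~n^2 elements keep the whole
        circuit above `target_total_fidelity` (independent-error assumption;
        the n^2 element count is an assumption of this sketch, not the paper's)."""
        num_elements = n * n
        return target_total_fidelity ** (1.0 / num_elements)

    for n in (5, 10, 20, 50):
        f = required_element_fidelity(n)
        print(f"n = {n:3d}: per-element fidelity >= {f:.6f}  (infidelity ~ {1 - f:.2e})")
    ```

    With a 0.9 target the tolerable per-element infidelity shrinks like 1/n^2: about 4e-3 at n = 5 and about 4e-5 at n = 50.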

    Robustness of One-Dimensional Photonic Bandgaps Under Random Variations of Geometrical Parameters

    The supercell method is used to study how the photonic band gaps of one-dimensional photonic crystals change under random perturbations of the layer thicknesses. Results of both plane-wave and analytical band-structure and density-of-states calculations are presented, along with the transmission coefficient, as the level of randomness and the supercell size are increased. It is found that the higher band gaps disappear first as the randomness is gradually increased. The lowest band gap is found to persist up to a randomness level of 55 percent.
    Comment: Submitted to Physical Review B on April 8 200
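    As a rough illustration of the transmission side of such a study (this is not the authors' supercell plane-wave code), the sketch below computes the normal-incidence transmittance of a quarter-wave Bragg stack via the standard characteristic-matrix method, with each layer thickness perturbed by a uniform random fraction `sigma`. The stack parameters (indices 2.3 and 1.5, ten periods) are arbitrary choices for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def transmittance(wavelength, n_layers, d_layers, n_in=1.0, n_out=1.0):
        """Normal-incidence transmittance of a 1D multilayer via the
        characteristic (transfer) matrix method."""
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):
            delta = 2.0 * np.pi * n * d / wavelength
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        t = 2.0 * n_in / (n_in * M[0, 0] + n_in * n_out * M[0, 1]
                          + M[1, 0] + n_out * M[1, 1])
        return (n_out / n_in) * abs(t) ** 2

    # Quarter-wave stack at design wavelength lam0; thicknesses perturbed
    # by a uniform random fraction sigma (the "randomness level").
    lam0, n_hi, n_lo, periods = 1.0, 2.3, 1.5, 10
    base = [(n_hi, lam0 / (4 * n_hi)), (n_lo, lam0 / (4 * n_lo))] * periods

    for sigma in (0.0, 0.2, 0.5):
        ns = [n for n, _ in base]
        ds = [d * (1 + sigma * rng.uniform(-1, 1)) for _, d in base]
        print(f"randomness {sigma:.0%}: T at gap centre = {transmittance(lam0, ns, ds):.3e}")
    ```

    For the unperturbed stack the transmittance at the gap centre is exponentially small in the number of periods; increasing the randomness tends to raise it as the gap degrades, consistent with the behaviour reported in the abstract.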

    Two-Source Condensers with Low Error and Small Entropy Gap via Entropy-Resilient Functions

    In their seminal work, Chattopadhyay and Zuckerman (STOC'16) constructed a two-source extractor with error epsilon for n-bit sources having min-entropy polylog(n/epsilon). Unfortunately, the construction's running time is poly(n/epsilon), which means that with polynomial-time constructions, only polynomially small errors are possible. Our main result is a poly(n, log(1/epsilon))-time computable two-source condenser. For any k >= polylog(n/epsilon), our condenser transforms two independent (n,k)-sources into a distribution over m = k - O(log(1/epsilon)) bits that is epsilon-close to having min-entropy m - o(log(1/epsilon)), hence achieving an entropy gap of o(log(1/epsilon)).

    The bottleneck for obtaining low error in recent constructions of two-source extractors lies in the use of resilient functions. Informally, a resilient function receives input bits from r players and has the property that its output has small bias even if a bounded number of corrupted players feed adversarial inputs after seeing the inputs of the other players. The drawback of using resilient functions is that the error cannot be smaller than ln r / r. This, in turn, forces the running time of the construction to be polynomial in 1/epsilon.

    A key component in our construction is a variant of resilient functions which we call entropy-resilient functions. This variant can be seen as playing the above game for several rounds, each round outputting one bit. The goal of the corrupted players is to reduce, with as high a probability as they can, the min-entropy accumulated throughout the rounds. We show that while the bias decreases only polynomially with the number of players in a one-round game, the corrupted players' success probability decreases exponentially in the entropy gap they are attempting to incur in a repeated game.
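    The one-round game can be illustrated with majority as a (non-optimal) stand-in for a resilient function. The toy simulation below, which is not the paper's construction, estimates how much a small coalition that votes after seeing the honest coins can bias the output toward 1.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def majority_game_bias(r, q, trials=200_000):
        """One round of the resilient-function game with majority as the
        function: r - q honest players flip fair coins; the q corrupted
        players, having seen those coins, all vote 1 to push the output up."""
        honest_ones = rng.binomial(r - q, 0.5, size=trials)
        return np.mean(honest_ones + q > r / 2) - 0.5

    r = 1001
    for q in (0, 5, 15, 30):
        print(f"r={r}, corrupted={q:2d}: bias toward 1 ~ {majority_game_bias(r, q):+.3f}")
    ```

    Even the best resilient functions cannot do much better than this picture suggests: as the abstract notes, the one-round bias cannot drop below the order of ln r / r, which is exactly the barrier the multi-round, entropy-resilient variant is designed to circumvent.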