
    Bose-Einstein Correlations in e+e- -> W+W- at a Linear Collider

    We show that the most popular method to simulate Bose-Einstein (BE) interference effects predicts negligible correlations between identical pions originating from the hadronic decay of different W's produced in e+e- -> W+W- -> 4 jets at typical linear collider energies.
    Comment: 5 pages, 2 eps figures, Proceedings of the Workshop "Physics Studies for a Future Linear Collider", QCD Working Group, 2000, DESY 123

    Justifications in Constraint Handling Rules for Logical Retraction in Dynamic Algorithms

    We present a straightforward source-to-source transformation that introduces justifications for user-defined constraints into the CHR programming language. Then a scheme of two rules suffices to allow for logical retraction (deletion, removal) of constraints during computation. Without the need to recompute from scratch, these rules not only remove the constraint but also undo all consequences of the rule applications that involved the constraint. We prove a confluence result concerning the rule scheme and show its correctness. When algorithms are written in CHR, constraints represent both data and operations. CHR is already incremental by nature, i.e. constraints can be added at runtime. Logical retraction adds decrementality. Hence any algorithm written in CHR with justifications will become fully dynamic. Operations can be undone and data can be removed at any point in the computation without compromising the correctness of the result. We present two classical examples of dynamic algorithms, written in our prototype implementation of CHR with justifications that is available online: maintaining the minimum of a changing set of numbers, and shortest paths in a graph whose edges change.
    Comment: Pre-proceedings paper presented at the 27th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2017), Namur, Belgium, 10-12 October 2017 (arXiv:1708.07854)
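    A minimal sketch of the justification idea on the paper's first example, maintaining the minimum of a changing set of numbers. This is not the paper's CHR source-to-source transformation; the names (MinStore, tell, retract) are illustrative. Each constraint carries the set of input ids it depends on, and each rule firing records which justifications were involved in a removal, so retracting one input undoes exactly the consequences that involved it:

```python
class MinStore:
    """Toy constraint store: min(N) constraints with justifications."""

    def __init__(self):
        self.alive = {}    # id -> (value, justification set)
        self.killed = {}   # id -> (value, justifications, firing's justifications)
        self.next_id = 0

    def tell(self, value):
        """Add a min(value) constraint; an input is its own justification."""
        cid = self.next_id
        self.next_id += 1
        self.alive[cid] = (value, frozenset({cid}))
        self._propagate()
        return cid

    def _propagate(self):
        # CHR-style rule  min(A) \ min(B) <=> A <= B | true :
        # while two constraints are alive, remove the larger one,
        # recording the justifications of both constraints involved.
        while len(self.alive) > 1:
            items = sorted(self.alive.items(), key=lambda kv: kv[1][0])
            _, (_, keep_justs) = items[0]
            drop_id, (drop_val, drop_justs) = items[-1]
            self.killed[drop_id] = (drop_val, drop_justs, keep_justs | drop_justs)
            del self.alive[drop_id]

    def retract(self, justification):
        """Logical retraction: revive constraints removed by firings that
        involved the retracted input, delete everything depending on it,
        then re-propagate (no recomputation from scratch)."""
        for cid, (val, justs, firing) in list(self.killed.items()):
            if justification in firing:
                del self.killed[cid]
                if justification not in justs:
                    self.alive[cid] = (val, justs)
        for cid, (_, justs) in list(self.alive.items()):
            if justification in justs:
                del self.alive[cid]
        self._propagate()

    def minimum(self):
        return min(v for v, _ in self.alive.values())

store = MinStore()
a, b, c = store.tell(5), store.tell(3), store.tell(7)
print(store.minimum())  # 3
store.retract(b)        # remove the 3 at runtime
print(store.minimum())  # 5: the constraint killed earlier is revived
```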

    Quantum Weakly Nondeterministic Communication Complexity

    We study the weakest model of quantum nondeterminism in which a classical proof has to be checked with probability one by a quantum protocol. We show the first separation between classical nondeterministic communication complexity and this model of quantum nondeterministic communication complexity for a total function. This separation is quadratic.
    Comment: 12 pages. v3: minor correction

    Tight Bounds for Quantum Phase Estimation and Related Problems


    Maps of zeroes of the grand canonical partition function in a statistical model of high energy collisions

    Theorems on zeroes of the truncated generating function in the complex plane are reviewed. When examined in the framework of a statistical model of high energy collisions based on the negative binomial (Pascal) multiplicity distribution, these results lead to maps of zeroes of the grand canonical partition function which allow one to interpret in a novel way different classes of events in pp collisions at LHC c.m. energies.
    Comment: 17 pages, figures (ps included); added references, some figures enlarged. To appear in J. Phys.
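    The computation behind such maps can be sketched directly: build the generating function of the negative binomial (Pascal) multiplicity distribution, truncate it at some maximum multiplicity N, and find the complex zeroes of the resulting polynomial numerically. This is not the paper's code; the parameters nbar (mean multiplicity), k and the cut-off N below are illustrative.

```python
import numpy as np
from scipy.special import gammaln

def nbd(n, nbar, k):
    """Negative binomial (Pascal) multiplicity distribution P_n."""
    log_p = (gammaln(n + k) - gammaln(k) - gammaln(n + 1)
             + n * np.log(nbar / (nbar + k))
             + k * np.log(k / (nbar + k)))
    return np.exp(log_p)

def truncated_gf_zeroes(nbar, k, N):
    """Complex zeroes of G(z) = sum_{n=0}^{N} P_n z^n."""
    coeffs = nbd(np.arange(N + 1), nbar, k)
    # np.roots expects coefficients ordered from highest degree down.
    return np.roots(coeffs[::-1])

# Illustrative parameters; plotting the zeroes gives the "map".
zeroes = truncated_gf_zeroes(nbar=25.0, k=2.0, N=60)
print(zeroes[:5])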

    Lattice Boltzmann models for non-ideal fluids with arrested phase-separation

    The effects of mid-range repulsion in Lattice Boltzmann models on the coalescence/breakup behaviour of single-component, non-ideal fluids are investigated. It is found that mid-range repulsive interactions allow the formation of spray-like, multi-droplet configurations, with droplet size directly related to the strength of the repulsive interaction. The simulations show that a mere ten percent of mid-range repulsive pseudo-energy can boost the surface/volume ratio of the phase-separated fluid by nearly two orders of magnitude. Drawing upon a formal analogy with magnetic Ising systems, a pseudo-potential energy is defined, which is found to behave like a quasi-conserved quantity for most of the time evolution. This offers a useful quantitative indicator of the stability of the various configurations, thus helping the task of their interpretation and classification. The present approach appears to be a promising tool for the computational modelling of complex flow phenomena, such as atomization, spray formation, micro-emulsions, break-up phenomena and possibly glassy-like systems as well.
    Comment: 12 pages, 9 figures
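    The competition the abstract describes can be sketched as a Shan-Chen-style pseudo-potential force with two coupling belts: a short-range attractive one (G1 < 0) and a mid-range repulsive one (G2 > 0). This is a hedged illustration rather than the authors' model; the pseudo-potential form is the standard Shan-Chen choice, but the belt weights and coupling values below are illustrative, not the isotropy-tuned values used in the Lattice Boltzmann literature.

```python
import numpy as np

def psi(rho, rho0=1.0):
    # Standard Shan-Chen pseudo-potential form.
    return rho0 * (1.0 - np.exp(-rho / rho0))

def shifted(field, dx, dy):
    # Field evaluated at x + (dx, dy), periodic boundaries.
    return np.roll(np.roll(field, -dx, axis=0), -dy, axis=1)

def two_belt_force(rho, G1=-5.0, G2=0.5):
    """F(x) = -psi(x) * sum over belts of G_b * w_i * psi(x + e_i) * e_i."""
    p = psi(rho)
    fx = np.zeros_like(p)
    fy = np.zeros_like(p)
    belts = [
        # nearest belt: short-range attraction (illustrative weights)
        (G1, 1.0 / 9.0, [(1, 0), (-1, 0), (0, 1), (0, -1),
                         (1, 1), (-1, 1), (1, -1), (-1, -1)]),
        # second belt at distance 2: mid-range repulsion
        (G2, 1.0 / 36.0, [(2, 0), (-2, 0), (0, 2), (0, -2),
                          (2, 2), (-2, 2), (2, -2), (-2, -2)]),
    ]
    for G, w, vecs in belts:
        for dx, dy in vecs:
            pn = shifted(p, dx, dy)
            fx += -G * w * p * pn * dx
            fy += -G * w * p * pn * dy
    return fx, fy

# Usage on a perturbed density field.
rho = 1.0 + 0.1 * np.random.default_rng(0).random((64, 64))
fx, fy = two_belt_force(rho)
```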

    Optimal quantum sample complexity of learning algorithms

    In learning theory, the VC dimension of a concept class C is the most common way to measure its “richness.” A fundamental result says that the number of examples needed to learn an unknown target concept c ∈ C under an unknown distribution D is tightly determined by the VC dimension d of the concept class C. Specifically, in the PAC model, Θ(d/ϵ + log(1/δ)/ϵ) examples are necessary and sufficient for a learner to output, with probability 1−δ, a hypothesis h that is ϵ-close to the target concept c (measured under D). In the related agnostic model, where the samples need not come from a c ∈ C, we know that Θ(d/ϵ² + log(1/δ)/ϵ²) examples are necessary and sufficient to output a hypothesis h ∈ C whose error is at most ϵ worse than the error of the best concept in C. Here we analyze quantum sample complexity, where each example is a coherent quantum state. This model was introduced by Bshouty and Jackson (1999), who showed that quantum examples are more powerful than classical examples in some fixed-distribution settings. However, Atıcı and Servedio (2005), improved by Zhang (2010), showed that in the PAC setting (where the learner has to succeed for every distribution), quantum examples cannot be much more powerful: the required number of quantum examples is Ω(d^(1−η)/ϵ + d + log(1/δ)/ϵ) for arbitrarily small constant η > 0. Our main result is that quantum and classical sample complexity are in fact equal up to constant factors in both the PAC and agnostic models. We give two proof approaches. The first is a fairly simple information-theoretic argument that yields the above two classical bounds and yields the same bounds for quantum sample complexity up to a log(d/ϵ) factor. We then give a second approach that avoids the log-factor loss, based on analyzing the behavior of the “Pretty Good Measurement” on the quantum state-identification problems that correspond to learning. This shows that classical and quantum sample complexity are equal up to constant factors for every concept class C.
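    The two classical bounds above, which by the paper's main result are also the quantum bounds up to constant factors, can be evaluated numerically. A sketch with all constants hidden by the Θ(...) notation set to 1:

```python
import math

def pac_samples(d, eps, delta):
    # Θ(d/ϵ + log(1/δ)/ϵ): realizable PAC model, constants set to 1.
    return math.ceil(d / eps + math.log(1 / delta) / eps)

def agnostic_samples(d, eps, delta):
    # Θ(d/ϵ² + log(1/δ)/ϵ²): agnostic model, constants set to 1.
    return math.ceil((d + math.log(1 / delta)) / eps**2)

# Example: VC dimension 10, accuracy ϵ = 0.05, confidence 1 − δ = 0.95.
print(pac_samples(10, 0.05, 0.05))       # 260
print(agnostic_samples(10, 0.05, 0.05))  # 5199
```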

    Exponential Separation of Quantum and Classical Online Space Complexity

    Although quantum algorithms realizing an exponential time speed-up over the best known classical algorithms exist, no quantum algorithm is known that computes using fewer space resources than classical algorithms. In this paper, we study, for the first time explicitly, space-bounded quantum algorithms for computational problems where the input is given not as a whole, but bit by bit. We show that such problems exist that a quantum computer can solve using exponentially less work space than a classical computer. More precisely, we introduce a very natural and simple model of a space-bounded quantum online machine and prove an exponential separation of classical and quantum online space complexity, in the bounded-error setting and for a total language. The language we consider is inspired by a communication problem (the set intersection function) that Buhrman, Cleve and Wigderson used to show an almost quadratic separation of quantum and classical bounded-error communication complexity. We prove that, in the framework of online space complexity, the separation becomes exponential.
    Comment: 13 pages. v3: minor change
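    The set intersection function has an obvious classical online baseline that makes the space cost concrete: read x bit by bit, store it, then check y against it, using n bits of work space. The sketch below shows only this baseline (the paper's actual language differs in detail); the paper's point is that for a language inspired by this problem, a quantum online machine needs exponentially less work space.

```python
def online_intersection(bits, n):
    """Online set intersection: input arrives as x_1..x_n then y_1..y_n;
    accept iff some index i has x_i = y_i = 1."""
    x = []                       # storing x is the n-bit classical cost
    for t, b in enumerate(bits):
        if t < n:
            x.append(b)          # first half of the stream: record x
        elif b == 1 and x[t - n] == 1:
            return True          # second half: compare y against stored x
    return False

# x = 0110, y = 0100: the sets intersect at index 1.
print(online_intersection([0, 1, 1, 0, 0, 1, 0, 0], n=4))  # True
```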