
    Spatially Averaged Quantum Inequalities Do Not Exist in Four-Dimensional Spacetime

    We construct a particular class of quantum states for a massless, minimally coupled free scalar field which take the form of a superposition of the vacuum and multi-mode two-particle states. These states can exhibit local negative energy densities. Furthermore, they can produce an arbitrarily large amount of negative energy in a given region of space at a fixed time. This class of states thus provides an explicit counterexample to the existence of a spatially averaged quantum inequality in four-dimensional spacetime.
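
    For orientation, a minimal sketch (a single mode is shown for brevity; the paper's actual construction uses multi-mode two-particle states) of why a vacuum-plus-two-particle superposition can have a locally negative energy density:

        % Normalized superposition of the vacuum and a two-particle state
        \[
          |\psi\rangle = \frac{1}{\sqrt{1+|\varepsilon|^{2}}}\bigl(|0\rangle + \varepsilon\,|2\rangle\bigr),
          \qquad
          \langle\psi|{:}T_{00}(x){:}|\psi\rangle
          = \frac{|\varepsilon|^{2}\,\langle 2|{:}T_{00}{:}|2\rangle
                  + 2\,\mathrm{Re}\bigl[\varepsilon\,\langle 0|{:}T_{00}{:}|2\rangle\bigr]}
                 {1+|\varepsilon|^{2}} .
        \]
        % For small |epsilon| the oscillatory cross term dominates the positive
        % O(|epsilon|^2) term, so the normal-ordered energy density can dip below zero.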

    Quantum inequalities for the free Rarita-Schwinger fields in flat spacetime

    Using the methods developed by Fewster and colleagues, we derive a quantum inequality for the free massive spin-3/2 Rarita-Schwinger fields in four-dimensional Minkowski spacetime. Our quantum inequality bound for the Rarita-Schwinger fields is weaker, by a factor of 2, than that for the spin-1/2 Dirac fields. This fact, together with the quantum inequalities obtained by other authors for fields of integer spin (bosonic fields) using similar methods, leads us to conjecture that, in flat spacetime, separately for bosonic and fermionic fields, the quantum inequality bound gets weaker as the number of degrees of freedom of the field increases. A plausible physical reason is that the more degrees of freedom a field has, the more freedom there is to create negative energy, and therefore the weaker the quantum inequality bound.
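
    The bounds in question are worldline quantum inequalities of the Fewster type. Schematically (a sketch of the general structure, not the paper's exact normalization), for a smooth, compactly supported sampling function f and a Hadamard state omega they read:

        \[
          \int_{-\infty}^{\infty} dt\, f(t)^{2}\,
          \bigl\langle {:}T_{00}(t,\mathbf{0}){:}\bigr\rangle_{\omega}
          \;\ge\; -\,B_{\mathrm{field}}[f],
        \]
        % with a field-dependent functional B on the right-hand side; the factor-of-2
        % statement in the abstract amounts to B_RS[f] = 2 B_Dirac[f] for every such f.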

    The Quantum Interest Conjecture

    Although quantum field theory allows local negative energy densities and fluxes, it also places severe restrictions upon the magnitude and extent of the negative energy. The restrictions take the form of quantum inequalities. These inequalities imply that a pulse of negative energy must not only be followed by a compensating pulse of positive energy, but also that the temporal separation between the pulses is inversely proportional to their amplitude. In an earlier paper we conjectured that there is a further constraint upon a negative and positive energy delta-function pulse pair. This conjecture (the quantum interest conjecture) states that a positive energy pulse must overcompensate the negative energy pulse by an amount which is a monotonically increasing function of the pulse separation. In the present paper we prove the conjecture for massless quantized scalar fields in two- and four-dimensional flat spacetime, and show that it is implied by the quantum inequalities.
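
    For reference, the prototypical quantum inequality of this kind, for the massless scalar field in four-dimensional Minkowski spacetime sampled with a Lorentzian function of width t_0, is the Ford-Roman bound (quoted here as background, in a commonly used normalization):

        \[
          \hat{\rho} \;\equiv\; \frac{t_{0}}{\pi}\int_{-\infty}^{\infty}
          \frac{\langle T_{00}(t,\mathbf{0})\rangle\, dt}{t^{2}+t_{0}^{2}}
          \;\ge\; -\,\frac{3}{32\pi^{2} t_{0}^{4}} .
        \]
        % Quantum interest, in this language: if a delta-function pulse of energy -|E|
        % at t = 0 is followed by +(1+epsilon)|E| at t = T, the required overcompensation
        % epsilon grows monotonically with the separation T.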

    Feasible combinatorial matrix theory

    We show that the well-known König Min-Max Theorem (KMM), a fundamental result in combinatorial matrix theory, can be proven in the first-order theory LA with induction restricted to Σ_1^B formulas. This is an improvement over the standard textbook proof of KMM, which requires Π_2^B induction and hence does not yield feasible proofs, while our new approach does. LA is a weak theory that essentially captures the ring properties of matrices; however, equipped with Σ_1^B induction, LA is capable of proving KMM, as well as a host of other combinatorial properties such as Menger's, Hall's and Dilworth's theorems. Therefore, our result formalizes Min-Max reasoning within a feasible framework.
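
    A concrete numerical illustration of the KMM equality in its matrix form may help; the following self-contained sketch (a hypothetical example, not taken from the paper) computes a maximum matching with Kuhn's augmenting-path algorithm and a minimum line cover via König's construction, and checks that the two sizes agree. It says nothing about proof complexity; it only demonstrates the combinatorial statement being formalized.

        # König Min-Max in matrix form: for a 0-1 matrix, the maximum number of 1s
        # with no two in the same row or column equals the minimum number of rows
        # and columns needed to cover all 1s.

        def max_matching(A):
            """Kuhn's augmenting-path algorithm; returns (size, match_col)."""
            n_rows, n_cols = len(A), len(A[0])
            match_col = [None] * n_cols  # column -> row it is matched to

            def try_row(r, seen):
                for c in range(n_cols):
                    if A[r][c] == 1 and not seen[c]:
                        seen[c] = True
                        if match_col[c] is None or try_row(match_col[c], seen):
                            match_col[c] = r
                            return True
                return False

            size = 0
            for r in range(n_rows):
                if try_row(r, [False] * n_cols):
                    size += 1
            return size, match_col

        def min_line_cover(A, match_col):
            """König's construction: alternating reachability from unmatched rows."""
            n_rows, n_cols = len(A), len(A[0])
            matched_rows = {r for r in match_col if r is not None}
            reach_rows = set(r for r in range(n_rows) if r not in matched_rows)
            reach_cols = set()
            frontier = list(reach_rows)
            while frontier:
                r = frontier.pop()
                for c in range(n_cols):
                    if A[r][c] == 1 and c not in reach_cols:
                        reach_cols.add(c)
                        r2 = match_col[c]
                        if r2 is not None and r2 not in reach_rows:
                            reach_rows.add(r2)
                            frontier.append(r2)
            # Cover = rows NOT reached plus columns reached
            cover_rows = [r for r in range(n_rows) if r not in reach_rows]
            cover_cols = sorted(reach_cols)
            return cover_rows, cover_cols

        if __name__ == "__main__":
            A = [
                [1, 1, 0, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 0],
                [0, 0, 1, 1],
            ]
            m, match_col = max_matching(A)
            rows, cols = min_line_cover(A, match_col)
            print("max matching size:", m)
            print("min cover:", rows, cols, "size", len(rows) + len(cols))
            assert m == len(rows) + len(cols)  # the KMM equality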

    The Mathematical Universe

    I explore physics implications of the External Reality Hypothesis (ERH) that there exists an external physical reality completely independent of us humans. I argue that with a sufficiently broad definition of mathematics, it implies the Mathematical Universe Hypothesis (MUH) that our physical world is an abstract mathematical structure. I discuss various implications of the ERH and MUH, ranging from standard physics topics like symmetries, irreducible representations, units, free parameters, randomness and initial conditions to broader issues like consciousness, parallel universes and Gödel incompleteness. I hypothesize that only computable and decidable (in Gödel's sense) structures exist, which alleviates the cosmological measure problem and helps explain why our physical laws appear so simple. I also comment on the intimate relation between mathematical structures, computations, simulations and physical systems. (More details at http://space.mit.edu/home/tegmark/toe.htm.)

    Environment-Induced Decoherence and the Transition From Quantum to Classical

    We study the dynamics of open quantum systems, paying special attention to those aspects of their evolution which are relevant to the transition from quantum to classical. We begin with a discussion of the conditional dynamics of simple systems. The resulting models are straightforward but suffice to illustrate the basic physical ideas behind quantum measurements and decoherence. To discuss decoherence and environment-induced superselection (einselection) in a more general setting, we sketch perturbative as well as exact derivations of several master equations valid for various systems. Using these equations we study einselection, employing the general strategy of the predictability sieve. Assumptions that are usually made in the discussion of decoherence are critically reexamined along with the "standard lore" to which they lead. Restoration of quantum-classical correspondence in systems that are classically chaotic is discussed. It is shown that the dynamical second law can be traced to the same phenomena that allow for the restoration of the correspondence principle in decohering chaotic systems (where it is otherwise lost on a very short time-scale). Quantum error correction is discussed as an example of an anti-decoherence strategy. Implications of decoherence and einselection for the interpretation of quantum theory are briefly pointed out.
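
    As a concrete point of reference for the master equations mentioned here, a standard high-temperature (Caldeira-Leggett-type) form for a particle of mass m coupled to a thermal bath with relaxation rate gamma is the textbook equation below; it is an illustrative example, not necessarily the exact equations derived in the text:

        \[
          \frac{\partial\rho}{\partial t}
          = -\frac{i}{\hbar}\,[H,\rho]
            -\frac{i\gamma}{\hbar}\,[x,\{p,\rho\}]
            -\frac{2m\gamma k_{B}T}{\hbar^{2}}\,[x,[x,\rho]] .
        \]
        % The double-commutator term suppresses spatial coherences over a distance
        % Delta x at a rate of order gamma (Delta x / lambda_th)^2, with the thermal
        % de Broglie length lambda_th = hbar / sqrt(2 m k_B T); this is the einselection
        % of approximately localized pointer states.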

    Search for a W' boson decaying to a bottom quark and a top quark in pp collisions at sqrt(s) = 7 TeV

    Results are presented from a search for a W' boson using a dataset corresponding to 5.0 inverse femtobarns of integrated luminosity collected during 2011 by the CMS experiment at the LHC in pp collisions at sqrt(s)=7 TeV. The W' boson is modeled as a heavy W boson, but different scenarios for the couplings to fermions are considered, involving both left-handed and right-handed chiral projections of the fermions, as well as an arbitrary mixture of the two. The search is performed in the decay channel W' to t b, leading to a final-state signature with a single lepton (e, mu), missing transverse energy, and jets, at least one of which is tagged as a b-jet. A W' boson that couples to fermions with the same coupling constant as the W, but to the right-handed rather than left-handed chiral projections, is excluded for masses below 1.85 TeV at the 95% confidence level. For the first time using LHC data, constraints are placed on the W' gauge coupling for a set of left- and right-handed coupling combinations. These results represent a significant improvement over previously published limits.

    Search for the standard model Higgs boson decaying into two photons in pp collisions at sqrt(s)=7 TeV

    A search for a Higgs boson decaying into two photons is described. The analysis is performed using a dataset recorded by the CMS experiment at the LHC from pp collisions at a centre-of-mass energy of 7 TeV, which corresponds to an integrated luminosity of 4.8 inverse femtobarns. Limits are set on the cross section of the standard model Higgs boson decaying to two photons. The expected exclusion limit at 95% confidence level is between 1.4 and 2.4 times the standard model cross section in the mass range between 110 and 150 GeV. The analysis of the data excludes, at 95% confidence level, the standard model Higgs boson decaying into two photons in the mass range 128 to 132 GeV. The largest excess of events above the expected standard model background is observed for a Higgs boson mass hypothesis of 124 GeV with a local significance of 3.1 sigma. The global significance of observing an excess with a local significance greater than 3.1 sigma anywhere in the search range 110-150 GeV is estimated to be 1.8 sigma. More data are required to ascertain the origin of this excess.
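
    The quoted local and global significances follow the usual one-sided Gaussian convention; the short sketch below (an illustration added here, not part of the CMS analysis) converts them to p-values and shows the effective look-elsewhere trials factor they imply:

        # Convert the quoted significances (3.1 sigma local, 1.8 sigma global) to
        # one-sided tail probabilities and estimate the implied trials factor.
        from scipy.stats import norm

        def sigma_to_pvalue(z):
            """One-sided tail probability corresponding to a z-sigma excess."""
            return norm.sf(z)

        p_local = sigma_to_pvalue(3.1)
        p_global = sigma_to_pvalue(1.8)
        print(f"local  p-value ~ {p_local:.2e}")
        print(f"global p-value ~ {p_global:.2e}")
        # Under the simple approximation p_global ~ N_trials * p_local, the quoted
        # numbers imply an effective trials factor of roughly p_global / p_local.
        print(f"implied effective trials factor ~ {p_global / p_local:.0f}")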

    Measurement of the Lambda(b) cross section and the anti-Lambda(b) to Lambda(b) ratio with Lambda(b) to J/Psi Lambda decays in pp collisions at sqrt(s) = 7 TeV

    The Lambda(b) differential production cross section and the cross section ratio anti-Lambda(b)/Lambda(b) are measured as functions of transverse momentum pt(Lambda(b)) and rapidity abs(y(Lambda(b))) in pp collisions at sqrt(s) = 7 TeV using data collected by the CMS experiment at the LHC. The measurements are based on Lambda(b) decays reconstructed in the exclusive final state J/Psi Lambda, with the subsequent decays J/Psi to an opposite-sign muon pair and Lambda to proton pion, using a data sample corresponding to an integrated luminosity of 1.9 inverse femtobarns. The product of the cross section and the branching ratio for Lambda(b) to J/Psi Lambda versus pt(Lambda(b)) falls faster than that of b mesons. The measured value of the cross section times the branching ratio for pt(Lambda(b)) > 10 GeV and abs(y(Lambda(b))) < 2.0 is 1.06 +/- 0.06 +/- 0.12 nb, and the integrated cross section ratio for anti-Lambda(b)/Lambda(b) is 1.02 +/- 0.07 +/- 0.09, where the uncertainties are statistical and systematic, respectively.
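
    As a small worked example of how the two quoted uncertainties are conventionally read, the sketch below combines the statistical and systematic components in quadrature (assuming they are uncorrelated) and checks how far the measured anti-Lambda(b)/Lambda(b) ratio lies from unity; this check is an illustration added here, not a statement from the paper:

        # Combine the quoted statistical and systematic uncertainties on the
        # anti-Lambda(b)/Lambda(b) ratio and compare the result with unity.
        import math

        ratio, stat, syst = 1.02, 0.07, 0.09
        total = math.hypot(stat, syst)     # quadrature sum of the two uncertainties
        pull = (ratio - 1.0) / total       # distance from particle/antiparticle symmetry
        print(f"ratio = {ratio:.2f} +/- {total:.2f} (stat and syst in quadrature)")
        print(f"deviation from unity: {pull:.2f} standard deviations")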