
    Applications of a New Proposal for Solving the "Problem of Time" to Some Simple Quantum Cosmological Models

    We apply a recent proposal for defining states and observables in quantum gravity to simple models. First, we consider a Klein-Gordon particle in an external potential in Minkowski space and compare our proposal to the theory obtained by deparametrizing with respect to a time slicing prior to quantization. We show explicitly that the dynamics of the deparametrization approach depends on the time slicing. Our proposal yields a dynamics independent of the choice of time slicing at intermediate times, but after the potential is turned off, the dynamics does not return to the free-particle dynamics. Next we apply our proposal to the closed Robertson-Walker quantum cosmology with a massless scalar field, with the size of the universe as our time variable, so the only dynamical variable is the scalar field. We show that the resulting theory has semi-classical behavior up to the classical turning point from expansion to contraction, i.e., given a classical solution which expands for much longer than the Planck time, there is a quantum state whose dynamical evolution closely approximates this classical solution during the expansion. However, when the "time" gets larger than the classical maximum, the scalar field becomes "frozen" at its value at the maximum expansion. We also obtain similar results in the Taub model. In an Appendix we derive the form of the Wheeler-DeWitt equation for the Bianchi models by performing a proper quantum reduction of the momentum constraints; this equation differs from the usual one obtained by solving the momentum constraints classically, prior to quantization. Comment: 30 pages, LaTeX, 3 figures (postscript file or hard copy) available upon request, BUTP-94/1

    Threshold Bound States

    Relationships between the coupling constant and the binding energy of threshold bound states are obtained in a simple manner from an iterative algorithm for solving the eigenvalue problem. The absence of threshold bound states in higher dimensions can be easily understood.
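    The paper's iterative algorithm is not spelled out in this abstract, but the dimensional dependence it mentions can be illustrated with a standard textbook case: the s-wave bound state of a 3D square well appears only above a finite critical coupling, unlike in one dimension, where any attractive well binds. A minimal bisection sketch in illustrative units (ħ = 2m = a = 1; the function name is our own):

    ```python
    import math

    def binding_energy(v0, tol=1e-10):
        """Binding energy B of the s-wave bound state of a 3D square well
        of depth v0 (units hbar = 2m = a = 1), found by bisection on the
        matching condition q*cot(q) = -kappa, with q = sqrt(v0 - B) and
        kappa = sqrt(B).  Assumes v0 < pi**2 (at most one bound state).
        Returns None below the critical depth: in 3D a finite coupling is
        needed before a bound state exists."""
        vc = (math.pi / 2) ** 2          # critical depth, ~2.467
        if v0 <= vc:
            return None
        def g(b):                        # sign change brackets the root
            q = math.sqrt(v0 - b)
            return q / math.tan(q) + math.sqrt(b)
        lo, hi = 1e-12, v0 - 1e-12       # g(lo) < 0 < g(hi)
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if g(mid) < 0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    ```

    As the depth is lowered toward the critical value the binding energy goes smoothly to zero, which is the kind of coupling-constant/binding-energy relationship the abstract refers to.
    
    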

    Leptogenesis and Neutrino Oscillations Within A Predictive G(224)/SO(10)-Framework

    A framework based on an effective symmetry that is either G(224) = SU(2)_L x SU(2)_R x SU(4)^c or SO(10) was proposed a few years ago that successfully describes the masses and mixings of all fermions, including neutrinos, with seven predictions, in good accord with the data. Baryogenesis via leptogenesis is considered within this framework by allowing for natural phases (~ 1/20-1/2) in the entries of the Dirac and Majorana mass matrices. It is shown that the framework leads quite naturally, for both thermal and non-thermal leptogenesis, to the desired magnitude for the baryon asymmetry. This result is obtained in full accord with the observed features of the atmospheric and solar neutrino oscillations, as well as with those of the quark and charged-lepton masses and mixings, and the gravitino constraint. Hereby one obtains a unified description of fermion masses, neutrino oscillations and baryogenesis (via leptogenesis) within a single predictive framework. Comment: Efficiency factor updated, some clarifications and new references added. 19 pages

    The Similarity Hypothesis in General Relativity

    Self-similar models are important in general relativity and other fundamental theories. In this paper we shall discuss the "similarity hypothesis", which asserts that under a variety of physical circumstances solutions of these theories will naturally evolve to a self-similar form. We will find there is good evidence for this in the context of both spatially homogeneous and inhomogeneous cosmological models, although in some cases the self-similar model is only an intermediate attractor. There are also a wide variety of situations, including critical phenomena, in which spherically symmetric models tend towards self-similarity. However, this does not happen in all cases, and it is important to understand the prerequisites for the conjecture. Comment: to be submitted to Gen. Rel. Grav.

    Immunology of multiple sclerosis

    Multiple sclerosis (MS) is an autoimmune disease of the central nervous system (CNS) leading to demyelination, axonal damage, and progressive neurologic disability. The development of MS is influenced by environmental factors, particularly the Epstein-Barr virus (EBV), and genetic factors, which include specific HLA types, particularly DRB1*1501-DQA1*0102-DQB1*0602, and a predisposition to autoimmunity in general. MS patients have increased circulating T-cell and antibody reactivity to myelin proteins and gangliosides. It is proposed that the role of EBV is to infect autoreactive B cells that then seed the CNS and promote the survival of autoreactive T cells there. It is also proposed that the clinical attacks of relapsing-remitting MS are orchestrated by myelin-reactive T cells entering the white matter of the CNS from the blood, and that the progressive disability in primary and secondary progressive MS is caused by the action of autoantibodies produced in the CNS by meningeal lymphoid follicles with germinal centers.

    Blended Clustering for Health Data Mining

    Exploratory data analysis using data mining techniques is becoming more popular for investigating subtle relationships in health data for which direct data collection trials would not be possible. Health data mining involving clustering for large, complex data sets in such cases is often limited by insufficient key indicative variables. When a conventional clustering technique is then applied, the results may be too imprecise, or the data may be clustered in ways that do not match expectations. This paper suggests an approach which offers a greater range of choice for generating potential clusters of interest, from which a better outcome might in turn be obtained by aggregating the results. An example use case based on characterizing health services utilization according to socio-demographic background is discussed, and the blended clustering approach being taken for it is described.
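    The abstract does not detail the blending procedure itself. One common way to aggregate several clusterings, shown here purely as an illustrative sketch (the function name and toy labelings are our own, not the paper's method), is a co-association matrix recording how often each pair of records is grouped together across runs:

    ```python
    def co_association(labelings):
        """Consensus (co-association) matrix: the fraction of clusterings
        in which each pair of records falls in the same cluster."""
        n = len(labelings[0])
        k = len(labelings)
        return [[sum(lab[i] == lab[j] for lab in labelings) / k
                 for j in range(n)]
                for i in range(n)]

    # Three hypothetical clusterings of five records, e.g. from different
    # variable subsets or different clustering algorithms
    runs = [[0, 0, 1, 1, 1],
            [0, 0, 0, 1, 1],
            [1, 1, 0, 0, 0]]
    consensus = co_association(runs)
    # Records 0 and 1 co-cluster in every run, so consensus[0][1] == 1.0;
    # pairs that never co-cluster score 0.0.  The matrix can then be fed
    # to any standard clustering of the resulting pairwise similarities.
    ```

    The appeal of this style of aggregation is that each individual run can use a different subset of the (insufficient) indicative variables, and the blend rewards groupings that are stable across those choices.
    
    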

    The management of diabetic ketoacidosis in children

    The objective of this review is to provide the definitions, frequency, risk factors, pathophysiology, diagnostic considerations, and management recommendations for diabetic ketoacidosis (DKA) in children and adolescents, and to convey current knowledge of the causes of permanent disability or mortality from complications of DKA or its management, particularly the most common complication, cerebral edema (CE). DKA frequency at the time of diagnosis of pediatric diabetes is 10%–70%, varying with the availability of healthcare and the incidence of type 1 diabetes (T1D) in the community. Recurrent DKA rates also depend on medical services and socioeconomic circumstances. Management should take place in centers with experience, where vital signs, neurologic status, and biochemistry can be monitored frequently enough to prevent complications or, in the case of CE, to intervene rapidly with mannitol or hypertonic saline infusion. Fluid infusion should precede insulin administration (0.1 U/kg/h) by 1–2 hours; an initial bolus of 10–20 mL/kg 0.9% saline is followed by 0.45% saline calculated to supply maintenance needs and replace 5%–10% dehydration. Potassium (K) must be replaced early and sufficiently. Bicarbonate administration is contraindicated. The prevention of DKA at the onset of diabetes requires an informed community and a high index of suspicion; prevention of recurrent DKA, which is almost always due to insulin omission, necessitates a committed team effort.
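    The dosing figures quoted above are simple per-weight calculations; as a sketch of the arithmetic only (the 30 kg weight is a hypothetical example, and this illustrates the abstract's numbers, not clinical guidance):

    ```python
    def initial_bolus_ml(weight_kg, ml_per_kg=10):
        """Initial 0.9% saline bolus volume (abstract quotes 10-20 mL/kg)."""
        return weight_kg * ml_per_kg

    def insulin_rate_u_per_h(weight_kg, u_per_kg_h=0.1):
        """Continuous insulin infusion rate (abstract quotes 0.1 U/kg/h)."""
        return weight_kg * u_per_kg_h

    weight = 30  # kg, hypothetical patient
    bolus = initial_bolus_ml(weight)        # 300 mL at the 10 mL/kg end
    rate = insulin_rate_u_per_h(weight)     # 3 U/h
    ```
    
    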

    Measurement of the tt̄ production cross-section using eμ events with b-tagged jets in pp collisions at √s = 13 TeV with the ATLAS detector

    This paper describes a measurement of the inclusive top quark pair production cross-section (σtt̄) with a data sample of 3.2 fb−1 of proton–proton collisions at a centre-of-mass energy of √s = 13 TeV, collected in 2015 by the ATLAS detector at the LHC. This measurement uses events with an opposite-charge electron–muon pair in the final state. Jets containing b-quarks are tagged using an algorithm based on track impact parameters and reconstructed secondary vertices. The numbers of events with exactly one and exactly two b-tagged jets are counted and used to determine simultaneously σtt̄ and the efficiency to reconstruct and b-tag a jet from a top quark decay, thereby minimising the associated systematic uncertainties. The cross-section is measured to be σtt̄ = 818 ± 8 (stat) ± 27 (syst) ± 19 (lumi) ± 12 (beam) pb, where the four uncertainties arise from data statistics, experimental and theoretical systematic effects, the integrated luminosity and the LHC beam energy, giving a total relative uncertainty of 4.4%. The result is consistent with theoretical QCD calculations at next-to-next-to-leading order. A fiducial measurement corresponding to the experimental acceptance of the leptons is also presented.
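    The quoted 4.4% total relative uncertainty is recovered by combining the four quoted components in quadrature, the standard treatment for independent uncertainties; a quick check:

    ```python
    import math

    sigma_tt = 818.0                                  # measured cross-section (pb)
    stat, syst, lumi, beam = 8.0, 27.0, 19.0, 12.0    # uncertainty components (pb)

    # Independent uncertainties add in quadrature
    total = math.sqrt(stat**2 + syst**2 + lumi**2 + beam**2)   # about 36 pb
    relative = total / sigma_tt                                # about 0.044, i.e. 4.4%
    ```
    
    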

    The performance of the jet trigger for the ATLAS detector during 2011 data taking

    The performance of the jet trigger for the ATLAS detector at the LHC during the 2011 data-taking period is described. During 2011 the LHC provided proton–proton collisions with a centre-of-mass energy of 7 TeV and heavy-ion collisions with a 2.76 TeV per nucleon–nucleon collision energy. The ATLAS trigger is a three-level system designed to reduce the rate of events from the 40 MHz nominal maximum bunch-crossing rate to the approximately 400 Hz that can be written to offline storage. The ATLAS jet trigger is the primary means for the online selection of events containing jets. Events are accepted by the trigger if they contain one or more jets above some transverse energy threshold. During 2011 data taking the jet trigger was fully efficient for jets with transverse energy above 25 GeV for triggers seeded randomly at Level 1. For triggers which require a jet to be identified at each of the three trigger levels, full efficiency is reached for offline jets with transverse energy above 60 GeV. Jets reconstructed in the final trigger level and corresponding to offline jets with transverse energy greater than 60 GeV are reconstructed with a transverse-energy resolution, relative to offline jets, of better than 4% in the central region and better than 2.5% in the forward direction.