    On the decomposition of sets of reals to Borel sets

    Colonisation rates of Streptococcus pyogenes and Staphylococcus aureus in the oropharynx of a young adult population

    There are very few reports on the rates of oropharyngeal colonisation by Streptococcus pyogenes and Staphylococcus aureus in young adults. The present study found colonisation rates of 9.6% and 26.2%, respectively. These rates are two-fold higher than historical rates, indicating that these organisms may be more prevalent than previously thought. This finding may have important clinical consequences in certain populations and requires further investigation.

    Dynamic Antarctic ice sheet during the early to mid-Miocene.

    Geological data indicate that there were major variations in Antarctic ice sheet volume and extent during the early to mid-Miocene. Simulating such large-scale changes is problematic because of a strong hysteresis effect, which results in stability once the ice sheets have reached continental size. A relatively narrow range of atmospheric CO2 concentrations indicated by proxy records exacerbates this problem. Here, we are able to simulate large-scale variability of the early to mid-Miocene Antarctic ice sheet because of three developments in our modeling approach. (i) We use a climate-ice sheet coupling method utilizing a high-resolution atmospheric component to account for ice sheet-climate feedbacks. (ii) The ice sheet model includes recently proposed mechanisms for retreat into deep subglacial basins caused by ice-cliff failure and ice-shelf hydrofracture. (iii) We account for changes in the oxygen isotopic composition of the ice sheet by using isotope-enabled climate and ice sheet models. We compare our modeling results with ice-proximal records emerging from a sedimentological drill core from the Ross Sea (Andrill-2A) that is presented in a companion article. The variability in Antarctic ice volume that we simulate is equivalent to a seawater oxygen isotope signal of 0.52-0.66‰, or a sea level equivalent change of 30-36 m, for a range of atmospheric CO2 between 280 and 500 ppm and a changing astronomical configuration. This result represents a substantial advance in resolving the long-standing model-data conflict over Miocene Antarctic ice sheet and sea level variability.
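
    The ice-volume, sea-level, and seawater-isotope figures above are related by simple mass balance. The sketch below is our own back-of-the-envelope illustration of that conversion, not the paper's coupled model: the ice-volume change and the mean δ18O of the ice are hypothetical inputs, and the IPCC fresh-water convention is assumed for sea-level equivalent. The study's isotope-enabled models compute the evolving ice composition explicitly rather than assuming it.

```python
# Back-of-the-envelope sketch (illustrative assumptions, not values from the study):
# convert a change in Antarctic ice volume into (a) sea-level equivalent and
# (b) a mean-ocean seawater d18O shift via simple mass balance.

RHO_ICE = 917.0          # kg m^-3
RHO_FRESHWATER = 1000.0  # kg m^-3 (fresh-water sea-level-equivalent convention)
OCEAN_AREA = 3.62e14     # m^2
OCEAN_MASS = 1.37e21     # kg

def sea_level_equivalent_m(ice_volume_m3: float) -> float:
    """Sea-level equivalent (m) of a given ice volume (m^3)."""
    ice_mass = ice_volume_m3 * RHO_ICE
    return ice_mass / (RHO_FRESHWATER * OCEAN_AREA)

def seawater_d18O_shift(ice_volume_m3: float, d18O_ice_permil: float) -> float:
    """Mean-ocean d18O change (permil) from locking isotopically light ice on land."""
    ice_mass = ice_volume_m3 * RHO_ICE
    return -(ice_mass / OCEAN_MASS) * d18O_ice_permil

ice_volume = 1.45e16   # m^3, hypothetical ice-volume change
d18O_ice = -55.0       # permil, hypothetical mean composition of the added ice
print(sea_level_equivalent_m(ice_volume))          # ~37 m
print(seawater_d18O_shift(ice_volume, d18O_ice))   # ~+0.53 permil
```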

    The global carbon budget 1959-2011

    Accurate assessments of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere are important to better understand the global carbon cycle, support the climate policy process, and project future climate change. Present-day analysis requires the combination of a range of data, algorithms, statistics and model estimates and their interpretation by a broad scientific community. Here we describe datasets and a methodology developed by the global carbon cycle science community to quantify all major components of the global carbon budget, including their uncertainties. We discuss changes compared to previous estimates, consistency within and among components, and methodology and data limitations. CO2 emissions from fossil fuel combustion and cement production (EFF) are based on energy statistics, while emissions from Land-Use Change (ELUC), including deforestation, are based on combined evidence from land cover change data, fire activity in regions undergoing deforestation, and models. The global atmospheric CO2 concentration is measured directly and its rate of growth (GATM) is computed from the concentration. The mean ocean CO2 sink (SOCEAN) is based on observations from the 1990s, while the annual anomalies and trends are estimated with ocean models. Finally, the global residual terrestrial CO2 sink (SLAND) is estimated by the difference of the other terms. For the last decade available (2002–2011), EFF was 8.3 ± 0.4 PgC yr−1, ELUC 1.0 ± 0.5 PgC yr−1, GATM 4.3 ± 0.1 PgC yr−1, SOCEAN 2.5 ± 0.5 PgC yr−1, and SLAND 2.6 ± 0.8 PgC yr−1. For year 2011 alone, EFF was 9.5 ± 0.5 PgC yr−1, 3.0% above 2010, reflecting a continued trend in these emissions; ELUC was 0.9 ± 0.5 PgC yr−1, approximately constant throughout the decade; GATM was 3.6 ± 0.2 PgC yr−1, SOCEAN was 2.7 ± 0.5 PgC yr−1, and SLAND was 4.1 ± 0.9 PgC yr−1. GATM was low in 2011 compared to the 2002–2011 average because of a high uptake by the land, probably in response to natural climate variability associated with La Niña conditions in the Pacific Ocean. The global atmospheric CO2 concentration reached 391.31 ± 0.13 ppm at the end of year 2011. We estimate that EFF will have increased by 2.6% (1.9–3.5%) in 2012 based on projections of gross world product and recent changes in the carbon intensity of the economy. All uncertainties are reported as ±1 sigma (68% confidence, assuming Gaussian error distributions, that the real value lies within the given interval), reflecting the current capacity to characterise the annual estimates of each component of the global carbon budget. This paper is intended to provide a baseline to keep track of annual carbon budgets in the future.
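
    The land sink quoted above is defined as the residual of the budget identity EFF + ELUC = GATM + SOCEAN + SLAND. The minimal sketch below redoes that bookkeeping with the decadal means from the abstract, assuming (our assumption) independent Gaussian errors combined in quadrature, consistent with the ±1-sigma convention stated above.

```python
from math import sqrt

# Residual land-sink calculation: S_LAND = E_FF + E_LUC - G_ATM - S_OCEAN,
# with uncertainties combined in quadrature (assumes independent Gaussian errors).
# Decadal (2002-2011) means and 1-sigma uncertainties from the abstract, in PgC/yr.

e_ff,    de_ff    = 8.3, 0.4   # fossil fuel and cement emissions
e_luc,   de_luc   = 1.0, 0.5   # land-use-change emissions
g_atm,   dg_atm   = 4.3, 0.1   # atmospheric CO2 growth
s_ocean, ds_ocean = 2.5, 0.5   # ocean sink

s_land  = e_ff + e_luc - g_atm - s_ocean
ds_land = sqrt(de_ff**2 + de_luc**2 + dg_atm**2 + ds_ocean**2)

# ~2.5 +/- 0.8 PgC/yr; the reported 2.6 +/- 0.8 reflects unrounded inputs.
print(f"S_LAND = {s_land:.1f} +/- {ds_land:.1f} PgC/yr")
```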

    Thermal Density Functional Theory in Context

    This chapter introduces thermal density functional theory, starting from the ground-state theory and assuming a background in quantum mechanics and statistical mechanics. We review the foundations of density functional theory (DFT) by illustrating some of its key reformulations. The basics of DFT for thermal ensembles are explained in this context, as are tools useful for analysis and development of approximations. We close by discussing some key ideas relating thermal DFT and the ground state. This review emphasizes thermal DFT's strengths as a consistent and general framework. Comment: Submitted to Springer Verlag as a chapter in "Computational Challenges in Warm Dense Matter", F. Graziani et al. ed.
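
    For orientation, the variational statement that thermal (Mermin) DFT rests on can be written in the standard textbook form below; this is not an equation quoted from the chapter, only the generic statement it builds on.

```latex
% Standard Mermin variational principle underlying thermal DFT (textbook form,
% not quoted from this chapter). \hat\Gamma runs over statistical operators that
% yield the density n(\mathbf r); \tau is the temperature, \mu the chemical
% potential, v the external potential.
\begin{align}
  \Omega_v[n] &= F^{\tau}[n] + \int d^3r\, n(\mathbf r)\,\bigl(v(\mathbf r) - \mu\bigr), \\
  F^{\tau}[n] &= \min_{\hat\Gamma \to n}
     \Bigl(\operatorname{Tr}\bigl[\hat\Gamma\,(\hat T + \hat V_{\mathrm{ee}})\bigr]
           - \tau\, S[\hat\Gamma]\Bigr),
  \qquad S[\hat\Gamma] = -k_{\mathrm B}\operatorname{Tr}\bigl[\hat\Gamma \ln \hat\Gamma\bigr].
\end{align}
% The equilibrium density minimizes \Omega_v[n]; in the limit \tau \to 0 this
% reduces to the ground-state Hohenberg--Kohn variational principle.
```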

    Structural Probe of a Glass Forming Liquid: Generalized Compressibility

    We introduce a new quantity to probe the glass transition. This quantity is a linear generalized compressibility which depends solely on the positions of the particles. We have performed a molecular dynamics simulation on a glass forming liquid consisting of a two component mixture of soft spheres in three dimensions. As the temperature is lowered (or as the density is increased), the generalized compressibility drops sharply at the glass transition, with the drop becoming more and more abrupt as the measurement time increases. At our longest measurement times, the drop occurs approximately at the mode coupling temperature T_C. The drop in the linear generalized compressibility occurs at the same temperature as the peak in the specific heat. By examining the inherent structure energy as a function of temperature, we find that our results are consistent with the kinetic view of the glass transition in which the system falls out of equilibrium. We find no size dependence and no evidence for a second order phase transition though this does not exclude the possibility of a phase transition below the observed glass transition temperature. We discuss the relation between the linear generalized compressibility and the ordinary isothermal compressibility as well as the static structure factor. Comment: 18 pages, LaTeX, 26 encapsulated postscript figures, revised paper is shorter, to appear in Phys. Rev.
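
    As a point of reference for the comparison mentioned at the end of the abstract, the ordinary isothermal compressibility can be estimated from particle-number fluctuations in a probe subvolume. The sketch below is our own illustration of that standard estimator, not the paper's linear generalized compressibility (whose definition is given in the article); the positions, box size, subvolume size and temperature are assumed inputs.

```python
import numpy as np

# Estimate the *ordinary* isothermal compressibility from number fluctuations
# in a probe subvolume: kappa_T = V_sub <dN^2> / (<N>^2 k_B T) (grand-canonical
# relation). This is NOT the paper's linear generalized compressibility.

def isothermal_compressibility(snapshots, box_length, sub_length, k_B_T):
    """snapshots: array (n_frames, n_particles, 3) of positions in a cubic box."""
    counts = []
    for pos in snapshots:
        in_sub = np.all((pos % box_length) < sub_length, axis=1)
        counts.append(in_sub.sum())
    counts = np.asarray(counts, dtype=float)
    mean_n = counts.mean()
    var_n = counts.var()
    v_sub = sub_length ** 3
    return v_sub * var_n / (mean_n ** 2 * k_B_T)

# Synthetic ideal-gas-like input (1000 particles, box L = 10, rho = 1):
# the estimate should come out close to 1/(rho k_B T) = 1.
rng = np.random.default_rng(0)
frames = rng.uniform(0.0, 10.0, size=(500, 1000, 3))
print(isothermal_compressibility(frames, box_length=10.0, sub_length=2.0, k_B_T=1.0))
```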

    Coulomb Effects on Electromagnetic Pair Production in Ultrarelativistic Heavy-Ion Collisions

    We discuss the implications of the eikonal amplitude for the pair production probability in ultrarelativistic heavy-ion transits. In this context the Weizsäcker-Williams method is shown to be exact in the ultrarelativistic limit, irrespective of the produced particles' mass. A new equivalent single-photon distribution is derived which correctly accounts for the Coulomb distortions. As an immediate application, consequences for unitarity violation in photo-dissociation processes in peripheral heavy-ion encounters are discussed. Comment: 13 pages, 4 .eps figures
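
    For context, the uncorrected equivalent-photon (Weizsäcker-Williams) flux of a point charge Ze moving with Lorentz factor γ has the familiar leading-logarithm form below, written from standard references rather than taken from the paper; the Coulomb-corrected single-photon distribution derived in the paper modifies this spectrum.

```latex
% Leading-logarithm equivalent-photon spectrum of a point charge Ze with
% Lorentz factor \gamma, impact parameters b > b_{min}; natural units
% (\hbar = c = 1, v \simeq 1). Standard uncorrected form, shown for context only.
\begin{equation}
  \frac{dN_\gamma}{d\omega} \;\simeq\; \frac{2 Z^2 \alpha}{\pi\,\omega}\,
  \ln\!\frac{\gamma}{\omega\, b_{\min}},
  \qquad \omega\, b_{\min} \ll \gamma .
\end{equation}
```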

    How spiking neurons give rise to a temporal-feature map

    A temporal-feature map is a topographic neuronal representation of temporal attributes of phenomena or objects that occur in the outside world. We explain the evolution of such maps by means of a spike-based Hebbian learning rule combined with a presynaptically unspecific contribution: if a synapse changes, then all other synapses connected to the same axon also change by a small fraction. The learning equation is solved for the case of an array of Poisson neurons. We discuss the evolution of a temporal-feature map and the synchronization of the single cells' synaptic structures as a function of the strength of presynaptic unspecific learning. We also give an upper bound for the magnitude of the presynaptic interaction by estimating its impact on the noise level of synaptic growth. Finally, we compare the results with those obtained from a learning equation for nonlinear neurons and show that synaptic structure formation may profit from the nonlinearity.
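
    A minimal sketch of the kind of update described above, assuming filtered spike traces as a stand-in for the paper's spike-based kernel: each synapse receives its own Hebbian change plus a small fraction ε of the changes of the other synapses made by the same presynaptic axon (the presynaptically unspecific contribution). All names and parameter values are illustrative.

```python
import numpy as np

# Hebbian update with a presynaptically unspecific contribution: whenever
# synapse (i, j) changes, every other synapse made by the same presynaptic
# axon j changes by a fraction epsilon of that amount. The Hebbian term
# (product of filtered pre- and postsynaptic traces) is an illustrative
# stand-in for the paper's spike-based learning kernel.

def hebbian_step(w, pre_trace, post_trace, eta=1e-3, epsilon=0.05):
    """
    w:          (n_post, n_pre) synaptic weight matrix
    pre_trace:  (n_pre,)  low-pass-filtered presynaptic spike trains
    post_trace: (n_post,) low-pass-filtered postsynaptic activity
    eta:        learning rate; epsilon: strength of presynaptic unspecific learning
    """
    specific = eta * np.outer(post_trace, pre_trace)   # per-synapse Hebbian change
    # For each axon (column j), add epsilon times the summed change of the
    # *other* synapses on that axon.
    unspecific = epsilon * (specific.sum(axis=0, keepdims=True) - specific)
    return w + specific + unspecific

# Toy usage with Poissonian activity: 20 cells, 50 afferent axons.
rng = np.random.default_rng(1)
w = rng.uniform(0.0, 1.0, size=(20, 50))
w = hebbian_step(w,
                 pre_trace=rng.poisson(2.0, 50).astype(float),
                 post_trace=rng.poisson(2.0, 20).astype(float))
```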

    Quantum cellular automata quantum computing with endohedral fullerenes

    We present a scheme to perform universal quantum computation using global addressing techniques as applied to a physical system of endohedrally doped fullerenes. The system consists of an ABAB linear array of Group V endohedrally doped fullerenes. Each molecular spin site consists of a nuclear spin coupled via a hyperfine interaction to an electron spin. The electron spin of each molecule is in a quartet ground state S = 3/2. Neighboring molecular electron spins are coupled via a magnetic dipole interaction. We find that an all-electron construction of a quantum cellular automaton is frustrated due to the degeneracy of the electronic transitions. However, we can construct a quantum cellular automata quantum computing architecture using these molecules by encoding the quantum information on the nuclear spins while using the electron spins as a local bus. We deduce the NMR and ESR pulses required to execute the basic cellular automaton operation and obtain a rough figure of merit for the number of gate operations per decoherence time. We find that this figure of merit compares well with other physical quantum computer proposals. We argue that the proposed architecture meets the first four DiVincenzo criteria well, and we outline various routes towards meeting the fifth criterion: qubit readout. Comment: 16 pages, LaTeX, 5 figures, See http://planck.thphys.may.ie/QIPDDF/ submitted to Phys. Rev.
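
    The ingredients listed above (a hyperfine-coupled nuclear and electron spin at each site, an S = 3/2 electron ground state, dipolar coupling between neighbouring electron spins, and distinct A and B resonance frequencies enabling global addressing) correspond to a spin-chain Hamiltonian of the generic form below. This is written from the description in the abstract, not quoted from the paper; only the secular part of the dipole coupling is kept, with strength D.

```latex
% Generic spin-chain Hamiltonian consistent with the abstract's description
% (our reconstruction, not quoted from the paper): site k carries an electron
% spin S_k (S = 3/2) hyperfine-coupled (A_k) to its nuclear spin I_k; omega^e_k
% and omega^n_k alternate between the A and B sublattices, which is what makes
% global addressing possible; D is the secular dipolar coupling strength.
\begin{equation}
  H \;=\; \sum_k \Bigl( \omega^{e}_{k}\, S^{z}_{k} + \omega^{n}_{k}\, I^{z}_{k}
          + A_k\, \mathbf S_k \!\cdot\! \mathbf I_k \Bigr)
        \;+\; \sum_k D\, S^{z}_{k}\, S^{z}_{k+1}.
\end{equation}
```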