
    Measurement of the top quark mass using the matrix element technique in dilepton final states

    We present a measurement of the top quark mass in pp¯ collisions at a center-of-mass energy of 1.96 TeV at the Fermilab Tevatron collider. The data were collected by the D0 experiment, corresponding to an integrated luminosity of 9.7 fb−1. The matrix element technique is applied to tt¯ events in the final state containing leptons (electrons or muons) with high transverse momenta and at least two jets. The calibration of the jet energy scale determined in the lepton+jets final state of tt¯ decays is applied to jet energies; this correction provides a substantial reduction in systematic uncertainties. We obtain a top quark mass of mt = 173.93 ± 1.84 GeV.

    Search for new phenomena in final states with an energetic jet and large missing transverse momentum in pp collisions at √s = 8 TeV with the ATLAS detector

    Results of a search for new phenomena in final states with an energetic jet and large missing transverse momentum are reported. The search uses 20.3 fb−1 of √s = 8 TeV data collected in 2012 with the ATLAS detector at the LHC. Events are required to have at least one jet with pT > 120 GeV and no leptons. Nine signal regions are considered, with missing transverse momentum requirements increasing from ETmiss > 150 GeV to ETmiss > 700 GeV. Good agreement is observed between the number of events in data and Standard Model expectations. The results are translated into exclusion limits on models with either large extra spatial dimensions, pair production of weakly interacting dark matter candidates, or production of very light gravitinos in a gauge-mediated supersymmetric model. In addition, limits on the production of an invisibly decaying Higgs-like boson leading to similar topologies in the final state are presented.

    Azimuthal anisotropy of charged particles at high transverse momenta in PbPb collisions at sqrt(s[NN]) = 2.76 TeV

    The azimuthal anisotropy of charged particles in PbPb collisions at a nucleon-nucleon center-of-mass energy of 2.76 TeV is measured with the CMS detector at the LHC over an extended transverse momentum (pt) range, up to approximately 60 GeV. The data cover both the low-pt region associated with hydrodynamic flow phenomena and the high-pt region where the anisotropies may reflect the path-length dependence of parton energy loss in the created medium. The anisotropy parameter (v2) of the particles is extracted by correlating charged tracks with the event plane reconstructed using the energy deposited in forward-angle calorimeters. For the six bins of collision centrality studied, spanning the 0-60% most-central events, the observed v2 values first increase with pt, reaching a maximum around pt = 3 GeV, and then gradually decrease to almost zero, with the decline persisting up to at least pt = 40 GeV over the full centrality range measured.

    Search for new physics with same-sign isolated dilepton events with jets and missing transverse energy

    A search for new physics is performed in events with two same-sign isolated leptons, hadronic jets, and missing transverse energy in the final state. The analysis is based on a data sample corresponding to an integrated luminosity of 4.98 inverse femtobarns produced in pp collisions at a center-of-mass energy of 7 TeV, collected by the CMS experiment at the LHC. This constitutes a factor of 140 increase in integrated luminosity over previously published results. The observed yields agree with the standard model predictions, and thus no evidence for new physics is found. The observations are used to set upper limits on possible new physics contributions and to constrain supersymmetric models. To facilitate the interpretation of the data in a broader range of new physics scenarios, information on the event selection, detector response, and efficiencies is provided. (Published in Physical Review Letters.)

    Compressed representation of a partially defined integer function over multiple arguments

    In OLAP (OnLine Analytical Processing), data are analysed in an n-dimensional cube. The cube may be represented as a partially defined function over n arguments. Since the function is often not defined everywhere, we ask: is there a known way of representing the function, or the points at which it is defined, more compactly than the trivial one?
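The "trivial" representation the abstract alludes to can be sketched as a sparse map from coordinate tuples to values: only defined points are stored, so space grows with the number of defined cells rather than the cube's volume. This is an illustrative baseline, not the paper's proposal; all names are invented.

```python
# Sparse (COO-style) storage of a partially defined integer function over
# 3 arguments, instead of materialising the full 100x100x100 cube.
cube = {
    (0, 2, 5): 17,
    (4, 2, 5): -3,
    (9, 9, 9): 42,
}

def lookup(point, default=None):
    """Return f(point) if the function is defined there, else `default`."""
    return cube.get(point, default)

# Dense storage would need 100**3 = 1,000,000 cells; this map needs
# space proportional to the 3 defined points only.
print(lookup((0, 2, 5)))  # 17
print(lookup((1, 1, 1)))  # None: the function is undefined at this point
```

More compact schemes (the question the abstract poses) would exploit structure among the defined points, e.g. shared prefixes of the coordinate tuples.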

    Measurement of jet fragmentation into charged particles in pp and PbPb collisions at sqrt(s[NN]) = 2.76 TeV

    Jet fragmentation in pp and PbPb collisions at a centre-of-mass energy of 2.76 TeV per nucleon pair was studied using data collected with the CMS detector at the LHC. Fragmentation functions are constructed using charged-particle tracks with transverse momenta pt > 4 GeV for dijet events with a leading jet of pt > 100 GeV. The fragmentation functions in PbPb events are compared to those in pp data as a function of collision centrality, as well as dijet-pt imbalance. Special emphasis is placed on the most central PbPb events, including dijets with unbalanced momentum, indicative of energy loss of the hard-scattered parent partons. The fragmentation patterns for both the leading and subleading jets in PbPb collisions agree with those seen in pp data at 2.76 TeV. The results provide evidence that, despite the large parton energy loss observed in PbPb collisions, the partition of the remaining momentum within the jet cone into high-pt particles is not strongly modified in comparison to that observed for jets in vacuum. (Submitted to the Journal of High Energy Physics.)

    Automated Detection of External Ventricular and Lumbar Drain-Related Meningitis Using Laboratory and Microbiology Results and Medication Data

    OBJECTIVE: Monitoring of healthcare-associated infection rates is important for infection control and hospital benchmarking. However, manual surveillance is time-consuming and susceptible to error. The aim was, therefore, to develop a prediction model to retrospectively detect drain-related meningitis (DRM), a frequently occurring nosocomial infection, using routinely collected data from a clinical data warehouse. METHODS: As part of the hospital infection control program, all patients receiving an external ventricular (EVD) or lumbar drain (ELD) (2004 to 2009; n = 742) had been evaluated for the development of DRM through chart review and standardized diagnostic criteria by infection control staff; this was the reference standard. Children, patients dying <24 hours after drain insertion or with <1 day follow-up, and patients with infection at the time of insertion or multiple simultaneous drains were excluded. Logistic regression was used to develop a model predicting the occurrence of DRM. Missing data were imputed using multiple imputation. Bootstrapping was applied to increase generalizability. RESULTS: 537 patients remained after application of exclusion criteria, of whom 82 developed DRM (13.5/1000 days at risk). The automated model to detect DRM included the number of drains placed, drain type, blood leukocyte count, C-reactive protein, cerebrospinal fluid leukocyte count and culture result, number of antibiotics started during admission, and empiric antibiotic therapy. Discriminatory power of this model was excellent (area under the ROC curve 0.97). The model achieved 98.8% sensitivity (95% CI 88.0% to 99.9%) and specificity of 87.9% (84.6% to 90.8%). Positive and negative predictive values were 56.9% (50.8% to 67.9%) and 99.9% (98.6% to 99.9%), respectively. Predicted yearly infection rates concurred with observed infection rates.
    CONCLUSION: A prediction model based on multi-source data stored in a clinical data warehouse could accurately quantify rates of DRM. Automated detection using this statistical approach is feasible and could be applied to other nosocomial infections.
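The core of the approach above is an ordinary logistic-regression classifier fit to routinely collected predictors. The sketch below illustrates that pipeline on synthetic data; the feature set, coefficients, and threshold are invented for illustration, and the paper's multiple imputation and bootstrapping steps are omitted.

```python
# Minimal logistic-regression detection sketch on synthetic data,
# loosely mirroring the abstract (CSF leukocytes, CRP, antibiotics count
# as standardised predictors). Not the published model.
import numpy as np

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 3))              # 3 standardised predictors
true_w = np.array([2.0, 1.0, 0.5])       # assumed "true" effects
p = 1.0 / (1.0 + np.exp(-(X @ true_w - 0.5)))
y = rng.random(n) < p                    # 1 = DRM per the reference standard

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Fit weights (with intercept) by plain gradient ascent on the log-likelihood."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        pred = 1.0 / (1.0 + np.exp(-(Xb @ w)))
        w += lr * Xb.T @ (y - pred) / len(y)
    return w

w = fit_logistic(X, y.astype(float))
scores = 1.0 / (1.0 + np.exp(-(np.hstack([np.ones((n, 1)), X]) @ w)))
flagged = scores > 0.5                   # flag charts for review
sensitivity = (flagged & y).sum() / y.sum()
specificity = (~flagged & ~y).sum() / (~y).sum()
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```

In a surveillance setting the threshold would be tuned for high sensitivity, as in the abstract, since missed infections are costlier than extra chart reviews.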

    Predicting Hospital-Acquired Infections by Scoring System with Simple Parameters

    BACKGROUND: Hospital-acquired infections (HAI) are associated with increased attributable morbidity, mortality, prolonged hospitalization, and economic costs. A simple, reliable prediction model for HAI has great clinical relevance. The objective of this study was to develop a scoring system to predict HAI, derived from Logistic Regression (LR) and simultaneously validated with Artificial Neural Networks (ANN). METHODOLOGY/PRINCIPAL FINDINGS: A total of 476 patients from all 806 HAI inpatients between 2004 and 2005 were included in the study. A sample of 1,376 non-HAI inpatients was randomly drawn from all patients admitted in the same period as the control group. An external validation set of 2,500 patients was abstracted from another academic teaching center. Sixteen variables were extracted from the Electronic Health Records (EHR) and fed into ANN and LR models. With stepwise selection, the following seven variables were identified by the LR models as statistically significant: Foley catheterization, central venous catheterization, arterial line, nasogastric tube, hemodialysis, stress ulcer prophylaxis, and systemic glucocorticosteroids. Both ANN and LR models displayed excellent discrimination (area under the receiver operating characteristic curve [AUC]: 0.964 versus 0.969, p = 0.507) in identifying infection in internal validation. During external validation, a high AUC was obtained from both models (AUC: 0.850 versus 0.870, p = 0.447). The scoring system also performed extremely well in the internal (AUC: 0.965) and external (AUC: 0.871) validations. CONCLUSIONS: We developed a scoring system to predict HAI with simple parameters, validated with ANN and LR models. Armed with this scoring system, infectious disease specialists can more efficiently identify patients at high risk for HAI during hospitalization.
    Further, parameters collected either by direct observation of the medical devices in use or from EHR data also yielded good predictive performance, so the system can be applied in different clinical settings.
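Scoring systems like the one described are commonly built by converting each logistic-regression coefficient into integer points, scaled by the smallest coefficient. The sketch below shows that conversion with invented coefficients; the paper's actual point values are not reproduced here.

```python
# Hedged sketch: turning hypothetical LR coefficients for the seven
# selected risk factors into a bedside point score. Coefficients are
# illustrative, not the published values.
coefficients = {  # hypothetical log-odds ratios from the LR model
    "foley_catheter": 1.2,
    "central_venous_catheter": 1.6,
    "arterial_line": 0.8,
    "nasogastric_tube": 0.9,
    "hemodialysis": 1.4,
    "stress_ulcer_prophylaxis": 0.6,
    "systemic_glucocorticosteroids": 0.7,
}

# Scale by the smallest coefficient and round to whole points.
base = min(coefficients.values())
points = {k: round(v / base) for k, v in coefficients.items()}

def risk_score(patient):
    """Sum the points for every risk factor present on this patient."""
    return sum(points[k] for k, present in patient.items() if present)

patient = {"foley_catheter": True, "hemodialysis": True,
           "arterial_line": False}
print(points)
print(risk_score(patient))  # 2 (Foley) + 2 (hemodialysis) = 4
```

The resulting integer score can be summed at the bedside without a computer, which is the practical appeal of such systems over the raw LR or ANN models.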

    Cardiovascular magnetic resonance in pericardial diseases

    The pericardium, and pericardial diseases in particular, have received relatively limited interest in contrast to other topics in the field of cardiology. Today, despite improved knowledge of the pathophysiology of pericardial diseases and the availability of a wide spectrum of diagnostic tools, the diagnostic challenge remains. Not only may the clinical presentation be atypical, mimicking other cardiac, pulmonary or pleural diseases; in developed countries a shift in, for instance, the epidemiology of constrictive pericarditis has been noted. Accurate decision making is crucial, taking into account the significant morbidity and mortality caused by complicated pericardial diseases and the potential benefit of therapeutic interventions. Imaging has an important role herein, and cardiovascular magnetic resonance (CMR) is one of the most versatile modalities for studying the pericardium. It fuses excellent anatomic detail and tissue characterization with accurate evaluation of cardiac function and assessment of the haemodynamic consequences of pericardial constraint on cardiac filling. This review focuses on the current state of knowledge of how CMR can be used to study the most common pericardial diseases.

    A gene frequency model for QTL mapping using Bayesian inference

    BACKGROUND: Information for mapping of quantitative trait loci (QTL) comes from two sources: linkage disequilibrium (LD; non-random association of allele states) and cosegregation (non-random association of allele origin). Information from LD can be captured by modeling conditional means and variances at the QTL given marker information. Similarly, information from cosegregation can be captured by modeling conditional covariances. Here, we consider a Bayesian model based on gene frequency (BGF), where both conditional means and variances are modeled as functions of the conditional gene frequencies at the QTL. The parameters in this model include these gene frequencies, the additive effect of the QTL, its location, and the residual variance. Bayesian methodology was used to estimate these parameters. The priors used were: logit-normal for gene frequencies, normal for the additive effect, uniform for location, and inverse chi-square for the residual variance. Computer simulation was used to compare the power to detect and the accuracy to map QTL by this method with those from least-squares analysis using a regression model (LSR). RESULTS: To simplify the analysis, data from unrelated individuals in a purebred population were simulated, where only LD information contributes to mapping the QTL. LD was simulated in a chromosomal segment of 1 cM with one QTL, by random mating in a population of size 500 for 1000 generations and in a population of size 100 for 50 generations. The comparison was studied under a range of conditions, which included SNP density of 0.1, 0.05 or 0.02 cM, sample size of 500 or 1000, and phenotypic variance explained by the QTL of 2 or 5%. Both 1- and 2-SNP models were considered. Power to detect the QTL with BGF ranged from 0.4 to 0.99 and was close to or equal to the power of the least-squares regression (LSR).
    Precision in mapping the QTL position with BGF, quantified by the mean absolute error, ranged from 0.11 to 0.21 cM and was better than the precision of LSR, which ranged from 0.12 to 0.25 cM. CONCLUSIONS: Given a high SNP density, the gene frequency model can be used to map QTL with considerable accuracy, even within a 1 cM region.
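The Bayesian estimation the abstract describes can be illustrated, in heavily simplified form, by Metropolis sampling of a single additive QTL effect under a normal prior. The full BGF model (gene frequencies, QTL location, inverse chi-square residual prior) is omitted, and every setting below is an illustrative assumption rather than the paper's configuration.

```python
# Random-walk Metropolis sketch: posterior for one additive QTL effect
# given simulated genotypes (gene content 0/1/2) and phenotypes.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.binomial(2, 0.3, size=n).astype(float)   # copies of the QTL allele
a_true, sigma = 0.5, 1.0                          # assumed effect and residual sd
y = a_true * x + rng.normal(0.0, sigma, size=n)   # simulated phenotypes

def log_posterior(a):
    """Normal(0, sd=2) prior on the additive effect + Gaussian likelihood."""
    log_prior = -0.5 * (a / 2.0) ** 2
    resid = y - a * x
    return log_prior - 0.5 * np.sum(resid**2) / sigma**2

samples, a = [], 0.0
for _ in range(5000):
    prop = a + rng.normal(0.0, 0.05)              # random-walk proposal
    if np.log(rng.random()) < log_posterior(prop) - log_posterior(a):
        a = prop                                  # accept the move
    samples.append(a)

post_mean = float(np.mean(samples[1000:]))        # drop burn-in
print(f"posterior mean additive effect ~ {post_mean:.2f}")
```

The full model would additionally sample gene frequencies, QTL location, and the residual variance from their respective priors within the same MCMC loop.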