
    A meta-analysis of long-term effects of conservation agriculture on maize grain yield under rain-fed conditions

    Conservation agriculture involves reduced tillage, permanent soil cover and crop rotations to enhance soil fertility and to supply food from a dwindling land resource. Recently, conservation agriculture has been promoted in Southern Africa, mainly for maize-based farming systems. However, maize yields under rain-fed conditions are often variable. There is therefore a need to identify factors that influence crop yield under conservation agriculture and rain-fed conditions. Here, we studied maize grain yield data from experiments lasting 5 years or more under rain-fed conditions. We assessed the effect of long-term tillage and residue retention on maize grain yield under contrasting soil textures, nitrogen input and climate. Yield variability was measured by stability analysis. Our results show an increase in maize yield over time with conservation agriculture practices that include rotation and high input use in low-rainfall areas, but we observed no difference in system stability under those conditions. We observed a strong relationship between maize grain yield and annual rainfall. Our meta-analysis gave the following findings: (1) 92% of the data show that mulch cover in high rainfall areas leads to lower yields due to waterlogging; (2) 85% of the data show that soil texture is important in the temporal development of conservation agriculture effects, with improved yields more likely on well-drained soils; (3) 73% of the data show that conservation agriculture practices require high inputs, especially N, for improved yield; (4) 63% of the data show that increased yields are obtained with rotation, although calculations often do not include the variations in rainfall within and between seasons; (5) 56% of the data show that reduced tillage with no mulch cover leads to lower yields in semi-arid areas; and (6) when adequate fertiliser is available, rainfall is the most important determinant of yield in southern Africa. It is clear from our results that conservation agriculture needs to be targeted and adapted to specific biophysical conditions for improved impact.
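
The abstract measures yield variability by "stability analysis" without naming the method; a common choice in such meta-analyses is a Finlay-Wilkinson-style regression of each system's yield on the environmental mean yield. The sketch below is illustrative only, under that assumption, with hypothetical treatment names and made-up yields.

```python
# Illustrative Finlay-Wilkinson-style stability analysis. The abstract does
# not specify its method, so this is an assumed, common approach shown on
# made-up data. Slope near 1 = average responsiveness to the environment;
# slope below 1 = more stable across seasons.

def fw_stability(yields):
    """yields: dict treatment -> list of yields, one per environment/season."""
    treatments = list(yields)
    n_env = len(yields[treatments[0]])
    # Environmental index: mean yield over all treatments in each environment.
    env_mean = [sum(yields[t][e] for t in treatments) / len(treatments)
                for e in range(n_env)]
    xbar = sum(env_mean) / n_env
    den = sum((x - xbar) ** 2 for x in env_mean)
    slopes = {}
    for t in treatments:
        ybar = sum(yields[t]) / n_env
        num = sum((env_mean[e] - xbar) * (yields[t][e] - ybar)
                  for e in range(n_env))
        slopes[t] = num / den
    return slopes

# Hypothetical yields (t/ha) for a conservation-agriculture plot ("CA") and a
# conventionally tilled plot ("CONV") over three seasons of rising rainfall.
slopes = fw_stability({"CA": [3.0, 4.0, 5.0], "CONV": [1.0, 3.0, 7.0]})
```

By construction the slopes average to 1 across treatments; here the flatter CA response (slope below 1) would be read as greater stability across seasons.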

    Limits on WWZ and WWγ couplings from pp̄ → eνjjX events at √s = 1.8 TeV

    We present limits on anomalous WWZ and WWγ couplings from a search for WW and WZ production in pp̄ collisions at √s = 1.8 TeV. We use pp̄ → eνjjX events recorded with the D0 detector at the Fermilab Tevatron Collider during the 1992–1995 run. The data sample corresponds to an integrated luminosity of 96.0 ± 5.1 pb⁻¹. Assuming identical WWZ and WWγ coupling parameters, the 95% CL limits on the CP-conserving couplings are −0.33 < λ < 0.36 (Δκ = 0) and −0.43 < Δκ < 0.59 (λ = 0), for a form factor scale Λ = 2.0 TeV. Limits based on other assumptions are also presented.

    Search For Heavy Pointlike Dirac Monopoles

    We have searched for central production of a pair of photons with high transverse energies in pp̄ collisions at √s = 1.8 TeV using 70 pb⁻¹ of data collected with the DØ detector at the Fermilab Tevatron in 1994–1996. If they exist, virtual heavy pointlike Dirac monopoles could rescatter pairs of nearly real photons into this final state via a box diagram. We observe no excess of events above background, and set 95% C.L. lower limits of 610, 870, or 1580 GeV/c² on the mass of a spin-0, 1/2, or 1 Dirac monopole.

    Measurement of the top quark mass using the matrix element technique in dilepton final states

    We present a measurement of the top quark mass in pp̄ collisions at a center-of-mass energy of 1.96 TeV at the Fermilab Tevatron collider. The data were collected by the D0 experiment and correspond to an integrated luminosity of 9.7 fb⁻¹. The matrix element technique is applied to tt̄ events in the final state containing leptons (electrons or muons) with high transverse momenta and at least two jets. The calibration of the jet energy scale determined in the lepton+jets final state of tt̄ decays is applied to the jet energies. This correction provides a substantial reduction in systematic uncertainties. We obtain a top quark mass of m_t = 173.93 ± 1.84 GeV.

    Zγ Production in pp̄ Collisions at √s = 1.8 TeV and Limits on Anomalous ZZγ and Zγγ Couplings

    We present a study of Z + γ + X production in pp̄ collisions at √s = 1.8 TeV from 97 (87) pb⁻¹ of data collected in the eeγ (μμγ) decay channel with the D0 detector at Fermilab. The event yield and kinematic characteristics are consistent with the Standard Model predictions. We obtain limits on anomalous ZZγ and Zγγ couplings for form factor scales Λ = 500 GeV and Λ = 750 GeV. Combining this analysis with our previous results yields 95% CL limits |h^Z_30| < 0.36, |h^Z_40| < 0.05, |h^γ_30| < 0.37, and |h^γ_40| < 0.05 for a form factor scale Λ = 750 GeV.

    A Measurement of the W Boson Mass

    We report a measurement of the W boson mass based on an integrated luminosity of 82 pb⁻¹ from pp̄ collisions at √s = 1.8 TeV recorded in 1994–1995 by the DØ detector at the Fermilab Tevatron. We identify W bosons by their decays to eν and extract the mass by fitting the transverse mass spectrum from 28,323 W boson candidates. A sample of 3,563 dielectron events, mostly due to Z → ee decays, constrains models of W boson production and the detector. We measure m_W = 80.44 ± 0.10 (stat) ± 0.07 (syst) GeV. By combining this measurement with our result from the 1992–1993 data set, we obtain m_W = 80.43 ± 0.11 GeV.
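
The transverse mass spectrum used in the fit is built from the standard quantity m_T = √(2 pT^e pT^ν (1 − cos Δφ)), whose Jacobian edge sits near m_W. A minimal numerical sketch of that definition (the kinematic values below are illustrative, not the analysis data):

```python
import math

def transverse_mass(pt_e, pt_nu, dphi):
    """Transverse mass of the e-nu system: m_T = sqrt(2 pT^e pT^nu (1 - cos dphi)).

    pt_e, pt_nu in GeV; dphi is the azimuthal angle between the electron and
    the missing transverse momentum that serves as the neutrino proxy.
    """
    return math.sqrt(2.0 * pt_e * pt_nu * (1.0 - math.cos(dphi)))

# A back-to-back 40 GeV electron and neutrino give m_T = 80 GeV,
# i.e. an event at the Jacobian edge near the W mass.
mt_edge = transverse_mass(40.0, 40.0, math.pi)
```

The fit extracts m_W from the shape of this spectrum near its endpoint, which is why the edge region dominates the statistical sensitivity.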

    Utilisation of an operative difficulty grading scale for laparoscopic cholecystectomy

    Background A reliable system for grading the operative difficulty of laparoscopic cholecystectomy would standardise the description of findings and the reporting of outcomes. The aim of this study was to validate a difficulty grading system (Nassar scale), testing its applicability and consistency in two large prospective datasets. Methods Patient and disease-related variables and 30-day outcomes were identified in two prospective cholecystectomy databases: the multi-centre prospective cohort of 8820 patients from the recent CholeS Study and a single-surgeon series containing 4089 patients. Operative data and patient outcomes were correlated with the Nassar operative difficulty scale, using Kendall’s tau for dichotomous variables or Jonckheere–Terpstra tests for continuous variables. A ROC curve analysis was performed to quantify the predictive accuracy of the scale for each outcome, with continuous outcomes dichotomised prior to analysis. Results A higher operative difficulty grade was consistently associated with worse outcomes for the patients in both the reference and CholeS cohorts. The median length of stay increased from 0 to 4 days, and the 30-day complication rate from 7.6% to 24.4%, as the difficulty grade increased from 1 to 4/5 (both p < 0.001). In the CholeS cohort, a higher difficulty grade was most strongly associated with conversion to open surgery and 30-day mortality (AUROC = 0.903 and 0.822, respectively). On multivariable analysis, the Nassar operative difficulty scale was a significant independent predictor of operative duration, conversion to open surgery, 30-day complications and 30-day reintervention (all p < 0.001). Conclusion We have shown that an operative difficulty scale can standardise the description of operative findings by multiple grades of surgeons to facilitate audit, training assessment and research. It provides a tool for reporting operative findings, disease severity and technical difficulty and can be utilised in future research to reliably compare outcomes according to case mix and intra-operative difficulty.
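
The AUROC values quoted for the Nassar scale (0.903 for conversion to open surgery, 0.822 for 30-day mortality) can be read as the probability that a randomly chosen positive case carries a higher grade than a randomly chosen negative case. A minimal sketch of that pairwise computation, on made-up grades rather than the study's data:

```python
def auroc(scores_pos, scores_neg):
    """Probability a random positive outranks a random negative (ties = 0.5).

    This pairwise definition is equivalent to the area under the ROC curve,
    so an ordinal grade such as an operative difficulty scale can be scored
    directly without choosing a threshold.
    """
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical difficulty grades: converted cases vs. non-converted cases.
auc = auroc([3, 4, 5], [1, 2, 3, 2])
```

An AUROC of 0.5 would mean the grade carries no discriminative information; values near 0.9, as reported for conversion, indicate the grade orders cases almost perfectly.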

    The removal of multiplicative, systematic bias allows integration of breast cancer gene expression datasets – improving meta-analysis and prediction of prognosis

    BACKGROUND: The number of gene expression studies in the public domain is rapidly increasing, representing a highly valuable resource. However, dataset-specific bias precludes meta-analysis at the raw transcript level, even when the RNA is from comparable sources and has been processed on the same microarray platform using similar protocols. Here, we demonstrate, using Affymetrix data, that much of this bias can be removed, allowing multiple datasets to be legitimately combined for meaningful meta-analyses. RESULTS: A series of validation datasets comparing breast cancer and normal breast cell lines (MCF7 and MCF10A) were generated to examine the variability between datasets generated using different amounts of starting RNA, alternative protocols, and different generations of Affymetrix GeneChip or scanning hardware. We demonstrate that systematic, multiplicative biases are introduced at the RNA, hybridization and image-capture stages of a microarray experiment. Simple batch mean-centering was found to significantly reduce the level of inter-experimental variation, allowing raw transcript levels to be compared across datasets with confidence. By accounting for dataset-specific bias, we were able to assemble the largest gene expression dataset of primary breast tumours to date (1107 tumours), from six previously published studies. Using this meta-dataset, we demonstrate that combining greater numbers of datasets or tumours leads to a greater overlap in differentially expressed genes and more accurate prognostic predictions. However, this is highly dependent upon the composition of the datasets and patient characteristics. CONCLUSION: Multiplicative, systematic biases are introduced at many stages of microarray experiments. When these are reconciled, raw data can be directly integrated from different gene expression datasets, leading to new biological findings with increased statistical power.
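
The "simple batch mean-centering" described above amounts to subtracting each dataset's own per-gene mean: a multiplicative bias becomes additive on the log scale, so centering cancels it. A minimal sketch under that reading, with a hypothetical data layout (the abstract does not specify one):

```python
def mean_center(datasets):
    """Per-dataset, per-gene mean-centering of log2 expression values.

    datasets: dict of dataset name -> samples, each sample a list of log2
    values in the same gene order (a hypothetical layout for illustration).
    Subtracting each dataset's own per-gene mean removes a multiplicative
    bias, since that bias is additive on the log scale.
    """
    centered = {}
    for name, samples in datasets.items():
        n_genes = len(samples[0])
        gene_mean = [sum(s[g] for s in samples) / len(samples)
                     for g in range(n_genes)]
        centered[name] = [[s[g] - gene_mean[g] for g in range(n_genes)]
                          for s in samples]
    return centered

# Two toy "studies" of the same two genes, offset by a constant batch effect;
# after centering, both collapse to identical, directly comparable values.
studies = {"study1": [[1.0, 2.0], [3.0, 4.0]],
           "study2": [[5.0, 6.0], [7.0, 8.0]]}
centered = mean_center(studies)
```

Note that centering removes each dataset's baseline along with the batch effect, so downstream comparisons are of relative, not absolute, expression.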

    Jet energy measurement with the ATLAS detector in proton-proton collisions at √s = 7 TeV

    The jet energy scale and its systematic uncertainty are determined for jets measured with the ATLAS detector at the LHC in proton-proton collision data at a centre-of-mass energy of √s = 7 TeV corresponding to an integrated luminosity of 38 pb⁻¹. Jets are reconstructed with the anti-kt algorithm with distance parameters R = 0.4 or R = 0.6. Jet energy and angle corrections are determined from Monte Carlo simulations to calibrate jets with transverse momenta pT ≥ 20 GeV and pseudorapidities |η| < 4.5. The jet energy systematic uncertainty is estimated using the single isolated hadron response measured in situ and in test-beams, exploiting the transverse momentum balance between central and forward jets in events with dijet topologies and studying systematic variations in Monte Carlo simulations. The jet energy uncertainty is less than 2.5% in the central calorimeter region (|η| < 0.8) for jets with 60 ≤ pT < 800 GeV, and is maximally 14% for pT < 30 GeV in the most forward region 3.2 ≤ |η| < 4.5. The jet energy is validated for jet transverse momenta up to 1 TeV to the level of a few percent using several in situ techniques by comparing a well-known reference such as the recoiling photon pT, the sum of the transverse momenta of tracks associated to the jet, or a system of low-pT jets recoiling against a high-pT jet. More sophisticated jet calibration schemes are presented based on calorimeter cell energy density weighting or hadronic properties of jets, aiming for an improved jet energy resolution and a reduced flavour dependence of the jet response. The systematic uncertainty of the jet energy determined from a combination of in situ techniques is consistent with the one derived from single hadron response measurements over a wide kinematic range. The nominal corrections and uncertainties are derived for isolated jets in an inclusive sample of high-pT jets. Special cases such as event topologies with close-by jets, or selections of samples with an enhanced content of jets originating from light quarks, heavy quarks or gluons, are also discussed and the corresponding uncertainties are determined. © 2013 CERN for the benefit of the ATLAS collaboration
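
The anti-kt algorithm named in the abstract clusters the particle pair with the smallest distance d_ij = min(pT_i⁻², pT_j⁻²) ΔR²_ij / R², promoting a particle to a jet when its beam distance d_iB = pT_i⁻² is smaller; the inverse-pT² weighting is what makes hard jets grow as regular cones. A minimal sketch of just these distance measures (not a full clustering implementation), using R = 0.4 as in the paper:

```python
import math

def antikt_dij(pt_i, y_i, phi_i, pt_j, y_j, phi_j, R=0.4):
    """Anti-kt inter-particle distance: min(pT_i^-2, pT_j^-2) * dR^2 / R^2."""
    dphi = abs(phi_i - phi_j)
    if dphi > math.pi:                     # wrap azimuth into [0, pi]
        dphi = 2.0 * math.pi - dphi
    dr2 = (y_i - y_j) ** 2 + dphi ** 2     # rapidity-azimuth separation
    return min(pt_i ** -2, pt_j ** -2) * dr2 / R ** 2

def antikt_diB(pt_i):
    """Anti-kt particle-beam distance: pT_i^-2."""
    return pt_i ** -2

# A soft 50 GeV particle close (dR = 0.2 < R) to a hard 100 GeV one:
# d_ij is smaller than the hard particle's beam distance, so it is absorbed
# into the hard jet, illustrating why soft radiation clusters around hard cores.
d_pair = antikt_dij(100.0, 0.0, 0.0, 50.0, 0.2, 0.0)
```

The full algorithm repeats this comparison over all pairs until every particle is assigned to a jet; the distance parameter R (0.4 or 0.6 here) sets the effective cone size.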