332 research outputs found

    A review of physical supply and EROI of fossil fuels in China

    This paper reviews China’s future fossil fuel supply from the perspectives of physical output and net energy output. Comprehensive analyses of the physical output of fossil fuels suggest that China’s total oil production will likely reach its peak, at about 230 Mt/year (or 9.6 EJ/year), in 2018; its total gas production will peak at around 350 Bcm/year (or 13.6 EJ/year) in 2040; and coal production will peak at about 4400 Mt/year (or 91.9 EJ/year) around 2020. There are significant differences among current studies in the forecast production of these fuels, which can mainly be explained by differing assumptions about ultimately recoverable resources, the nature of the models used, and differences in the historical production data. Because of these future constraints on fossil fuel production, a large gap is projected to grow between domestic supply and demand, which will need to be met by increasing imports. Net energy analyses show a steadily declining EROI (energy return on investment) for both coal production and oil and gas production, driven by the depletion of shallow-buried coal resources and conventional oil and gas resources, and generally consistent with the approaching peaks in physical production. The peaks of fossil fuel production, coupled with declining EROI ratios, are likely to challenge the sustainable development of Chinese society unless new abundant energy resources with high EROI values can be found.
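
    The quoted energy equivalents follow from the physical peaks once a heating value is assumed for each fuel. A minimal arithmetic check in Python, using typical literature heating values rather than figures taken from the paper:

        # Rough check of the energy equivalents quoted above; the heating values
        # below are assumed typical averages, not values from the paper.
        OIL_GJ_PER_T   = 41.9   # crude oil, GJ per tonne
        GAS_GJ_PER_KM3 = 38.9   # natural gas, GJ per thousand cubic metres
        COAL_GJ_PER_T  = 20.9   # raw Chinese coal, GJ per tonne (below hard-coal values)

        oil_ej  = 230e6  * OIL_GJ_PER_T   / 1e9   # 230 Mt/year  -> ~9.6 EJ/year
        gas_ej  = 350e6  * GAS_GJ_PER_KM3 / 1e9   # 350 Bcm/year -> ~13.6 EJ/year
        coal_ej = 4400e6 * COAL_GJ_PER_T  / 1e9   # 4400 Mt/year -> ~91.9 EJ/year
        print(f"oil ~{oil_ej:.1f} EJ/yr, gas ~{gas_ej:.1f} EJ/yr, coal ~{coal_ej:.1f} EJ/yr")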

    Absolute estimation of initial concentrations of amplicon in a real-time RT-PCR process

    Background: Since real-time PCR was first developed, several approaches to estimating the initial quantity of template in an RT-PCR reaction have been tried. While initially only the early thermal cycles corresponding to exponential duplication were used, lately there has been an effort to use all of the cycles in a PCR. These efforts have included both fitting empirical sigmoid curves and more elaborate mechanistic models that explore the chemical reactions taking place during each cycle. The more elaborate mechanistic models require many more parameters than can be fit from a single amplification, while the empirical models provide little insight and are difficult to tailor to specific reactants. Results: We directly estimate the initial amount of amplicon using a simplified mechanistic model based on chemical reactions in the annealing step of the PCR. The basic model includes the duplication of DNA with the digestion of Taqman probe and the re-annealing between previously synthesized DNA strands of opposite orientation. By modelling the amount of Taqman probe digested and matching that with the observed fluorescence, the conversion factor between the number of fluorescing dye molecules and the observed fluorescent emission can be estimated, along with the absolute initial amount of amplicon and the rate parameter for re-annealing. The model is applied to several PCR reactions with known amounts of amplicon and is shown to work reasonably well. An expanded version of the model allows duplication of amplicon without release of fluorescent dye, by adding one more parameter to the model. The additional process is helpful in most cases where the initial primer concentration exceeds the initial probe concentration. Software for applying the algorithm to data may be downloaded at http://www.niehs.nih.gov/research/resources/software/pcranalyzer/. Conclusion: We present proof of the principle that a mechanistically based model can be fit to observations from a single PCR amplification. Initial amounts of amplicon are well estimated without using a standard solution. Using the ratio of the predicted initial amounts of amplicon from two PCRs is shown to work well even when the absolute amounts of amplicon are underestimated in the individual PCRs.
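
    To make the fitting idea concrete, here is a minimal Python sketch of estimating an initial template amount by fitting a per-cycle mechanistic recursion to a single fluorescence curve. The saturation form, parameter names and starting values are illustrative assumptions, not the authors' full annealing-step reaction scheme.

        # Illustrative sketch: estimate the initial template amount by fitting a
        # simple per-cycle mechanistic recursion to one fluorescence curve.
        # The saturation form and parameter names are assumptions, not the
        # authors' full annealing-step model.
        import numpy as np
        from scipy.optimize import least_squares

        def simulate(x0, k, scale, n_cycles):
            """Amplicon grows each cycle; growth is damped as product re-anneals."""
            x, fluor = x0, []
            for _ in range(n_cycles):
                x = x + x / (1.0 + x / k)   # duplication competing with re-annealing
                fluor.append(scale * x)     # fluorescence proportional to product formed
            return np.array(fluor)

        def fit_initial_amount(observed):
            """Return (initial amount, re-annealing constant, fluorescence conversion)."""
            n = len(observed)
            def residuals(p):
                log_x0, log_k, log_scale = p
                return simulate(np.exp(log_x0), np.exp(log_k), np.exp(log_scale), n) - observed
            sol = least_squares(residuals, x0=[np.log(1e-6), np.log(1.0), np.log(1.0)])
            return np.exp(sol.x)

    Fitting the parameters on a log scale keeps them positive during the optimisation; the conversion factor plays the role of the dye-to-fluorescence constant described above.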

    A Mechanistic Model of PCR for Accurate Quantification of Quantitative PCR Data

    Background: Quantitative PCR (qPCR) is a workhorse laboratory technique for measuring the concentration of a target DNA sequence with high accuracy over a wide dynamic range. The gold standard method for estimating DNA concentrations via qPCR is quantification cycle (Cq) standard curve quantification, which requires the time- and labor-intensive construction of a Cq standard curve. In theory, the shape of a qPCR data curve can be used to quantify DNA concentration directly by fitting a model to the data; however, current empirical model-based quantification methods are not as reliable as Cq standard curve quantification. Principal Findings: We have developed a two-parameter mass action kinetic model of PCR (MAK2) that can be fitted to qPCR data in order to quantify target concentration from a single qPCR assay. To compare the accuracy of MAK2 fitting to other qPCR quantification methods, we applied quantification methods to qPCR dilution series data generated in three independent laboratories using different target sequences. Quantification accuracy was assessed by analyzing the reliability of concentration predictions for targets at known concentrations. Our results indicate that quantification by MAK2 fitting is as reliable as Cq standard curve quantification for a variety of DNA targets and a wide range of concentrations. Significance: We anticipate that MAK2 quantification will have a profound effect on the way qPCR experiments are designed and analyzed. In particular, MAK2 enables accurate quantification of portable qPCR assays with limited sample.
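
    As a sketch of single-curve quantification, the snippet below fits a two-parameter per-cycle mass-action-style recursion (plus a constant background) to one amplification curve and returns the estimated initial target amount. The specific recursion D_n = D_{n-1} + k*ln(1 + D_{n-1}/k) is one commonly cited mass-action form and should be read as illustrative rather than as the exact published MAK2 equations.

        # Sketch of single-curve quantification with a two-parameter per-cycle model
        # plus a constant background; the recursion form is illustrative.
        import numpy as np
        from scipy.optimize import curve_fit

        def model_curve(cycles, d0, k, fb):
            d, out = d0, []
            for _ in range(int(cycles.max())):
                d = d + k * np.log1p(d / k)                     # per-cycle gain in target amount
                out.append(d)
            return np.array(out)[cycles.astype(int) - 1] + fb   # fb = constant background

        def quantify(fluorescence):
            cycles = np.arange(1, len(fluorescence) + 1, dtype=float)
            popt, _ = curve_fit(model_curve, cycles, fluorescence,
                                p0=[1e-6, 1.0, float(fluorescence[0])],
                                bounds=([0.0, 1e-12, -np.inf], [np.inf, np.inf, np.inf]))
            d0, k, fb = popt
            return d0   # estimated initial target amount (in fluorescence units)

    Dividing the estimates from two such fits gives a relative quantity, which is how single-curve estimates are typically compared across assays.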

    Pain assessment for people with dementia: a systematic review of systematic reviews of pain assessment tools.

    BACKGROUND: There is evidence of under-detection and poor management of pain in patients with dementia, in both long-term and acute care. Accurate assessment of pain in people with dementia is challenging, and pain assessment tools have received considerable attention over the years, with an increasing number of tools made available. Systematic reviews of the evidence on their validity and utility mostly compare different sets of tools. This review of systematic reviews analyses and summarises the evidence concerning the psychometric properties and clinical utility of pain assessment tools in adults with dementia or cognitive impairment. METHODS: We searched for systematic reviews of pain assessment tools providing evidence of reliability, validity and clinical utility. Two reviewers independently assessed each review and extracted data from them, with a third reviewer mediating when consensus was not reached. Analysis of the data was carried out collaboratively. The reviews were synthesised using a narrative synthesis approach. RESULTS: We retrieved 441 potentially eligible reviews; 23 met the criteria for inclusion and 8 provided data for extraction. Each review evaluated between 8 and 13 tools, in aggregate providing evidence on a total of 28 tools. The quality of the reviews varied, and the reporting often lacked sufficient methodological detail for quality assessment. The 28 tools appear to have been studied in a variety of settings and with varied types of patients. The reviews identified several methodological limitations across the original studies. The lack of a 'gold standard' significantly hinders the evaluation of the tools' validity. Most importantly, the samples were small, providing limited evidence for the use of any of the tools across settings or populations. CONCLUSIONS: There is a considerable number of pain assessment tools available for use with the elderly, cognitively impaired population. However, there is limited evidence about their reliability, validity and clinical utility. On the basis of this review, no one tool can be recommended given the existing evidence.

    Is there a role for expectation maximization imputation in addressing missing data in research using WOMAC questionnaire? Comparison to the standard mean approach and a tutorial

    Background: Standard mean imputation for missing values in the Western Ontario and McMaster (WOMAC) Osteoarthritis Index limits the use of collected data and may lead to bias. Probability model-based imputation methods overcome such limitations but have never before been applied to the WOMAC. In this study, we compare imputation results for the Expectation Maximization (EM) method and the mean imputation method for the WOMAC in a cohort of total hip replacement patients. Methods: WOMAC data on a consecutive cohort of 2062 patients scheduled for surgery were analyzed. Rates of missing values in each of the WOMAC items from this large cohort were used to create missing patterns in the subset of patients with complete data. EM and the WOMAC's own imputation method were then applied to fill the missing values. Summary score statistics for both methods were then described through box plots and contrasted with the complete case (CC) analysis and the true score (TS). This process was repeated using a smaller sample of 200 randomly drawn patients with a higher missing rate (5 times the rates of missing values observed in the 2062 patients, capped at 45%). Results: The rate of missing values per item ranged from 2.9% to 14.5%, and 1339 patients had complete data. The probability model-based EM method imputed a score for all subjects, while the WOMAC's imputation method did not. Mean subscale scores were very similar for both imputation methods and were similar to the true score; however, the EM method results were more consistent with the TS after simulation. This difference became more pronounced as the number of items in a subscale increased and the sample size decreased. Conclusions: The EM method provides a better alternative to the WOMAC imputation method. The EM method is more accurate and imputes data to create a complete data set. These features are very valuable for patient-reported outcomes research in which resources are limited and the WOMAC score is used in a multivariate analysis.
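
    For readers unfamiliar with EM-style imputation, the sketch below shows the core idea for item-level questionnaire data under a multivariate-normal working model: iterate between estimating the mean and covariance of the items and replacing each missing item with its conditional expectation given the subject's observed items. It is an illustrative approximation (it omits the conditional-covariance correction of full EM) and not the exact procedure used in the study.

        # Minimal sketch of EM-style imputation for item-level questionnaire data
        # under a multivariate-normal working model (an approximation for ordinal
        # WOMAC items). Illustrative only.
        import numpy as np

        def em_impute(X, n_iter=50):
            """X: (n_subjects, n_items) array with np.nan marking missing entries."""
            X = np.asarray(X, dtype=float)
            filled = np.where(np.isnan(X), np.nanmean(X, axis=0), X)  # start from item means
            for _ in range(n_iter):
                mu = filled.mean(axis=0)
                sigma = np.cov(filled, rowvar=False) + 1e-6 * np.eye(X.shape[1])
                for i in range(X.shape[0]):
                    miss = np.isnan(X[i])
                    if not miss.any():
                        continue
                    obs = ~miss
                    # Regression-style E-step: conditional mean of missing items
                    # given the observed items for this subject.
                    s_oo = sigma[np.ix_(obs, obs)]
                    s_mo = sigma[np.ix_(miss, obs)]
                    filled[i, miss] = mu[miss] + s_mo @ np.linalg.solve(s_oo, filled[i, obs] - mu[obs])
            return filled

    Subscale scores can then be computed from the completed matrix for every subject, which is the practical advantage over item-mean imputation noted above.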

    Radio Emission from Ultra-Cool Dwarfs

    The 2001 discovery of radio emission from ultra-cool dwarfs (UCDs), the very low-mass stars and brown dwarfs with spectral types of ~M7 and later, revealed that these objects can generate and dissipate powerful magnetic fields. Radio observations provide unparalleled insight into UCD magnetism: detections extend to brown dwarfs with temperatures <1000 K, where no other observational probes are effective. The data reveal that UCDs can generate strong (kG) fields, sometimes with a stable dipolar structure; that they can produce and retain nonthermal plasmas with electron acceleration extending to MeV energies; and that they can drive auroral current systems resulting in significant atmospheric energy deposition and powerful, coherent radio bursts. Still to be understood are the underlying dynamo processes, the precise means by which particles are accelerated around these objects, the observed diversity of magnetic phenomenologies, and how all of these factors change as the mass of the central object approaches that of Jupiter. The answers to these questions are doubly important because UCDs are both potential exoplanet hosts, as in the TRAPPIST-1 system, and analogues of extrasolar giant planets themselves. Comment: 19 pages; submitted chapter to the Handbook of Exoplanets, eds. Hans J. Deeg and Juan Antonio Belmonte (Springer-Verlag).

    A new real-time PCR method to overcome significant quantitative inaccuracy due to slight amplification inhibition

    Background: Real-time PCR analysis is a sensitive DNA quantification technique that has recently gained considerable attention in biotechnology, microbiology and molecular diagnostics. Although the cycle-threshold (Ct) method is the present "gold standard", it is far from being a standard assay. Uniform reaction efficiency among samples is the most important assumption of this method. Nevertheless, some authors have reported that it may not be correct, and a slight PCR efficiency decrease of about 4% could result in an error of up to 400% using the Ct method. This reaction efficiency decrease may be caused by inhibiting agents used during nucleic acid extraction or co-purified from the biological sample. We propose a new method (Cy0) that does not require the assumption of equal reaction efficiency between unknowns and the standard curve. Results: The Cy0 method is based on fitting Richards' equation to real-time PCR data by nonlinear regression in order to obtain the best-fit estimates of the reaction parameters. These parameters are then used to calculate the Cy0 value, which minimizes the dependence of the result on PCR kinetics. The Ct, second derivative (Cp), sigmoidal curve fitting (SCF) and Cy0 methods were compared using two criteria: precision and accuracy. Our results demonstrate that, in optimal amplification conditions, these four methods are equally precise and accurate. However, when PCR efficiency was slightly decreased, by diluting the amplification mix or adding a biological inhibitor such as IgG, the SCF, Ct and Cp methods were markedly impaired, while the Cy0 method gave significantly more accurate and precise results. Conclusion: Our results demonstrate that Cy0 represents a significant improvement over the standard methods, providing reliable and precise nucleic acid quantification even in sub-optimal amplification conditions and overcoming the underestimation caused by the presence of some PCR inhibitors.
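
    A minimal Python sketch of the Cy0 idea: fit a five-parameter Richards curve to a single amplification curve, then take the point where the tangent at the inflection meets the background level. The parameterisation, starting values and baseline handling here are illustrative and may differ in detail from the published method.

        # Sketch of the Cy0 idea: fit a Richards curve and intersect the tangent at
        # the inflection point with the background level. Illustrative only.
        import numpy as np
        from scipy.optimize import curve_fit

        def richards(x, fb, fmax, b, c, d):
            return fb + fmax / (1.0 + np.exp(-(x - b) / c)) ** d

        def cy0(cycles, fluorescence):
            p0 = [fluorescence.min(), fluorescence.max() - fluorescence.min(),
                  np.median(cycles), 1.0, 1.0]
            (fb, fmax, b, c, d), _ = curve_fit(richards, cycles, fluorescence,
                                               p0=p0, maxfev=20000)
            x_flex = b + c * np.log(d)                       # inflection point of the fit
            y_flex = fmax * (1.0 + 1.0 / d) ** (-d)          # height above background there
            slope = (fmax / c) * (1.0 + 1.0 / d) ** (-(d + 1))
            return x_flex - y_flex / slope                   # tangent meets the background level

    Because Cy0 is derived from the whole fitted curve rather than a fixed threshold crossing, it is less sensitive to modest efficiency losses, which is the property the comparison above exploits.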