403 research outputs found

    Historical costs of coal-fired electricity and implications for the future

    We study the cost of coal-fired electricity in the United States between 1882 and 2006 by decomposing it in terms of the price of coal, transportation cost, energy density, thermal efficiency, plant construction cost, interest rate, capacity factor, and operations and maintenance cost. The dominant determinants of cost have been the price of coal and plant construction cost. The price of coal appears to fluctuate more or less randomly while the construction cost follows long-term trends, decreasing from 1902 to 1970, increasing from 1970 to 1990, and leveling off since then. Our analysis emphasizes the importance of using long time series and comparing electricity generation technologies using decomposed total costs, rather than costs of single components like capital. By taking this approach we find that the history of coal-fired electricity suggests there is a fluctuating floor to its future costs, which is determined by coal prices. Even if construction costs resumed a decreasing trend, the cost of coal-based electricity would drop for a while but eventually be determined by the price of coal, which fluctuates while showing no long-term trend. National Science Foundation (U.S.) (NSF Grant SBE0738187)
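
    As a rough illustration of the decomposition named above, the Python sketch below combines the listed factors into a cost per kWh. The annuity formula, variable names, and the round numbers are illustrative assumptions, not the authors' model or data.

    # Illustrative levelized-cost decomposition for a coal plant. The factor
    # list follows the abstract; the specific formulas (annuitized capital,
    # fuel cost per kWh) are standard approximations, not the study's model.
    def fuel_cost_per_kwh(coal_price_per_tonne, transport_per_tonne,
                          energy_density_kwh_per_tonne, thermal_efficiency):
        """Fuel component: delivered coal cost divided by electricity yielded."""
        delivered = coal_price_per_tonne + transport_per_tonne
        return delivered / (energy_density_kwh_per_tonne * thermal_efficiency)

    def capital_cost_per_kwh(construction_cost_per_kw, interest_rate,
                             lifetime_years, capacity_factor):
        """Capital component: annuitized construction cost over annual output."""
        crf = interest_rate / (1 - (1 + interest_rate) ** -lifetime_years)
        return construction_cost_per_kw * crf / (8760 * capacity_factor)

    # Example with round illustrative numbers (not data from the study):
    fuel = fuel_cost_per_kwh(60.0, 15.0, 7000.0, 0.35)
    capital = capital_cost_per_kwh(1500.0, 0.07, 30, 0.70)
    o_and_m = 0.01
    print(f"total cost: {fuel + capital + o_and_m:.3f} $/kWh")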

    Network Properties of Economic Input-Output Networks

    This paper investigates applications of network analysis to national input-output tables. This includes initial steps to become familiar with sources for input-output data and the assumptions that go into their compilation; traditional input-output analysis; ecological input-output metrics; the difficulties in the analysis of weighted, directed graphs; the overall structure of economic input-output networks; and possible bases for comparison of network metrics. Both quantitative and qualitative regularities were observed across the OECD economies. Specifically, flow sizes and industry sizes appear to follow the same distribution for all OECD countries; the overall structure of flows within the network, as characterized by the relative amount of cycled and first-passage flow, followed a similar pattern for most OECD countries; and similar groups of closely connected sectors were found. More work needs to be done to understand these results in depth. Directions for future research are outlined; in particular, exploring (1) the stability of these results to IO data with different levels of detail, (2) community structure within the IO networks, and (3) generative/dynamic models of IO networks.
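
    A minimal sketch of the underlying data structure, assuming networkx is available: an input-output table read as a weighted directed graph, with weighted degrees as simple size measures. The sector labels and flow matrix are invented for illustration, and the ecological metrics discussed in the paper (cycled versus first-passage flow) are not reproduced here.

    # Treat an input-output table as a weighted directed graph.
    import numpy as np
    import networkx as nx

    sectors = ["agriculture", "manufacturing", "services"]   # illustrative labels
    flows = np.array([[ 5., 20.,  2.],   # entry (i, j): sales from sector i to sector j
                      [10., 30., 25.],
                      [ 4., 15., 40.]])

    G = nx.DiGraph()
    for i, src in enumerate(sectors):
        for j, dst in enumerate(sectors):
            if flows[i, j] > 0:
                G.add_edge(src, dst, weight=flows[i, j])

    # Weighted out-degree: total intermediate sales per sector;
    # weighted in-degree: total intermediate purchases per sector.
    print(dict(G.out_degree(weight="weight")))
    print(dict(G.in_degree(weight="weight")))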

    How production networks amplify economic growth

    Technological improvement is the most important cause of long-term economic growth. We study the effects of technology improvement in the setting of a production network, in which each producer buys input goods and converts them to other goods, selling the product to households or other producers. We show how this network amplifies the effects of technological improvements as they propagate along chains of production. Longer production chains for an industry bias it towards faster price reduction, and longer production chains for a country bias it towards faster GDP growth. These predictions are in good agreement with data and improve with the passage of time, demonstrating a key influence of production chains on price change and output growth over the long term.
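
    One common formalization of this amplification is sketched below, under the assumption of a fixed input cost-share matrix: effects accumulate through the Leontief inverse, whose column sums act as a measure of average production-chain length. The matrix and the uniform 1% productivity gain are illustrative, not the paper's data or exact model.

    import numpy as np

    # A[i, j]: share of sector j's costs spent on inputs from sector i (illustrative).
    A = np.array([[0.10, 0.30, 0.05],
                  [0.20, 0.10, 0.25],
                  [0.05, 0.20, 0.15]])

    L = np.linalg.inv(np.eye(3) - A)   # Leontief inverse: sums effects over all chains
    multipliers = L.sum(axis=0)        # proxy for average production-chain length

    # A uniform 1% productivity improvement lowers each sector's price roughly
    # in proportion to its multiplier, i.e. longer chains -> faster price decline.
    gamma = np.full(3, 0.01)
    price_change = -L.T @ gamma
    print(multipliers, price_change)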

    Safe food and feed through an integrated toolbox for mycotoxin management: the MyToolBox approach

    There is a pressing need to mobilise the wealth of knowledge from the international mycotoxin research conducted over the past 25-30 years, and to perform cutting-edge research where knowledge gaps still exist. This knowledge needs to be integrated into affordable and practical tools for farmers and food processors along the chain in order to reduce the risk of mycotoxin contamination of crops, feed and food. This is the mission of MyToolBox, a four-year project which has received funding from the European Commission. It mobilises a multi-actor partnership (academia, farmers, technology small and medium sized enterprises, food industry and policy stakeholders) to develop novel interventions aimed at achieving a significant reduction in crop losses due to mycotoxin contamination. Besides a field-to-fork approach, MyToolBox also considers safe use options for contaminated batches, such as the efficient production of biofuels. Compared to previous mycotoxin reduction strategies, the distinguishing feature of MyToolBox is to provide the recommended measures to the end users along the food and feed chain in a web-based MyToolBox platform (e-toolbox). The project focuses on small grain cereals, maize, peanuts and dried figs, applicable to agricultural conditions in the EU and China. Crop losses using existing practices are being compared with crop losses after novel pre-harvest interventions, including investigation of genetic resistance to fungal infection, cultural control (e.g. minimum tillage or crop debris treatment), the use of novel biopesticides suitable for organic farming, competitive biocontrol treatment, and development of novel modelling approaches to predict mycotoxin contamination. Research into post-harvest measures includes real-time monitoring during storage, innovative sorting of crops using vision technology, novel milling technology and studying the effects of baking on mycotoxins at an industrial scale.

    Machine learning predicts accurately Mycobacterium tuberculosis drug resistance from whole genome sequencing data

    Background: Tuberculosis disease, caused by Mycobacterium tuberculosis, is a major public health problem. The emergence of M. tuberculosis strains resistant to existing treatments threatens to derail control efforts. Resistance is mainly conferred by mutations in genes coding for drug targets or converting enzymes, but our knowledge of these mutations is incomplete. Whole genome sequencing (WGS) is an increasingly common approach to rapidly characterize isolates and identify mutations predicting antimicrobial resistance, thereby providing a diagnostic tool to assist clinical decision making. Methods: We applied machine learning approaches to 16,688 M. tuberculosis isolates that have undergone WGS and laboratory drug-susceptibility testing (DST) across 14 antituberculosis drugs, with 22.5% of samples being multidrug resistant and 2.1% being extensively drug resistant. We used non-parametric classification-tree and gradient-boosted-tree models to predict drug resistance and uncover any associated novel putative mutations. We fitted separate models for each drug, with and without “co-occurrent resistance” markers known to cause resistance to drugs other than the one of interest. Predictive performance was measured using sensitivity, specificity, and the area under the receiver operating characteristic curve, assuming DST results as the gold standard. Results: The predictive performance was highest for resistance to first-line drugs, amikacin, kanamycin, ciprofloxacin, moxifloxacin, and multidrug-resistant tuberculosis (area under the receiver operating characteristic curve above 96%), and lowest for third-line drugs such as D-cycloserine and para-aminosalicylic acid (area under the curve below 85%). The inclusion of co-occurrent resistance markers led to improved performance for some drugs and superior results when compared to similar models in other large-scale studies, which had smaller sample sizes. Overall, the gradient-boosted-tree models performed better than the classification-tree models. The mutation-rank analysis detected no new single nucleotide polymorphisms linked to drug resistance. Discordance between DST and genotypically inferred resistance may be explained by DST errors, novel rare mutations, hetero-resistance, and nongenomic drivers such as efflux-pump upregulation. Conclusion: Our work demonstrates the utility of machine learning as a flexible approach to drug resistance prediction that is able to accommodate a much larger number of predictors and to summarize their predictive ability, thus assisting clinical decision making and single nucleotide polymorphism detection in an era of increasing WGS data generation.
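
    A shape-of-the-pipeline sketch of the per-drug modelling described above, assuming scikit-learn: one gradient-boosted-tree classifier trained on mutation presence/absence features and scored by ROC AUC against DST labels. The random placeholder data, feature encoding, and default hyperparameters are assumptions; the study's co-occurrent-resistance markers and tuning are not reproduced.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_isolates, n_mutations = 2000, 300
    X = rng.integers(0, 2, size=(n_isolates, n_mutations))   # mutation matrix (placeholder)
    y = rng.integers(0, 2, size=n_isolates)                  # DST label for one drug (placeholder)

    # Fit one model per drug; here a single drug is shown.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    model = GradientBoostingClassifier().fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"ROC AUC for this drug: {auc:.2f}")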

    Statistical Basis for Predicting Technological Progress

    Forecasting technological progress is of great interest to engineers, policy makers, and private investors. Several models have been proposed for predicting technological improvement, but how well do these models perform? An early hypothesis made by Theodore Wright in 1936 is that cost decreases as a power law of cumulative production. An alternative hypothesis is Moore's law, which can be generalized to say that technologies improve exponentially with time. Other alternatives were proposed by Goddard, Sinclair et al., and Nordhaus. These hypotheses have not previously been rigorously tested. Using a new database on the cost and production of 62 different technologies, which is the most expansive of its kind, we test the ability of six different postulated laws to predict future costs. Our approach involves hindcasting and developing a statistical model to rank the performance of the postulated laws. Wright's law produces the best forecasts, but Moore's law is not far behind. We discover a previously unobserved regularity that production tends to increase exponentially. A combination of an exponential decrease in cost and an exponential increase in production would make Moore's law and Wright's law indistinguishable, as originally pointed out by Sahal. We show for the first time that these regularities are observed in data to such a degree that the performance of these two laws is nearly tied. Our results show that technological progress is forecastable, with the square root of the logarithmic error growing linearly with the forecasting horizon at a typical rate of 2.5% per year. These results have implications for theories of technological change, and assessments of candidate technologies and policies for climate change mitigation.
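
    The equivalence invoked at the end of the abstract (Sahal's observation) can be written out directly; the notation below is ours, not the paper's.

    % Sahal's identity: exponential production growth makes Wright's and Moore's laws coincide.
    \begin{align*}
      \text{Wright's law:}           \quad & C(Q) = C_0\, Q^{-w} \\
      \text{Exponential production:} \quad & Q(t) = Q_0\, e^{g t} \\
      \text{Substituting:}           \quad & C(t) = C_0 Q_0^{-w}\, e^{-w g t},
    \end{align*}
    % i.e. Moore's law (exponential cost decline) with rate m = w g.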

    Dominant Role of Oncogene Dosage and Absence of Tumor Suppressor Activity in Nras-Driven Hematopoietic Transformation

    Biochemical properties of Ras oncoproteins and their transforming ability strongly support a dominant mechanism of action in tumorigenesis. However, genetic studies unexpectedly suggested that wild-type (WT) Ras exerts tumor suppressor activity. Expressing oncogenic NrasG12D in the hematopoietic compartment of mice induces an aggressive myeloproliferative neoplasm that is exacerbated in homozygous mutant animals. Here, we show that increased NrasG12D gene dosage, but not inactivation of WT Nras, underlies the aggressive in vivo behavior of NrasG12D/G12D hematopoietic cells. Modulating NrasG12D dosage had discrete effects on myeloid progenitor growth, signal transduction, and sensitivity to MAP-ERK kinase (MEK) inhibition. Furthermore, enforced WT N-Ras expression neither suppressed the growth of Nras-mutant cells nor inhibited myeloid transformation by exogenous NrasG12D. Importantly, NRAS expression increased in human cancer cell lines with NRAS mutations. These data have therapeutic implications and support reconsidering the proposed tumor suppressor activity of WT Ras in other cancers. Pfizer Inc. (PD0325901); National Institutes of Health (U.S.) (Grant R37CA72614); National Institutes of Health (U.S.) (Grant P01CA40046); National Institutes of Health (U.S.) (Grant K08CA134649); Leukemia & Lymphoma Society of America (Specialized Center of Research Award LLS 7019-04); American Lebanese Syrian Associated Charities

    Bacteriophage-based tests for the detection of Mycobacterium tuberculosis in clinical specimens: a systematic review and meta-analysis

    BACKGROUND: Sputum microscopy, the most important conventional test for tuberculosis, is specific in settings with a high burden of tuberculosis and a low prevalence of nontuberculous mycobacteria. However, the test lacks sensitivity. Although bacteriophage-based tests for tuberculosis have shown promising results, their overall accuracy has not been systematically evaluated. METHODS: We did a systematic review and meta-analysis of published studies to evaluate the accuracy of phage-based tests for the direct detection of M. tuberculosis in clinical specimens. To identify studies, we searched Medline, EMBASE, Web of Science and BIOSIS, and contacted authors, experts and test manufacturers. Thirteen studies, all based on the phage amplification method, met our inclusion criteria. Overall accuracy was evaluated using forest plots, summary receiver operating characteristic (SROC) curves, and subgroup analyses. RESULTS: The data suggest that phage-based assays have high specificity (range 0.83 to 1.00), but modest and variable sensitivity (range 0.21 to 0.88). The sensitivity ranged between 0.29 and 0.87 among smear-positive specimens, and between 0.13 and 0.78 among smear-negative specimens. The specificity ranged between 0.60 and 0.88 among smear-positive specimens, and between 0.89 and 0.99 among smear-negative specimens. SROC analyses suggest that the overall accuracy of phage-based assays is slightly higher than that of smear microscopy in direct head-to-head comparisons. CONCLUSION: Phage-based assays have high specificity but lower and variable sensitivity. Their performance characteristics are similar to those of sputum microscopy. Phage assays cannot replace conventional diagnostic tests such as microscopy and culture at this time. Further research is required to identify methods that can enhance the sensitivity of phage-based assays without compromising the high specificity.
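
    For readers unfamiliar with the accuracy measures pooled in the review, a minimal reminder of the arithmetic behind sensitivity and specificity, using invented counts rather than data from any included study.

    # Sensitivity and specificity from a 2x2 table of phage-test results
    # against the reference standard (counts are illustrative only).
    def sensitivity(tp, fn):
        return tp / (tp + fn)      # true positives / all reference-positive specimens

    def specificity(tn, fp):
        return tn / (tn + fp)      # true negatives / all reference-negative specimens

    tp, fp, fn, tn = 45, 3, 15, 137
    print(f"sensitivity = {sensitivity(tp, fn):.2f}, specificity = {specificity(tn, fp):.2f}")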

    Whole Genome Sequencing Shows a Low Proportion of Tuberculosis Disease Is Attributable to Known Close Contacts in Rural Malawi.

    BACKGROUND: The proportion of tuberculosis attributable to transmission from close contacts is not well known. Comparison of the genomes of strains from index patients and their prior contacts allows transmission to be confirmed or excluded. METHODS: In Karonga District, Malawi, all tuberculosis patients are asked about prior contact with others with tuberculosis. All available strains from culture-positive patients were sequenced. Up to 10 single nucleotide polymorphisms between index patients and their prior contacts were allowed for confirmation, and ≥ 100 for exclusion. The population attributable fraction was estimated from the proportion of confirmed transmissions and the proportion of patients with contacts. RESULTS: From 1997-2010 there were 1907 new culture-confirmed tuberculosis patients, of whom 32% reported at least one family contact and an additional 11% had at least one other contact; 60% of contacts had smear-positive disease. Among case-contact pairs with sequences available, transmission was confirmed from 38% (62/163) of smear-positive prior contacts and 0/17 smear-negative prior contacts. Confirmed transmission was more common in those related to the prior contact (42.4%, 56/132) than in non-relatives (19.4%, 6/31, p = 0.02), in those with more intense contact, in younger index cases, and in more recent years. The proportion of tuberculosis attributable to known contacts was estimated to be 9.4% overall. CONCLUSIONS: In this population known contacts explained only a small proportion of tuberculosis cases. Even those with a prior family contact with smear-positive tuberculosis were more likely to have acquired their infection elsewhere.
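
    A back-of-the-envelope reconstruction of the attributable-fraction logic described in the abstract, combining the reported proportions; this is an approximation for illustration, and the paper's estimator may weight the subgroups differently.

    # Proportion with any prior contact times the proportion of sequenced
    # contact pairs in which transmission was confirmed (numbers from the abstract).
    p_any_contact = 0.32 + 0.11            # family plus other prior contacts
    p_contact_smear_pos = 0.60             # contacts with smear-positive disease
    p_confirmed_smear_pos = 62 / 163       # confirmed transmission, smear-positive contacts
    p_confirmed_smear_neg = 0 / 17         # confirmed transmission, smear-negative contacts

    paf = p_any_contact * (p_contact_smear_pos * p_confirmed_smear_pos
                           + (1 - p_contact_smear_pos) * p_confirmed_smear_neg)
    print(f"approximate attributable fraction: {paf:.1%}")   # in the vicinity of the reported 9.4%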