
    Technical characteristics of a solar geoengineering deployment and implications for governance

    Consideration of solar geoengineering as a potential response to climate change will demand complex decisions. These include not only the choice of whether to deploy solar geoengineering, but decisions regarding how to deploy, and ongoing decision-making throughout deployment. Research on the governance of solar geoengineering to date has primarily engaged only with the question of whether to deploy. We examine the science of solar geoengineering in order to clarify the technical dimensions of decisions about deployment – both strategic and operational – and how these might influence governance considerations, while consciously refraining from making specific recommendations. The focus here is on a hypothetical deployment rather than governance of the research itself. We first consider the complexity surrounding the design of a deployment scheme, in particular the complicated and difficult decision of what its objective(s) would be, given that different choices for how to deploy will lead to different climate outcomes. Next, we discuss the ongoing decisions across multiple timescales, from the sub-annual to the multi-decadal. For example, feedback approaches might effectively manage some uncertainties, but would require frequent adjustments to the solar geoengineering deployment in response to observations. Other decisions would be tied to the inherently slow process of detection and attribution of climate effects in the presence of natural variability. Both of these present challenges to decision-making. These considerations point toward particular governance requirements, including an important role for technical experts – with all the challenges that entails.
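
    The feedback idea mentioned above can be made concrete with a toy example. The Python sketch below is not from the paper: it shows a minimal proportional-integral controller adjusting a hypothetical aerosol injection rate against noisy temperature observations, with the gains, the sensitivity value, and the noise level all invented for illustration.

```python
# Minimal sketch of the feedback idea: a proportional-integral (PI)
# controller that adjusts a stratospheric aerosol injection rate each
# year based on the observed deviation from a temperature target.
# All parameters and the toy climate response are illustrative only.
import random

TARGET_ANOMALY = 0.0   # desired global-mean temperature anomaly (deg C)
KP, KI = 2.0, 0.5      # hypothetical controller gains
SENSITIVITY = -0.1     # deg C of cooling per Tg/yr injected (toy value)

injection = 0.0        # Tg/yr
integral = 0.0
anomaly = 1.0          # underlying warming relative to target (deg C)

for year in range(1, 21):
    # Observed anomaly = underlying warming + geoengineering effect + noise,
    # the noise standing in for the natural variability that slows
    # detection and attribution.
    observed = anomaly + SENSITIVITY * injection + random.gauss(0.0, 0.1)
    error = observed - TARGET_ANOMALY
    integral += error
    # Frequent (here annual) adjustment in response to observations.
    injection = max(0.0, KP * error + KI * integral)
    print(f"year {year:2d}: observed {observed:+.2f} C, "
          f"injection {injection:.1f} Tg/yr")
```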

    The epigenetic regulators CBP and p300 facilitate leukemogenesis and represent therapeutic targets in acute myeloid leukemia.

    Growing evidence links abnormal epigenetic control to the development of hematological malignancies. Accordingly, inhibition of epigenetic regulators is emerging as a promising therapeutic strategy. The acetylation status of lysine residues in histone tails is one of a number of epigenetic post-translational modifications that alter DNA-templated processes, such as transcription, to facilitate malignant transformation. Although histone deacetylases are already being clinically targeted, the role of histone lysine acetyltransferases (KAT) in malignancy is less well characterized. We chose to study this question in the context of acute myeloid leukemia (AML), where, using in vitro and in vivo genetic ablation and knockdown experiments in murine models, we demonstrate a role for the epigenetic regulators CBP and p300 in the induction and maintenance of AML. Furthermore, using selective small molecule inhibitors of their lysine acetyltransferase activity, we validate CBP/p300 as therapeutic targets in vitro across a wide range of human AML subtypes. We proceed to show that growth retardation occurs through the induction of transcriptional changes that induce apoptosis and cell-cycle arrest in leukemia cells and finally demonstrate the efficacy of the KAT inhibitors in decreasing clonogenic growth of primary AML patient samples. Taken together, these data suggest that CBP/p300 are promising therapeutic targets across multiple subtypes in AML. Funding in the Huntly laboratory comes from Cancer Research UK, Leukemia Lymphoma Research, the Kay Kendall Leukemia Fund, the Leukemia & Lymphoma Society of America, the Wellcome Trust, the Medical Research Council and an NIHR Cambridge Biomedical Research Centre grant. Patient samples were processed in the Cambridge Blood and Stem Cell Biobank. This is the author accepted manuscript. The final version is available via NPG at http://dx.doi.org/10.1038/onc.2015.9

    Multiple Imputation Ensembles (MIE) for dealing with missing data

    Missing data is a significant issue in many real-world datasets, yet there are no robust methods for dealing with it appropriately. In this paper, we propose a robust approach to dealing with missing data in classification problems: Multiple Imputation Ensembles (MIE). Our method integrates two approaches, multiple imputation and ensemble methods, and compares two types of ensembles: bagging and stacking. We also propose a robust experimental set-up using 20 benchmark datasets from the UCI machine learning repository. For each dataset, we introduce increasing amounts of data Missing Completely at Random. First, we use a number of single/multiple imputation methods to recover the missing values, and then ensemble a number of different classifiers built on the imputed data. We assess the quality of the imputation using dissimilarity measures. We also evaluate MIE performance by comparing classification accuracy on the complete and imputed data. Furthermore, we use the accuracy of simple imputation as a benchmark for comparison. We find that our proposed approach, combining multiple imputation with ensemble techniques, outperforms the alternatives, particularly as the amount of missing data increases.
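
    To make the general pipeline concrete, here is a minimal Python sketch of the multiple-imputation-plus-ensemble idea using scikit-learn. It is an illustration under assumed choices (iterative imputation with posterior sampling, decision trees, majority voting, a 20% MCAR rate), not the authors' exact MIE algorithm or experimental set-up.

```python
# Sketch: impute the data several times, fit one classifier per imputed
# copy, and combine predictions by majority vote (a bagging-style ensemble
# across imputations). All concrete choices here are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
X = X.copy()
X[rng.random(X.shape) < 0.2] = np.nan          # inject 20% MCAR missingness

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

preds = []
for m in range(5):                              # five imputations
    imp = IterativeImputer(sample_posterior=True, random_state=m)
    clf = DecisionTreeClassifier(random_state=m)
    clf.fit(imp.fit_transform(X_tr), y_tr)
    preds.append(clf.predict(imp.transform(X_te)))

# Majority vote across the five classifiers.
vote = (np.mean(preds, axis=0) > 0.5).astype(int)
print("MIE-style accuracy:", accuracy_score(y_te, vote))
```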

    Country development and manuscript selection bias: a review of published studies

    BACKGROUND: Manuscript selection bias is the selective publication of manuscripts based on study characteristics other than quality indicators. One reason may be a perceived editorial bias against research from the less-developed world. We aimed to compare the methodological quality and statistical appeal of trials from countries with different development status and to determine their association with journal impact factors and language of publication. METHODS: Selection criteria: Based on the World Bank income criteria, countries were divided into four groups. All records of clinical trials conducted in each income group in 1993 and 2003 were included if they contained an abstract and the study sample size. Data sources: The Cochrane Controlled Trials Register was searched and 50 articles were selected from each income group using a systematic random sampling method, separately for 1993 and 2003. Data extraction: Data were extracted by two reviewers on the language of publication, use of randomization, blinding, intention-to-treat analysis, study sample size, and statistical significance. Disagreement was dealt with by consensus. Journal impact factors were obtained from the Institute for Scientific Information. RESULTS: Four hundred records were explored. Country income had an inverse linear association with the presence of randomization (chi2 for trend = 5.6, p = 0.02) and a direct association with the use of blinding (chi2 for trend = 6.9, p = 0.008), although in low income countries the probability of blinding increased from 36% in 1993 to 46% in 2003. In 1993 the results of 68% of high income trials and 64.7% of the other groups were statistically significant, but in 2003 these figures were 66% and 82%, respectively. Study sample size and income were the only significant predictors of journal impact factor. CONCLUSION: The impact of country development on manuscript selection bias is considerable and may be increasing over time. One reason may be more stringent enforcement of guidelines for improving the reporting quality of trials on developing-world researchers. Another may be developing-world researchers' presumption of editorial bias against their nationality.
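
    For readers unfamiliar with the trend statistic quoted in the results, the following Python sketch computes a standard Cochran-Armitage chi-squared test for trend. The counts are hypothetical, chosen only to mirror the kind of inverse income-randomization association reported above; only the formula is the point.

```python
# Chi-squared test for a linear trend in proportions (Cochran-Armitage).
import numpy as np
from scipy.stats import chi2

def trend_test(events, totals, scores):
    """Return (chi2 statistic, p-value) for a linear trend in proportions."""
    events, totals, scores = map(np.asarray, (events, totals, scores))
    N, R = totals.sum(), events.sum()
    p = R / N
    T = np.sum(scores * (events - totals * p))
    var = p * (1 - p) * (np.sum(scores**2 * totals)
                         - np.sum(scores * totals) ** 2 / N)
    return T**2 / var, chi2.sf(T**2 / var, df=1)

# Hypothetical counts of randomized trials per income group (low -> high).
events = [30, 27, 24, 20]   # trials reporting randomization
totals = [50, 50, 50, 50]   # trials sampled per group
stat, p = trend_test(events, totals, scores=[1, 2, 3, 4])
print(f"chi2 for trend = {stat:.2f}, p = {p:.3f}")
```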

    The impact of albendazole treatment on the incidence of viral- and bacterial-induced diarrhea in school children in southern Vietnam: study protocol for a randomized controlled trial

    Anthelmintics are one of the more commonly available classes of drugs to treat infections by parasitic helminths (especially nematodes) in the human intestinal tract. As a result of their cost-effectiveness, mass school-based deworming programs are becoming routine practice in developing countries. However, experimental and clinical evidence suggests that anthelmintic treatments may increase susceptibility to other gastrointestinal infections caused by bacteria, viruses, or protozoa. Hypothesizing that anthelmintics may increase diarrheal infections in treated children, we aim to evaluate the impact of anthelmintics on the incidence of diarrheal disease caused by viral and bacterial pathogens in school children in southern Vietnam. This is a randomized, double-blinded, placebo-controlled trial to investigate the effects of albendazole treatment versus placebo on the incidence of viral- and bacterial-induced diarrhea in 350 helminth-infected and 350 helminth-uninfected Vietnamese school children aged 6-15 years. Four hundred milligrams of albendazole, or placebo, will be administered once every 3 months for 12 months. At the end of 12 months, all participants will receive albendazole treatment. The primary endpoint of this study is the incidence of diarrheal disease, assessed by 12 months of weekly active and passive case surveillance. Secondary endpoints include the prevalence and intensities of helminth, viral, and bacterial infections; alterations in host immunity and the gut microbiota with helminth and pathogen clearance; changes in mean z scores of body weight indices over time; and the number and severity of adverse events. In order to reduce helminth burdens, anthelmintics are being routinely administered to children in developing countries. However, the effects of anthelmintic treatment on susceptibility to other diseases, including diarrheal pathogens, remain unknown. It is important to monitor for unintended consequences of drug treatments in co-infected populations. In this trial, we will examine how anthelmintic treatment impacts host susceptibility to diarrheal infections, with the aim of informing deworming programs of any indirect effects of mass anthelmintic administration on co-infecting enteric pathogens. ClinicalTrials.gov: NCT02597556. Registered on 3 November 2015.

    Which New Approaches to Tackling Neglected Tropical Diseases Show Promise?

    This PLoS Medicine Debate examines the different approaches that can be taken to tackle neglected tropical diseases (NTDs). Some commentators, like Jerry Spiegel and colleagues from the University of British Columbia, feel there has been too much focus on the biomedical mechanisms and drug development for NTDs, at the expense of attention to the social determinants of disease. Burton Singer argues that this represents another example of the inappropriate “overmedicalization” of contemporary tropical disease control. Peter Hotez and colleagues, in contrast, argue that the best return on investment will continue to be mass drug administration for NTDs.

    A cautionary note regarding count models of alcohol consumption in randomized controlled trials

    BACKGROUND: Alcohol consumption is commonly used as a primary outcome in randomized alcohol treatment studies. The distribution of alcohol consumption is highly skewed, particularly in subjects with alcohol dependence. METHODS: In this paper, we consider the use of count models for outcomes in a randomized clinical trial setting. These include the Poisson, over-dispersed Poisson, negative binomial, zero-inflated Poisson, and zero-inflated negative binomial models. We compare the Type-I error rates of these methods in a series of simulation studies of a randomized clinical trial, and apply the methods to the ASAP (Addressing the Spectrum of Alcohol Problems) trial. RESULTS: Standard Poisson models provided a poor fit for alcohol consumption data from our motivating example, and did not preserve Type-I error rates for the randomized group comparison when the true distribution was over-dispersed Poisson. For the ASAP trial, where the distribution of alcohol consumption featured extensive over-dispersion, there was little indication of significant randomization group differences, except when the standard Poisson model was fit. CONCLUSION: As with any analysis, it is important to choose appropriate statistical models. In simulation studies and in the motivating example, the standard Poisson model was not robust when fit to over-dispersed count data, and did not maintain the appropriate Type-I error rate. To model alcohol consumption appropriately, more flexible count models should be routinely employed.
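
    The over-dispersion problem is easy to reproduce. The Python sketch below, using statsmodels, simulates over-dispersed counts with no true group effect and contrasts a standard Poisson fit with a negative binomial fit. The simulation settings are assumptions for illustration, not the paper's exact design or the ASAP data.

```python
# Contrast a standard Poisson fit with a negative binomial fit on
# over-dispersed count data with no true treatment effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
group = rng.integers(0, 2, n)                  # randomized arm, no true effect
mu = np.exp(1.0 + 0.0 * group)                 # identical means in both arms
# Over-dispersed counts: gamma-mixed Poisson (i.e. negative binomial).
y = rng.poisson(mu * rng.gamma(shape=0.5, scale=2.0, size=n))

X = sm.add_constant(group)
pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
nb = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()

# Under over-dispersion the Poisson standard error is too small, which
# inflates the Type-I error rate for the group comparison.
print("Poisson   group p-value:", pois.pvalues[1].round(4))
print("Neg. bin. group p-value:", nb.pvalues[1].round(4))
```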

    Comparison of generalized estimating equations and quadratic inference functions using data from the National Longitudinal Survey of Children and Youth (NLSCY) database

    BACKGROUND: The generalized estimating equations (GEE) technique is often used in longitudinal data modeling, where investigators are interested in population-averaged effects of covariates on responses of interest. GEE involves specifying a model relating covariates to outcomes and a plausible correlation structure between responses at different time periods. While GEE parameter estimates are consistent irrespective of the true underlying correlation structure, the method has some limitations, including challenges with model selection due to the lack of absolute goodness-of-fit tests to aid comparisons among several plausible models. The quadratic inference functions (QIF) method extends the capabilities of GEE while also addressing some of its limitations. METHODS: We conducted a comparative study between GEE and QIF via an illustrative example, using data from the National Longitudinal Survey of Children and Youth (NLSCY) database. The NLSCY dataset consists of long-term, population-based survey data collected since 1994, and is designed to evaluate the determinants of developmental outcomes in Canadian children. We modeled the relationship between hyperactivity-inattention and gender, age, family functioning, maternal depression symptoms, household income adequacy, maternal immigration status, and maternal educational level using GEE and QIF. The bases for comparison were: (1) ease of model selection; (2) sensitivity of results to different working correlation matrices; and (3) efficiency of parameter estimates. RESULTS: The sample included 795, 858 respondents (50.3% male; 12% immigrant; 6% from dysfunctional families). QIF analysis revealed that gender (male) (odds ratio [OR] = 1.73; 95% confidence interval [CI] = 1.10 to 2.71), family dysfunction (OR = 2.84; 95% CI = 1.58 to 5.11), and maternal depression (OR = 2.49; 95% CI = 1.60 to 2.60) were significantly associated with higher odds of hyperactivity-inattention. The results remained robust under GEE modeling. Model selection was facilitated in QIF using a goodness-of-fit statistic. Overall, estimates from QIF were more efficient than those from GEE using AR(1) and exchangeable working correlation matrices (relative efficiency = 1.1117 and 1.3082, respectively). CONCLUSION: QIF is useful for model selection and provides more efficient parameter estimates than GEE. QIF can help investigators obtain more reliable results when used in conjunction with GEE.
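
    The GEE side of this comparison can be sketched briefly. The Python example below, using the GEE implementation in statsmodels, fits the same population-averaged logistic model under exchangeable and AR(1) working correlation structures, echoing the paper's sensitivity check. The variables, effect sizes, and sample sizes are simulated stand-ins and do not reflect the NLSCY; QIF itself is not shown.

```python
# Fit one population-averaged logistic model under two working
# correlation structures and compare the resulting odds ratios.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_children, n_waves = 200, 4
ids = np.repeat(np.arange(n_children), n_waves)       # cluster = child
male = np.repeat(rng.integers(0, 2, n_children), n_waves)
wave = np.tile(np.arange(n_waves), n_children)        # within-child time
child_effect = np.repeat(rng.normal(0, 1, n_children), n_waves)

# Correlated binary outcome via a child-level random effect.
logit = -1.0 + 0.5 * male + 0.1 * wave + child_effect
y = (rng.random(n_children * n_waves) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([male, wave]))
for cov in (sm.cov_struct.Exchangeable(), sm.cov_struct.Autoregressive()):
    res = sm.GEE(y, X, groups=ids, time=wave,
                 family=sm.families.Binomial(), cov_struct=cov).fit()
    print(type(cov).__name__, "odds ratios:", np.exp(res.params).round(2))
```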

    Efficacy of Single-Dose and Triple-Dose Albendazole and Mebendazole against Soil-Transmitted Helminths and Taenia spp.: A Randomized Controlled Trial

    BACKGROUND: The control of soil-transmitted helminth (STH) infections currently relies on the large-scale administration of single-dose oral albendazole or mebendazole. However, these treatment regimens have limited efficacy against hookworm and Trichuris trichiura in terms of cure rates (CR), whereas fecal egg reduction rates (ERR) are generally high for all common STH species. We compared the efficacy of single-dose versus triple-dose treatment against hookworm and other STHs in a community-based randomized controlled trial in the People's Republic of China. METHODOLOGY/PRINCIPAL FINDINGS: The hookworm CR and fecal ERR were assessed in 314 individuals aged ≥5 years who submitted two stool samples before and 3-4 weeks after administration of single-dose oral albendazole (400 mg) or mebendazole (500 mg), or triple-dose albendazole (3×400 mg over 3 consecutive days) or mebendazole (3×500 mg over 3 consecutive days). Efficacy against T. trichiura, Ascaris lumbricoides, and Taenia spp. was also assessed. Albendazole cured significantly more hookworm infections than mebendazole in both treatment regimens (single dose: respective CRs 69% (95% confidence interval [CI]: 55-81%) and 29% (95% CI: 20-45%); triple dose: respective CRs 92% (95% CI: 81-98%) and 54% (95% CI: 46-71%)). ERRs followed the same pattern (single dose: 97% versus 84%; triple dose: 99.7% versus 96%). Triple-dose regimens outperformed single doses against T. trichiura; three doses of mebendazole - the most efficacious treatment tested - cured 71% (95% CI: 57-82%). Both single and triple doses of either drug were highly efficacious against A. lumbricoides (CR: 93-97%; ERR: all >99.9%). Triple-dose regimens cured all Taenia spp. infections, whereas single-dose applications cured only half of them. CONCLUSIONS/SIGNIFICANCE: Single-dose oral albendazole is more efficacious against hookworm than mebendazole. To achieve high CRs against both hookworm and T. trichiura, triple-dose regimens are warranted. TRIAL REGISTRATION: www.controlled-trials.com: ISRCTN4737502
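
    The two efficacy measures can be illustrated in a few lines of Python. The egg counts below are fabricated, and ERR is computed here from arithmetic group means, which is one common definition and may differ from the exact formula used in the trial.

```python
# Cure rate (CR): share of infected participants with zero eggs after
# treatment. Egg reduction rate (ERR): relative drop in the group mean
# egg count. Counts are eggs per gram of stool (EPG), invented here.
import numpy as np

eggs_before = np.array([120, 300, 80, 45, 500, 60, 210, 95])  # pre-treatment
eggs_after  = np.array([0,   10,  0,  0,  25,  0,   0,  5])   # post-treatment

cr = np.mean(eggs_after == 0) * 100
err = (1 - eggs_after.mean() / eggs_before.mean()) * 100
print(f"CR = {cr:.1f}%, ERR = {err:.1f}%")
```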

    Dealing with Missing Data and Uncertainty in the Context of Data Mining

    Missing data is an issue in many real-world datasets, yet robust methods for dealing with it appropriately still need development. In this paper we investigate how some methods for handling missing data perform as the uncertainty increases. Using benchmark datasets from the UCI Machine Learning repository, we generate datasets for our experimentation with increasing amounts of data Missing Completely At Random (MCAR), both at the attribute level and at the record level. We then apply four classification algorithms: C4.5, Random Forest, Naïve Bayes, and Support Vector Machines (SVMs). We measure the performance of each classifier under complete-case analysis and simple imputation, and then study the performance of the algorithms that can handle missing data natively. We find that complete-case analysis has a detrimental effect because it renders many datasets infeasible as missing data increases, particularly for high-dimensional data. We find that increasing missing data does have a negative effect on the performance of all the algorithms tested, but the algorithms do not show a significant difference in performance whether they rely on simple imputation as preprocessing or handle the missing data themselves.
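
    A minimal version of this comparison can be sketched in Python with scikit-learn. The dataset, missingness rate, and classifier below are assumptions for illustration; the sketch shows how complete-case analysis collapses at higher missingness rates (especially with many attributes) while simple mean imputation keeps every record usable.

```python
# Inject MCAR missingness, then contrast complete-case analysis with
# simple (mean) imputation for one classifier. Choices are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
X = X.copy()
X[rng.random(X.shape) < 0.3] = np.nan             # 30% MCAR at attribute level

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Complete-case analysis: drop every training row with any missing value.
# With 30 features at 30% missingness, almost no complete rows survive.
keep = ~np.isnan(X_tr).any(axis=1)
print(f"complete cases remaining: {keep.sum()} of {len(X_tr)}")

# Simple imputation: replace each missing entry with the column mean.
imp = SimpleImputer(strategy="mean").fit(X_tr)
clf = RandomForestClassifier(random_state=0).fit(imp.transform(X_tr), y_tr)
print("accuracy with mean imputation:",
      accuracy_score(y_te, clf.predict(imp.transform(X_te))))
```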