The use of randomisation-based efficacy estimators in non-inferiority trials
Background
In a non-inferiority (NI) trial, analysis based on the intention-to-treat (ITT) principle is anti-conservative, so current guidelines recommend an additional analysis of the per-protocol (PP) population. However, PP analysis relies on the often implausible assumption of no confounding. Randomisation-based efficacy estimators (RBEEs) allow for treatment non-adherence while maintaining a comparison of randomised groups. Fischer et al. have developed an approach for estimating RBEEs in randomised trials with two active treatments, a common feature of NI trials. The aim of this paper was to demonstrate the use of RBEEs in NI trials using this approach, and to appraise the feasibility of these estimators as the primary analysis in NI trials.
Methods
Two NI trials were used: one comparing two different dosing regimens for the maintenance of remission in people with ulcerative colitis (CODA), and the other comparing an orally administered treatment with an intravenously administered treatment for preventing skeletal-related events in patients with bone metastases from breast cancer (ZICE). Variables that predicted adherence in each of the trial arms, and that were also independent of outcome, were sought in each study. Structural mean models (SMMs) were fitted that conditioned on these variables, and the point estimates and confidence intervals were compared with those from the corresponding ITT and PP analyses.
Results
In the CODA study, no variables were found that differentially predicted treatment adherence while remaining independent of outcome. The SMM, using standard methodology, moved the point estimate closer to 0 (no difference between arms) compared to the ITT and PP analyses, but the confidence interval was still within the NI margin, indicating that the conclusions drawn would remain the same. In the ZICE study, cognitive functioning as measured by the corresponding domain of the QLQ-C30, and use of chemotherapy at baseline were both differentially associated with adherence while remaining independent of outcome. However, while the SMM again moved the point estimate closer to 0, the confidence interval was wide, overlapping with any NI margin that could be justified.
Conclusion
Deriving RBEEs in NI trials with two active treatments can provide a randomisation-respecting estimate of treatment efficacy that accounts for treatment adherence and is straightforward to implement, but it requires thorough planning during the design stage of the study to ensure that strong baseline predictors of treatment adherence are captured. Extension of the approach to handle non-linear outcome variables is also required.
Trial registration
The CODA study: ClinicalTrials.gov identifier NCT00708656, registered on 8 April 2008. The ZICE study: ClinicalTrials.gov identifier NCT00326820, registered on 16 May 2006.
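The contrast between the ITT, per-protocol, and randomisation-based estimates can be illustrated with a small simulation. This is not the Fischer et al. SMM methodology itself (which handles two active arms and conditions on baseline covariates), but a minimal sketch of the underlying idea: using randomisation as an instrument, so that non-adherence is accounted for without breaking the randomised comparison. All numbers are simulated, and the simple Wald estimator assumes one-sided non-adherence.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta_true = 2.0                      # true efficacy of the active treatment

Z = rng.integers(0, 2, n)            # randomised assignment (the instrument)
U = rng.normal(size=n)               # unmeasured confounder of adherence and outcome
# one-sided non-adherence: only the assigned arm can take the treatment,
# and patients with higher U are more likely to adhere
A = (Z == 1) & (U + rng.normal(size=n) > 0)
Y = beta_true * A + U + rng.normal(size=n)

itt = Y[Z == 1].mean() - Y[Z == 0].mean()          # intention-to-treat contrast
pp  = Y[(Z == 1) & A].mean() - Y[Z == 0].mean()    # per-protocol (confounded by U)
iv  = itt / (A[Z == 1].mean() - A[Z == 0].mean())  # Wald / instrumental-variable estimator

print(round(itt, 2), round(pp, 2), round(iv, 2))
```

Because the unmeasured U drives both adherence and outcome, the per-protocol contrast is biased away from the true effect and the ITT contrast is diluted towards zero (the anti-conservative direction in an NI trial), while the randomisation-based estimator recovers the true efficacy.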
Challenges In The Simultaneous Development And Deployment Of A Large Integrated Modelling System
Many of our natural resource management issues cannot be adequately informed by a single discipline or sub-discipline, and require an integration of information from multiple natural and human systems. As we are unable to observe and monitor more than a few important indicators, there is a strong reliance on supplementing observed information with modelled information. Following a period of record drought in the 1990s, the Australian government recognised the need for better-quality, more integrated, and nationally consistent water information. The Australian Water Resources Assessment system (AWRA) is an integrated hydrological modelling system developed by CSIRO and the Australian Bureau of Meteorology (the Bureau) as part of the Water Information Research and Development Alliance (WIRADA) to support two new water information products produced by the Bureau. This paper outlines the informatics, systems-implementation, and integration challenges in the development and deployment of the proto-operational AWRA system. Key challenges of model integration are how to access and repurpose data, how to reconcile semantic differences between models and between disparate input data sources, how to translate terms when passing data between often conceptually different modelling components, and how to ensure consistent identity for real-world objects. The rapid development of AWRA and its simultaneous transfer to an operational environment also raised many additional challenges, such as supporting multiple technologies and differing development rates of each model component while still maintaining a working system. Additionally, the continental-scale model extent, combined with techniques relatively new to the hydrologic domain such as data assimilation and continental calibration, has introduced significant computational overheads.
While an in-house, fit-for-purpose operational build of AWRA is currently under development within the Bureau, the research challenges tackled early in AWRA’s development still hold many valuable lessons. We have found that the use of file standards such as NetCDF, services-based modelling, and scientific workflow technologies such as ‘The WorkBench’, combined with strong model governance, has substantially reduced the burden of system development and deployment, and it exposes some important lessons for future integrated modelling and systems integration efforts.
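The term-translation challenge described above can be made concrete with a small sketch. The component names, variable names, and unit conversions here are hypothetical illustrations, not AWRA's actual vocabulary; the point is that passing data between conceptually different modelling components needs an explicit, auditable mapping of terms and units.

```python
# A minimal sketch of the semantic-mapping problem: two model components name
# the same quantity differently and use different units, so a translation
# table is applied when data passes between them. All component, variable,
# and unit names here are hypothetical.

# mapping: source term -> (target term, multiplicative unit conversion)
LANDSCAPE_TO_RIVER = {
    "runoff_mm_per_day": ("lateral_inflow_m_per_s", 1.0 / (1000 * 86400)),
    "soil_drainage_mm":  ("baseflow_m", 1.0 / 1000),
}

def translate(record: dict, mapping: dict) -> dict:
    """Rename variables and convert units for the downstream component."""
    out = {}
    for term, value in record.items():
        target, factor = mapping[term]
        out[target] = value * factor
    return out

landscape_output = {"runoff_mm_per_day": 8.64, "soil_drainage_mm": 250.0}
river_input = translate(landscape_output, LANDSCAPE_TO_RIVER)
print(river_input)
```

Keeping the mapping as data rather than scattering conversions through the code is one way to contain the semantic differences the paper describes, since the table can be versioned and reviewed alongside the model governance process.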
Assessing the pathogenicity of insertion and deletion variants with the Variant Effect Scoring Tool (VEST-Indel)
Insertion/deletion variants (indels) alter protein sequence and length, yet are highly prevalent in healthy populations, presenting a challenge to bioinformatics classifiers. Commonly used features—DNA and protein sequence conservation, indel length, and occurrence in repeat regions—are useful for inference of protein damage. However, these features can cause false positives when predicting the impact of indels on disease. Existing methods for indel classification suffer from low specificities, severely limiting clinical utility. Here, we further develop our variant effect scoring tool (VEST) to include the classification of in-frame and frameshift indels (VEST-indel) as pathogenic or benign. We apply 24 features, including a new “PubMed” feature, to estimate a gene's importance in human disease. When compared with four existing indel classifiers, our method achieves a drastically reduced false-positive rate, improving specificity by as much as 90%. This approach of estimating gene importance might be generally applicable to missense and other bioinformatics pathogenicity predictors, which often fail to achieve high specificity. Finally, we tested all possible meta-predictors that can be obtained from combining the four different indel classifiers using Boolean conjunctions and disjunctions, and derived a meta-predictor with improved performance over any individual method.
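The exhaustive meta-predictor search described above (Boolean conjunctions and disjunctions of the four classifiers) can be sketched as follows. The classifier calls and ground-truth labels are made-up toy data, and candidates are ranked here by (specificity, sensitivity) to mirror the paper's emphasis on specificity; the real evaluation would of course use the benchmark variant sets.

```python
from itertools import combinations

# Toy data: four hypothetical indel classifiers' calls on six variants
# (1 = pathogenic) and the ground-truth labels.
preds = {
    "A": [1, 1, 0, 1, 0, 1],
    "B": [1, 0, 1, 1, 0, 0],
    "C": [1, 1, 1, 0, 1, 0],
    "D": [1, 1, 0, 1, 1, 0],
}
truth = [1, 1, 1, 0, 0, 0]

def specificity(calls):
    neg = [c for c, t in zip(calls, truth) if t == 0]
    return sum(c == 0 for c in neg) / len(neg)

def sensitivity(calls):
    pos = [c for c, t in zip(calls, truth) if t == 1]
    return sum(c == 1 for c in pos) / len(pos)

best = None
for r in range(1, 5):
    for subset in combinations(preds, r):
        cols = list(zip(*(preds[k] for k in subset)))
        for name, combine in (("AND", all), ("OR", any)):
            calls = [int(combine(row)) for row in cols]     # conjunction / disjunction
            score = (specificity(calls), sensitivity(calls))
            key = (" & " if name == "AND" else " | ").join(subset)
            if best is None or score > best[0]:
                best = (score, f"{name}({key})")

print(best)
```

With 2^4 - 1 subsets and two Boolean operators the search space is tiny, so exhaustive enumeration is cheap; ranking by the (specificity, sensitivity) tuple breaks ties in favour of the more specific candidate.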
Insights into hominid evolution from the gorilla genome sequence.
Gorillas are humans' closest living relatives after chimpanzees, and are of comparable importance for the study of human origins and evolution. Here we present the assembly and analysis of a genome sequence for the western lowland gorilla, and compare the whole genomes of all extant great ape genera. We propose a synthesis of genetic and fossil evidence consistent with placing the human-chimpanzee and human-chimpanzee-gorilla speciation events at approximately 6 and 10 million years ago. In 30% of the genome, gorilla is closer to human or chimpanzee than the latter are to each other; this is rarer around coding genes, indicating pervasive selection throughout great ape evolution, and has functional consequences in gene expression. A comparison of protein-coding genes reveals approximately 500 genes showing accelerated evolution on each of the gorilla, human and chimpanzee lineages, and evidence for parallel acceleration, particularly of genes involved in hearing. We also compare the western and eastern gorilla species, estimating an average sequence divergence time of 1.75 million years ago, but with evidence for more recent genetic exchange and a population bottleneck in the eastern species. The use of the genome sequence in these and future analyses will promote a deeper understanding of great ape biology and evolution.
Analysis of protein-coding genetic variation in 60,706 humans
Large-scale reference data sets of human genetic variation are critical for the medical and functional interpretation of DNA sequence changes. We describe the aggregation and analysis of high-quality exome (protein-coding region) sequence data for 60,706 individuals of diverse ethnicities generated as part of the Exome Aggregation Consortium (ExAC). This catalogue of human genetic diversity contains an average of one variant every eight bases of the exome, and provides direct evidence for the presence of widespread mutational recurrence. We have used this catalogue to calculate objective metrics of pathogenicity for sequence variants, and to identify genes subject to strong selection against various classes of mutation, identifying 3,230 genes with near-complete depletion of truncating variants, 72% of which have no currently established human disease phenotype. Finally, we demonstrate that these data can be used for the efficient filtering of candidate disease-causing variants, and for the discovery of human “knockout” variants in protein-coding genes.
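The frequency-based filtering idea can be sketched in a few lines: a variant observed at an appreciable allele frequency in a reference panel such as ExAC is unlikely to cause a rare, highly penetrant disorder. The variant names, frequencies, and threshold below are hypothetical illustrations, not values from the paper.

```python
# Illustrative sketch of reference-panel filtering: discard candidate variants
# whose population allele frequency is too high to be compatible with a rare,
# fully penetrant dominant disorder. All names and numbers are made up.
MAX_CREDIBLE_AF = 1e-4   # hypothetical threshold for a rare dominant disease

candidates = {
    "GENE1:c.100A>T": 3.2e-6,   # very rare in the panel -> retained
    "GENE2:c.55del":  4.1e-3,   # too common             -> filtered out
    "GENE3:c.7G>C":   0.0,      # absent from the panel  -> retained
}

retained = {v: af for v, af in candidates.items() if af <= MAX_CREDIBLE_AF}
print(sorted(retained))
```

In practice the credible frequency threshold depends on disease prevalence, penetrance, and mode of inheritance, which is why a panel as large as ExAC is needed to estimate rare-variant frequencies at all.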
Comprehensive Rare Variant Analysis via Whole-Genome Sequencing to Determine the Molecular Pathology of Inherited Retinal Disease
Inherited retinal disease is a common cause of visual impairment and represents a highly heterogeneous group of conditions. Here, we present findings from a cohort of 722 individuals with inherited retinal disease, who have had whole-genome sequencing (n = 605), whole-exome sequencing (n = 72), or both (n = 45) performed, as part of the NIHR-BioResource Rare Diseases research study. We identified pathogenic variants (single-nucleotide variants, indels, or structural variants) for 404/722 (56%) individuals. Whole-genome sequencing gives unprecedented power to detect three categories of pathogenic variants in particular: structural variants, variants in GC-rich regions, which have significantly improved coverage compared to whole-exome sequencing, and variants in non-coding regulatory regions. In addition to previously reported pathogenic regulatory variants, we have identified a previously unreported pathogenic intronic variant in two males with choroideremia. We have also identified 19 genes not previously known to be associated with inherited retinal disease, which harbor biallelic predicted protein-truncating variants in unsolved cases. Whole-genome sequencing is an increasingly important comprehensive method with which to investigate the genetic causes of inherited retinal disease.
This work was supported by The National Institute for Health Research England (NIHR) for the NIHR BioResource – Rare Diseases project (grant number RG65966). The Moorfields Eye Hospital cohort of patients and clinical and imaging data were ascertained and collected with the support of grants from the National Institute for Health Research Biomedical Research Centre at Moorfields Eye Hospital, National Health Service Foundation Trust, and UCL Institute of Ophthalmology; Moorfields Eye Hospital Special Trustees; Moorfields Eye Charity; the Foundation Fighting Blindness (USA); and Retinitis Pigmentosa Fighting Blindness. M.M. is a recipient of an FFB Career Development Award. E.M. is supported by the UCLH/UCL NIHR Biomedical Research Centre. F.L.R. and D.G. are supported by the Cambridge NIHR Biomedical Research Centre.
Risk Portfolio Optimization Using the Markowitz MVO Model in Relation to Human Limitations in Predicting the Future, from the Perspective of the Al-Qur'an
Risk portfolio management in modern finance has become increasingly technical, requiring the use of sophisticated mathematical tools in both research and practice. Since companies cannot insure themselves completely against risk (reflecting the human inability to predict the future precisely, as written in Al-Qur'an surah Luqman verse 34), they have to manage it to yield an optimal portfolio. The objective is to minimize the variance over all portfolios that attain at least a certain expected return, or, alternatively, to maximize the expected return subject to a bound on the variance. This study focuses on optimizing the risk portfolio using the so-called Markowitz MVO (Mean-Variance Optimization) model. The theoretical frameworks used in the analysis are the arithmetic mean, geometric mean, variance, covariance, linear programming, and quadratic programming. Finding a minimum-variance portfolio leads to a convex quadratic programming problem: minimizing the objective function x'Qx subject to the constraints m'x >= r and Ax = b, where Q is the covariance matrix, m the vector of expected returns, and r the required return. The outcome of this research is the optimal risk portfolio over a set of investments, obtained using MATLAB R2007b software together with graphical analysis.
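As a sketch of the quadratic program above: when only the equality constraint Ax = b is retained (here a single budget constraint, with the expected-return inequality omitted for brevity), the minimum-variance problem has a closed-form solution via its KKT system, so a general QP solver such as MATLAB's quadprog is not even needed. The covariance matrix below is illustrative, not data from the study.

```python
import numpy as np

# Minimum-variance portfolio: minimize x'Qx subject to Ax = b, here with the
# single budget constraint 1'x = 1. The KKT conditions give the linear system
#   [2Q  A'] [x]   [0]
#   [A   0 ] [l] = [b]
Q = np.array([[0.04, 0.006, 0.0],    # illustrative covariance matrix
              [0.006, 0.09, 0.01],
              [0.0,   0.01, 0.16]])
A = np.ones((1, 3))                  # budget constraint: weights sum to 1
b = np.array([1.0])

n = Q.shape[0]
kkt = np.block([[2 * Q, A.T],
                [A, np.zeros((1, 1))]])
rhs = np.concatenate([np.zeros(n), b])
x = np.linalg.solve(kkt, rhs)[:n]    # drop the Lagrange multiplier

print(x, x @ Q @ x)                  # optimal weights and minimized variance
```

At the optimum, 2Qx is a constant multiple of the all-ones vector (the stationarity condition), and the achieved variance is below that of holding any single asset, which is the diversification effect the mean-variance model captures. Adding the return inequality m'x >= r would require an active-set or interior-point QP solver, as in quadprog.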