534 research outputs found

    A Bayesian approach to correct for unmeasured or semi-unmeasured confounding in survival data using multiple validation data sets

    Purpose: The existence of unmeasured confounding can clearly undermine the validity of an observational study. Methods of conducting sensitivity analyses to evaluate the impact of unmeasured confounding are well established. However, the application of such methods to survival data ("time-to-event" outcomes) has received little attention in the literature. The purpose of this study is to propose a novel Bayesian method to account for unmeasured confounding in survival data. Methods: The Bayesian method is proposed under the assumption that supplementary information on unmeasured confounding is available, in the form of internal validation data, external validation data or expert-elicited prior distributions. The method for incorporating such information into the Cox proportional hazards model is described. Simulation studies, based on a recently published instrumental variable method, are performed to assess the impact of unmeasured confounding and to illustrate the improvement of the proposed method over the naïve model, which ignores unmeasured confounding. Results: Simulation studies illustrate the impact of ignoring unmeasured confounding and the effectiveness of our Bayesian approach. The corrected model had substantially less bias, and coverage of 95% intervals was much closer to the nominal level. Conclusion: The proposed Bayesian method provides a useful and flexible tool for incorporating different types of supplemental information on unmeasured confounding to adjust treatment estimates when the outcome is survival data. It outperformed the naïve model in simulation studies based on a real-world study.
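    The core problem is easy to demonstrate in simulation. Below is a minimal sketch (not the paper's Bayesian method, and not its code) of how omitting a confounder U that drives both treatment assignment and the hazard biases a naïve Cox estimate; it uses the lifelines library, and all variable names and parameter values are hypothetical.

    ```python
    # Sketch: bias in a Cox model from an unmeasured confounder U.
    # Hypothetical simulation; requires numpy, pandas, lifelines.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 5000
    u = rng.normal(size=n)                        # unmeasured confounder
    z = rng.binomial(1, 1 / (1 + np.exp(-0.8 * u)))  # U raises treatment probability
    true_log_hr = 0.5                             # assumed true treatment effect
    hazard = np.exp(true_log_hr * z + 0.8 * u)    # U also raises the hazard
    t = rng.exponential(1 / hazard)               # event times
    c = rng.exponential(2.0, size=n)              # independent censoring times
    df = pd.DataFrame({"T": np.minimum(t, c),
                       "E": (t <= c).astype(int),
                       "Z": z, "U": u})

    naive = CoxPHFitter().fit(df[["T", "E", "Z"]], "T", "E")  # ignores U
    oracle = CoxPHFitter().fit(df, "T", "E")                  # includes U
    print("naive log-HR: ", naive.params_["Z"])   # biased away from 0.5
    print("oracle log-HR:", oracle.params_["Z"])  # close to 0.5
    ```

    The paper's contribution is to recover something like the oracle fit without observing U, by encoding validation data or elicited priors on the confounding mechanism; the sketch only shows why the correction is needed.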

    Earliest archaeological evidence of persistent hominin carnivory

    The emergence of lithic technology by ~2.6 million years ago (Ma) is often interpreted as a correlate of increasingly recurrent hominin acquisition and consumption of animal remains. Associated faunal evidence, however, is poorly preserved prior to ~1.8 Ma, limiting our understanding of early archaeological (Oldowan) hominin carnivory. Here, we detail three large, well-preserved zooarchaeological assemblages from Kanjera South, Kenya. The assemblages date to ~2.0 Ma, pre-dating all previously published archaeofaunas of appreciable size. At Kanjera, there is clear evidence that Oldowan hominins acquired and processed numerous, relatively complete, small ungulate carcasses. Moreover, they had at least occasional access to the fleshed remains of larger, wildebeest-sized animals. The overall record of hominin activities is consistent through the stratified sequence, spanning hundreds to thousands of years, and provides the earliest archaeological evidence of sustained hominin involvement with fleshed animal remains (i.e., persistent carnivory), a foraging adaptation central to many models of hominin evolution. This research was supported by funding from the National Science Foundation, Leakey Foundation, Wenner-Gren Foundation, National Geographic Society, The Leverhulme Trust, University of California, Baylor University, and the City University of New York. Additional logistical support was provided by the Smithsonian Institution's Human Origins Program and the Peter Buck Fund for Human Origins Research, the British Institute in Eastern Africa, and the National Museums of Kenya. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

    Allowing for missing outcome data and incomplete uptake of randomised interventions, with application to an Internet-based alcohol trial

    Missing outcome data and incomplete uptake of randomised interventions are common problems, which complicate the analysis and interpretation of randomised controlled trials, and are rarely addressed well in practice. To promote the implementation of recent methodological developments, we describe sequences of randomisation-based analyses that can be used to explore both issues. We illustrate these in an Internet-based trial evaluating the use of a new interactive website for those seeking help to reduce their alcohol consumption, in which the primary outcome was available for less than half of the participants and uptake of the intervention was limited.
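    As a hedged illustration of the randomisation-based logic (simulated hypothetical data, not the trial's actual analysis plan), the sketch below computes an intention-to-treat estimate on complete cases, which assumes outcomes are missing at random, and rescales it by the uptake difference between arms to obtain a complier-average causal effect:

    ```python
    # Sketch: ITT and CACE estimates with missing outcomes and partial uptake.
    # All data-generating parameters are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000
    arm = rng.binomial(1, 0.5, n)                # randomised arm (1 = offered website)
    uptake = arm * rng.binomial(1, 0.6, n)       # only ~60% of the offered arm log in
    y = 10 - 2.0 * uptake + rng.normal(0, 3, n)  # outcome: consumption score
    observed = rng.binomial(1, 0.45, n) == 1     # under half of outcomes observed

    itt = y[observed & (arm == 1)].mean() - y[observed & (arm == 0)].mean()
    uptake_diff = uptake[arm == 1].mean() - uptake[arm == 0].mean()
    print("ITT estimate: ", itt)                 # diluted by non-uptake
    print("CACE estimate:", itt / uptake_diff)   # rescaled to compliers
    ```

    The paper's "sequences of analyses" vary the missingness and uptake assumptions systematically; this sketch shows only a single such analysis.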

    Intra-Spike Crosslinking Overcomes Antibody Evasion by HIV-1

    Antibodies developed during HIV-1 infection lose efficacy as the viral spike mutates. We postulated that anti-HIV-1 antibodies primarily bind monovalently because HIV's low spike density impedes bivalent binding through inter-spike crosslinking, and the spike structure prohibits bivalent binding through intra-spike crosslinking. Monovalent binding reduces avidity and potency, thus expanding the range of mutations permitting antibody evasion. To test this idea, we engineered antibody-based molecules capable of bivalent binding through intra-spike crosslinking. We used DNA as a "molecular ruler" to measure intra-epitope distances on virion-bound spikes and to construct intra-spike crosslinking molecules. Optimal bivalent reagents exhibited up to 2.5 orders of magnitude increased potency (>100-fold average increases across virus panels) and identified conformational states of virion-bound spikes. The demonstration that intra-spike crosslinking lowers the concentration of antibodies required for neutralization supports the hypothesis that low spike densities facilitate antibody evasion, and supports the use of molecules capable of intra-spike crosslinking for therapy or passive protection.
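    The "molecular ruler" idea rests on simple arithmetic: rigid B-form dsDNA rises by roughly 0.34 nm per base pair, so a linker length in base pairs maps directly to a physical span. The abstract gives no actual distances, so the spans in this sketch are purely hypothetical:

    ```python
    # Sketch: converting a desired intra-epitope span to a dsDNA linker length.
    # The ~0.34 nm/bp rise is standard for B-form DNA; the spans are invented.
    RISE_NM_PER_BP = 0.34

    def bp_for_span(distance_nm: float) -> int:
        """Base pairs of rigid dsDNA needed to span a given distance."""
        return round(distance_nm / RISE_NM_PER_BP)

    for d in (5.0, 10.0, 20.0):                  # hypothetical spans in nm
        print(f"{d:5.1f} nm  ->  ~{bp_for_span(d)} bp linker")
    ```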

    Joint modelling rationale for chained equations

    BACKGROUND: Chained equations imputation is widely used in medical research. It uses a set of conditional models, so is more flexible than joint modelling imputation for the imputation of different types of variables (e.g. binary, ordinal or unordered categorical). However, chained equations imputation does not correspond to drawing from a joint distribution when the conditional models are incompatible. Concurrently with our work, other authors have shown the equivalence of the two imputation methods in finite samples. METHODS: Taking a different approach, we prove, in finite samples, sufficient conditions for chained equations and joint modelling to yield imputations from the same predictive distribution. Further, we apply this proof in four specific cases and conduct a simulation study which explores the consequences when the conditional models are compatible but the conditions otherwise are not satisfied. RESULTS: We provide an additional "non-informative margins" condition which, together with compatibility, is sufficient. We show that the non-informative margins condition is not satisfied, despite compatible conditional models, in a situation as simple as two continuous variables and one binary variable. Our simulation study demonstrates that as a consequence of this violation order effects can occur; that is, systematic differences depending upon the ordering of the variables in the chained equations algorithm. However, the order effects appear to be small, especially when associations between variables are weak. CONCLUSIONS: Since chained equations is typically used in medical research for datasets with different types of variables, researchers must be aware that order effects are likely to be ubiquitous, but our results suggest they may be small enough to be negligible.
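    To make the algorithm concrete, here is a minimal hand-rolled sketch of chained-equations imputation for two continuous variables, using stochastic linear regression for each conditional model. It is illustrative only, not the implementation studied in the paper, and the missingness rates and coefficients are invented:

    ```python
    # Sketch: chained-equations (fully conditional) imputation, two variables.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 500
    x = rng.normal(size=n)
    y = 0.7 * x + rng.normal(0, 0.5, n)
    x_mis = rng.random(n) < 0.3                  # missingness indicators (MCAR)
    y_mis = rng.random(n) < 0.3

    xi = np.where(x_mis, x[~x_mis].mean(), x)    # initialise with observed means
    yi = np.where(y_mis, y[~y_mis].mean(), y)

    for _ in range(10):                          # cycle through conditional models
        # impute x | y: stochastic regression fit on cases where x is observed
        bx = np.polyfit(yi[~x_mis], xi[~x_mis], 1)
        rx = xi[~x_mis] - np.polyval(bx, yi[~x_mis])
        xi[x_mis] = np.polyval(bx, yi[x_mis]) + rng.normal(0, rx.std(), x_mis.sum())
        # impute y | x symmetrically; swapping the order of these two steps
        # is what can produce the "order effects" discussed above
        by = np.polyfit(xi[~y_mis], yi[~y_mis], 1)
        ry = yi[~y_mis] - np.polyval(by, xi[~y_mis])
        yi[y_mis] = np.polyval(by, xi[y_mis]) + rng.normal(0, ry.std(), y_mis.sum())
    ```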

    Comparing Four Approaches for Technical Debt Identification

    Background: Software systems accumulate technical debt (TD) when short-term goals in software development are traded for long-term goals (e.g., a quick-and-dirty implementation to reach a release date vs. a well-refactored implementation that supports the long-term health of the project). Some forms of TD accumulate over time in the form of source code that is difficult to work with and exhibits a variety of anomalies. A number of source code analysis techniques and tools have been proposed to identify the code-level debt accumulated in a system. What has not yet been studied is whether using multiple tools to detect TD is beneficial, i.e., whether different tools flag the same or different source code components. Further, these techniques have not been investigated with respect to the symptoms of TD "interest" that they lead to. To address this latter question, we also investigated whether TD, as identified by the source code analysis techniques, correlates with interest payments in the form of increased defect- and change-proneness. Aims: To compare the results of different TD identification approaches, understand their commonalities and differences, and evaluate their relationship to indicators of future TD "interest". Method: We selected four different TD identification techniques (code smells, automatic static analysis (ASA) issues, grime buildup, and modularity violations) and applied them to 13 versions of the Apache Hadoop open source software project. We collected and aggregated statistical measures to investigate whether the different techniques identified TD indicators in the same or different classes and whether those classes in turn exhibited high interest (in the form of a large number of defects and higher change-proneness). Results: The outputs of the four approaches have very little overlap and therefore point to different problems in the source code. Dispersed coupling and modularity violations were co-located in classes with higher defect-proneness. We also observed a strong relationship between modularity violations and change-proneness. Conclusions: Our main contribution is an initial overview of the TD landscape, showing that different TD techniques are loosely coupled and therefore indicate problems in different locations of the source code. Moreover, our proxy interest indicators (change- and defect-proneness) correlate with only a small subset of TD indicators.
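    The overlap question is straightforward to quantify once each technique's output is reduced to a set of flagged classes. Below is a hypothetical sketch of that comparison using pairwise Jaccard similarity; the class sets are invented for illustration (the names are real Hadoop classes, but the flags are not the study's data):

    ```python
    # Sketch: pairwise overlap between TD identification techniques.
    from itertools import combinations

    flagged = {  # classes flagged per technique (illustrative only)
        "code_smells":           {"JobTracker", "DFSClient", "TaskRunner"},
        "asa_issues":            {"DFSClient", "NameNode"},
        "grime":                 {"TaskRunner", "FSNamesystem"},
        "modularity_violations": {"JobTracker", "FSNamesystem", "NameNode"},
    }

    def jaccard(a: set, b: set) -> float:
        """Size of intersection over size of union (0 = disjoint, 1 = identical)."""
        return len(a & b) / len(a | b)

    for (n1, s1), (n2, s2) in combinations(flagged.items(), 2):
        print(f"{n1} vs {n2}: Jaccard = {jaccard(s1, s2):.2f}")
    ```

    Near-zero Jaccard values across all pairs would reproduce the paper's finding that the techniques point to largely different locations in the code.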

    Robust Digital Holography For Ultracold Atom Trapping

    We have formulated and experimentally demonstrated an improved algorithm for the design of arbitrary two-dimensional holographic traps for ultracold atoms. Our method builds on the best previously available algorithm, MRAF, and improves on it in two ways. First, it allows for the creation of holographic atom traps with a well-defined background potential. Second, we experimentally show that creating trapping potentials free of fringing artifacts requires going beyond the Fourier approximation in modelling light propagation. To this end, we incorporate full Helmholtz propagation into our calculations.
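    For context, the MRAF baseline the authors build on can be sketched in a few lines of numpy. The version below uses a single FFT per propagation, i.e. exactly the Fourier approximation the paper argues is insufficient for fringe-free traps; the grid size, mixing parameter and target shape are illustrative, not the paper's values:

    ```python
    # Sketch: MRAF-style iterative Fourier hologram design (after Pasienski
    # & DeMarco). Illustrative parameters; not the paper's improved algorithm.
    import numpy as np

    N, m, n_iter = 256, 0.4, 100                 # grid, mixing parameter, iterations
    yy, xx = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
    slm_amp = np.exp(-(xx**2 + yy**2))           # incident Gaussian beam on the SLM
    target = ((np.abs(xx) < 0.2) & (np.abs(yy) < 0.2)).astype(float)
    target /= np.sqrt((target**2).sum())         # normalised flat-top target amplitude
    signal = (np.abs(xx) < 0.3) & (np.abs(yy) < 0.3)  # signal region; noise outside
    phase = 2 * np.pi * np.random.default_rng(3).random((N, N))

    for _ in range(n_iter):
        out = np.fft.fftshift(np.fft.fft2(slm_amp * np.exp(1j * phase)))
        # MRAF constraint: enforce the (mixed) target amplitude only inside
        # the signal region, leaving the noise region free to absorb errors
        new = np.where(signal, m * target * np.exp(1j * np.angle(out)), (1 - m) * out)
        back = np.fft.ifft2(np.fft.ifftshift(new))
        phase = np.angle(back)                   # SLM amplitude is fixed; keep phase
    ```

    The paper's improvements amount to controlling the potential in the noise region and replacing the single-FFT propagation step with full Helmholtz propagation.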