
    Parameter estimation on gravitational waves from multiple coalescing binaries

    Future ground-based and space-borne interferometric gravitational-wave detectors may capture tens to thousands of binary-coalescence events per year. There is a significant and growing body of work on estimating astrophysically relevant parameters, such as masses and spins, from the gravitational-wave signature of a single event. This paper introduces a robust Bayesian framework for combining the parameter estimates of multiple events into a parameter distribution for the underlying event population. The framework can be readily deployed as a rapid post-processing tool.
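    As a concrete illustration of the population-level combination the abstract describes, the sketch below pools per-event posterior samples into an estimate of population hyperparameters. It is a minimal hierarchical-inference sketch, not the paper's method: the Gaussian population model, the simulated posterior samples, and the grid search are all assumptions made for illustration.

```python
import numpy as np
from scipy.stats import norm

# Simulate per-event posterior samples for one parameter (e.g. chirp mass).
# In practice these would come from single-event parameter estimation.
rng = np.random.default_rng(0)
true_mu, true_sigma = 30.0, 5.0          # assumed population hyperparameters
n_events = 20
event_samples = [
    rng.normal(rng.normal(true_mu, true_sigma), 2.0, size=1000)
    for _ in range(n_events)
]

def log_hyper_likelihood(mu, sigma, samples_per_event):
    """Monte Carlo estimate of the population log-likelihood.

    Assuming flat single-event priors, each event's marginal likelihood under
    population parameters (mu, sigma) is approximated by averaging the
    population density over that event's posterior samples.
    """
    return sum(
        np.log(np.mean(norm.pdf(s, loc=mu, scale=sigma)))
        for s in samples_per_event
    )

# Coarse grid over the hyperparameters (a sketch; real analyses use MCMC).
mus, sigmas = np.linspace(26, 34, 17), np.linspace(3, 8, 11)
grid = np.array([[log_hyper_likelihood(m, s, event_samples) for s in sigmas]
                 for m in mus])
i, j = np.unravel_index(np.argmax(grid), grid.shape)
print(f"maximum-likelihood population mean ~ {mus[i]:.1f}, std ~ {sigmas[j]:.1f}")
```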

    Representation recovers information

    Early agreement within cognitive science on the topic of representation has now given way to a range of competing positions. Some question the significance of representation in cognition; others continue to argue in favor, but the case has not been demonstrated in any formal way. The present paper sets out a framework in which the value of representation-use can be mathematically measured, albeit in a broadly sensory context rather than a specifically cognitive one. Key to the approach is the use of Bayesian networks for modeling the distal dimension of sensory processes. More relevant to cognitive science is the theoretical result obtained: a certain type of representational architecture is *necessary* for achieving sensory efficiency. While exhibiting few of the characteristics of traditional, symbolic encoding, this architecture corresponds quite closely to the forms of representation now being explored in some embedded/embodied approaches. It becomes meaningful to view that type of representation-use as a form of information recovery. A formal basis then exists for viewing representation not so much as the substrate of reasoning and thought, but rather as a general medium for efficient, interpretive processing.
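    To make the distal/proximal framing concrete, here is a minimal two-node Bayesian network in plain Python: a hidden distal state causes a noisy proximal sensory reading, and information about the distal state is recovered from the reading via Bayes' rule. The network and its probabilities are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hidden distal state D in {0, 1} causes a proximal sensory reading S in {0, 1}.
p_d = np.array([0.7, 0.3])             # prior P(D)
p_s_given_d = np.array([[0.9, 0.1],    # P(S | D=0)
                        [0.2, 0.8]])   # P(S | D=1)

def posterior_distal(s):
    """P(D | S=s): what the sensory reading reveals about the distal state."""
    joint = p_d * p_s_given_d[:, s]
    return joint / joint.sum()

# Observing S=1 shifts belief toward D=1: the reading "recovers" distal information.
print(posterior_distal(1))
```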

    Cardiac health risk stratification system (CHRiSS): A Bayesian-based decision support system for left ventricular assist device (LVAD) therapy

    This study investigated the use of Bayesian networks (BNs) for left ventricular assist device (LVAD) therapy, a treatment for end-stage heart failure that has grown steadily in popularity over the past decade. Despite this growth, the number of LVAD implants performed annually remains a small fraction of the estimated population of patients who might benefit from this treatment. We believe this demonstrates a need for an accurate stratification tool that can help identify LVAD candidates at the most appropriate point in the course of their disease. We derived BNs to predict mortality at five endpoints using the Interagency Registry for Mechanically Assisted Circulatory Support (INTERMACS) database, which contains over 12,000 enrolled patients from 153 hospital sites, collected from 2006 to the present, with approximately 230 pre-implant clinical variables. The synthetic minority oversampling technique (SMOTE) was employed to address the uneven proportion of patients with negative outcomes and to improve the performance of the models. The resulting accuracy and area under the ROC curve (both %) for predicted mortality were: 30 days, 94.9 and 92.5; 90 days, 84.2 and 73.9; 6 months, 78.2 and 70.6; 1 year, 73.1 and 70.6; and 2 years, 71.4 and 70.8. To foster the translation of these models to clinical practice, they have been incorporated into a web-based application, the Cardiac Health Risk Stratification System (CHRiSS). As clinical experience with LVAD therapy continues to grow and additional data are collected, we aim to continually update these BN models to improve their accuracy and maintain their relevance. Ongoing work also aims to extend the BN models to predict the risk of adverse events after LVAD implantation, as additional factors for consideration in decision making.
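    The class-imbalance step is the part of this pipeline that is easiest to show in isolation. The sketch below applies SMOTE before fitting a classifier; it is a hedged stand-in, not the paper's pipeline: the data are synthetic, and a Gaussian naive Bayes model substitutes for the INTERMACS-trained Bayesian networks.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for pre-implant clinical variables with a rare outcome.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 20))
y = (rng.random(2000) < 0.05).astype(int)     # ~5% positive (imbalanced) outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority class in the training split only, as is standard,
# so the held-out evaluation reflects the true class proportions.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = GaussianNB().fit(X_bal, y_bal)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"held-out ROC AUC: {auc:.3f}")
```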

    Lecture notes on ridge regression

    The linear regression model cannot be fitted to high-dimensional data, as the high-dimensionality brings about empirical non-identifiability. Penalized regression overcomes this non-identifiability by augmenting the loss function with a penalty, i.e. a function of the regression coefficients. The ridge penalty is the sum of the squared regression coefficients, giving rise to ridge regression. Here many aspects of ridge regression are reviewed, e.g. its moments, mean squared error, equivalence to constrained estimation, and relation to Bayesian regression, and its behaviour and use are illustrated in simulation and on omics data. Subsequently, ridge regression is generalized to allow for a more general penalty. The ridge penalization framework is then translated to logistic regression, and its properties are shown to carry over. To contrast with ridge-penalized estimation, the final chapter introduces its lasso counterpart.
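    The core object the notes build on has a simple closed form: the ridge estimator minimizes ||y - Xb||^2 + lam * ||b||^2, giving b(lam) = (X'X + lam*I)^(-1) X'y, which is well defined even when the number of covariates exceeds the sample size. Below is a minimal numpy sketch of this estimator; the toy data and dimensions are assumptions made for illustration.

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge estimator: argmin_b ||y - X b||^2 + lam * ||b||^2.

    The penalty makes X'X + lam*I invertible, so the estimator exists even
    when covariates outnumber samples and OLS is non-identifiable.
    """
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# High-dimensional toy data: 50 samples, 200 covariates, 5 true signals.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))
beta = np.zeros(200)
beta[:5] = 2.0
y = X @ beta + rng.normal(size=50)

for lam in (0.1, 1.0, 10.0):
    print(lam, np.round(ridge(X, y, lam)[:5], 2))  # shrinkage grows with lam
```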

    Probabilistic Programming Concepts

    A multitude of probabilistic programming languages exists today, all extending a traditional programming language with primitives to support the modeling of complex, structured probability distributions. Each of these languages employs its own probabilistic primitives and comes with a particular syntax, semantics, and inference procedure. This makes it hard to understand the underlying programming concepts and to appreciate the differences between the languages. To obtain a better understanding of probabilistic programming, we identify a number of core programming concepts underlying the primitives used by various probabilistic languages, discuss the execution mechanisms they require, and use these to position state-of-the-art probabilistic languages and their implementations. While doing so, we focus on probabilistic extensions of logic programming languages such as Prolog, which have been under development for more than 20 years.
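    The simplest shared primitive across the probabilistic logic languages the abstract focuses on is the probabilistic fact: a clause that holds independently with a given probability, with a query's probability summed over the possible worlds in which it succeeds. The sketch below enumerates worlds for a tiny ProbLog-style reachability program; the graph, the probabilities, and the brute-force enumeration are illustrative assumptions (real systems use knowledge compilation rather than enumeration).

```python
import itertools

# Probabilistic facts: each edge exists independently with its own probability.
prob_facts = {
    ("a", "b"): 0.6,
    ("b", "c"): 0.7,
    ("a", "c"): 0.2,
}

def reachable(edges, src, dst):
    """Plain logic-program part: is dst reachable from src over these edges?"""
    frontier, seen = {src}, set()
    while frontier:
        node = frontier.pop()
        seen.add(node)
        frontier |= {v for (u, v) in edges if u == node and v not in seen}
    return dst in seen

# Enumerate all possible worlds (each fact in or out) -- exponential, so only
# viable for tiny programs.
facts = list(prob_facts)
p_query = 0.0
for world in itertools.product([False, True], repeat=len(facts)):
    w_prob, edges = 1.0, set()
    for fact, present in zip(facts, world):
        w_prob *= prob_facts[fact] if present else 1 - prob_facts[fact]
        if present:
            edges.add(fact)
    if reachable(edges, "a", "c"):
        p_query += w_prob

print(f"P(path(a, c)) = {p_query:.3f}")   # 0.2 + 0.8*0.6*0.7 = 0.536
```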