
    A Meta-Reinforcement Learning Algorithm for Causal Discovery

    Uncovering the underlying causal structure of a phenomenon, domain, or environment is of great scientific interest, not least because of the inferences that can be derived from such structures. Unfortunately, identifying the causal structure of a given environment poses significant challenges, among them the need for costly interventions and the size of the space of possible structures that must be searched. In this work, we propose a meta-reinforcement learning setup that addresses these challenges by learning a causal discovery algorithm, called Meta-Causal Discovery (MCD). We model this algorithm as a policy that is trained on a set of environments with known causal structures to perform budgeted interventions. Simultaneously, the policy learns to maintain an estimate of the environment's causal structure. The learned policy can then be used as a causal discovery algorithm that estimates the structure of new environments in a matter of milliseconds. At test time, the algorithm performs well even in environments that induce previously unseen causal structures. We show empirically that MCD estimates graphs that compare well with state-of-the-art approaches on toy environments, and thus constitutes a proof of concept for learning causal discovery algorithms.
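    To make the interaction protocol concrete, the sketch below mimics one budgeted-intervention episode on a toy linear-Gaussian SEM. All names are hypothetical, and a random placeholder stands in for the learned MCD policy; the point is the loop structure: choose an intervention, observe its effect, and update the structure estimate.

```python
# Minimal sketch of a budgeted-intervention episode (hypothetical names;
# the actual MCD policy is a trained neural network, not shown here).
import numpy as np

rng = np.random.default_rng(0)

def random_dag(n_nodes, p_edge=0.3):
    """Upper-triangular adjacency matrix => acyclic by construction."""
    adj = (rng.random((n_nodes, n_nodes)) < p_edge).astype(float)
    return np.triu(adj, k=1)

def sample_linear_sem(adj, intervened=None, value=2.0, n=64, noise=0.1):
    """Draw samples from a linear-Gaussian SEM; optionally clamp one node."""
    d = adj.shape[0]
    x = np.zeros((n, d))
    for j in range(d):  # column order is a topological order here
        if j == intervened:
            x[:, j] = value                       # hard intervention do(X_j = value)
        else:
            x[:, j] = x @ adj[:, j] + noise * rng.standard_normal(n)
    return x

def run_episode(adj_true, budget=5):
    """Budgeted interventions with a *random* placeholder policy and a crude
    mean-shift belief update in place of the learned structure estimator."""
    d = adj_true.shape[0]
    belief = np.zeros((d, d))                     # running edge-score estimate
    for _ in range(budget):
        target = rng.integers(d)                  # the policy would choose this
        x = sample_linear_sem(adj_true, intervened=target)
        baseline = sample_linear_sem(adj_true)
        # Nodes that respond to do(X_target) are likely descendants of target.
        belief[target] += np.abs(x.mean(0) - baseline.mean(0))
    estimate = (belief > 0.5).astype(float)       # threshold is arbitrary here
    np.fill_diagonal(estimate, 0.0)
    return estimate

adj = random_dag(4)
print("true graph:\n", adj)
print("estimated (ancestral) graph:\n", run_episode(adj))
```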

    An Energy-Based Variational Model of Ferromagnetic Hysteresis for Finite Element Computations

    This paper proposes a macroscopic model of ferromagnetic hysteresis that is well suited to finite element implementation. The model is inherently vectorial and relies on a consistent thermodynamic formulation. In particular, the stored magnetic energy and the dissipated energy are known at all times, not solely after the completion of closed hysteresis loops, as is usually the case. The resulting incremental formulation is variationally consistent, i.e., all internal variables follow from the minimization of a thermodynamic potential.
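    The incremental structure can be illustrated with a minimal scalar sketch: at each step, the polarization J follows from minimizing a potential combining stored energy, magnetic work, and a pinning-type dissipation term, so the energy balance is available at every step. The tanh anhysteretic law and all parameter values below are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Illustrative scalar sketch of the incremental variational update.
# ASSUMPTIONS: tanh anhysteretic law and all parameter values are made up.
JS, A, CHI = 1.3, 120.0, 60.0     # saturation [T], shape [A/m], pinning field [A/m]
J_GRID = np.linspace(-0.999 * JS, 0.999 * JS, 4001)

def stored_energy(J):
    """u(J) = integral of the inverse anhysteretic law A*artanh(J/JS)."""
    j = J / JS
    return A * JS * 0.5 * ((1 + j) * np.log1p(j) + (1 - j) * np.log1p(-j))

def step(h, J_prev):
    """One step: J_k = argmin_J [ u(J) - h*J + CHI*|J - J_prev| ] (grid search)."""
    potential = stored_energy(J_GRID) - h * J_GRID + CHI * np.abs(J_GRID - J_prev)
    J_new = J_GRID[np.argmin(potential)]
    dissipated = CHI * abs(J_new - J_prev)        # known at every step, not per loop
    return J_new, dissipated

J, total_dissipated = 0.0, 0.0
for h in 400.0 * np.sin(np.linspace(0.0, 4.0 * np.pi, 400)):  # two field cycles
    J, d = step(h, J)
    total_dissipated += d
print(f"dissipated energy over two cycles: {total_dissipated:.1f} J/m^3")
```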

    Comparative performance of intensive care mortality prediction models based on manually curated versus automatically extracted electronic health record data

    INTRODUCTION: Benchmarking intensive care units for audit and feedback is frequently based on comparing actual versus predicted mortality. Traditionally, mortality prediction models rely on a limited number of input variables and on significant manual data entry and curation. Using automatically extracted electronic health record data may be a promising alternative, but adequate data on the comparative performance of these approaches are currently lacking.

    METHODS: The AmsterdamUMCdb intensive care database was used to construct a baseline APACHE IV in-hospital mortality model based on data typically available through manual curation. Subsequently, new in-hospital mortality models were systematically developed and evaluated. The new models differed in the extent of automatic variable extraction, the classification method, the use of recalibration, and the size of the data collection window.

    RESULTS: A total of 13 models were developed based on data from 5,077 admissions, divided into training (80%) and test (20%) cohorts. Adding variables or extending collection windows only marginally improved discrimination and calibration. An XGBoost model using only automatically extracted variables, and therefore no acute or chronic diagnoses, was the best-performing automated model, with an AUC of 0.89 and a Brier score of 0.10.

    DISCUSSION: Intensive care mortality prediction models based on manually curated and on automatically extracted electronic health record data perform similarly. Importantly, our results suggest that variables that typically require manual curation, such as diagnosis at admission and comorbidities, may not be necessary for accurate mortality prediction. These proof-of-concept results require replication using multi-centre data.
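    A minimal sketch of the reported evaluation pipeline, using synthetic stand-in data (the study's AmsterdamUMCdb variables are not reproduced here): an XGBoost classifier on an 80/20 split, scored by AUC for discrimination and the Brier score for calibration. The hyperparameters below are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, brier_score_loss
from xgboost import XGBClassifier

# Synthetic stand-in for automatically extracted EHR features (vitals, labs);
# ~12% positive class as a rough, assumed in-hospital mortality rate.
X, y = make_classification(n_samples=5077, n_features=30, weights=[0.88],
                           random_state=0)

# Same 80/20 train/test split as in the study.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y,
                                          random_state=0)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                      eval_metric="logloss")   # hyperparameters assumed
model.fit(X_tr, y_tr)

p = model.predict_proba(X_te)[:, 1]
print(f"AUC:   {roc_auc_score(y_te, p):.2f}")        # discrimination
print(f"Brier: {brier_score_loss(y_te, p):.2f}")     # calibration + sharpness
```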