
    A Logical Characterization of Constraint-Based Causal Discovery

    We present a novel approach to constraint-based causal discovery that takes the form of straightforward logical inference, applied to a list of simple logical statements about causal relations derived directly from observed (in)dependencies. It is both sound and complete, in the sense that all invariant features of the corresponding partial ancestral graph (PAG) are identified, even in the presence of latent variables and selection bias. The approach shows that every identifiable causal relation corresponds to one of just two fundamental forms. More importantly, because the basic building blocks of the method do not rely on the detailed (graphical) structure of the corresponding PAG, it opens up a range of new opportunities, including more robust inference, detailed accountability, and application to large models.
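A minimal sketch of what "logical inference over (in)dependence statements" can look like, under assumptions of mine rather than taken from the paper: the function `non_causes` and the rule it encodes (if X and Y are independent given Z but become dependent once W is added to the conditioning set, then W is not a cause of X, Y, or any member of Z, since W must act as a collider on the unblocked path, or a descendant of one) are an informal paraphrase, not the paper's actual rule set.

```python
# Hypothetical sketch: encode observed (in)dependence statements as tuples
# and apply one informal inference rule to derive non-causal relations.
# (X, Y, Z) in `independencies` means X _||_ Y given conditioning set Z;
# the same tuple in `dependencies` means X and Y are dependent given Z.

def non_causes(independencies, dependencies):
    """Derive statements of the form 'W is not a cause of V'."""
    derived = set()
    for (x, y, z) in independencies:
        for (a, b, zw) in dependencies:
            # Same pair, and the dependence arises from adding exactly one
            # variable W to the independence's conditioning set Z.
            if {a, b} == {x, y} and z < zw and len(zw - z) == 1:
                (w,) = zw - z
                for v in {x, y} | set(z):
                    derived.add((w, v))  # read: W does not cause V
    return derived

# Toy example: X _||_ Y unconditionally, but dependent given {W}.
indeps = {("X", "Y", frozenset())}
deps = {("X", "Y", frozenset({"W"}))}
print(sorted(non_causes(indeps, deps)))  # -> [('W', 'X'), ('W', 'Y')]
```

Note how the derived facts never mention the PAG itself, which is the property the abstract highlights: inference proceeds over small logical statements rather than over the graph's detailed structure.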

    Distributed Bayesian Probabilistic Matrix Factorization

    Matrix factorization is a common machine learning technique for recommender systems. Despite its high prediction accuracy, the Bayesian Probabilistic Matrix Factorization (BPMF) algorithm has not been widely used on large-scale data because of its high computational cost. In this paper we propose a distributed, high-performance parallel implementation of BPMF for shared-memory and distributed architectures. We show that by using efficient load balancing based on work stealing on a single node, and asynchronous communication in the distributed version, we beat state-of-the-art implementations.
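The structural fact such parallel implementations exploit is that, given the item factors, each user's factor update depends only on that user's own ratings, so a sweep over users parallelizes row by row. A hedged sketch of that structure (not the paper's code, and a plain least-squares rank-1 update rather than BPMF's Gibbs sampling step), with a thread pool standing in for the dynamic load balancing the paper achieves via work stealing:

```python
from concurrent.futures import ThreadPoolExecutor

def update_user(ratings_row, item_factors):
    # Closed-form rank-1 update: argmin_u sum_j (r_ij - u * v_j)^2
    # over the observed entries j of this user's sparse ratings row.
    num = sum(r * item_factors[j] for j, r in ratings_row.items())
    den = sum(item_factors[j] ** 2 for j in ratings_row)
    return num / den if den else 0.0

def parallel_sweep(ratings, item_factors, workers=4):
    # Each user's update is independent given the item factors, so the
    # rows can be farmed out to a worker pool in any order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda row: update_user(row, item_factors),
                             ratings))

ratings = [{0: 2.0, 1: 4.0}, {1: 6.0}]  # sparse rows: {item_id: rating}
item_factors = [1.0, 2.0]
print(parallel_sweep(ratings, item_factors))  # -> [2.0, 3.0]
```

In the actual BPMF setting, `update_user` would draw each user's factor vector from its conditional posterior instead of solving a least-squares problem, but the parallel row-wise sweep is the same.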

    Interpretable ECG beat embedding using disentangled variational auto-encoders

    Electrocardiogram signals are often used in medicine. An important aspect of analyzing this data is identifying and classifying the type of each beat, a task often performed by an automated algorithm. Recent advances in neural networks and deep learning have led to high classification accuracy. However, adoption of neural network models into clinical practice is limited by the black-box nature of the classification method. In this work, the use of variational auto-encoders to learn human-interpretable encodings of the beat types is analyzed. It is demonstrated that, using this method, an interpretable and explainable representation of normal and paced beats can be achieved with neural networks.
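An illustrative sketch of the objective that disentangled variational auto-encoders of the beta-VAE family optimize, under my assumption that the paper uses a variant of it (the function names and the beta value here are mine): reconstruction error plus a KL penalty that pulls each latent dimension toward a unit-Gaussian prior, which is what encourages the factorized, interpretable codes described above. The reparameterization trick keeps the sampling step differentiable.

```python
import math
import random

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, exp(log_var)) || N(0, I) ), summed over the
    # latent dimensions.
    return sum(0.5 * (math.exp(lv) + m * m - 1.0 - lv)
               for m, lv in zip(mu, log_var))

def reparameterize(mu, log_var, rng=random):
    # z = mu + sigma * eps with eps ~ N(0, I); gradients flow through
    # mu and log_var while the randomness stays in eps.
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def beta_vae_loss(recon_error, mu, log_var, beta=4.0):
    # beta > 1 weights the prior-matching term more heavily than a plain
    # VAE, trading reconstruction fidelity for disentanglement.
    return recon_error + beta * kl_to_standard_normal(mu, log_var)

# At the prior (mu = 0, log_var = 0) the KL term vanishes:
print(kl_to_standard_normal([0.0, 0.0], [0.0, 0.0]))  # -> 0.0
```

For ECG beats, `recon_error` would be the reconstruction loss of the decoded beat waveform, and traversing one latent dimension of `z` at a time is the usual way to inspect what each dimension encodes.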