36 research outputs found

    Clinical Quantitative Electromyography

    Get PDF
    No abstract available

    Learning Bayesian Networks with Incomplete Data by Augmentation

    Get PDF
    We present new algorithms for learning Bayesian networks from data with missing values using a data augmentation approach. An exact Bayesian network learning algorithm is obtained by recasting the problem into a standard Bayesian network learning problem without missing data. To the best of our knowledge, this is the first exact algorithm for this problem. As expected, the exact algorithm does not scale to large domains. We build on the exact method to create an approximate algorithm using a hill-climbing technique. This algorithm scales to large domains so long as a suitable standard structure learning method for complete data is available. We perform a wide range of experiments to demonstrate the benefits of this new approach.
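    The augmentation idea can be sketched as follows (a hypothetical simplification, not the paper's exact hill-climbing scheme; `learn_structure` stands in for any complete-data structure learner that returns a score and a structure):

```python
import numpy as np

def augment_and_learn(data, learn_structure, n_completions=5, seed=0):
    """Sketch of the augmentation idea: complete the missing entries
    (NaN) by sampling from each column's observed values, hand each
    completed dataset to a standard complete-data structure learner,
    and keep the best-scoring result."""
    rng = np.random.default_rng(seed)
    best_score, best_structure = -np.inf, None
    for _ in range(n_completions):
        completed = data.copy()
        for j in range(data.shape[1]):
            col = completed[:, j]            # view into `completed`
            missing = np.isnan(col)
            if missing.any():
                col[missing] = rng.choice(col[~missing], size=missing.sum())
        score, structure = learn_structure(completed)
        if score > best_score:
            best_score, best_structure = score, structure
    return best_score, best_structure
```
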

    Continual Learning with Adaptive Weights (CLAW)

    Get PDF
    Approaches to continual learning aim to successfully learn a set of related tasks that arrive in an online manner. Recently, several frameworks have been developed which enable deep learning to be deployed in this learning scenario. A key modelling decision is to what extent the architecture should be shared across tasks. On the one hand, separately modelling each task avoids catastrophic forgetting, but it does not support transfer learning and leads to large models. On the other hand, rigidly specifying a shared component and a task-specific part enables task transfer and limits the model size, but it is vulnerable to catastrophic forgetting and restricts the form of task transfer that can occur. Ideally, the network should adaptively identify which parts of the network to share in a data-driven way. Here we introduce such an approach called Continual Learning with Adaptive Weights (CLAW), which is based on probabilistic modelling and variational inference. Experiments show that CLAW achieves state-of-the-art performance on six benchmarks in terms of overall continual learning performance, as measured by classification accuracy, and in terms of addressing catastrophic forgetting.
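    The shared-versus-task-specific trade-off can be illustrated with a toy adapted layer (a hypothetical simplification, not CLAW's actual variational scheme; names `alpha_t`/`beta_t` are ours):

```python
import numpy as np

def adapted_layer(x, W_shared, b_shared, alpha_t, beta_t):
    """A shared layer whose pre-activations are scaled and shifted by
    learned per-task, per-unit parameters, so the data decides how far
    task t drifts from the shared weights. alpha_t near 1 and beta_t
    near 0 mean the unit is effectively fully shared."""
    z = W_shared @ x + b_shared
    return np.maximum(0.0, alpha_t * z + beta_t)   # ReLU with task adaptation
```
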

    Conditional Learning of Fair Representations

    Get PDF
    We propose a novel algorithm for learning fair representations that can simultaneously mitigate two notions of disparity among different demographic subgroups in the classification setting. Two key components underpinning the design of our algorithm are balanced error rate and conditional alignment of representations. We show how these two components contribute to ensuring accuracy parity and equalized false-positive and false-negative rates across groups without impacting demographic parity. Furthermore, we also demonstrate, both in theory and in two real-world experiments, that the proposed algorithm leads to a better utility-fairness trade-off on balanced datasets compared with existing algorithms for learning fair representations for classification.
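    The balanced error rate mentioned above is a standard quantity; the helper below only computes it (equalizing it across subgroups is the algorithm's job, which this sketch does not attempt):

```python
import numpy as np

def balanced_error_rate(y_true, y_pred):
    """Balanced error rate for binary labels: the average of the
    false-positive and false-negative rates."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    fpr = np.mean(y_pred[y_true == 0] == 1)   # false-positive rate
    fnr = np.mean(y_pred[y_true == 1] == 0)   # false-negative rate
    return 0.5 * (fpr + fnr)
```
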

    Generative Multiple-Instance Learning Models For Quantitative Electromyography

    Full text link
    We present a comprehensive study of the use of generative modeling approaches for Multiple-Instance Learning (MIL) problems. In MIL a learner receives training instances grouped together into bags with labels for the bags only (which might not be correct for the comprised instances). Our work was motivated by the task of facilitating the diagnosis of neuromuscular disorders using sets of motor unit potential trains (MUPTs) detected within a muscle, which can be cast as a MIL problem. Our approach leads to a state-of-the-art solution to the problem of muscle classification. By introducing and analyzing generative models for MIL in a general framework and examining a variety of model structures and components, our work also serves as a methodological guide to modelling MIL tasks. We evaluate our proposed methods both on MUPT datasets and on the MUSK1 dataset, one of the most widely used benchmarks for MIL. (Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, UAI 2013.)
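    The bag/instance structure described above can be made concrete with the standard MIL assumption (a simplification of the paper's generative models): a bag is positive if at least one of its instances is, giving a noisy-OR over per-instance probabilities from any classifier:

```python
import numpy as np

def bag_probability(bag, instance_prob):
    """Noisy-OR bag score under the standard MIL assumption:
    P(bag positive) = 1 - prod(1 - p_i), where p_i is any
    per-instance positive-class probability."""
    p = np.array([instance_prob(x) for x in bag])
    return 1.0 - np.prod(1.0 - p)
```
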

    ASCM: An accelerated soft c-means clustering algorithm

    No full text
    The advantages of soft c-means over its hard and fuzzy versions render it more attractive to use in a wide variety of applications. Its main merit lies in its relatively higher convergence speed, which is more obvious in the presence of huge, high-dimensional data. This work presents a new approach to accelerate the convergence of the original soft c-means. It is mainly based on an iterative optimization approach and a relaxation technique. Several low- and high-dimensional datasets are used to evaluate the performance of the proposed approach. Experimental results show up to 70% improvement over the original soft and fuzzy c-means algorithms.
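    For reference, the classical fuzzy c-means iteration that such methods accelerate looks like the following (this is the well-known baseline, not the paper's accelerated soft variant):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Classical fuzzy c-means: alternate between membership and
    center updates until the memberships stop changing."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    centers = None
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                 # avoid division by zero
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
        U_new = 1.0 / ratio.sum(axis=2)          # u_ik = 1 / sum_j (d_ik/d_ij)^p
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```
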

    A Probabilistic Covariate Shift Assumption for Domain Adaptation

    No full text
    The aim of domain adaptation algorithms is to establish a learner, trained on labeled data from a source domain, that can classify samples from a target domain, in which few or no labeled data are available for training. Covariate shift, a primary assumption in several works on domain adaptation, assumes that the labeling functions of source and target domains are identical. We present a domain adaptation algorithm that assumes a relaxed version of covariate shift where the assumption that the labeling functions of the source and target domains are identical holds with a certain probability. Assuming a source deterministic large-margin binary classifier, the farther a target instance is from the source decision boundary, the higher the probability that covariate shift holds. In this context, given a target unlabeled sample and no target labeled data, we develop a domain adaptation algorithm that bases its labeling decisions both on the source learner and on the similarities between the target unlabeled instances. The source labeling function decisions associated with probabilistic covariate shift, along with the target similarities, are concurrently expressed on a similarity graph. We evaluate our proposed algorithm on a benchmark sentiment analysis (and domain adaptation) dataset, where state-of-the-art adaptation results are achieved. We also derive a lower bound on the performance of the algorithm.
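    The key intuition — trust the source labeling function more the farther a target point lies from the source large-margin boundary — can be sketched for a linear classifier as follows (the monotone distance-to-probability mapping below is our illustrative choice, not necessarily the paper's exact form):

```python
import numpy as np

def shift_confidence(X_target, w, b, scale=1.0):
    """Probability-like confidence that covariate shift holds for each
    target point, increasing with its distance from the source
    hyperplane w.x + b = 0."""
    margin = np.abs(X_target @ w + b) / np.linalg.norm(w)  # distance to boundary
    return 1.0 - np.exp(-scale * margin)                   # increasing, in [0, 1)
```
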

    A General Model for Side Information in Neural Networks

    No full text
    We investigate the utility of side information in the context of machine learning and, in particular, in supervised neural networks. Side information can be viewed as expert knowledge, additional to the input, that may come from a knowledge base. Unlike other approaches, our formalism can be used by a machine learning algorithm not only during training but also during testing. Moreover, the proposed approach is flexible as it caters for different formats of side information, and we do not constrain the side information to be fed into the input layer of the network. A formalism is presented based on the difference between the neural network's loss with and without side information: side information is deemed useful when it reduces the loss during the test phase. As a proof of concept we provide experimental results for two datasets, the MNIST dataset of handwritten digits and the House Price prediction dataset. For the experiments we used feedforward neural networks containing two hidden layers, as well as a softmax output layer. For both datasets, side information is shown to be useful in that it improves the classification accuracy significantly.
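    Injecting side information somewhere other than the input layer can be illustrated with a toy two-hidden-layer forward pass (hypothetical shapes and parameter names; one of the injection points the formalism allows):

```python
import numpy as np

def forward(x, s, params):
    """Two-hidden-layer network where the side-information vector s
    is injected at the second hidden layer instead of the input."""
    W1, b1, W2, Ws, b2, W3, b3 = params
    h1 = np.maximum(0.0, W1 @ x + b1)            # hidden layer 1 (ReLU)
    h2 = np.maximum(0.0, W2 @ h1 + Ws @ s + b2)  # side information joins here
    logits = W3 @ h2 + b3
    e = np.exp(logits - logits.max())
    return e / e.sum()                           # softmax class probabilities
```
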