9 research outputs found

    Assessment of Epilepsy Classification Using Techniques Such as Singular Value Decomposition, Approximate Entropy, and Weighted k-Nearest Neighbors Measures

    Objective: The main aim of this research is to reduce the dimension of epileptic Electroencephalography (EEG) signals and then classify them using various post classifiers. EEG signals are used for the evaluation and treatment of neurological diseases. EEG reflects the electrical activity of the human brain, obtained by measuring scalp potentials. Both physicians and scientists use EEG to study and explore brain function in an exhaustive manner. The study of the brain's electrical activity through EEG recording is a vital tool for diagnosing many neurological diseases, including epilepsy, sleep disorders, head injuries, and dementia. Epilepsy is one of the most common and prevalent neurological disorders and is characterized by recurrent seizures.

    Methods: This paper employs dimensionality reduction techniques such as Fuzzy Mutual Information (FMI), Independent Component Analysis (ICA), Linear Graph Embedding (LGE), Linear Discriminant Analysis (LDA), and Variational Bayesian Matrix Factorization (VBMF). Epilepsy risk levels are then classified using post classifiers such as Singular Value Decomposition (SVD), Approximate Entropy (ApEn), and Weighted KNN (W-KNN).

    Results: The highest accuracy, 97.18%, is obtained when LDA is combined with the Weighted KNN (W-KNN) classifier.

    Conclusion: EEG signals thus represent not only brain function but also the status of the whole body. The best result was obtained when LDA was used for dimensionality reduction followed by W-KNN as the post classifier for classifying epilepsy risk levels from EEG signals. Future work may explore other dimensionality reduction techniques combined with various other types of classifiers for the classification of epilepsy risk levels from EEG signals.

    Keywords: FMI, ICA, LGE, LDA, W-KNN, EEG
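    As a concrete sketch of the best-performing combination reported above (LDA for dimensionality reduction followed by a distance-weighted k-NN post classifier), the following uses scikit-learn; the feature matrix X, labels y, and all hyperparameters are illustrative placeholders, not the paper's data or settings.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 64))    # placeholder EEG-derived features
        y = rng.integers(0, 3, size=300)  # placeholder epilepsy risk levels

        # LDA projects onto at most (n_classes - 1) dimensions; weights="distance"
        # turns the k-NN vote into a weighted one, as in the W-KNN post classifier.
        clf = make_pipeline(
            LinearDiscriminantAnalysis(n_components=2),
            KNeighborsClassifier(n_neighbors=5, weights="distance"),
        )
        print(cross_val_score(clf, X, y, cv=5).mean())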

    Variational Gaussian Inference for Bilinear Models of Count Data

    Bilinear models of count data with Poisson distribution are popular in applications such as matrix factorization for recommendation systems, modeling of receptive fields of sensory neurons, and modeling of neural spike trains. Bayesian inference in such models remains challenging due to the product term of two Gaussian random vectors. In this paper, we propose new algorithms for such models based on variational Gaussian (VG) inference. We make two contributions. First, we show that the VG lower bound for these models, previously believed to be intractable, is available in closed form under certain non-trivial constraints on the form of the posterior. Second, we show that the lower bound is biconcave and can be efficiently optimized for mean-field approximations. We also show that biconcavity generalizes to the larger family of log-concave likelihoods, which subsumes the Poisson distribution. We present new inference algorithms based on these results and demonstrate better performance on real-world problems at the cost of a modest increase in computation. Our contributions therefore provide more choices for Bayesian inference in terms of a speed-vs-accuracy tradeoff.
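    To make the setting concrete, here is a small generative sketch of such a bilinear Poisson count model: counts y_ij drawn as Poisson(exp(u_i^T v_j)) with Gaussian factors. The exponentiated product of two Gaussian vectors is exactly the term that complicates inference. All dimensions and scales below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        n_rows, n_cols, k = 50, 40, 3
        U = rng.normal(scale=0.3, size=(n_rows, k))  # Gaussian row factors
        V = rng.normal(scale=0.3, size=(n_cols, k))  # Gaussian column factors

        rates = np.exp(U @ V.T)  # rate contains the product of two Gaussian vectors
        Y = rng.poisson(rates)   # observed count matrix
        print(Y.shape, Y.mean())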

    Pólya-gamma augmentations for factor models

    Bayesian inference for latent factor models, such as principal component and canonical correlation analysis, is easy for Gaussian likelihoods with conjugate priors, using both Gibbs sampling and mean-field variational approximation. For other likelihood potentials one needs either to resort to more complex sampling schemes or to specify dedicated forms for variational lower bounds. Recently, however, it was shown that for specific likelihoods related to the logistic function it is possible to augment the joint density with auxiliary variables following a Pólya-Gamma distribution, leading to closed-form updates for binary and over-dispersed count models. In this paper we describe how Gibbs sampling and mean-field variational approximation for various latent factor models can be implemented for these cases, presenting easy-to-implement and efficient inference schemes.
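    The sketch below shows the augmentation in its simplest logistic setting, Bayesian logistic regression, where the same conditional structure reappears per factor in the models above: drawing omega_i ~ PG(1, x_i^T beta) makes beta | omega, y exactly Gaussian. The truncated-sum PG sampler is an illustrative approximation; in practice a dedicated sampler (e.g., the pypolyagamma package) would be used.

        import numpy as np

        rng = np.random.default_rng(2)

        def draw_pg(c, n_terms=200):
            # Approximate PG(1, c) draws via the truncated infinite-sum
            # representation of Polson, Scott and Windle (2013).
            k = np.arange(1, n_terms + 1)
            g = rng.gamma(1.0, 1.0, size=(c.size, n_terms))
            denom = (k - 0.5) ** 2 + (c[:, None] / (2 * np.pi)) ** 2
            return (g / denom).sum(axis=1) / (2 * np.pi ** 2)

        def gibbs_step(X, y, beta, prior_prec=1.0):
            # omega | beta is Polya-Gamma; beta | omega, y is Gaussian.
            omega = draw_pg(np.abs(X @ beta))
            kappa = y - 0.5
            prec = X.T @ (omega[:, None] * X) + prior_prec * np.eye(X.shape[1])
            cov = np.linalg.inv(prec)
            return rng.multivariate_normal(cov @ (X.T @ kappa), cov)

        X = rng.normal(size=(200, 3))
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ np.array([1.5, -1.0, 0.5]))))
        beta = np.zeros(3)
        for _ in range(200):
            beta = gibbs_step(X, y, beta)  # draws settle around the true weights
        print(beta)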

    Expectation Propagation for Rectified Linear Poisson Regression

    The Poisson likelihood with a rectified linear function as the non-linearity is a physically plausible model to describe the stochastic arrival process of photons or other particles at a detector. At low emission rates the discrete nature of this process leads to measurement noise that behaves very differently from additive white Gaussian noise. To address the intractable inference problem for such models, we present a novel, efficient, and robust Expectation Propagation algorithm based entirely on analytically tractable computations, operating reliably in regimes where quadrature-based implementations can fail. Full posterior inference therefore becomes an attractive alternative in areas generally dominated by methods of point estimation. Moreover, we discuss the rectified linear function in the context of other common non-linearities and identify situations where it can serve as a robust alternative.
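    For reference, the likelihood itself is easy to write down even though posterior inference under it is not: counts are Poisson with rate max(0, f) for a latent value f. The sketch below shows only this log-likelihood, not the EP algorithm; the small floor eps is an assumption added to keep the logarithm finite.

        import numpy as np
        from scipy.special import gammaln

        def rectlin_poisson_loglik(y, f, eps=1e-12):
            # log Poisson(y | rate = max(f, 0)), with gammaln(y + 1) = log(y!)
            rate = np.maximum(f, 0.0)
            return y * np.log(rate + eps) - rate - gammaln(y + 1)

        f = np.linspace(-1.0, 5.0, 7)        # latent values; negatives rectify to 0
        y = np.array([0, 0, 0, 1, 2, 3, 4])  # observed counts
        print(rectlin_poisson_loglik(y, f))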

    Applications of Approximate Learning and Inference for Probabilistic Models

    We develop approximate inference and learning methods for facilitating the use of probabilistic modeling techniques, motivated by applications in two different areas. First, we consider the ill-posed inverse problem of recovering an image from an underdetermined system of linear measurements corrupted by noise. Second, we consider the problem of inferring user preferences for items from counts, pairwise comparisons, and user activity logs, all instances of implicit feedback.

    Plausible models for images and for the noise incurred when recording them render posterior inference intractable, while the scale of the inference problem makes sampling-based approximations ineffective. Therefore, we develop deterministic approximate inference algorithms for two different augmentations of a typical sparse linear model: first, for the rectified-linear Poisson likelihood, and second, for tree-structured super-Gaussian mixture models. The rectified-linear Poisson likelihood is an alternative noise model, applicable in astronomical and biomedical imaging applications that operate in intensity regimes in which quantum effects lead to observations best described by counts of particles arriving at a sensor, as well as in general Poisson regression problems arising in various fields. In this context we show that the model-specific computations for Expectation Propagation can be robustly solved by a simple dynamic program. Next, we develop a scalable approximate inference algorithm for structured mixture models that uses a discrete graphical model to represent dependencies between the latent mixture components of a collection of mixture models. Specifically, we use tree-structured mixtures of super-Gaussians to model the persistence across scales of large coefficients of the Wavelet transform of an image, for improved reconstruction.

    In the second part, on models of user preference, we consider two settings: the global static and the contextual dynamic setting. In the global static setting, we represent user-item preferences by a latent low-rank matrix. Instead of using numeric ratings, we develop methods to infer this latent representation for two types of implicit feedback: aggregate counts of users interacting with a service, and the binary outcomes of pairwise comparisons. We model count data using a latent Gaussian bilinear model with Poisson likelihoods. For this model, we show that the variational Gaussian approximation can be further relaxed to be available in closed form by adding constraints, leading to an efficient inference algorithm. In the second implicit feedback scenario, we infer the latent preference matrix from pairwise preference statements. We combine a low-rank bilinear model with non-parametric item-feature regression and develop a novel approximate variational Expectation Maximization algorithm that mitigates the computational challenges due to latent couplings induced by the pairwise comparisons. Finally, in the contextual dynamic setting, we model sequences of user activity at the granularity of single interaction events instead of aggregate counts. Routinely gathered in the background at a large scale in many applications, such sequences can reveal temporal and contextual aspects of user behavior through recurrent patterns. To describe such data, we propose a generic collaborative sequence model based on recurrent neural networks that combines ideas from collaborative filtering and language modeling.
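    As one concrete piece of the preference part of this summary, here is a sketch of the pairwise-comparison setting: a low-rank bilinear model in which user u prefers item i over item j with probability sigmoid(p_u^T (q_i - q_j)). The stochastic-gradient MAP updates below are a generic stand-in for the variational EM algorithm developed in the thesis; all shapes, data, and hyperparameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        n_users, n_items, k, lr, reg = 30, 20, 4, 0.05, 0.01
        P = rng.normal(scale=0.1, size=(n_users, k))  # user factors
        Q = rng.normal(scale=0.1, size=(n_items, k))  # item factors
        # (user, preferred item, less-preferred item) triples; random here
        data = [(rng.integers(n_users), rng.integers(n_items), rng.integers(n_items))
                for _ in range(500)]

        for _ in range(50):
            for u, i, j in data:
                diff = Q[i] - Q[j]
                g = 1.0 - 1.0 / (1.0 + np.exp(-P[u] @ diff))  # d/ds log sigmoid(s)
                pu = P[u].copy()
                P[u] += lr * (g * diff - reg * P[u])
                Q[i] += lr * (g * pu - reg * Q[i])
                Q[j] += lr * (-g * pu - reg * Q[j])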

    Multitask and transfer learning for multi-aspect data

    Supervised learning aims to learn functional relationships between inputs and outputs. Multitask learning tackles supervised learning tasks by performing them simultaneously to exploit commonalities between them. In this thesis, we focus on the problem of eliminating negative transfer in order to achieve better performance in multitask learning. We start by considering a general scenario in which the relationship between tasks is unknown. We then narrow our analysis to the case where data are characterised by a combination of underlying aspects, e.g., a dataset of images of faces, where each face is determined by a person's facial structure, the emotion being expressed, and the lighting conditions. In machine learning there have been numerous efforts based on multilinear models to decouple these aspects, but these have primarily used techniques from the field of unsupervised learning. In this thesis we take inspiration from these approaches and hypothesize that supervised learning methods can also benefit from exploiting these aspects. The contributions of this thesis are as follows:
    1. A multitask learning and transfer learning method that avoids negative transfer when there is no prescribed information about the relationships between tasks.
    2. A multitask learning approach that takes advantage of a lack of overlapping features between known groups of tasks associated with different aspects.
    3. A framework which extends multitask learning using multilinear algebra, with the aim of learning tasks associated with a combination of elements from different aspects.
    4. A novel convex relaxation approach that can be applied both to the suggested framework and, more generally, to any tensor recovery problem.
    Through theoretical validation and experiments on both synthetic and real-world datasets, we show that the proposed approaches allow fast and reliable inferences. Furthermore, when performing learning tasks on an aspect of interest, accounting for secondary aspects leads to significantly more accurate results than using traditional approaches.
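    To illustrate the flavor of contribution 4 in its simplest (matrix) form, the sketch below uses a standard convex relaxation for sharing structure across tasks: stack the per-task weight vectors into a matrix W and penalize its nuclear norm, optimized by proximal gradient with singular value thresholding. This is a textbook analogue under synthetic data, not the tensor method proposed in the thesis.

        import numpy as np

        def svt(W, tau):
            # Singular value thresholding: the prox operator of tau * nuclear norm.
            U, s, Vt = np.linalg.svd(W, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        rng = np.random.default_rng(4)
        d, T, n = 10, 5, 100
        W_true = rng.normal(size=(d, 1)) @ rng.normal(size=(1, T))  # rank-1 tasks
        Xs = [rng.normal(size=(n, d)) for _ in range(T)]
        ys = [Xs[t] @ W_true[:, t] + 0.1 * rng.normal(size=n) for t in range(T)]

        W, step, lam = np.zeros((d, T)), 1e-3, 1.0
        for _ in range(300):
            # gradient of the per-task squared losses, one column per task
            G = np.column_stack([Xs[t].T @ (Xs[t] @ W[:, t] - ys[t]) for t in range(T)])
            W = svt(W - step * G, step * lam)
        print(np.linalg.svd(W, compute_uv=False))  # spectrum concentrates on rank 1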