290 research outputs found

    Context-aware Learning for Generative Models

    This work studies the class of algorithms for learning with side information that emerges by extending generative models with embedded context-related variables. Using finite mixture models (FMMs) as the prototypical Bayesian network, we show that maximum-likelihood estimation (MLE) of parameters through expectation-maximization (EM) improves over the regular unsupervised case and can approach the performance of supervised learning, despite the absence of any explicit ground-truth data labeling. By direct application of the missing information principle (MIP), the algorithms' performance is proven to lie between the conventional supervised and unsupervised MLE extremes, in proportion to the information content of the contextual assistance provided. The benefits include higher estimation precision, smaller standard errors, faster convergence rates, and improved classification accuracy or regression fitness, demonstrated in various scenarios that also highlight important properties of and differences among the outlined situations. Applicability is showcased with three real-world unsupervised classification scenarios employing Gaussian mixture models. Importantly, we exemplify the natural extension of this methodology to any type of generative model by deriving an equivalent context-aware algorithm for variational autoencoders (VAEs), thus broadening the spectrum of applicability to unsupervised deep learning with artificial neural networks. The latter is contrasted with a neural-symbolic algorithm exploiting side information.
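The interpolation the abstract describes can be illustrated with a minimal sketch, assuming a simple setting the abstract does not spell out: EM for a two-component 1-D Gaussian mixture in which "context" enters as per-sample prior component probabilities. Uniform priors recover ordinary unsupervised EM; one-hot priors recover the supervised case; the soft 0.7/0.3 priors used here sit in between.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two Gaussian components at -2 and +2.
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])

# Hypothetical side information: mildly informative per-sample priors
# (0.7/0.3) over component membership, standing in for "context".
prior = np.vstack([np.tile([0.7, 0.3], (200, 1)),
                   np.tile([0.3, 0.7], (200, 1))])

mu, sigma, pi = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for _ in range(100):
    # E-step: responsibilities weighted by the contextual priors.
    like = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
    resp = prior * pi * like
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: standard weighted maximum-likelihood updates.
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / nk.sum()

print(np.round(np.sort(mu), 1))  # estimated means should land near -2 and 2
```

Setting `prior` to all 0.5s or to exact one-hot labels in this sketch reproduces the unsupervised and supervised endpoints of the spectrum described above.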

    Context-Aware Brain-Computer Interfaces

    Systems using brain-generated signals can control complex, smart devices by taking into account information about the situation at hand, as well as the operator’s cognitive state.

    Learning from EEG Error-related Potentials in Noninvasive Brain-Computer Interfaces

    We describe error-related potentials generated while a human user monitors the performance of an external agent and discuss their use for a new type of Brain-Computer Interaction. In this approach, single-trial detection of error-related EEG potentials is used to infer the optimal agent behavior by decreasing the probability of agent decisions that elicited such potentials. In contrast with traditional approaches, the user acts as a critic of an external autonomous system instead of continuously generating control commands. This sets up a cognitive monitoring loop in which the human directly provides information about overall system performance that, in turn, can be used for its improvement. We show that it is possible to recognize erroneous and correct agent decisions from EEG (average recognition rates of 75.8% and 63.2%, respectively), and that the elicited signals are stable over long periods of time (from 50 to more than 600 days). Moreover, these detection rates allow the optimal behavior of a simple agent to be inferred after a few trials in a Brain-Computer Interaction paradigm.

    Robust self-localisation and navigation based on hippocampal place cells

    A computational model of the hippocampal function in spatial learning is presented. A spatial representation is incrementally acquired during exploration. Visual and self-motion information is fed into a network of rate-coded neurons. A consistent and stable place code emerges by unsupervised Hebbian learning between place- and head direction cells. Based on this representation, goal-oriented navigation is learnt by applying a reward-based learning mechanism between the hippocampus and nucleus accumbens. The model, validated on a real and simulated robot, successfully localises itself by recalibrating its path integrator using visual input. A navigation map is learnt after about 20 trials, comparable to rats in the water maze. In contrast to previous works, this system processes realistic visual input. No compass is needed for localisation and the reward-based learning mechanism extends discrete navigation models to continuous space. The model reproduces experimental findings and suggests several neurophysiological and behavioural predictions in the rat. (c) 2005 Elsevier Ltd
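A toy version of the unsupervised Hebbian association described above can be sketched as follows. Everything here is an illustrative assumption rather than the paper's actual network: 3 "locations" with unit-norm visual signatures, 3 place cells, and a Hebbian rule with a decay term so each place cell's weights converge to the view of its location.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_place = 6, 3
# Hypothetical visual signature of each location, normalised to unit length.
views = rng.random((n_place, n_vis))
views /= np.linalg.norm(views, axis=1, keepdims=True)

W = np.zeros((n_place, n_vis))
for _ in range(600):
    loc = rng.integers(n_place)
    x = views[loc] + 0.1 * rng.standard_normal(n_vis)  # noisy visual input
    y = np.eye(n_place)[loc]                           # place-cell activity
    # Hebbian update with decay: co-active weights grow, others are untouched.
    W += 0.02 * (np.outer(y, x) - y[:, None] * W)

# Each place cell should now respond most strongly to its own location's view.
print(np.argmax(W @ views.T, axis=0))
```

A stable, location-specific place code emerges purely from the correlation between input and place activity, which is the flavour of unsupervised learning the model above relies on (the real model additionally integrates self-motion and head-direction information).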

    Learning dictionaries of spatial and temporal EEG primitives for brain-computer interfaces

    Sparse methods are widely used in image and audio processing for denoising and classification, but there have been few previous applications to neural signals for brain-computer interfaces (BCIs). We used the dictionary-learning algorithm K-SVD, coupled with Orthogonal Matching Pursuit, to learn dictionaries of spatial and temporal EEG primitives. We applied these to P300 and ErrP data to denoise the EEG and better estimate the underlying P300 and ErrP signals. This methodology improved single-trial classification performance across 13 of 14 subjects, indicating that some of the background noise in EEG signals, presumably from neural or muscular sources, is highly structured. Furthermore, this structure can be captured via dictionary-learning and sparse-coding algorithms, and exploited to improve BCIs.
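The denoising idea can be sketched with off-the-shelf tools, under stated substitutions: scikit-learn's `DictionaryLearning` stands in for K-SVD (it uses a different dictionary-update step), synthetic sinusoidal epochs stand in for real P300/ErrP data, and sparse OMP reconstruction is used as the denoiser, as in the paper.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Synthetic stand-in for ERP epochs: 100 trials of 64 samples, each a sine of
# one of three frequencies, plus additive Gaussian noise.
t = np.linspace(0, 1, 64)
clean = np.sin(2 * np.pi * np.outer(rng.integers(1, 4, 100), t))
noisy = clean + 0.5 * rng.standard_normal(clean.shape)

# Learn 8 temporal primitives; reconstruct each epoch from 2 atoms via OMP.
dico = DictionaryLearning(n_components=8, transform_algorithm='omp',
                          transform_n_nonzero_coefs=2, random_state=0)
codes = dico.fit_transform(noisy)
denoised = codes @ dico.components_

err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
print(err_denoised < err_noisy)  # sparse reconstruction should reduce the noise
```

Because unstructured noise is poorly represented by a few learned atoms, the sparse reconstruction discards much of it while keeping the repeated signal structure, which is the mechanism the abstract attributes to the classification gains.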

    Error Potentials for Brain-Computer Interfaces

    The idea of using EEG correlates of errors to correct or reinforce BCI operation was proposed more than a decade ago. Since then, a body of evidence has corroborated this approach. In this paper we give an overview of our recent work exploring possible applications of error potentials, including removing the constraints of laboratory paradigms to increase “real-life” validity and investigating EEG feature spaces to increase detection robustness.
