102 research outputs found

    On robust spatial filtering of EEG in nonstationary environments


    Low-dimensional representations of neural time-series data with applications to peripheral nerve decoding

    Bioelectronic medicines, implanted devices that influence physiological states by peripheral neuromodulation, hold promise as a new way of treating diverse conditions from rheumatism to diabetes. Here we explore ways of creating nerve-based feedback so that implanted systems can act in a dynamically adapting closed loop. In a first empirical component, we carried out decoding studies on in vivo recordings of cat and rat bladder afferents. In a low-resolution dataset, we used information theory to select informative frequency bands of the neural activity and related them to bladder pressure. In a second, high-resolution dataset, we analysed the population code for bladder pressure, again using information theory, and proposed an informed decoding approach that promises enhanced robustness and automatic re-calibration by constructing a low-dimensional population vector.

    Coming from the different direction of more general time-series analysis, we embedded a set of peripheral nerve recordings in a space of main firing characteristics by dimensionality reduction in a high-dimensional feature space, and automatically proposed a single, efficiently implementable estimator for each identified characteristic. For bioelectronic medicines, this feature-based pre-processing method enables online signal characterisation of low-resolution data where spike sorting is impossible but simple power measures discard informative structure. These analyses were based on surrogate data from a self-developed, flexibly adaptable computer model that we have made publicly available.

    The wider utility of two feature-based analysis methods developed in this work was demonstrated on a variety of datasets from across science and industry. (1) Our feature-based generation of interpretable low-dimensional embeddings for unknown time-series datasets answers a need to simplify and harvest the growing body of sequential data that characterises modern science. (2) We propose an additional, supervised pipeline to tailor feature subsets to collections of classification problems. On a literature-standard library of time-series classification tasks, we distilled 22 generically useful estimators and made them easily accessible.
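    The band-selection idea sketched in this abstract can be illustrated with a short, hedged example. The snippet below is a generic sketch, not the thesis code: the sampling rate, band edges, window length and the helper names `band_powers` and `rank_bands` are all assumptions. It ranks frequency bands of a single-channel nerve recording by the mutual information between their log-power and a continuous target such as bladder pressure.

```python
# Illustrative sketch (not the thesis implementation): rank frequency bands
# of a neural recording by how informative their power is about a continuous
# target such as bladder pressure. Sampling rate, bands and window length
# are placeholder assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.feature_selection import mutual_info_regression

FS = 2000                                       # sampling rate in Hz (assumed)
BANDS = [(10, 100), (100, 300), (300, 1000)]    # example frequency bands in Hz
WIN = 2 * FS                                    # 2-second analysis windows

def band_powers(signal, fs=FS, bands=BANDS, win=WIN):
    """Average spectral power in each band, one row per analysis window."""
    n_win = len(signal) // win
    feats = np.empty((n_win, len(bands)))
    for i in range(n_win):
        f, pxx = welch(signal[i * win:(i + 1) * win], fs=fs, nperseg=win // 4)
        for j, (lo, hi) in enumerate(bands):
            feats[i, j] = pxx[(f >= lo) & (f < hi)].mean()
    return feats

def rank_bands(signal, pressure_per_window):
    """Mutual information between each band's log-power and the target."""
    X = np.log(band_powers(signal))             # log-power as a variance stabiliser
    mi = mutual_info_regression(X, pressure_per_window)
    return sorted(zip(BANDS, mi), key=lambda t: -t[1])
```

    In use, `pressure_per_window` would hold one bladder-pressure value per analysis window, so that the feature matrix and the target passed to `mutual_info_regression` are aligned.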

    Towards population coding principles in the primate premotor and parietal grasping network

    As humans, the only way for us to interact with the world around us is by using our highly trained motor system. Therefore, understanding how the brain generates movement is essential to understanding all aspects of human behavior. Despite the importance of the motor system, the manner in which the brain prepares and executes movements, especially grasping movements, is still unclear. In this thesis I undertake a number of electrophysiological and computational experiments on macaque monkeys, primates showing grasping behavior similar to that of humans, to shed light on how grasping movements are planned and executed across distributed brain regions in both parietal and premotor cortices. Through these experiments, I show how large-scale electrophysiological recording of hundreds of neurons simultaneously in primates allows the investigation of network computational principles essential for grasping, and I develop a series of analytical techniques for dissecting the large data sets collected from these experiments.

    In chapter 2.1 I show how large-scale parallel recordings can be leveraged to make behavioral predictions on single trials. The methods used to extract single-trial predictions varied in their performance, but population-based methods provided the most consistent and meaningful interpretation of the data. In addition, the success of these behavioral predictions could be used to make inferences about how areas differ in their contribution to the preparation of grasping movements. While reaction time could be predicted from the population activity of either area, performance was significantly higher using data from premotor cortex, suggesting that population activity in premotor cortex may have a more direct effect on behavior.

    In chapter 2.2 I show how preparation and movement intermingle and interact with one another on the continuum between immediate and withheld movement. Our population-based and dimensionality reduction techniques enable interpretation of the data even when single-neuron tuning properties are highly complex in time and function. Activity in parietal cortex stabilizes during the memory period, while it continues to evolve in premotor cortex, revealing a decodable signature of time. Furthermore, activity during movement initiation clusters into two groups, movements initiated as fast as possible and movements from memory, suggesting that a state shift likely occurs on the border between these two types of actions.

    In chapter 2.3 I address the ongoing question of how motor cortex controls movement, examining crucial details of the methodology recently used to extract rotational dynamics in motor cortex. A simple neural network simulation and a novel statistical test reveal properties of motor cortex not examined before, showing how models of movement generation can be essential tools for adding perspective to empirical results.

    Finally, in chapter 2.4 I show how the specificity of hand use can be used as a tool to dissociate levels of abstraction in the visual-to-motor transformation in parietal and premotor cortex. While preparatory activity is mostly hand-invariant in parietal cortex, activity in premotor cortex dissociates the intended hand use well before movement. Importantly, we show how appropriate dimensionality reduction techniques can disentangle the effects of multiple task parameters and find latent dimensions consistent between areas and animals.

    Together, the results of my experiments reinforce the importance of seeing the motor system not as a collection of individually tuned neurons, but as a dynamic network of neurons continuously acting together to produce the complex and flexible behavior we observe in all primates.
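    As a rough illustration of the population-based single-trial predictions described in chapter 2.1, the sketch below is an assumption-laden example rather than the author's pipeline; the array shapes and the helper name `decode_reaction_time` are hypothetical. It projects trial-by-neuron firing rates onto a few principal components and regresses reaction time, reporting cross-validated performance.

```python
# Minimal sketch (not the thesis pipeline): predict a single-trial behavioral
# variable such as reaction time from population firing rates by projecting
# onto a low-dimensional population state and regressing.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

def decode_reaction_time(rates, reaction_times, n_components=10):
    """rates: (n_trials, n_neurons) firing rates in a preparatory window;
    reaction_times: (n_trials,). Returns cross-validated R^2 per fold."""
    model = make_pipeline(
        StandardScaler(),                  # z-score each neuron across trials
        PCA(n_components=n_components),    # low-dimensional population state
        RidgeCV(alphas=np.logspace(-3, 3, 13)),
    )
    return cross_val_score(model, rates, reaction_times, cv=5, scoring="r2")
```

    Running the same function on data from each area separately would give one simple way to compare how well the two populations predict behavior, in the spirit of the comparison described above.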

    Physics based supervised and unsupervised learning of graph structure

    Graphs are central tools to aid our understanding of biological, physical, and social systems. Graphs also play a key role in representing and understanding the visual world around us, 3D shapes and 2D images alike. In this dissertation, I propose the use of physical or natural phenomena to understand graph structure. I investigate four phenomena or laws in nature: (1) Brownian motion, (2) Gauss's law, (3) feedback loops, and (4) neural synapses, to discover patterns in graphs.
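    As a loose illustration of the first of these phenomena, the sketch below uses a generic construction chosen here for illustration, not the dissertation's method: each node is characterised by simulating diffusion, i.e. Brownian-motion-like spreading, over the graph via the heat kernel of the normalised Laplacian.

```python
# Hedged illustration of using diffusion (Brownian-motion-like spreading) to
# characterise graph structure; a generic sketch, not the dissertation's method.
import numpy as np

def heat_kernel_signature(adjacency, times=(0.1, 1.0, 10.0)):
    """Per-node descriptor: the diagonal of the heat kernel exp(-t L) at several
    diffusion times t, where L is the normalised graph Laplacian."""
    deg = adjacency.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(len(adjacency)) - d_inv_sqrt @ adjacency @ d_inv_sqrt
    evals, evecs = np.linalg.eigh(L)
    # diag of exp(-t L): sum_k exp(-t * lambda_k) * phi_k(i)^2 for each node i
    return np.stack([(evecs ** 2) @ np.exp(-t * evals) for t in times], axis=1)
```

    Nodes whose neighbourhoods trap a random walker for longer retain more "heat" at small t, so the resulting descriptors separate structurally different nodes without any supervision.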

    A Bayesian machine learning framework for true zero-training brain-computer interfaces

    Brain-Computer Interfaces (BCI) are developed to allow the user to take control of a computer (e.g. a spelling application) or a device (e.g. a robotic arm) using just their brain signals. The concept of BCI was introduced in 1973 by Jacques Vidal. Early types of BCI relied on tedious user training to enable users to modulate their brain signals such that they could take control of the computer. Since then, training has shifted from the user to the computer. Hence, modern BCI systems rely on a calibration session, during which the user is instructed to perform specific tasks. The result of this calibration recording is a labelled dataset that can be used to train the (supervised) machine learning algorithm. Such a calibration recording is, however, of no direct use for the end user, so it is especially important for patients to limit this tedious process. For this reason, the BCI community has invested a lot of effort in reducing the dependency on calibration data. Nevertheless, despite these efforts, true zero-training BCIs are rather rare.

    Event-Related Potential based spellers

    One of the most common types of BCI is the Event-Related Potential (ERP) based BCI, which was invented by Farwell and Donchin in 1988. In the ERP-BCI, actions, such as spelling a letter, are coupled to specific stimuli. The computer continuously presents these stimuli to the user, and by attending to a specific stimulus, the user is able to select an action. More concretely, in the original ERP-BCI, these stimuli were the intensifications of rows and columns in a matrix of symbols on a computer screen. By detecting which row and which column elicit an ERP response, the computer can infer which symbol the user wants to spell. Initially, the ERP-BCI was aimed at restoring communication, but novel applications have been proposed too; examples are web browsing, gaming, navigation and painting. Additionally, current BCIs are not limited to visual stimuli: variations using auditory or tactile stimuli have been developed as well.

    In their quest to improve decoding performance in the ERP-BCI, the BCI community has developed increasingly complex machine learning algorithms. However, nearly all of them rely on intensive subject-specific fine-tuning. The current generation of decoders goes beyond a standard ERP classifier: they incorporate language models, which are similar to a spelling corrector on a computer, and extensions to speed up communication, commonly referred to as dynamic stopping. Typically, all these different components are separate entities that have to be tied together by heuristics. This introduces an additional layer of complexity, and the result is that these state-of-the-art methods are difficult to optimise due to the large number of free parameters. We have proposed a single unified probabilistic model that integrates language models and a natural dynamic stopping strategy. This coherent model achieves state-of-the-art performance while minimising the complexity of subject-specific tuning on labelled data.

    A second and major contribution of this thesis is the development of the first unsupervised decoder for ERP spellers. Recall that typical decoders have to be tuned on labelled data for each user individually, and that recording this labelled data is a tedious process of no direct use for the end user. The unsupervised approach, which is an extension of our unified probabilistic model, is able to learn how to decode a novel user's brain signals without requiring such a labelled dataset. Instead, the user simply starts using the system, and in the meantime the decoder learns how to decode the brain signals. This method has been evaluated extensively, both in an online and an offline setting. Our offline validation was executed on three different datasets of visual ERP data in the standard matrix speller; combined, these datasets contain 25 different subjects. Additionally, we present the results of an offline evaluation on auditory ERP data from 21 subjects. Due to a less clear signal, this auditory ERP data presents an even greater challenge than visual ERP data. On top of that, we present the results of an online study on auditory ERP, which was conducted in cooperation with Michael Tangermann, Martijn Schreuder and Klaus-Robert Müller at the TU-Berlin. Our simulations indicate that when enough unlabelled data is available, the unsupervised method can compete with state-of-the-art supervised approaches. Furthermore, when non-stationarity is present in the EEG recordings, e.g. due to fatigue during longer experiments, the unsupervised approach can outperform supervised methods by adapting to these changes in the data. The limitation of the unsupervised method, however, is that while labelled data is not required, a substantial amount of unlabelled data must be processed before a reliable model can be found. Hence, during online experiments the model suffers from a warm-up period during which the output is unreliable; the mistakes made during this period can, however, be corrected automatically once enough data has been processed.

    To maximise the usability of the ERP-BCI, the warm-up of the unsupervised method has to be minimised. For this reason, we propose one of the first transfer learning methods for ERP-BCI. The idea behind transfer learning is to share information on how to decode brain signals between users. This concept stands in stark contrast with the strong tradition of subject-specific decoders commonly used by the BCI community. Nevertheless, by extending our unified model with inter-subject transfer learning, we are able to build a decoder that can decode the brain signals of novel users without any subject-specific training. Unfortunately, basic transfer learning models do not perform as well as subject-specific (supervised) models. For this reason, we have combined our transfer learning approach with our unsupervised learning approach, so that it adapts during usage into a highly accurate subject-specific model. Analogous to our unsupervised model, we have performed an extensive evaluation of transfer learning with unsupervised adaptation. We tested the model offline on visual ERP data from 22 subjects and on auditory ERP data from 21 subjects. Additionally, we present the results of an online study, also performed at the TU-Berlin, where we evaluate transfer learning online on the auditory AMUSE paradigm. From these experiments, we conclude that transfer learning in combination with unsupervised adaptation results in a true zero-training BCI that can compete with state-of-the-art supervised models without needing a single data point from a calibration recording. This method allows us to build a BCI that works out of the box.
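    A minimal sketch of the kind of probabilistic fusion described above, combining a language-model prior over symbols with per-flash classifier evidence and a dynamic-stopping rule, is given below. It assumes a matrix speller and an already-trained per-stimulus classifier; the additive evidence update, the threshold value and the function names are illustrative assumptions, not the thesis model itself.

```python
# Simplified sketch: fuse ERP classifier scores with a language-model prior
# via Bayes-style evidence accumulation, and stop once one symbol's posterior
# exceeds a confidence threshold. Illustrative only; not the thesis model.
import numpy as np

def update_posterior(log_prior, flashed_groups, scores, scale=1.0):
    """log_prior: (n_symbols,) language-model log-probabilities.
    flashed_groups: list of boolean masks, one per row/column intensification,
    marking which symbols were highlighted. scores: one classifier output per
    flash (positive = 'looks like a target ERP'). Returns symbol posterior."""
    log_post = log_prior.copy()
    for mask, s in zip(flashed_groups, scores):
        # symbols in the flashed group gain evidence in proportion to the score,
        # symbols outside the group lose it (a simple symmetric update)
        log_post += np.where(mask, scale * s, -scale * s)
    log_post -= log_post.max()                 # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

def select_symbol(posterior, threshold=0.95):
    """Dynamic stopping: commit once one symbol is sufficiently probable."""
    best = int(np.argmax(posterior))
    return best if posterior[best] >= threshold else None
```

    Because the language-model prior and the stopping rule live in the same posterior, there are no separate heuristics to tie together, which is the design motivation the abstract attributes to the unified model.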