Non-negative matrix factorization for single-channel EEG artifact rejection
New applications of electroencephalographic (EEG) recording pose new challenges for artifact removal. We target applications where the EEG is captured by a single electrode, with a number of additional lightweight sensors allowed. This paper therefore introduces a new method for artifact removal from single-channel EEG recordings using non-negative matrix factorisation (NMF) in a Gaussian source separation framework. We focus on ocular artifacts and show that, by properly exploiting prior information on them through the analysis of electrooculographic recordings, our artifact removal results on single-channel EEG are comparable to those obtained with the classic multi-channel Independent Component Analysis technique.
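As a minimal illustration of the factorisation at the core of such a method, the sketch below runs a generic multiplicative-update NMF on synthetic non-negative data; it is not the paper's Gaussian source separation framework, and all data here are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: a non-negative "spectrogram-like" matrix built from
# two hypothetical latent sources (e.g. neural activity vs. ocular artifact).
W_true = rng.random((64, 2))
H_true = rng.random((2, 100))
V = W_true @ H_true

def nmf(V, rank, n_iter=500, eps=1e-9):
    """Lee-Seung multiplicative updates: V ~ W @ H with W, H >= 0."""
    W = rng.random((V.shape[0], rank)) + eps
    H = rng.random((rank, V.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update preserves H >= 0
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update preserves W >= 0
    return W, H

W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.4f}")
```

In an artifact-rejection setting, the idea is that columns of `W` associated with the artifact source can be identified (here, via prior information from EOG recordings) and removed before reconstructing the cleaned signal.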
Enhancing brain-computer interfacing through advanced independent component analysis techniques
A brain-computer interface (BCI) is a direct communication system between a brain
and an external device in which messages or commands sent by an individual do not
pass through the brain’s normal output pathways but are detected through brain signals.
Severe motor impairments, such as amyotrophic lateral sclerosis, head
trauma, spinal injuries and other conditions, may cause patients to lose muscle
control and become unable to communicate with the outside environment. Currently
no effective cure or treatment has been found for these conditions, so using a
BCI system to rebuild the communication pathway becomes a possible alternative
solution. Among different types of BCI, the electroencephalogram (EEG) based BCI
is becoming popular due to EEG’s fine temporal resolution, ease of use,
portability and low set-up cost. However, EEG’s susceptibility to noise is a major
obstacle to developing a robust BCI. Signal processing techniques such as coherent
averaging, filtering, the FFT and AR modelling are used to reduce the noise and
extract components of interest. However, these methods process the data in the
observed mixture domain, where components of interest and noise are mixed. This
limitation means that the extracted EEG signals may still contain noise residue, or,
conversely, that the removed noise may contain part of the EEG signal.
Independent Component Analysis (ICA), a blind source separation (BSS)
technique, is able to extract relevant information from noisy signals and separate the
underlying sources into independent components (ICs). The most common
assumption of the ICA method is that the source signals are unknown and statistically
independent. Under this assumption, ICA is able to recover the source signals.
Since the ICA concept appeared in the fields of neural networks and signal
processing in the 1980s, many ICA applications in telecommunications, biomedical
data analysis, feature extraction, speech separation, time-series analysis and data
mining have been reported in the literature. In this thesis several ICA techniques are
proposed to address two major issues for BCI applications: reducing the recording
time needed in order to speed up the signal processing, and reducing the number of
recording channels while improving the final classification performance, or at least
keeping it at its current level. These improvements will make BCI a more
practical prospect for everyday use.
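The separation principle described above can be sketched with a small from-scratch symmetric FastICA run on two synthetic non-Gaussian sources; this is an illustrative toy (invented signals and mixing matrix), not any of the thesis's proposed algorithms:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 8, 2000)

# Two statistically independent, non-Gaussian "sources": a sinusoid
# (stand-in for an EEG rhythm) and a sawtooth (stand-in for an artifact).
s1 = np.sin(2 * np.pi * 3 * t)
s2 = 2 * ((2 * t) % 1) - 1
S = np.vstack([s1, s2])

A = np.array([[1.0, 0.6], [0.4, 1.0]])  # unknown mixing matrix
X = A @ S                               # observed channel mixtures

def fastica(X, n_iter=200):
    # Centre and whiten the observations.
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    Xw = E @ np.diag(d ** -0.5) @ E.T @ X
    W = rng.standard_normal((2, 2))
    for _ in range(n_iter):
        # Symmetric FastICA update with the tanh nonlinearity.
        g = np.tanh(W @ Xw)
        W_new = g @ Xw.T / Xw.shape[1] - np.diag((1 - g ** 2).mean(axis=1)) @ W
        # Symmetric decorrelation: project onto the nearest orthogonal matrix.
        u, _, vt = np.linalg.svd(W_new)
        W = u @ vt
    return W @ Xw

S_hat = fastica(X)
# Each recovered component should correlate strongly with one true source
# (up to sign and permutation ambiguity, which is inherent to ICA).
corr = np.abs(np.corrcoef(np.vstack([S, S_hat]))[:2, 2:])
print(corr.max(axis=1))
```

The sign/permutation ambiguity visible here is why ICA-based BCI pipelines still need a rule (manual or automatic) for deciding which components to keep and which to reject.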
This thesis first defines BCI and the diverse BCI models based on different
control patterns. After the general idea of ICA is introduced along with some
modifications to it, several new ICA approaches are proposed. The practical work
in this thesis starts with preliminary analyses of the Southampton BCI pilot
datasets, using first basic and then advanced signal processing techniques. The
proposed ICA techniques are then presented using a multi-channel event related
potential (ERP) based BCI. Next, the ICA algorithm is applied to a multi-channel
spontaneous activity based BCI. The final ICA approach examines the
possibility of using ICA based on just one or a few recording channels in an ERP
based BCI.
The novel ICA approaches for BCI systems presented in this thesis show that ICA
is able to accurately and repeatably extract the relevant information buried within
noisy signals, enhancing the signal quality so that even a simple classifier can
achieve good classification accuracy. In the ERP based BCI application, after
multi-channel ICA, data averaged over just eight epochs achieve 83.9%
classification accuracy, whereas coherent averaging alone reaches only 32.3%.
In the spontaneous activity based BCI, the multi-channel ICA
algorithm effectively extracts discriminatory information from two types of
single-trial EEG data; the classification accuracy is improved by about 25%, on average,
compared to the performance on the unpreprocessed data. The single-channel ICA
technique on the ERP based BCI produces much better results than a
lowpass filter, and an appropriate number of averages improves the signal-to-noise
ratio of the P300 activity, which helps achieve better classification. These
advantages will lead to a reliable and practical BCI for use outside the clinical
laboratory.
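For context, the coherent-averaging baseline against which the ICA results are compared can be sketched as follows; the P300-like template, noise level and epoch counts are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 0.8, 200)

# Hypothetical P300-like template: a positive deflection around 300 ms.
erp = np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

def snr(n_epochs, noise_sd=2.0):
    """SNR (dB) of the coherent (time-locked) average of n_epochs noisy trials.

    Averaging n independent epochs reduces noise power by a factor of n,
    i.e. it buys roughly 10*log10(n) dB of SNR.
    """
    epochs = erp + noise_sd * rng.standard_normal((n_epochs, t.size))
    avg = epochs.mean(axis=0)
    noise = avg - erp
    return 10 * np.log10((erp ** 2).mean() / (noise ** 2).mean())

for n in (1, 8, 64):
    print(f"{n:3d} epochs: SNR = {snr(n):6.1f} dB")
```

The trade-off this exposes is exactly the one the thesis targets: more epochs mean better SNR but longer recording time, which is why extracting the component of interest directly (via ICA) from few epochs is attractive.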
Individual differences in supra-threshold auditory perception - mechanisms and objective correlates
Thesis (Ph.D.)--Boston University. To extract content and meaning from a single source of sound in a quiet background, the auditory system can use a small subset of a very redundant set of spectral and temporal features. In stark contrast, communication in a complex, crowded scene places enormous demands on the auditory system. Spectrotemporal overlap between sounds reduces modulations in the signals at the ears and causes masking, with problems exacerbated by reverberation. Consistent with this idea, many patients who seek audiological treatment do so precisely because they notice difficulties in environments requiring auditory selective attention. In the laboratory, even listeners with normal hearing thresholds exhibit vast differences in the ability to selectively attend to a target. Understanding the mechanisms causing these supra-threshold differences, the focus of this thesis, may enable research that leads to advances in treating communication disorders that affect an estimated one in five Americans.
Converging evidence from human and animal studies points to one potential source of these individual differences: differences in the fidelity with which supra-threshold sound is encoded in the early portions of the auditory pathway. Electrophysiological measures of sound encoding by the auditory brainstem in humans and animals support the idea that the temporal precision of the early auditory neural representation can be poor even when hearing thresholds are normal. Concomitantly, animal studies show that noise exposure and early aging can cause a loss (cochlear neuropathy) of a large percentage of the afferent population of auditory nerve fibers innervating the cochlear hair cells without any significant change in measured audiograms.
Using behavioral, otoacoustic and electrophysiological measures in conjunction with computational models of sound processing by the auditory periphery and brainstem, a detailed examination of temporal coding of supra-threshold sound is carried out, focusing on characterizing and understanding individual differences in listeners with normal hearing thresholds and normal cochlear mechanical function. Results support the hypothesis that cochlear neuropathy may reduce encoding precision of supra-threshold sound, and that this manifests as deficits both behaviorally and in subcortical electrophysiological measures in humans. Based on these results, electrophysiological measures are developed that may yield sensitive, fast, objective measures of supra-threshold coding deficits that arise as a result of cochlear neuropathy.
AutoEPG: software for the analysis of electrical activity in the microcircuit underpinning feeding behaviour of Caenorhabditis elegans
Background: The pharyngeal microcircuit of the nematode Caenorhabditis elegans serves as a model for analysing neural network activity and is amenable to electrophysiological recording techniques. One such technique is the electropharyngeogram (EPG), which has provided insight into the genetic basis of feeding behaviour, neurotransmission and muscle excitability. However, the detailed manual analysis of the digital recordings necessary to identify subtle differences in activity that reflect modulatory changes within the underlying network is time consuming and low throughput. To address this we have developed an automated system for the high-throughput and discrete analysis of EPG recordings (AutoEPG). Methodology/Principal Findings: AutoEPG employs a tailor-made signal processing algorithm that automatically detects different features of the EPG signal, including those that report on the relaxation and contraction of the muscle and on neuronal activity. Manual verification of the detection algorithm has demonstrated that AutoEPG is capable of very high levels of accuracy. We have further validated the software by analysing existing mutant strains with known pharyngeal phenotypes detectable by the EPG. In doing so, we have more precisely defined an evolutionarily conserved role for the calcium-dependent potassium channel, SLO-1, in modulating the rhythmic activity of neural networks. Conclusions/Significance: AutoEPG enables the consistent analysis of EPG recordings, significantly increases analysis throughput and allows the robust identification of subtle changes in the electrical activity of the pharyngeal nervous system. It is anticipated that AutoEPG will further add to the experimental tractability of the C. elegans pharynx as a model neural circuit.
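The kind of event detection such software automates can be illustrated with a simple threshold-crossing detector on a synthetic EPG-like trace; this is a hypothetical stand-in (invented signal, threshold and refractory period), not the published AutoEPG algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 1000                       # sampling rate in Hz (assumed)
t = np.arange(0, 5, 1 / fs)

# Synthetic EPG-like trace: sharp biphasic transients (stand-ins for the
# contraction/relaxation spikes of a pharyngeal pump) on a noisy baseline.
signal = 0.05 * rng.standard_normal(t.size)
true_onsets = np.arange(250, 4750, 500)      # one "pump" every 0.5 s
for i in true_onsets:
    signal[i:i + 10] += 1.0                  # contraction spike (positive)
    signal[i + 100:i + 110] -= 1.0           # relaxation spike (negative)

def detect_events(x, thresh, refractory):
    """Rising threshold crossings separated by at least `refractory` samples."""
    events, last = [], -refractory
    above = x > thresh
    for i in np.flatnonzero(above[1:] & ~above[:-1]) + 1:
        if i - last >= refractory:
            events.append(i)
            last = i
    return np.array(events)

onsets = detect_events(signal, thresh=0.5, refractory=200)
print(f"detected {len(onsets)} pumps (~{len(onsets) / t[-1]:.1f} Hz)")
```

Real EPG analysis must additionally classify spike types and cope with drifting baselines, which is where a tailored detection algorithm earns its keep over a fixed threshold.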
Informatics for EEG biomarker discovery in clinical neuroscience
Neurological and developmental disorders (NDDs) impose an enormous burden of disease on children throughout the world. Two of the most common are autism spectrum disorder (ASD) and epilepsy. ASD has recently been estimated to affect 1 in 68 children, making it the most common neurodevelopmental disorder in children. Epilepsy is also a spectrum disorder that follows a developmental trajectory, with an estimated prevalence of 1%, nearly as common as autism. ASD and epilepsy co-occur in approximately 30% of individuals with a primary diagnosis of either disorder. Although they are considered to be different disorders, the relatively high comorbidity suggests the possibility of common neuropathological mechanisms.
Early interventions for NDDs lead to better long-term outcomes. But early intervention is predicated on early detection. Behavioral measures have thus far proven ineffective in detecting autism before about 18 months of age, in part because the behavioral repertoire of infants is so limited. Similarly, no methods for detecting emerging epilepsy before seizures begin are currently known. Because atypical brain development is likely to precede overt behavioral manifestations by months or even years, a critical developmental window for early intervention may be opened by the discovery of brain-based biomarkers.
Analysis of brain activity with EEG may be under-utilized for clinical applications, especially for neurodevelopment. The hypothesis investigated in this dissertation is that new methods of nonlinear signal analysis, together with methods from biomedical informatics, can extract information from EEG data that enables detection of atypical neurodevelopment. This is tested using data collected at Boston Children’s Hospital. Several results are presented. First, infants with a family history of ASD were found to have EEG features that may enable autism to be detected as early as 9 months. Second, significant EEG-based differences were found between children with absence epilepsy, ASD and control groups using short 30-second EEG segments. Comparison of control groups using different EEG equipment supported the claim that EEG features could be computed that were independent of equipment and lab conditions. Finally, the potential for this technology to help meet the clinical need for neurodevelopmental screening and monitoring in low-income regions of the world is discussed.
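As one example of the nonlinear signal measures alluded to above, the sketch below computes sample entropy on synthetic signals. Sample entropy is a common nonlinear EEG feature, chosen here purely for illustration; the dissertation's exact feature set is not specified in this abstract, and the template-counting here is slightly simplified:

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy: lower values mean a more regular, predictable signal."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()                     # tolerance, scaled to the signal

    def count_matches(order):
        # Embed the series as overlapping templates of length `order`.
        emb = np.lib.stride_tricks.sliding_window_view(x, order)
        # Chebyshev distance between all pairs of templates.
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
        n = emb.shape[0]
        return (np.sum(d <= r) - n) / 2      # count pairs, excluding self-matches

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(4)
t = np.linspace(0, 4, 1000)
regular = np.sin(2 * np.pi * 5 * t)          # highly predictable signal
irregular = rng.standard_normal(1000)        # unpredictable signal

print(f"sine : {sample_entropy(regular):.2f}")
print(f"noise: {sample_entropy(irregular):.2f}")
```

Features of this kind are attractive as biomarkers precisely because they summarize signal complexity in a way that is comparable across recording equipment and lab conditions.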
Hierarchical analysis of multimodal images
There is growing interest in the development of adapted processing tools for multimodal images (several images acquired over the same scene with different characteristics). By allowing a more complete description of the scene, multimodal images are of interest in various image processing fields, but their optimal handling and exploitation raise several issues. This thesis extends hierarchical representations, a powerful tool for classical image analysis and processing, to multimodal images in order to better exploit the additional information brought by the multimodality and improve classical image processing techniques. It focuses on three multimodalities frequently encountered in the remote sensing field. We first investigate the spectral-spatial information of hyperspectral images. Based on an adapted construction and processing of the hierarchical representation, we derive a segmentation which is optimal with respect to the spectral unmixing operation. We then focus on the temporal multimodality and sequences of hyperspectral images. Using the hierarchical representations of the frames in the sequence, we propose a new method for object tracking and apply it to chemical gas plume tracking in thermal infrared hyperspectral video sequences. Finally, we study the sensorial multimodality, i.e. images acquired with different sensors. Relying on the concept of braids of partitions, we propose a novel image segmentation methodology based on an energy minimization framework.
The Internet of Everything
In the era before the IoT, the World Wide Web, the internet, Web 2.0 and social media made people’s lives more comfortable by providing web services and enabling access to personal data irrespective of location. Further, to save time and improve efficiency, there is a need for machine-to-machine communication, automation, smart computing and ubiquitous access to personal devices. This need gave birth to the Internet of Things (IoT) and, beyond it, to the concept of the Internet of Everything (IoE).