252 research outputs found
Nonlinear Hebbian learning as a unifying principle in receptive field formation
The development of sensory receptive fields has been modeled in the past by a
variety of approaches, including normative models such as sparse coding or
independent component analysis and bottom-up models such as spike-timing
dependent plasticity or the Bienenstock-Cooper-Munro rule of synaptic
plasticity. Here we show that these approaches can all be unified under a
single common principle, namely Nonlinear Hebbian Learning. When
Nonlinear Hebbian Learning is applied to natural images, receptive field shapes
were strongly constrained by the input statistics and preprocessing, but
exhibited only modest variation across different choices of nonlinearities in
neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse
network activity is necessary for the development of localized receptive
fields. Analysis of alternative sensory modalities such as auditory models
or V2 development leads to the same conclusions. In all examples, receptive
fields can be predicted a priori by reformulating an abstract model as
nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural
input statistics can account for many aspects of receptive field formation
across models and sensory modalities.
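As a minimal illustration of the principle described above (not the paper's actual simulations), a batch-averaged nonlinear Hebbian rule can be written as w ← ⟨x f(wᵀx)⟩ followed by a norm constraint. The two-source Laplace mixture, the cubic nonlinearity, and the iteration count below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for whitened sensory input: two independent,
# super-Gaussian (Laplace) sources under an orthogonal mixing.
n = 20000
S = rng.laplace(size=(2, n))
A = np.array([[0.8, 0.6],
              [-0.6, 0.8]])          # orthogonal mixing matrix
X = A @ S

w = rng.normal(size=2)
w /= np.linalg.norm(w)

def f(y):
    # Cubic nonlinearity; per the abstract, the exact choice of
    # nonlinearity matters little for the resulting receptive field.
    return y ** 3

for _ in range(200):
    y = w @ X                        # postsynaptic activity on each sample
    w = (X * f(y)).mean(axis=1)      # batch-averaged Hebbian update
    w /= np.linalg.norm(w)           # norm constraint (synaptic scaling)

# The learned weight vector should align with one mixing column,
# i.e. the neuron's "receptive field" matches one source direction.
alignment = np.max(np.abs(A.T @ w))
```

With super-Gaussian input, this update performs projection pursuit toward a maximally non-Gaussian direction, which is why a single Hebbian unit behaves like one component of ICA or sparse coding.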
Separating a Real-Life Nonlinear Image Mixture
When acquiring an image of a paper document, the image printed on the back page sometimes shows through. The mixture of the front- and back-page images thus obtained is markedly nonlinear, and thus constitutes a good real-life test case for nonlinear blind source separation.
This paper addresses a difficult version of this problem, corresponding to the use of "onion skin" paper, which produces a relatively strong mixture nonlinearity that becomes close to singular in the lighter regions of the images. Separation is achieved through the MISEP technique, an extension of the well-known INFOMAX method. The separation results are assessed with objective quality measures; they show an improvement over linear separation, but leave room for further improvement.
Independent Component Analysis in Spiking Neurons
Although models based on independent component analysis (ICA) have been successful in explaining various properties of sensory coding in the cortex, it remains unclear how networks of spiking neurons using realistic plasticity rules can realize such computation. Here, we propose a biologically plausible mechanism for ICA-like learning with spiking neurons. Our model combines spike-timing dependent plasticity and synaptic scaling with an intrinsic plasticity rule that regulates neuronal excitability to maximize information transmission. We show that a stochastically spiking neuron learns one independent component for inputs encoded either as rates or using spike-spike correlations. Furthermore, different independent components can be recovered when the activity of different neurons is decorrelated by adaptive lateral inhibition.
EEG filtering based on blind source separation (BSS) for early detection of Alzheimer's disease
Objective: Development of an EEG preprocessing technique for improved detection of Alzheimer's disease (AD). The technique is based on filtering of EEG data using blind source separation (BSS) and projection of components which are possibly sensitive to cortical neuronal impairment found in early stages of AD. Method: Artifact-free 20 s intervals of raw resting EEG recordings from 22 patients with Mild Cognitive Impairment (MCI) who later proceeded to AD and 38 age-matched normal controls were decomposed into spatio-temporally decorrelated components using the BSS algorithm "AMUSE". Filtered EEG was obtained by back projection of the components with the highest linear predictability. Relative power of the filtered data in the delta, theta, alpha1, alpha2, beta1, and beta2 bands was processed with Linear Discriminant Analysis (LDA). Results: Preprocessing improved the percentage of correctly classified patients and controls, computed with jack-knifing cross-validation, from 59 to 73% and from 76 to 84%, respectively. Conclusions: The proposed approach can significantly improve the sensitivity and specificity of EEG based diagnosis. Significance: Filtering based on BSS can improve the performance of existing EEG approaches to early diagnosis of Alzheimer's disease. It may also have potential for improving EEG classification in other clinical areas or fundamental research. The developed method is quite general and flexible, allowing for various extensions and improvements. © 2004 Published by Elsevier Ireland Ltd. on behalf of the International Federation of Clinical Neurophysiology.
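A compact sketch of the AMUSE step used in the Method above: whiten the data, then diagonalize a symmetrized time-lagged covariance, so that components come out ordered by lagged autocovariance, i.e. by linear predictability (the quantity the back-projection step exploits). The toy signals, the lag of 1, and the function name below are illustrative assumptions, not the clinical pipeline:

```python
import numpy as np

def amuse(X, tau=1):
    """Minimal AMUSE-style BSS on X with shape (channels, samples):
    whiten, then eigendecompose one symmetrized lagged covariance."""
    X = X - X.mean(axis=1, keepdims=True)
    C0 = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(C0)
    W_white = E @ np.diag(d ** -0.5) @ E.T      # symmetric whitener
    Z = W_white @ X
    C_tau = Z[:, tau:] @ Z[:, :-tau].T / (Z.shape[1] - tau)
    C_tau = (C_tau + C_tau.T) / 2               # symmetrize
    vals, V = np.linalg.eigh(C_tau)
    order = np.argsort(vals)[::-1]              # most predictable first
    W = V[:, order].T @ W_white                 # unmixing matrix
    return W, W @ X

# Toy demo: a highly predictable rhythm mixed with unpredictable noise.
rng = np.random.default_rng(1)
t = np.arange(5000)
s1 = np.sin(2 * np.pi * t / 50)                 # slow, linearly predictable
s2 = rng.normal(size=t.size)                    # white noise
X = np.array([[1.0, 0.5],
              [0.4, 1.0]]) @ np.vstack([s1, s2])

W, Y = amuse(X)
# "Filtering" as in the abstract: keep only the most predictable
# component(s) and back-project to sensor space.
Y_filtered = Y.copy()
Y_filtered[1:] = 0
X_clean = np.linalg.pinv(W) @ Y_filtered
```

Here the sinusoid has near-unit lag-1 autocorrelation while the noise has none, so the two eigenvalues are well separated and the first recovered component tracks the rhythm.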
Enhancing brain-computer interfacing through advanced independent component analysis techniques
A brain-computer interface (BCI) is a direct communication system between a brain
and an external device, in which the messages or commands sent by an individual do
not pass through the brain's normal output pathways but are detected through brain
signals. Severe motor impairments, caused for example by amyotrophic lateral
sclerosis, head trauma, or spinal injuries, may leave patients without muscle
control and unable to communicate with the outside environment. Currently no
effective cure or treatment has been found for these conditions, so using a BCI
system to rebuild the communication pathway is a possible alternative solution.
Among the different types of BCI, electroencephalogram (EEG) based systems are
becoming popular owing to EEG's fine temporal resolution, ease of use, portability
and low set-up cost. However, EEG's susceptibility to noise is a major obstacle to
developing a robust BCI. Signal processing techniques such as coherent averaging,
filtering, the FFT and AR modelling are used to reduce the noise and extract
components of interest. These methods, however, operate in the observed mixture
domain, in which the components of interest and the noise are mixed; the extracted
EEG signals may therefore still contain residual noise, or, conversely, the removed
noise may still contain part of the EEG signal.
Independent Component Analysis (ICA), a Blind Source Separation (BSS)
technique, is able to extract relevant information from noisy signals and separate
the underlying sources into independent components (ICs). The key assumption of
ICA is that the source signals are unknown but statistically independent; under
this assumption, ICA is able to recover the source signals.
Since the ICA concepts appeared in the fields of neural networks and signal
processing in the 1980s, many ICA applications in telecommunications, biomedical
data analysis, feature extraction, speech separation, time-series analysis and data
mining have been reported in the literature. In this thesis, several ICA
techniques are proposed to address two major issues for BCI applications:
reducing the recording time needed, in order to speed up the signal processing,
and reducing the number of recording channels, while improving the final
classification performance or at least keeping it at its current level. These
advances would make BCI a more practical prospect for everyday use.
This thesis first defines BCI and the diverse BCI models based on different
control patterns. After the general idea of ICA is introduced along with some
modifications to ICA, several new ICA approaches are proposed. The practical work
in this thesis begins with preliminary analyses of the Southampton BCI pilot
datasets, applying first basic and then advanced signal processing techniques. The
proposed ICA techniques are then presented using a multi-channel event related
potential (ERP) based BCI. Next, the ICA algorithm is applied to a multi-channel
spontaneous activity based BCI. The final ICA approach aims to examine the
possibility of using ICA based on just one or a few channel recordings on an ERP
based BCI.
The novel ICA approaches for BCI systems presented in this thesis show that ICA
is able to accurately and repeatably extract the relevant information buried
within noisy signals, enhancing the signal quality so that even a simple
classifier can achieve good classification accuracy. In the ERP based BCI
application, after multi-channel ICA, data using just eight averaged epochs
achieve 83.9% classification accuracy, whereas coherent averaging alone reaches
only 32.3%. In the spontaneous activity based BCI, the multi-channel ICA
algorithm effectively extracts discriminatory information from two types of
single-trial EEG data; the classification accuracy is improved by about 25%, on
average, compared to the performance on the unpreprocessed data. The
single-channel ICA technique on the ERP based BCI produces much better results
than low-pass filtering, while an appropriate number of averages improves the
signal-to-noise ratio of the P300 activity, helping to achieve better
classification. These advantages will lead to a reliable and practical BCI for
use outside the clinical laboratory.
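The preprocessing idea at the heart of the thesis (decompose multi-channel recordings into independent components, reject the components dominated by noise, and back-project the rest) can be sketched with a generic ICA denoising pipeline. This uses scikit-learn's FastICA on synthetic data; the toy signals, mixing matrix, assumed 250 Hz sampling rate, and kurtosis-based artifact selection are illustrative assumptions, not the thesis's algorithms:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n = 4000
t = np.arange(n) / 250.0                     # assumed 250 Hz sampling rate

# Toy two-channel "EEG": a 10 Hz rhythm plus a sparse blink-like artifact.
neural = np.sin(2 * np.pi * 10 * t)
blink = (rng.random(n) < 0.005).astype(float)
blink = np.convolve(blink, np.hanning(50), mode="same") * 20.0
X = np.column_stack([neural, blink]) @ np.array([[1.0, 0.9],
                                                 [0.6, 1.2]]).T

ica = FastICA(n_components=2, random_state=0)
Y = ica.fit_transform(X)                     # (samples, components)

# Pick the artifact component by kurtosis: sparse blinks are very peaky,
# so their component has a much larger fourth-moment ratio than the rhythm.
kurt = ((Y - Y.mean(axis=0)) ** 4).mean(axis=0) / Y.var(axis=0) ** 2
artifact = int(np.argmax(kurt))
Y[:, artifact] = 0                           # reject the noisy component
X_clean = ica.inverse_transform(Y)           # back-project to channel space
```

In a real BCI pipeline the rejected components would be chosen by artifact criteria such as EOG correlation or scalp topography, but the decompose/reject/back-project structure is the same.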