Enhancing brain-computer interfacing through advanced independent component analysis techniques
A Brain-computer interface (BCI) is a direct communication system between a brain
and an external device in which messages or commands sent by an individual do not
pass through the brain’s normal output pathways but are instead detected through brain signals.
Severe motor impairments resulting from conditions such as Amyotrophic Lateral Sclerosis, head
trauma and spinal injuries may cause patients to lose muscle control and become unable to
communicate with the outside environment. Currently no effective cure or treatment exists for
these conditions, so using a BCI system to rebuild the communication pathway is a possible
alternative. Among the different types of BCI, electroencephalogram (EEG) based BCIs
have become popular due to EEG’s fine temporal resolution, ease of use,
portability and low set-up cost. However, EEG’s susceptibility to noise is a major
obstacle to developing a robust BCI. Signal processing techniques such as coherent
averaging, filtering, the FFT and AR modelling are used to reduce the noise and
extract components of interest. However, these methods operate in the observed
mixture domain, in which the components of interest and the noise are mixed, so the
extracted EEG signals may still contain residual noise, or conversely the removed
noise may still contain part of the EEG signal.
Independent Component Analysis (ICA), a Blind Source Separation (BSS)
technique, is able to extract relevant information from noisy signals and separate the
underlying sources into independent components (ICs). The most common
assumption of the ICA method is that the source signals are unknown and statistically
independent; under this assumption, ICA is able to recover the source signals.
Since the concept of ICA appeared in the fields of neural networks and signal
processing in the 1980s, many ICA applications in telecommunications, biomedical
data analysis, feature extraction, speech separation, time-series analysis and data
mining have been reported in the literature. In this thesis several ICA techniques are
proposed to address two major issues for BCI applications: reducing the recording
time needed, in order to speed up the signal processing, and reducing the number of
recording channels, whilst improving the final classification performance or at least
keeping it at its current level. These advances would make BCI a more
practical prospect for everyday use.
This thesis first defines BCI and the diverse BCI models based on different
control patterns. After the general idea of ICA is introduced, along with some
modifications to it, several new ICA approaches are proposed. The practical work
begins with preliminary analyses of the Southampton BCI pilot datasets, applying
first basic and then advanced signal processing techniques. The
proposed ICA techniques are then presented using a multi-channel event related
potential (ERP) based BCI. Next, the ICA algorithm is applied to a multi-channel
spontaneous activity based BCI. The final ICA approach examines the
possibility of applying ICA to recordings from just one or a few channels in an ERP
based BCI.
The novel ICA approaches for BCI systems presented in this thesis show that ICA
is able to extract the relevant information buried within noisy signals accurately and
repeatably, and that the enhanced signal quality allows even a simple classifier to
achieve good classification accuracy. In the ERP based BCI application, after
multi-channel ICA, data using just eight averages/epochs can achieve 83.9%
classification accuracy, whilst data processed by coherent averaging reach only 32.3%.
In the spontaneous activity based BCI, the multi-channel ICA
algorithm effectively extracts discriminatory information from two types of single-trial
EEG data, improving classification accuracy by about 25% on average
compared to the performance on the unpreprocessed data. The single-channel ICA
technique on the ERP based BCI produces much better results than a lowpass filter,
and an appropriate number of averages improves the signal-to-noise ratio of the P300
activity, which helps to achieve better classification. These
advantages will lead to a reliable and practical BCI for use outside of the clinical
laboratory.
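As a rough illustration of the multi-channel ICA denoising idea described in this abstract, the following Python sketch applies scikit-learn's FastICA to synthetic EEG-like data and back-projects only the component matching an ERP-like template. The channel count, sampling rate, template and component-selection rule are illustrative assumptions, not the thesis's actual pipeline.

```python
# Illustrative sketch only: multi-channel ICA denoising of synthetic EEG-like data.
# The channel count, sampling rate, ERP template and component-selection rule are
# assumptions for demonstration, not the pipeline used in the thesis.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
fs, n_samples, n_channels = 256, 512, 8              # sampling rate (Hz), samples, channels
t = np.arange(n_samples) / fs

# Underlying sources: one ERP-like bump near 300 ms plus independent noise sources
erp = np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
noise = rng.standard_normal((n_channels - 1, n_samples))
sources = np.vstack([erp, noise])

# Observed channels are an unknown linear mixture of the sources
mixing = rng.standard_normal((n_channels, n_channels))
eeg = mixing @ sources                                # shape: (channels, samples)

# ICA assumes the sources are statistically independent and recovers them blindly
ica = FastICA(n_components=n_channels, random_state=0)
components = ica.fit_transform(eeg.T).T               # shape: (components, samples)

# Keep only the component most correlated with the ERP template, then back-project
corr = [abs(np.corrcoef(c, erp)[0, 1]) for c in components]
keep = int(np.argmax(corr))
selected = np.zeros_like(components)
selected[keep] = components[keep]
denoised = ica.mixing_ @ selected + ica.mean_[:, None]  # back to channel space
print(f"kept component {keep}, |correlation| with template = {corr[keep]:.2f}")
```

In the thesis the component selection and subsequent classification steps are more elaborate; the simple template correlation above only stands in for that stage.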
Review of analytical instruments for EEG analysis
Since it was first used in 1926, EEG has been one of the most useful
instruments of neuroscience. In order to start using EEG data, we need not only
EEG apparatus but also analytical tools and the skills to understand what our
data mean. This article describes several classical analytical tools as well as a
new one that appeared only a few years ago. We hope it will be useful for
researchers who have only recently started working in the field of cognitive EEG.
Efficient Privacy Preserving Distributed Clustering Based on Secret Sharing
In this paper, we propose a privacy preserving distributed
clustering protocol for horizontally partitioned data based on a very efficient
homomorphic additive secret sharing scheme. The model we use
for the protocol is novel in the sense that it utilizes two non-colluding
third parties. We provide a brief security analysis of our protocol from an
information-theoretic point of view, which is a stronger security model.
We present a communication and computation complexity analysis of our
protocol alongside another protocol previously proposed for the same
problem, and include experimental results for the computation and communication
overhead of the two protocols. Our protocol not only outperforms
the others in execution time and communication overhead on the
data holders, but also uses a more efficient model for many data mining
applications.
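The following is a minimal Python sketch of additive secret sharing over a prime field and its homomorphic property, the building block this kind of protocol relies on. The modulus, party count and function names are illustrative assumptions, not the paper's protocol.

```python
# Illustrative sketch only: additive secret sharing and its homomorphic property.
# The modulus, party count and function names are assumptions, not the paper's protocol.
import secrets

P = 2_147_483_647  # public prime modulus (2**31 - 1), chosen for illustration

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into additive shares that sum to value mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recombine all shares; any strict subset reveals nothing about the secret."""
    return sum(shares) % P

# Homomorphic addition: each party adds its shares locally, never seeing the inputs.
a, b = 123, 456
shares_a, shares_b = share(a, 3), share(b, 3)
shares_sum = [(x + y) % P for x, y in zip(shares_a, shares_b)]
assert reconstruct(shares_sum) == (a + b) % P
print("sum reconstructed from shares:", reconstruct(shares_sum))
```

Roughly speaking, in a distributed clustering setting each data holder could share its partial cluster sums and counts in this way, so that the non-colluding third parties only ever see aggregates; the actual message flow of the paper's protocol is not reproduced here.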
A brief network analysis of Artificial Intelligence publication
In this paper, we present an illustration of the history of Artificial
Intelligence (AI) through a statistical analysis of publications since 1940. We
collected and mined the IEEE publication database to analyse the
geographical and chronological variation in the activity of AI research. The
connections between different institutes are shown. The results show that the
leading communities of AI research are mainly in the USA, China, Europe and
Japan. The key institutes, authors and research hotspots are revealed. It
is found that the research institutes active in fields such as Data Mining, Computer
Vision, Pattern Recognition and other areas of Machine Learning are largely
the same, implying strong interaction between the communities of these fields.
It is also shown that research in Electronic Engineering and in industrial or
commercial applications is very active in California, and that Japan publishes
a large number of papers in robotics. Due to the limitations of the data source, the
results might be overly influenced by the raw number of published articles; we mitigate
this as far as possible by applying network keynode analysis to the research community
instead of merely counting publications. Comment: 18 pages, 7 figures
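As a loose illustration of what a network keynode analysis might look like, the short Python sketch below ranks nodes of a toy collaboration graph by centrality using networkx. The graph and the choice of betweenness centrality are assumptions for demonstration, not the paper's actual method or data.

```python
# Illustrative sketch only: ranking "key nodes" in a collaboration network by centrality.
# The toy graph and the choice of betweenness centrality are assumptions, not the
# paper's actual keynode analysis.
import networkx as nx

# Toy graph: nodes are institutes, edges are co-authorship links
edges = [
    ("Institute A", "Institute B"), ("Institute A", "Institute C"),
    ("Institute B", "Institute C"), ("Institute C", "Institute D"),
    ("Institute D", "Institute E"), ("Institute C", "Institute E"),
]
G = nx.Graph(edges)

# Betweenness centrality highlights nodes that bridge communities,
# rather than simply those with the largest publication counts.
centrality = nx.betweenness_centrality(G)
for node, score in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{node:12s} {score:.3f}")
```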
Finding Young Stellar Populations in Elliptical Galaxies from Independent Components of Optical Spectra
Elliptical galaxies are believed to consist of a single population of old
stars formed together at an early epoch in the Universe, yet recent analyses of
galaxy spectra seem to indicate the presence of significant younger populations
of stars in them. The detailed physical modelling of such populations is
computationally expensive, inhibiting the detailed analysis of the several
million galaxy spectra becoming available over the next few years. Here we
present a data mining application aimed at decomposing the spectra of
elliptical galaxies into several coeval stellar populations, without the use of
detailed physical models. This is achieved by performing a linear independent
basis transformation that essentially decouples the initial problem of joint
processing of a set of correlated spectral measurements into that of the
independent processing of a small set of prototypical spectra. Two methods are
investigated: (1) A fast projection approach is derived by exploiting the
correlation structure of neighboring wavelength bins within the spectral data.
(2) A factorisation method that takes advantage of the positivity of the
spectra is also investigated. The preliminary results show that typical
features observed in stellar population spectra of different evolutionary
histories can be convincingly disentangled by these methods, despite the
absence of input physics. The success of this basis transformation analysis in
recovering physically interpretable representations indicates that this
technique is a potentially powerful tool for astronomical data mining. Comment: 12 pages, 7 figures; accepted to the SIAM 2005 International Conference
on Data Mining, Newport Beach, CA, April 2005.
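Below is a minimal Python sketch of the two kinds of basis transformation this abstract mentions: an independence-seeking projection and a positivity-constrained factorisation, applied to synthetic spectra. The synthetic prototypes, component count and library calls are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch only: decomposing synthetic galaxy spectra into a few prototype
# components, once with an independence-seeking projection (ICA) and once with a
# positivity-constrained factorisation (NMF). The synthetic spectra, component count
# and library calls are assumptions, not the authors' implementation.
import numpy as np
from sklearn.decomposition import FastICA, NMF

rng = np.random.default_rng(1)
n_gal, n_wl = 200, 300
wl = np.linspace(3500.0, 7500.0, n_wl)                # wavelength grid (Angstroms, assumed)

# Two non-negative prototype spectra: an old, red population and a young, blue one
x = (wl - wl.min()) / (wl.max() - wl.min())
prototypes = np.vstack([1.0 + 0.8 * x, 2.0 - 1.2 * x])

# Each observed galaxy spectrum is a non-negative mixture of the prototypes plus noise
weights = rng.uniform(0.1, 1.0, size=(n_gal, 2))
spectra = weights @ prototypes + 0.01 * rng.standard_normal((n_gal, n_wl))

# (1) Independence-seeking projection: prototype spectra as independent components
ica = FastICA(n_components=2, random_state=0)
ica_prototypes = ica.fit_transform(spectra.T).T       # (2, n_wl) component spectra
ica_weights = ica.mixing_                             # (n_gal, 2) per-galaxy amplitudes

# (2) Positivity-constrained factorisation: spectra ~ W @ H with W, H >= 0
nmf = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
nmf_weights = nmf.fit_transform(spectra.clip(min=0.0))   # (n_gal, 2)
nmf_prototypes = nmf.components_                          # (2, n_wl)

print("ICA prototypes:", ica_prototypes.shape, "NMF prototypes:", nmf_prototypes.shape)
```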
Multivariate Approaches to Classification in Extragalactic Astronomy
Clustering objects into synthetic groups is a natural activity of any
science. Astrophysics is not an exception and is now facing a deluge of data.
For galaxies, the century-old Hubble classification and the Hubble tuning
fork are still largely in use, together with numerous mono- or bivariate
classifications most often made by eye. However, a classification must be
driven by the data, and sophisticated multivariate statistical tools are used
more and more often. In this paper we review these different approaches in
order to situate them in the general context of unsupervised and supervised
learning. We emphasise the astrophysical outcomes of these studies to show that
multivariate analyses provide an obvious path toward a renewal of our
classification of galaxies and are invaluable tools to investigate the physics
and evolution of galaxies. Comment: Open Access paper.
http://www.frontiersin.org/milky_way_and_galaxies/10.3389/fspas.2015.00003/abstract
(DOI: 10.3389/fspas.2015.00003).
- …