Data-driven multivariate and multiscale methods for brain computer interface
This thesis focuses on the development of data-driven multivariate and multiscale methods
for brain computer interface (BCI) systems. The electroencephalogram (EEG), the
most convenient means to measure neurophysiological activity due to its noninvasive nature,
is mainly considered. The nonlinearity and nonstationarity inherent in EEG and its
multichannel recording nature require a new set of data-driven multivariate techniques to
estimate features more accurately for enhanced BCI operation. A further long-term goal
is to enable an alternative EEG recording strategy for long-term, portable
monitoring.
Empirical mode decomposition (EMD) and local mean decomposition (LMD), fully
data-driven adaptive tools, are considered to decompose the nonlinear and nonstationary
EEG signal into a set of components which are highly localised in time and frequency. It
is shown that the complex and multivariate extensions of EMD, which can exploit common
oscillatory modes within multivariate (multichannel) data, can be used to accurately
estimate and compare amplitude and phase information among multiple sources, which is
key for feature extraction in BCI systems. A complex extension of local mean decomposition
is also introduced, and its operation is illustrated on two-channel neuronal
spike streams. Common spatial pattern (CSP), a standard feature extraction technique
for BCI applications, is also extended to the complex domain using augmented complex
statistics. Depending on the circularity or noncircularity of a complex signal, one of the
complex CSP algorithms can be chosen to produce the best classification performance
between two different EEG classes.
Using these complex and multivariate algorithms, two cognitive brain studies are
investigated for the more natural and intuitive design of advanced BCI systems. Firstly, a Yarbus-style auditory selective attention experiment is introduced to measure the user's
attention to a sound source within a mixture of sound stimuli, aimed at improving
the usefulness of hearing instruments such as hearing aids. Secondly, emotions
elicited by taste and taste recall are examined to determine the pleasure or displeasure
a food evokes, for the implementation of affective computing. The separation between the two
emotional responses is examined using real and complex-valued common spatial pattern
methods.
Finally, we introduce a novel approach to brain monitoring based on EEG recordings
from within the ear canal, embedded in a custom-made hearing aid earplug. The new
platform promises both short- and long-term continuous use for standard
brain monitoring and interfacing applications.
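The real-valued CSP baseline that the thesis extends to the complex domain can be sketched in a few lines: for two classes of EEG trials, whiten the composite covariance, take the eigenvectors of the whitened class covariance as spatial filters, and use log-variance features for classification. A minimal NumPy sketch (array shapes and function names are illustrative, not from the thesis):

```python
import numpy as np

def csp_filters(X1, X2, n_pairs=1):
    """Standard real-valued CSP via whitening + eigendecomposition.

    X1, X2: (trials, channels, samples) arrays, one per class.
    Returns a (2*n_pairs, channels) matrix whose rows are spatial filters.
    """
    def avg_cov(X):
        # Trace-normalised average trial covariance
        covs = [t @ t.T / np.trace(t @ t.T) for t in X]
        return np.mean(covs, axis=0)

    C1, C2 = avg_cov(X1), avg_cov(X2)
    # Whitening matrix P such that P (C1 + C2) P.T = I
    evals, U = np.linalg.eigh(C1 + C2)
    P = np.diag(evals ** -0.5) @ U.T
    # Eigenvectors of the whitened class-1 covariance (ascending eigenvalues)
    d, B = np.linalg.eigh(P @ C1 @ P.T)
    W = B.T @ P  # rows are spatial filters
    # Keep filters from both ends of the spectrum: they maximise variance
    # for one class while minimising it for the other.
    keep = np.r_[0:n_pairs, len(d) - n_pairs:len(d)]
    return W[keep]

def csp_features(W, trial):
    """Log-variance features of a spatially filtered (channels, samples) trial."""
    var = (W @ trial).var(axis=1)
    return np.log(var / var.sum())
```

The filters kept from both ends of the eigenvalue spectrum are the discriminative ones; their log-variance features are what a classifier would separate.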
Protein disorder-to-order transition enhances the nucleosome-binding affinity of H1.
Intrinsically disordered proteins are crucial elements of the heterogeneous organization of chromatin. While disorder in the histone tails enables a large variation of inter-nucleosome arrangements, disorder within the chromatin-binding proteins facilitates promiscuous binding to a wide range of different molecular targets, consistent with structural heterogeneity. Among the partially disordered chromatin-binding proteins, the H1 linker histone influences a myriad of chromatin characteristics including compaction, nucleosome spacing, transcription regulation, and the recruitment of other chromatin-regulating proteins. Although it is now established that the long C-terminal domain (CTD) of H1 remains disordered upon nucleosome binding and that such disorder favours chromatin fluidity, the structural behaviour, and thereby the role and function, of the N-terminal domain (NTD) within chromatin remains unresolved. On the basis of microsecond-long parallel-tempering metadynamics and temperature-replica exchange atomistic molecular dynamics simulations of different H1 NTD subtypes, we demonstrate that the NTD is completely unstructured in solution but undergoes an important disorder-to-order transition upon nucleosome binding: it forms a helix that enhances its DNA-binding ability. Further, we show that the helical propensity of the H1 NTD is subtype-dependent and correlates with the experimentally observed binding affinity of H1 subtypes, suggesting an important functional implication of this disorder-to-order transition.
Infrared face recognition: a comprehensive review of methodologies and databases
Automatic face recognition is an area with immense practical potential which
includes a wide range of commercial and law enforcement applications. Hence it
is unsurprising that it continues to be one of the most active research areas
of computer vision. Even after over three decades of intense research, the
state-of-the-art in face recognition continues to improve, benefitting from
advances in a range of different research fields such as image processing,
pattern recognition, computer graphics, and physiology. Systems based on
visible spectrum images, the most researched face recognition modality, have
reached a significant level of maturity with some practical success. However,
they continue to face challenges in the presence of illumination, pose and
expression changes, as well as facial disguises, all of which can significantly
decrease recognition accuracy. Amongst various approaches which have been
proposed in an attempt to overcome these limitations, the use of infrared (IR)
imaging has emerged as a particularly promising research direction. This paper
presents a comprehensive and timely review of the literature on this subject.
Our key contributions are: (i) a summary of the inherent properties of infrared
imaging which make this modality promising in the context of face recognition,
(ii) a systematic review of the most influential approaches, with a focus on
emerging common trends as well as key differences between alternative
methodologies, (iii) a description of the main databases of infrared facial
images available to the researcher, and lastly (iv) a discussion of the most
promising avenues for future research. (Pattern Recognition, 2014.)
A study of information-theoretic metaheuristics applied to functional neuroimaging datasets
This dissertation presents a new metaheuristic related to a two-dimensional ensemble empirical mode decomposition (2DEEMD). It is based on Green's functions and is called Green's Function in Tension - Bidimensional Empirical Mode Decomposition (GiT-BEMD). It is employed for decomposing and extracting hidden information of images. A natural image (face image) as well as images with artificial textures have been used to test and validate
the proposed approach. Images are selected to demonstrate the efficiency and performance of the GiT-BEMD algorithm in extracting textures on various spatial scales from the different images. In addition, a comparison of the performance of the new algorithm GiT-BEMD with a canonical bidimensional EEMD (BEEMD) is discussed. Then, GiT-BEMD as well as canonical BEEMD are applied to an fMRI study of a contour integration task. Thus, the thesis explores the potential of employing GiT-BEMD to extract such textures, so-called bidimensional intrinsic mode functions (BIMFs), of functional biomedical images. Because of the enormous computational load and the artifacts accompanying the extracted textures when using a canonical BEEMD, GiT-BEMD is developed to cope with such challenges. It is seen that the computational cost is decreased dramatically, and the quality of the extracted textures is enhanced considerably. Consequently, GiT-BEMD achieves a higher quality of the estimated BIMFs, as can be seen from a direct comparison of the results obtained with different variants of BEEMD and GiT-BEMD. Moreover, results generated by 2DEEMD, especially in the case of GiT-BEMD, distinctly show a superior precision in the spatial localization of activity blobs when compared with a canonical general linear model (GLM) analysis employing statistical parametric mapping (SPM). Furthermore, to identify the most informative textures, i.e. BIMFs, both a support vector machine (SVM) and a random forest (RF) classifier are employed. Classification performance demonstrates the potential of the extracted BIMFs in supporting the decision making of the classifier. With GiT-BEMD, the classification performance improved significantly, which might also be a consequence of a clearer structure of these modes compared to the ones obtained with canonical BEEMD.
Altogether, we strongly believe that the newly proposed metaheuristic GiT-BEMD offers a highly competitive alternative to existing BEMD algorithms and represents a promising technique for blindly decomposing images and extracting textures from them, which may be used for further analysis.
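The EMD variants discussed above (EEMD, BEEMD, GiT-BEMD) all build on the same sifting principle: repeatedly subtract the mean of the upper and lower extrema envelopes until an intrinsic mode function (IMF) is isolated, then remove it and continue on the residue. A toy one-dimensional sketch with linear envelopes and a fixed number of sifting passes (real implementations use cubic-spline envelopes and proper stopping criteria; all names here are illustrative):

```python
import numpy as np

def sift_once(x):
    """One sifting pass: subtract the mean of the upper and lower envelopes,
    built by linear interpolation through local extrema."""
    t = np.arange(len(x))
    maxima = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]
    minima = [i for i in range(1, len(x) - 1) if x[i - 1] > x[i] < x[i + 1]]
    if len(maxima) < 2 or len(minima) < 2:
        return None  # too few extrema: treat x as the final residue
    upper = np.interp(t, maxima, x[maxima])
    lower = np.interp(t, minima, x[minima])
    return x - (upper + lower) / 2.0

def emd(x, n_imfs=3, n_sifts=10):
    """Decompose x into a list of IMFs plus a residue (toy version)."""
    imfs, residue = [], np.asarray(x, dtype=float)
    for _ in range(n_imfs):
        h = residue
        for _ in range(n_sifts):
            h_new = sift_once(h)
            if h_new is None:
                return imfs, residue
            h = h_new
        imfs.append(h)
        residue = residue - h
    return imfs, residue
```

By construction the IMFs and residue sum back exactly to the input signal, which is the reconstruction property that all EMD-family methods share.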
Discovering and exploiting hidden pockets at protein interfaces
The number of three-dimensional structures of potential protein targets
available in several platforms such as the Protein Data Bank has increased
constantly over the last decades. This observation should be an additional
motivation to use structure-based methodologies in drug discovery. In recent
years, different success stories of the Structure-Based Drug Design approach have
been reported. However, it has also been shown that a lack of druggability is
one of the major causes of failure in the development of a new compound. The
concept of druggability can be used to describe proteins with the capability to
bind drug-like compounds. A general consensus suggests that around 10% of
the human genome codes for molecular targets that can be considered druggable.
Over the years, protein druggability has been studied with particular
interest in capturing structural descriptors in order to develop computational
methodologies for druggability assessment. Different computational methods
have been published to detect and evaluate potential binding sites at protein
surfaces. The majority of methods currently available are designed to assess
druggability of a static structure. However, it is well known that a few
local rearrangements around the binding site can profoundly influence the affinity
of a small molecule to its target. The use of techniques such as molecular dynamics
(MD) or Metadynamics could be an interesting way to simulate those variations.
The goal of this thesis was to design a new computational approach, called
JEDI, for druggability assessment using a combination of empirical descriptors
that can be collected "on-the-fly" during MD simulations. JEDI is a grid-based
approach able to perform the druggability assessment of a binding site in only a
few seconds, making it one of the fastest methodologies in the field. Agreement
between computed and experimental druggability estimates is comparable to
literature alternatives. In addition, the estimator is less sensitive than existing
methodologies to small structural rearrangements and gives consistent druggability
predictions for similar structures of the same protein. Since the JEDI function is
continuous and differentiable, the druggability potential can be used as a collective
variable to rapidly detect cryptic druggable binding sites in proteins with a variety
of MD free-energy methods.
High Performance Video Stream Analytics System for Object Detection and Classification
Due to recent advances in cameras, cell phones and camcorders, particularly the resolution at which they can record an image/video, large amounts of data are generated daily. This video data is often so large that manually inspecting it for object detection and classification can be time-consuming and error-prone; it therefore requires automated analysis to extract useful
information and meta-data. The automated analysis of video streams also comes with numerous challenges, such as blurred content and variations in illumination conditions and poses. In this thesis we investigate an automated video analytics system which takes into account characteristics from both shallow and deep learning domains. We propose the fusion of features
from the spatial frequency domain to perform highly accurate blur- and illumination-invariant object classification using deep learning networks. We also propose the tuning of hyper-parameters associated with the deep learning network through a mathematical model. The mathematical model used to support hyper-parameter tuning improved the performance of the proposed system during training. The effects of various hyper-parameters on the system's performance are compared, and the parameters that yield the best performance are selected for video object classification. The proposed video analytics system has been demonstrated to process a large number of video streams, and the underlying infrastructure is able to scale based on the number and size of the video stream(s) being processed. Extensive experimentation on publicly available image and video datasets reveals that the proposed system is significantly more accurate and scalable and can be used as a general-purpose video analytics system.
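In its simplest form, the selection step described above (comparing the outcomes of different hyper-parameter settings and keeping the best-performing combination) reduces to an exhaustive search over a parameter grid. A generic sketch follows; this is not the thesis's mathematical tuning model, which is not detailed here, and `train_eval`, `lr` and `batch` are illustrative names:

```python
import itertools

def grid_search(train_eval, grid):
    """Evaluate every hyper-parameter combination and keep the best scorer.

    train_eval: callable mapping a dict of hyper-parameters to a validation score.
    grid: dict mapping parameter name -> list of candidate values.
    """
    best_score, best_params = float("-inf"), None
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_eval(params)  # train + validate under these settings
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score
```

In practice the expensive part is `train_eval` (a full training run per combination), which is why model-guided tuning, as proposed in the thesis, is attractive over brute-force search.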
Machine Learning Methods with Noisy, Incomplete or Small Datasets
In many machine learning applications, available datasets are sometimes incomplete, noisy or affected by artifacts. In supervised scenarios, label information may be of low quality, which includes unbalanced training sets, noisy labels and other problems. Moreover, in practice, it is very common that the available data samples are not enough to derive useful supervised or unsupervised classifiers. All these issues are commonly referred to as the low-quality data problem. This book collects novel contributions on machine learning methods for low-quality datasets, to contribute to the dissemination of new ideas to solve this challenging problem and to provide clear examples of application in real scenarios.
The Joint-Decision Trap: Lessons from German Federalism and European Integration
Compared to early expectations, the process of European integration has resulted in a paradox: frustration without disintegration and resilience without progress. The article attempts to develop an institutional explanation for this paradox by exploring the similarities between joint decision making ("Politikverflechtung") in German federalism and decision making in the European Community. In both cases, it is argued, the fact that member governments are directly participating in central decisions, and that there is a de facto requirement of unanimous decisions, will systematically generate sub-optimal policy outcomes unless a "problem-solving" (as opposed to a "bargaining") style of decision making can be maintained. In fact, the "bargaining" style has prevailed in both cases. The resulting pathologies of public policy have, however, not resulted either in successful strategies for the further Europeanization of policy responsibilities or in the disintegration of unsatisfactory joint-decision systems. This "joint-decision trap" is explained by reference to the utility functions of member governments for whom present institutional arrangements, in spite of their sub-optimal policy output, seem to represent "local optima" when compared to either greater centralization or disintegration.
The contextualization of the gospel of Jesus Christ among Bektashi Albanians