Artificial intelligence system for continuous affect estimation from naturalistic human expressions
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
The analysis and automatic estimation of affect from human expressions has been acknowledged as an active research topic in the computer vision community. Most reported affect recognition systems, however, only consider subjects performing well-defined acted expressions under very controlled conditions, so they are not robust enough for real-life recognition tasks involving subject variation, acoustic surroundings and illumination changes. In this thesis, an artificial intelligence system is proposed to continuously estimate affective behaviour (represented along a continuum, e.g. from -1 to +1) in terms of latent dimensions (e.g. arousal and valence) from naturalistic human expressions. To tackle these issues, both feature representation and machine learning strategies are addressed.
In feature representation, human expression is captured through several modalities: audio, video, physiological signals and text. Hand-crafted features are extracted from each modality per frame, in order to match the consecutive affect labels. The extracted features, however, may be missing information owing to factors such as background noise or lighting conditions. The Haar wavelet transform is therefore employed to determine whether a noise-cancellation mechanism in feature space should be considered in the design of the affect estimation system. Beyond hand-crafted features, deep learning features are also analysed layer-wise, at both the convolutional and fully connected layers. Convolutional neural networks such as AlexNet, VGGFace and ResNet are selected as the deep learning architectures for feature extraction from facial expression images. A multimodal fusion scheme is then applied, fusing deep learning and hand-crafted features together to improve performance.
In machine learning strategies, a two-stage regression approach is introduced. In the first stage, baseline regression methods such as Support Vector Regression are applied to estimate each affect dimension at each time step. In the second stage, a subsequent model, such as a Time Delay Neural Network, a Long Short-Term Memory network or a Kalman filter, models the temporal relationships between consecutive estimates of each affect dimension. In doing so, the temporal information employed by the subsequent model is not biased by the high variability present in consecutive frames, and at the same time the model can exploit the slowly changing emotional dynamics more efficiently. Following the two-stage regression approach for unimodal affect analysis, the fusion of information from different modalities is elaborated. Continuous emotion recognition in the wild is leveraged by investigating mathematical models for each emotion dimension: Linear Regression, Exponent Weighted Decision Fusion and Multi-Gene Genetic Programming are implemented to quantify the relationship between the modalities.
In summary, the research presented in this thesis offers a principled approach to automatically and continuously estimating affect values from naturalistic human expressions. The proposed system, which comprises feature smoothing, deep learning features, a two-stage regression framework and fusion between modalities using mathematical equations, is demonstrated. It provides a strong basis for the development of artificial intelligence systems for continuous affect estimation and, more broadly, for building real-time emotion recognition systems for human-computer interaction.
Majlis Amanah Rakyat (MARA), Malaysia
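The two-stage regression described above (a frame-wise baseline regressor, then a temporal model over its outputs) can be sketched minimally. This is an illustrative simplification, assuming numpy only: ridge regression stands in for the SVR baseline, and a scalar Kalman filter for the second-stage temporal model; all parameter values are invented for the example.

```python
import numpy as np

def stage1_baseline(X, y, lam=1.0):
    """Stage 1: frame-wise affect estimate. Ridge regression stands in for
    the Support Vector Regression baseline (illustrative simplification)."""
    d = X.shape[1]
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    return X @ w

def stage2_kalman(z, q=1e-2, r=1e-1):
    """Stage 2: scalar Kalman filter over the stage-1 estimates, modelling
    the slowly varying emotional dynamics between consecutive frames."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    out[0] = x
    for t in range(1, len(z)):
        p += q                   # predict: random-walk state model
        k = p / (p + r)          # Kalman gain
        x = x + k * (z[t] - x)   # update with the noisy frame-wise estimate
        p *= 1.0 - k
        out[t] = x
    return out
```

Because the second stage only sees the stage-1 outputs, its temporal smoothing is not driven directly by the high frame-to-frame variability of the raw features.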
Model Order Reduction
The increasing complexity of models used to predict real-world systems has created a need for algorithms that replace complex models with far simpler ones while preserving the accuracy of the predictions. This three-volume handbook covers methods as well as applications. The third volume focuses on applications in engineering, biomedical engineering, computational physics and computer science.
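One concrete instance of such model reduction is proper orthogonal decomposition (POD), where a reduced basis is taken from the dominant singular vectors of a snapshot matrix. A minimal numpy sketch (illustrative only; the handbook covers far more general projection-based methods):

```python
import numpy as np

def pod_basis(snapshots, r):
    """Proper orthogonal decomposition: the r dominant left singular vectors
    of the snapshot matrix form the reduced basis; the discarded singular
    values bound the projection error."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s

# Toy usage: 200-dimensional states that actually live on a 3-dimensional
# subspace are compressed to 3 coordinates with negligible loss.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 50))
U, s = pod_basis(X, 3)
X_reduced = U.T @ X           # 3 x 50 reduced coordinates
X_recovered = U @ X_reduced   # lift back to the full 200-dimensional space
```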
Dynamic Complexity and Causality Analysis of Scalp EEG for Detection of Cognitive Deficits
This dissertation explores the potential of scalp electroencephalography (EEG) for the detection and evaluation of neurological deficits due to moderate/severe traumatic brain injury (TBI), mild cognitive impairment (MCI), and early Alzheimer’s disease (AD). Neurological disorders often cannot be accurately diagnosed without the use of advanced imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET). Non-quantitative task-based examinations are also used. None of these techniques, however, is typically performed in the primary care setting. Furthermore, the time and expense involved often deter physicians from performing them, leading to potentially worse prognoses for patients.
If feasible, screening for cognitive deficits using scalp EEG would provide a fast, inexpensive, and less invasive alternative for evaluation of TBI post-injury and detection of MCI and early AD. In this work, various measures of EEG complexity and causality are explored as means of detecting cognitive deficits. Complexity measures include event-related Tsallis entropy, multiscale entropy, inter-regional transfer entropy delays, regional variation in common spectral features, and graphical analysis of EEG inter-channel coherence. Causality analysis based on nonlinear state-space reconstruction is explored in case studies of intensive care unit (ICU) signal reconstruction and detection of cognitive deficits via EEG reconstruction models. Significant contributions in this work include: (1) innovative entropy-based methods for analyzing event-related EEG data; (2) recommendations regarding differences in MCI/AD of common spectral and complexity features for different scalp regions and protocol conditions; (3) development of novel artificial neural network techniques for multivariate signal reconstruction; and (4) novel EEG biomarkers for detection of dementia.
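Of the complexity measures listed, multiscale entropy is straightforward to sketch: the signal is coarse-grained at successive scales and sample entropy is computed at each scale. A minimal numpy illustration (the parameter choices m = 2 and tolerance r = 0.2·std are conventional defaults, not necessarily the dissertation's):

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy: negative log of the conditional probability that
    length-m template matches (Chebyshev distance <= r) remain matches at
    length m+1."""
    x = np.asarray(x, float)
    r = r_frac * x.std()
    def match_count(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        c = 0
        for i in range(len(templ) - 1):   # count matching template pairs
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += int(np.sum(d <= r))
        return c
    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 3)):
    """Coarse-grain the series at each scale (non-overlapping means) and
    compute the sample entropy of each coarse-grained series."""
    x = np.asarray(x, float)
    out = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out
```

A regular (e.g. sinusoidal) signal yields much lower sample entropy than white noise, which is the basic property these complexity markers exploit.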
Essays on the nonlinear and nonstochastic nature of stock market data
The nature and structure of stock-market price dynamics is an area of ongoing and rigorous scientific debate. For almost three decades, most emphasis has been given to upholding the concepts of Market Efficiency and rational investment behaviour. Such an approach has favoured the development of numerous linear and nonlinear models, mainly of stochastic foundations. Advances in mathematics have shown that nonlinear deterministic processes, i.e. "chaos", can produce sequences that appear random to linear statistical techniques. Until recently, investment finance has been a science based on linearity and stochasticity. Hence it is important that studies of Market Efficiency include investigations of chaotic determinism and power laws. As far as chaos is concerned, the research results are rather mixed or inconclusive, and fraught with controversy. This inconclusiveness is attributed to two things: the nature of stock market time series, which are highly volatile and contaminated with a substantial amount of noise of largely unknown structure, and the lack of appropriate robust statistical testing procedures. In order to overcome such difficulties, this thesis shows empirically, and for the first time, how novel techniques from the recent chaotic and signal analysis literature can be combined under a univariate time series analysis framework. Three basic methodologies are investigated: recurrence analysis, surrogate data and wavelet transforms. Recurrence analysis is used to reveal qualitative and quantitative evidence of nonlinearity and nonstochasticity for a number of stock markets. It is then demonstrated how surrogate data can be simulated, under a statistical hypothesis testing framework, to provide similar evidence. Finally, it is shown how wavelet transforms can be applied to reveal various salient features of the market data and provide a platform for nonparametric regression and denoising.
The results indicate that, without invoking any parametric model-based assumptions, one can readily deduce that there is more to the data than linearity and stochastic randomness. Moreover, substantial evidence of recurrent patterns and aperiodicities is discovered, which can be attributed to chaotic dynamics. These results are therefore very consistent with existing research indicating some types of nonlinear dependence in financial data. In conclusion, the value of this thesis lies in its contribution to the overall evidence on Market Efficiency and chaotic determinism in financial markets. The main implication is that the theory of equilibrium pricing in financial markets may need reconsideration in order to accommodate the structures revealed.
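The surrogate-data idea can be made concrete: phase-randomised surrogates share the original series' power spectrum (hence all its linear properties) while destroying any nonlinear structure, so a nonlinear statistic on the data can be compared against its distribution over surrogates. A minimal numpy sketch; the discriminating statistic used here, time-reversal asymmetry, is one common choice and not necessarily the thesis's:

```python
import numpy as np

def phase_surrogate(x, rng):
    """Phase-randomised surrogate: identical power spectrum (linear
    properties preserved), phases scrambled (nonlinear structure destroyed)."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
    new = np.abs(X) * np.exp(1j * phases)
    new[0] = X[0]                 # keep the DC bin: preserves the mean
    if len(x) % 2 == 0:
        new[-1] = X[-1]           # keep the Nyquist bin real
    return np.fft.irfft(new, n=len(x))

def time_asymmetry(x):
    """Simple nonlinear statistic: skewness of first differences, which is
    zero in distribution for linear Gaussian processes."""
    d = np.diff(x)
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5
```

In a full test one would generate, say, 99 surrogates and reject the linear-stochastic null hypothesis if the statistic computed on the real series falls outside the surrogate distribution.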
Advanced robust non-invasive foetal heart detection techniques during active labour using one pair of transabdominal electrodes
The thesis proposes and evaluates three state-of-the-art signal processing techniques to detect fetal heartbeats within each maternal cardiac cycle, during labour contractions, using only a pair of transabdominal electrodes. The first and second techniques are the structured third-order cumulant-slice template matching and the bispectral-contours template matching for fetal QRS identification, respectively. The third technique is based on a modified and appropriately weighted spectral multiple signal classification (MUSIC) with an incorporated covariance matrix for uterine-contraction noise-like interfering signals that are themselves contaminated with noise. Essentially, two modifications to standard MUSIC have been developed in order to enhance the performance of the spectral estimator in our applied work. The first modification introduces an optimised weighting function into the segmented ECG covariance matrix, and is chiefly aimed at enhancing the fetal QRS major spectral peak, which occurs at around 30 Hz, against the maternal QRS major spectral peak, usually occurring around 17 Hz, and all other noise contributions. An additional, optional pseudo-bispectral enhancement to sharpen the maternal and fetal spectral peaks, in particular when the maternal and fetal R-waves are temporally coincident, has also been achieved. The second modification to spectral MUSIC is the removal of the unjustified assumption that only white Gaussian noise is present, and the incorporation of the actual measured labour uterine-contraction covariance matrix in a reconfigured subspace analysis. This inevitably leads to the generalised eigenvector-eigenvalue decomposition of modern signal processing. The result is now coined the modified, interference-incorporated pseudo-spectral MUSIC. The first and second techniques mentioned above are higher-order statistics (HOS) based and hybrid, involving both signal processing and neural network (NN) classifiers. The third technique is second-order statistics (SOS) based. In all techniques, the removal of signal non-linearity with the aid of non-linear Volterra synthesisers plays a crucial part in the fetal detection integrity.
Accurately assessed fetal heart classification rates as high as 95% have been achieved during labour, thus helping to provide non-invasive transparency to fetal intrapartum welfare. The performance analysis and evaluation involved more than 30 critical cases classified as “fetal under stress in labour”, recorded in a London hospital database using both transabdominal ECG electrodes and fetal scalp electrodes. The latter facilitate detection of the instantaneous fetal heart rate, which is then used as the Reference Fetal Heart Rate in assessing the classification rate of each of the above techniques. It will be shown that the fetal heartbeats are completely masked by uterine activity and noise artefacts in all the recorded transabdominal maternal ECG signals. The fetal scalp electrode was therefore deemed necessary to provide the most accurate measure of fetal heart functionality (from the hospital viewpoint) and for the assessment of the three non-invasive techniques presented in this thesis. The techniques may also be used during gestation, as early as 10 weeks.
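The core of spectral MUSIC, the starting point before the thesis's weighting and interference-covariance modifications, is an eigendecomposition of the signal covariance matrix into signal and noise subspaces, with a pseudospectrum that peaks where candidate steering vectors are (nearly) orthogonal to the noise subspace. A minimal numpy sketch of the standard, unweighted, white-noise case (segment length and frequency grid are invented for the example):

```python
import numpy as np

def music_pseudospectrum(x, n_sines, freqs, fs, m=40):
    """Standard spectral MUSIC sketch: sample covariance of overlapping
    length-m segments, eigendecomposition, then a scan of steering vectors
    against the noise subspace. The thesis's modified version additionally
    weights the covariance and replaces the white-noise assumption with a
    measured interference covariance (generalised eigendecomposition)."""
    segs = np.lib.stride_tricks.sliding_window_view(x, m)
    R = segs.T @ segs / len(segs)             # sample covariance (m x m)
    _, vecs = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = vecs[:, :m - 2 * n_sines]            # noise subspace (2 dims per real sinusoid)
    n = np.arange(m)
    spectrum = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        a = np.exp(2j * np.pi * f * n / fs)   # steering vector at frequency f
        spectrum[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return spectrum
```

For a synthetic mixture of 17 Hz and 30 Hz sinusoids in noise (the maternal and fetal QRS spectral peaks discussed above), the pseudospectrum exhibits sharp maxima at those two frequencies.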
Reconstruction of electric fields and source distributions in EEG brain imaging
In this thesis, three different approaches are developed for the estimation of focal brain activity using EEG measurements. The proposed approaches have been tested and found feasible using simulated data.
First, we develop a robust solver for the recovery of focal dipole sources. The solver uses a weighted dipole strength penalty term (also called weighted L1,2 norm) as prior information in order to ensure that the sources are sparse and focal, and that both the source orientation and depth bias are reduced. The solver is based on the truncated Newton interior point method combined with a logarithmic barrier method for the approximation of the penalty term. In addition, we use a Bayesian framework to derive the depth weights in the prior that are used to reduce the tendency of the solver to favor superficial sources.
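The flavour of this solver can be illustrated with a proximal-gradient (ISTA) stand-in for the truncated-Newton interior-point method: minimise ||L s - b||² + λ Σᵢ wᵢ|sᵢ|, where the depth weights wᵢ counteract the bias toward superficial sources. All names, dimensions and parameters below are illustrative, not the thesis's:

```python
import numpy as np

def weighted_l1_solver(L, b, w, lam=0.1, iters=2000):
    """ISTA for the weighted-L1-penalised least-squares problem
    min_s ||L s - b||^2 + lam * sum_i w_i |s_i|.
    (Proximal-gradient stand-in for the thesis's truncated-Newton
    interior-point solver; w plays the role of the Bayesian depth weights.)"""
    step = 1.0 / (2.0 * np.linalg.norm(L, 2) ** 2)   # 1 / Lipschitz constant
    s = np.zeros(L.shape[1])
    for _ in range(iters):
        g = 2.0 * L.T @ (L @ s - b)                  # gradient of the data term
        z = s - step * g                             # gradient step
        thr = lam * w * step                         # per-coefficient threshold
        s = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)  # soft threshold
    return s
```

The soft-thresholding step sets small coefficients exactly to zero, which is what makes the recovered source distribution sparse and focal.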
In the second approach, vector field tomography (VFT) is used for the estimation of underlying electric fields inside the brain from external EEG measurements. The electric field is reconstructed using a set of line integrals. This is the first time that VFT has been used for the recovery of fields when the dipole source lies inside the domain of reconstruction. The benefit of this approach is that we do not need a mathematical model for the sources. The test cases indicated that the approach can accurately localize the source activity.
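To see the flavour of reconstructing a field from line integrals, consider the degenerate case of a uniform 2-D field: the work integral along a unit-length straight line with direction d reduces to F·d, so a few non-parallel line measurements determine F by least squares. This is a toy sketch only (the thesis handles spatially varying fields with the source inside the reconstruction domain):

```python
import numpy as np

def recover_uniform_field(directions, line_integrals):
    """Least-squares recovery of a uniform 2-D field F from work integrals
    F . d along unit-length lines with direction vectors d (toy VFT case)."""
    D = np.asarray(directions, float)       # one unit direction per row
    y = np.asarray(line_integrals, float)   # measured line integrals
    F, *_ = np.linalg.lstsq(D, y, rcond=None)
    return F
```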
In the last part of the thesis, we show that, by using the Bayesian approximation error approach (AEA), precise knowledge of the tissue conductivities and head geometry is not always needed. We deliberately use a coarse head model and take the typical variations in the head geometry and tissue conductivities into account statistically in the inverse model. We demonstrate that the AEA results are comparable to those obtained with an accurate head model.
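The AEA can be sketched in a linear toy problem: the discrepancy between an accurate forward model and a deliberately coarse one is sampled over the prior, and its mean and covariance are folded into the noise model of the inverse problem. A numpy illustration in which all dimensions, noise levels and the Gaussian prior are invented for the example:

```python
import numpy as np

def aea_estimate(A_coarse, y, mu_e, C_e, noise_var=1e-4):
    """MAP estimate under a unit Gaussian prior, with the approximation
    error treated as extra noise with mean mu_e and covariance C_e."""
    C = C_e + noise_var * np.eye(len(y))
    Ci = np.linalg.inv(C)
    return np.linalg.solve(A_coarse.T @ Ci @ A_coarse + np.eye(A_coarse.shape[1]),
                           A_coarse.T @ Ci @ (y - mu_e))

def naive_estimate(A_coarse, y, noise_var=1e-4):
    """Same estimator, but ignoring the model error entirely."""
    n, m = A_coarse.shape
    return np.linalg.solve(A_coarse.T @ A_coarse / noise_var + np.eye(m),
                           A_coarse.T @ y / noise_var)
```

Averaged over random instances, accounting for the approximation-error statistics markedly reduces the estimation error compared with pretending the coarse model is exact.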
Registration Methods for Quantitative Imaging
At the core of most image registration problems is determining a spatial transformation that relates the physical coordinates of two or more images. Registration methods have become ubiquitous in many quantitative imaging applications and represent an essential step in many biomedical and bioengineering applications. For example, image registration is a necessary step for removing motion- and distortion-related artifacts in serial images, for studying the variation of biological tissue properties, such as shape and composition, across different populations, and for many other applications. Here, fully automatic intensity-based methods for image registration are reviewed within a global energy minimization framework. A linear, shift-invariant, stochastic model for the image formation process is used to describe several important aspects of typical implementations of image registration methods. In particular, we show that, due to the stochastic nature of the image formation process, most methods for automatic image registration produce answers biased towards 'blurred' images. In addition, we show how the image approximation and interpolation procedures necessary to compute the registered images can have undesirable effects on subsequent quantitative image analysis methods. We describe the exact sources of such artifacts and propose methods through which they can be mitigated. The newly proposed methodology is tested using both simulated and real image data. Case studies using three-dimensional diffusion-weighted magnetic resonance images, diffusion tensor images, and two-dimensional optical images are presented. Though the specific examples shown relate exclusively to biomedical imaging and biomedical engineering, the methods described are general and should be applicable to a wide variety of imaging problems.
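The global-energy-minimisation viewpoint is easy to illustrate in its simplest form: exhaustive search for the integer 1-D translation minimising a sum-of-squared-differences (SSD) energy. Integer shifts also sidestep the interpolation step that, as argued above, can bias results toward blurred images (numpy sketch, illustrative only):

```python
import numpy as np

def register_translation(fixed, moving, max_shift=10):
    """Exhaustive search for the integer circular shift minimising the SSD
    energy, the simplest instance of intensity-based registration by global
    energy minimisation (no interpolation needed for integer shifts)."""
    best, best_e = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        e = np.sum((fixed - np.roll(moving, s)) ** 2)  # SSD energy at shift s
        if e < best_e:
            best, best_e = s, e
    return best
```

Subpixel registration would replace the integer grid with a continuous optimiser and an interpolation model, which is exactly where the biases discussed above enter.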