
    Automated Artifact Removal and Detection of Mild Cognitive Impairment from Single Channel Electroencephalography Signals for Real-Time Implementations on Wearables

    Electroencephalography (EEG) is a technique for recording the asynchronous activity of neurons in the brain with non-invasive scalp electrodes. The EEG signal is widely studied to evaluate cognitive state and to detect brain disorders such as epilepsy, dementia, coma, and autism spectrum disorder (ASD). In this dissertation, the EEG signal is studied for early detection of Mild Cognitive Impairment (MCI), the preliminary stage of dementia that may ultimately lead to Alzheimer's disease (AD) in elderly people. Our goal is to develop a minimalistic MCI detection system that could be integrated into wearable sensors. This contribution has three major aspects: 1) cleaning the EEG signal, 2) detecting MCI, and 3) predicting the severity of MCI, all using data obtained from a single-channel EEG electrode.

    Artifacts such as eye-blink activity can corrupt EEG signals. We investigate unsupervised and effective removal of the ocular artifact (OA) from single-channel streaming raw EEG data. Wavelet transform (WT) decomposition was systematically evaluated for its effectiveness at OA removal in a single-channel EEG system. The Discrete Wavelet Transform (DWT) and the Stationary Wavelet Transform (SWT) are studied with four WT basis functions: haar, coif3, sym3, and bior4.4. The performance of the artifact removal algorithm was evaluated by the correlation coefficient (CC), mutual information (MI), signal-to-artifact ratio (SAR), normalized mean square error (NMSE), and time-frequency analysis. It is demonstrated that the WT can be an effective tool for unsupervised OA removal from single-channel EEG data in real-time applications.

    For MCI detection from the cleaned EEG data, we collected scalp EEG while the subjects were stimulated with five auditory speech signals. We extracted 590 features from the Event-Related Potential (ERP) of the collected EEG signals, covering time- and spectral-domain characteristics of the response. The top 25 features, ranked by the random forest method, were used in classification models to identify subjects with MCI. The robustness of our model was tested using leave-one-out cross-validation while training the classifiers. The best results (leave-one-out cross-validation accuracy 87.9%, sensitivity 84.8%, specificity 95%, and F-score 85%) were obtained using a support vector machine (SVM) with a radial basis function (RBF) kernel (sigma = 10, cost = 102). Similar performance was also observed with logistic regression (LR), further validating the results. Our results suggest that single-channel EEG could provide a robust biomarker for early detection of MCI.

    We also developed a single-channel EEG-based MCI severity monitoring algorithm that estimates Montreal Cognitive Assessment (MoCA) scores from features extracted from the EEG. We performed multi-trial and single-trial analyses for the severity monitoring algorithm. We studied Multivariate Regression (MR), Ensemble Regression (ER), Support Vector Regression (SVR), and Ridge Regression (RR) for the multi-trial analysis, and deep neural regression for the single-trial analysis. In the multi-trial case, the best result was obtained with ER. In the single-trial analysis, we constructed a time-frequency image from each trial and fed it to a convolutional neural network (CNN). The performance of the regression models was evaluated by RMSE and residual analysis. We obtained the best accuracy with the deep neural regression method.
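    The artifact-removal step lends itself to a short illustration. The following is a minimal sketch in Python, not the dissertation's exact algorithm: it uses PyWavelets to decompose one EEG channel with a DWT, clips unusually large wavelet coefficients (where high-amplitude blink energy concentrates), and reconstructs the signal. The wavelet, decomposition level, and the robust clipping rule are illustrative assumptions.

        import numpy as np
        import pywt

        def suppress_ocular_artifact(eeg, wavelet="haar", level=6, k=4.0):
            """Decompose one EEG channel, clip unusually large wavelet coefficients,
            and reconstruct the signal (illustrative OA suppression, not the thesis method)."""
            coeffs = pywt.wavedec(eeg, wavelet, level=level)
            clipped = []
            for c in coeffs:
                thr = k * np.median(np.abs(c)) / 0.6745  # robust scale, so the artifact itself does not inflate the threshold
                clipped.append(np.clip(c, -thr, thr))
            rec = pywt.waverec(clipped, wavelet)
            return rec[: len(eeg)]  # waverec may pad by one sample

        # Toy usage: a slow, high-amplitude "blink" added to background EEG-like noise.
        fs = 256
        t = np.arange(0, 4, 1 / fs)
        raw = np.random.randn(t.size) + 60 * np.exp(-((t - 2.0) ** 2) / 0.02)
        clean = suppress_ocular_artifact(raw)

    In a streaming single-channel setting, the same operation would be applied window by window, which is one reason an unsupervised, reference-free thresholding rule is attractive for wearable implementations.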

    A Multiscale Approach for Statistical Characterization of Functional Images

    Increasingly, scientific studies yield functional image data, in which the observed data consist of sets of curves recorded on the pixels of the image. Examples include temporal brain response intensities measured by fMRI and NMR frequency spectra measured at each pixel. This article presents a new methodology for improving the characterization of pixels in functional imaging, formulated as a spatial curve clustering problem. Our method operates on curves as units. It is nonparametric and involves multiple stages: (i) wavelet thresholding, aggregation, and Neyman truncation to effectively reduce dimensionality; (ii) clustering based on an extended EM algorithm; and (iii) multiscale penalized dyadic partitioning to create a spatial segmentation. We motivate the different stages with theoretical considerations and arguments, and illustrate the overall procedure on simulated and real datasets. Our method appears to offer substantial improvements over monoscale pixel-wise methods. An appendix giving theoretical justifications of the methodology, along with computer code, documentation, and the dataset, is available in the online supplements.
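    To make stages (i) and (ii) concrete, here is a minimal Python sketch of a simplified version: each pixel curve is wavelet-thresholded to reduce dimensionality, and the resulting coefficient vectors are clustered with a Gaussian mixture model fitted by EM. The universal threshold, the sym3 wavelet, and the plain GMM stand in for the article's aggregation, Neyman truncation, and extended EM; the multiscale spatial partitioning of stage (iii) is omitted entirely.

        import numpy as np
        import pywt
        from sklearn.mixture import GaussianMixture

        def wavelet_features(curve, wavelet="sym3", level=4):
            """Soft-threshold the wavelet coefficients of one pixel curve."""
            coeffs = pywt.wavedec(curve, wavelet, level=level)
            flat = np.concatenate(coeffs)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise scale from finest details
            thr = sigma * np.sqrt(2 * np.log(flat.size))      # universal threshold
            return pywt.threshold(flat, thr, mode="soft")

        # curves: one time series per pixel, shape (n_pixels, n_timepoints).
        rng = np.random.default_rng(0)
        curves = rng.standard_normal((500, 128))
        curves[:250] += np.sin(np.linspace(0, 6 * np.pi, 128))  # half the pixels "respond"

        X = np.vstack([wavelet_features(c) for c in curves])
        labels = GaussianMixture(n_components=2, covariance_type="diag",
                                 random_state=0).fit_predict(X)

    Thresholding before clustering is what keeps the EM step tractable: most coefficients of a smooth response are shrunk to zero, so the mixture model operates on an effectively low-dimensional representation.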

    Model Based Principal Component Analysis with Application to Functional Magnetic Resonance Imaging.

    Functional Magnetic Resonance Imaging (fMRI) has allowed a better understanding of human brain organization and function by making it possible to record either autonomous or stimulus-induced brain activity. After appropriate preprocessing, fMRI produces a large spatio-temporal data set, which requires sophisticated signal processing. The aim of the signal processing is usually to produce spatial maps of statistics that capture the effects of interest, e.g., brain activation, time delay between stimulation and activation, or connectivity between brain regions. Two broad signal processing approaches have been pursued: univoxel methods and multivoxel methods. This thesis focuses on multivoxel methods, reviews Principal Component Analysis (PCA) and other closely related methods, and describes their advantages and disadvantages in fMRI research. These existing multivoxel methods have in common that they are exploratory, i.e., they are not based on a statistical model. A crucial observation, central to this thesis, is that there is in fact an underlying model behind PCA, which we call noisy PCA (nPCA). In the main part of this thesis, we use nPCA to develop methods that solve three important problems in fMRI. 1) We introduce a novel nPCA-based spatio-temporal model that combines the standard univoxel regression model with nPCA and automatically recognizes the temporal smoothness of the fMRI data. Furthermore, unlike standard univoxel methods, it can handle non-stationary noise. 2) We introduce a novel sparse variable PCA (svPCA) method that automatically excludes whole voxel timeseries and yields sparse eigenimages. This is achieved by optimizing a novel nonlinear penalized likelihood function. An iterative estimation algorithm is proposed that makes use of geodesic descent methods. 3) We introduce a novel method based on Stein's Unbiased Risk Estimator (SURE) and Random Matrix Theory (RMT) to select the number of principal components for the increasingly important case where the number of observations is of similar order to the number of variables.

    Ph.D. Electrical Engineering: Systems. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/57638/2/mulfarss_1.pd
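    The third problem (choosing the number of components when observations and variables are of similar order) can be illustrated with a much simpler rule than the thesis's SURE-plus-RMT criterion. The Python sketch below shows only the RMT ingredient: it counts sample covariance eigenvalues above the Marchenko-Pastur bulk edge, assuming the noise variance is known (it is here, because the data are simulated). Function and variable names are illustrative assumptions.

        import numpy as np

        def n_components_mp(X, sigma2):
            """Count sample covariance eigenvalues above the Marchenko-Pastur
            bulk edge sigma2 * (1 + sqrt(p/n))**2 (a simple RMT rank rule)."""
            n, p = X.shape
            Xc = X - X.mean(axis=0)
            evals = np.linalg.eigvalsh(Xc.T @ Xc / n)
            edge = sigma2 * (1 + np.sqrt(p / n)) ** 2
            return int(np.sum(evals > edge))

        # Toy usage: 3 strong components in unit-variance noise, p comparable to n.
        rng = np.random.default_rng(1)
        n, p, k = 400, 200, 3
        X = (rng.standard_normal((n, k)) @ (5 * rng.standard_normal((k, p)))
             + rng.standard_normal((n, p)))
        print(n_components_mp(X, sigma2=1.0))   # typically prints 3

    In real fMRI data the noise variance is unknown and must itself be estimated, which is part of what motivates combining SURE with RMT in the thesis.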

    Cloud removal from optical remote sensing images

    Optical remote sensing images used for Earth surface observation are constantly contaminated by cloud cover. Clouds dynamically affect the applications of optical data and increase the difficulty of image analysis. Therefore, cloud is considered one of the sources of noise in optical image data, and its detection and removal must be performed as a pre-processing step in most remote sensing image processing applications. This thesis investigates current cloud detection and removal algorithms and develops three new cloud removal methods to improve the accuracy of the results.

    A thin cloud removal method based on signal transmission principles and spectral mixture analysis (ST-SMA) for pixel correction is developed in the first contribution. This method considers not only the additive reflectance from the clouds but also the energy absorption that occurs when solar radiation passes through them. Data correction is achieved by subtracting the product of the cloud endmember signature and the cloud abundance and rescaling according to the cloud thickness. The proposed method requires no meteorological data and does not rely on reference images. The experimental results indicate that the proposed approach is able to perform effective removal of thin clouds in different scenarios.

    In the second study, an effective cloud removal method is proposed that takes advantage of the noise-adjusted principal components transform (CR-NAPCT). It is found that, when spatial correlation is considered, the signal-to-noise ratio (S/N) of cloud data is higher than that of data without cloud contamination, so the cloud contribution appears in the first NAPCT component (NAPC1). An inverse transformation with a modified first component is then applied to generate the cloud-free image. The effectiveness of the proposed method is assessed through experiments on simulated and real data, comparing the quantitative and qualitative performance of the proposed approach.

    The third study of this thesis deals with both cloud and cloud shadow problems with the aid of an auxiliary image acquired under clear-sky conditions. A new cloud removal approach called multitemporal dictionary learning (MDL) is proposed. Dictionaries of the cloudy areas (target data) and the cloud-free areas (reference data) are learned separately in the spectral domain using an online dictionary learning method. The removal process is conducted using the coefficients from the reference image and the dictionary learned from the target image. This method is able to recover data contaminated by thin and thick clouds or cloud shadows. The experimental results show that the MDL method is effective from both quantitative and qualitative viewpoints.
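    The correction step of the first contribution can be sketched in a few lines of Python. The code below is one plausible reading of that step, not the thesis's exact ST-SMA formulation: it assumes a per-pixel linear mixture observed = (1 - a) * ground + a * cloud, subtracts the abundance-weighted cloud endmember, and rescales by the remaining non-cloud fraction. The mixture model, the clipping to [0, 1], and all names are assumptions for illustration.

        import numpy as np

        def thin_cloud_correct(obs, cloud_sig, abundance, eps=1e-6):
            """Subtract the abundance-weighted cloud endmember and rescale by the
            non-cloud fraction (an illustrative reading of the ST-SMA correction).

            obs       : (rows, cols, bands) observed reflectance
            cloud_sig : (bands,) cloud endmember signature
            abundance : (rows, cols) per-pixel cloud abundance in [0, 1)
            """
            a = abundance[..., None]                           # broadcast over bands
            corrected = (obs - a * cloud_sig) / np.maximum(1.0 - a, eps)
            return np.clip(corrected, 0.0, 1.0)                # keep reflectance physical

        # Toy usage with synthetic reflectance data.
        rng = np.random.default_rng(2)
        ground = rng.uniform(0.05, 0.4, size=(64, 64, 6))
        cloud_sig = np.full(6, 0.9)
        abund = np.clip(rng.normal(0.3, 0.1, size=(64, 64)), 0.0, 0.8)
        observed = (1 - abund[..., None]) * ground + abund[..., None] * cloud_sig
        recovered = thin_cloud_correct(observed, cloud_sig, abund)   # ~= ground

    Under this reading, the rescaling by (1 - a) is what compensates for the attenuation of ground-leaving radiance described in the abstract; a pixel fully covered by cloud (a close to 1) cannot be recovered this way, which is why the thicker-cloud case is handled by the dictionary-learning method instead.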

    l0 Sparse signal processing and model selection with applications

    Sparse signal processing has far-reaching applications, including compressed sensing, media compression/denoising/deblurring, microarray analysis, and medical imaging. The main reason for its popularity is that many signals have a sparse representation, provided the basis is suitably selected. However, the difficulty lies in developing an efficient method of recovering such a representation. To this end, two efficient sparse signal recovery algorithms are developed in the first part of this thesis. The first, the L0LS-CD (l0 penalized least squares via cyclic descent) algorithm, is based on direct minimization of the l0 norm via cyclic descent. The second, the QC (quadratic concave) algorithm, minimizes smooth approximations of sparsity measures, including those of the l0 norm, via the majorization-minimization (MM) technique. The L0LS-CD algorithm is developed further by extending it to multivariate (V-L0LS-CD, vector L0LS-CD) and group (gL0LS-CD, group L0LS-CD) regression variants. Computational speed-ups to the basic cyclic descent algorithm are discussed, and a greedy version of L0LS-CD is developed. The stability of these algorithms is analyzed, and the impact of the penalty parameter and of proper initialization on performance is highlighted. A suitable method for performance comparison of sparse approximating algorithms in the presence of noise is established. Simulations compare L0LS-CD and V-L0LS-CD with a range of alternatives on under-determined as well as over-determined systems. The QC algorithm is applicable to a class of penalties that are neither convex nor concave but have what we call the quadratic concave property. Convergence proofs for this algorithm are presented, and it is compared with the Newton algorithm, the concave-convex (CC) procedure, and the class of proximity algorithms. Simulations focus on the smooth approximations of the l0 norm and compare them with other l0 denoising algorithms.

    Next, two applications of sparse modeling are considered. In the first application, the L0LS-CD algorithm is extended to recover a sparse transfer function in the presence of coloured noise. The second uses gL0LS-CD to recover the topology of a sparsely connected network of dynamic systems. Both applications use Laguerre basis functions for model expansion.

    The role of model selection in sparse signal processing is widely neglected in the literature. The tuning/penalty parameter of a sparse approximating problem should be selected using a model selection criterion that minimizes a desired discrepancy measure. Compared to the commonly used model selection methods, the SURE (Stein's unbiased risk estimator) approach stands out as one that does not suffer from the limitations of the others. Most model selection criteria are developed based on signal or prediction mean squared error. The last section of this thesis instead develops a SURE criterion for the parameter mean squared error and applies the result to the l1-penalized least squares problem with grouped variables. Simulations based on topology identification of a sparse network are presented to illustrate the method and compare it with alternative model selection criteria.
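    As a concrete illustration of the cyclic-descent idea behind L0LS-CD, here is a minimal Python sketch of coordinate descent on 0.5*||y - X b||^2 + lam*||b||_0: each coordinate is set to its least-squares value when that beats paying the l0 penalty for keeping it nonzero, and to zero otherwise. It omits the speed-ups, greedy variant, multivariate/group extensions, and initialization strategies discussed in the thesis, and all parameter choices are illustrative.

        import numpy as np

        def l0ls_cd(X, y, lam, n_iter=100, tol=1e-8):
            """Cyclic coordinate descent for 0.5*||y - X b||^2 + lam*||b||_0."""
            n, p = X.shape
            b = np.zeros(p)
            col_sq = (X ** 2).sum(axis=0)
            r = y.copy()                                # residual y - X b
            for _ in range(n_iter):
                b_old = b.copy()
                for j in range(p):
                    if col_sq[j] == 0:
                        continue
                    r_j = r + X[:, j] * b[j]            # residual with coordinate j removed
                    z = X[:, j] @ r_j / col_sq[j]       # unpenalized coordinate minimizer
                    # keep the coordinate only if its fit improvement exceeds the l0 penalty
                    b_new = z if 0.5 * col_sq[j] * z * z > lam else 0.0
                    r = r_j - X[:, j] * b_new
                    b[j] = b_new
                if np.max(np.abs(b - b_old)) < tol:
                    break
            return b

        # Toy usage: recover a 5-sparse coefficient vector from noisy measurements.
        rng = np.random.default_rng(3)
        n, p = 100, 200
        X = rng.standard_normal((n, p))
        b_true = np.zeros(p)
        b_true[:5] = [3, -2, 4, 1.5, -3]
        y = X @ b_true + 0.1 * rng.standard_normal(n)
        b_hat = l0ls_cd(X, y, lam=0.5)

    The keep-or-kill test follows directly from comparing the objective at the least-squares coordinate value with the objective at zero; the penalty parameter lam controls that trade-off, which is exactly the quantity the model selection criteria discussed above are meant to choose.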