
    ROC evaluation of statistical wavelet-based analysis of brain activation in [15O]-H2O PET scans

    This paper presents and evaluates a wavelet-based statistical analysis of PET images for the detection of brain activation areas. Brain regions showing significant activations were obtained by performing Student's t tests in the wavelet domain, reconstructing the final image from only those wavelet coefficients that passed the statistical test at a given significance level, and discarding artifacts introduced during the reconstruction process. Using Receiver Operating Characteristic (ROC) curves, we have compared this statistical analysis in the wavelet domain to the conventional image-domain Statistical Parametric Mapping (SPM) method. To obtain an accurate assessment of sensitivity and specificity, we have simulated realistic single-subject [15O]-H2O PET studies with different hyperactivation levels of the thalamic region. The results obtained from an ROC analysis show that the wavelet approach outperforms conventional SPM in identifying brain activation patterns. Using the wavelet method, the activation areas detected were closer in size and shape to the region actually activated in the reference image.
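The core of the wavelet approach, stripped to its essentials, can be sketched as: transform each scan, t-test every coefficient across scans, zero the coefficients that fail the test, and reconstruct. Below is a minimal numpy-only illustration on 1-D profiles using a single-level Haar transform and a fixed critical value `alpha_t`; the paper's actual wavelet basis, 3-D setting, significance calibration, and artifact-rejection step are not reproduced here.

```python
import numpy as np

def haar_1d(x):
    """Single-level 1-D Haar transform: (approximation, detail) coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def inv_haar_1d(a, d):
    """Inverse of haar_1d: perfectly reconstructs the input signal."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def wavelet_t_threshold(scans, alpha_t=2.0):
    """Keep only wavelet coefficients whose one-sample t-statistic across
    scans exceeds a critical value, then reconstruct the mean image.
    `alpha_t` stands in for the critical t value at the chosen
    significance level (hypothetical, for illustration only)."""
    A, D = zip(*(haar_1d(s) for s in scans))
    A, D = np.array(A), np.array(D)

    def t_stat(c):
        # t-statistic of each coefficient across the stack of scans
        return np.sqrt(c.shape[0]) * c.mean(0) / (c.std(0, ddof=1) + 1e-12)

    a_kept = np.where(np.abs(t_stat(A)) > alpha_t, A.mean(0), 0.0)
    d_kept = np.where(np.abs(t_stat(D)) > alpha_t, D.mean(0), 0.0)
    return inv_haar_1d(a_kept, d_kept)
```

On a stack of noisy profiles with an activated right half, only coefficients carried by the activation survive thresholding, so the reconstruction suppresses noise in the quiet region while retaining the activated one.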

    Stacked Convolutional Recurrent Auto-encoder for Noise Reduction in EEG

    Electroencephalogram (EEG) can be used to record electrical potentials in the brain by attaching electrodes to the scalp. However, these low-amplitude recordings are susceptible to noise which originates from several sources, including ocular, pulse and muscle artefacts. Their presence has a severe impact on analysis and diagnoses of brain abnormalities. This research assessed the effectiveness of a stacked convolutional recurrent auto-encoder (CR-AE) for noise reduction of EEG signals. Performance was evaluated using the signal-to-noise ratio (SNR) and peak signal-to-noise ratio (PSNR) in comparison to principal component analysis (PCA), independent component analysis (ICA) and a simple auto-encoder (AE). The Harrell-Davis quantile estimator was used to compare SNR and PSNR distributions of reconstructed and raw signals. It was found that the proposed CR-AE achieved a mean SNR of 5.53 dB and significantly increased the SNR across all quantiles for each channel compared to the state-of-the-art methods. However, though SNR increased, PSNR did not, and the proposed CR-AE was outperformed by each baseline across the majority of quantiles for all channels. In addition, though reconstruction error was very low, none of the proposed CR-AE architectures could generalize to the second dataset.
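The two figures of merit used above can be stated concretely. This is a minimal sketch of SNR and PSNR in dB, taking the clean signal as reference; the paper's exact definitions (e.g. how the peak is chosen) may differ in detail.

```python
import numpy as np

def snr_db(clean, denoised):
    """Signal-to-noise ratio in dB: power of the clean reference
    over the power of the residual error after denoising."""
    noise = clean - denoised
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

def psnr_db(clean, denoised):
    """Peak signal-to-noise ratio in dB: squared peak amplitude of the
    reference over the mean squared error of the reconstruction."""
    mse = np.mean((clean - denoised) ** 2)
    return 10 * np.log10(np.max(np.abs(clean)) ** 2 / mse)
```

The two metrics can disagree, as in the abstract: SNR weights the error against total signal energy, while PSNR weights it against the single largest amplitude, so signals with different peak-to-power ratios rank differently under each.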

    A Better Looking Brain: Image Pre-Processing Approaches for fMRI Data

    Researchers in the field of functional neuroimaging have faced a long-standing problem in pre-processing low spatial resolution data without losing meaningful details within. Commonly, brain function is recorded by a technique known as echo-planar imaging that represents the measure of blood flow (BOLD signal) through a particular location in the brain as an array of intensity values changing over time. This approach to record a movie of blood flow in the brain is known as fMRI. The neural activity is then studied from the temporal correlation patterns existing within the fMRI time series. However, the resulting images are noisy and contain low spatial detail, thus making it imperative to pre-process them appropriately to derive meaningful activation patterns. Two of the several standard preprocessing steps employed just before the analysis stage are denoising and normalization. Fundamentally, it is difficult to perfectly remove noise from an image without making assumptions about signal and noise distributions. A convenient and commonly used alternative is to smooth the image with a Gaussian filter, but this method suffers from various obvious drawbacks, primarily loss of spatial detail. A greater challenge arises when we attempt to derive average activation patterns from fMRI images acquired from a group of individuals. The brain of one individual differs from others in a structural sense as well as in a functional sense. Commonly, the inter-individual differences in anatomical structures are compensated for by co-registering each subject's data to a common normalization space, known as spatial normalization. However, there are no existing methods to compensate for the differences in functional organization of the brain. This work presents first steps towards data-driven robust algorithms for fMRI image denoising and multi-subject image normalization by utilizing inherent information within fMRI data.
    In addition, a new validation approach based on the spatial shape of the activation regions is presented to quantify the effects of preprocessing and also as a tool to record the differences in activation patterns between individual subjects or between two groups such as healthy controls and patients with mental illness. Qualitative and quantitative results of the proposed framework compare favorably against existing and widely used model-driven approaches such as Gaussian smoothing and structure-based spatial normalization. This work is intended to provide neuroscience researchers with tools to derive more meaningful activation patterns, to accurately identify imaging biomarkers for various neurodevelopmental diseases, and to maximize the specificity of a diagnosis.
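As one concrete instance of a shape-based validation measure, the Dice coefficient quantifies spatial overlap between a detected activation mask and a reference mask. The thesis's actual metric is not specified in this abstract, so Dice serves here only as a standard stand-in for comparing activation shapes:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap between two binary activation masks:
    2|A∩B| / (|A| + |B|); 1.0 means identical shape and location,
    0.0 means no overlap at all."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

Comparing Dice scores before and after a preprocessing step gives a direct, shape-aware readout of whether the step moved detected activations toward or away from the reference region.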

    Noise Reduction of EEG Signals Using Autoencoders Built Upon GRU based RNN Layers

    Understanding the cognitive and functional behaviour of the brain by its electrical activity is an important area of research. Electroencephalography (EEG) is a method that measures and records electrical activities of the brain from the scalp. It has been used for pathology analysis, emotion recognition, clinical and cognitive research, diagnosing various neurological and psychiatric disorders and for other applications. Since the EEG signals are sensitive to activities other than those of the brain, such as eye blinking, eye movement, head movement, etc., it is not possible to record EEG signals without any noise. Thus, it is very important to use an efficient noise reduction technique to get more accurate recordings. Numerous traditional techniques such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), wavelet transformations and machine learning techniques were proposed for reducing the noise in EEG signals. The aim of this paper is to investigate the effectiveness of stacked autoencoders built upon Gated Recurrent Unit (GRU) based Recurrent Neural Network (RNN) layers (GRU-AE) against PCA. To achieve this, Harrell-Davis decile values for the reconstructed signals' signal-to-noise ratio distributions were compared, and it was found that the GRU-AE outperformed PCA for noise reduction of EEG signals.
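The Harrell-Davis estimator used above (and in the CR-AE paper) estimates a quantile not from one or two order statistics but as a weighted sum of all of them, with weights drawn from a Beta distribution, which makes it smoother on small samples. A minimal sketch using `scipy.stats.beta`:

```python
import numpy as np
from scipy.stats import beta

def harrell_davis(x, q):
    """Harrell-Davis estimator of the q-th quantile: a weighted sum of
    all order statistics, where the weight of the i-th order statistic
    is the Beta((n+1)q, (n+1)(1-q)) probability mass on ((i-1)/n, i/n]."""
    x = np.sort(np.asarray(x, float))
    n = x.size
    a, b = (n + 1) * q, (n + 1) * (1 - q)
    i = np.arange(1, n + 1)
    w = beta.cdf(i / n, a, b) - beta.cdf((i - 1) / n, a, b)
    return np.sum(w * x)
```

Computing this at q = 0.1, 0.2, ..., 0.9 yields the decile values whose distributions the paper compares between GRU-AE and PCA reconstructions.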

    Methods for Joint Normalization and Comparison of Hi-C data

    The development of chromatin conformation capture technology has opened new avenues of study into the 3D structure and function of the genome. Chromatin structure is known to influence gene regulation, and differences in structure are now emerging as a mechanism of regulation between, e.g., cell differentiation and disease vs. normal states. Hi-C sequencing technology now provides a way to study the 3D interactions of the chromatin over the whole genome. However, like all sequencing technologies, Hi-C suffers from several forms of bias stemming from both the technology and the DNA sequence itself. Several normalization methods have been developed for normalizing individual Hi-C datasets, but little work has been done on developing joint normalization methods for comparing two or more Hi-C datasets. To make full use of Hi-C data, joint normalization and statistical comparison techniques are needed to carry out experiments to identify regions where chromatin structure differs between conditions. We develop methods for the joint normalization and comparison of two Hi-C datasets, which we then extend to more complex experimental designs. Our normalization method is novel in that it makes use of the distance-dependent nature of chromatin interactions. Our modification of the Minus vs. Average (MA) plot to the Minus vs. Distance (MD) plot allows for a nonparametric data-driven normalization technique using loess smoothing. Additionally, we present a simple statistical method using Z-scores for detecting differentially interacting regions between two datasets. Our initial method was published as the Bioconductor R package HiCcompare [http://bioconductor.org/packages/HiCcompare/](http://bioconductor.org/packages/HiCcompare/). We then further extended our normalization and comparison method for use in complex Hi-C experiments with more than two datasets and optional covariates.
    We extend the normalization method to jointly normalize any number of Hi-C datasets by using a cyclic loess procedure on the MD plot. The cyclic loess normalization technique can remove between-dataset biases efficiently and effectively even when several datasets are analyzed at one time. Our comparison method implements a generalized linear model-based approach for comparing complex Hi-C experiments, which may have more than two groups and additional covariates. The extended methods are also available as a Bioconductor R package [http://bioconductor.org/packages/multiHiCcompare/](http://bioconductor.org/packages/multiHiCcompare/). Finally, we demonstrate the use of HiCcompare and multiHiCcompare in several test cases on real data, in addition to comparing them to other similar methods (https://doi.org/10.1002/cpbi.76).
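The Z-score comparison described in the abstract can be sketched as follows: after joint normalization, compute M = log2(IF2/IF1) for each interaction, standardize the M values within each genomic-distance stratum (reflecting the distance-dependent nature of Hi-C signal), and flag interactions with large |Z|. Function and variable names here are illustrative, not HiCcompare's actual R API:

```python
import numpy as np

def md_zscores(if1, if2, dist):
    """Per-distance Z-scores of M = log2(IF2/IF1), in the spirit of
    HiCcompare's comparison step: interactions are stratified by
    genomic distance, and M values are standardized within each
    stratum so that distance-dependent spread does not dominate."""
    m = np.log2(if2 / if1)
    z = np.empty_like(m)
    for d in np.unique(dist):
        sel = dist == d
        mu, sd = m[sel].mean(), m[sel].std(ddof=1)
        z[sel] = (m[sel] - mu) / (sd + 1e-12)
    return m, z
```

An interaction whose frequency changes much more than its distance stratum's typical variation then stands out with a large |Z| and can be called differentially interacting at a chosen cutoff.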

    False Discovery Rate for Wavelet-Based Statistical Parametric Mapping

    Model-based statistical analysis of functional magnetic resonance imaging (fMRI) data relies on the general linear model and statistical hypothesis testing. Due to the large number of intracranial voxels, it is important to deal with the multiple comparisons problem. Many fMRI analysis tools utilize Gaussian random field theory to obtain a more sensitive thresholding; this typically involves Gaussian smoothing as a preprocessing step. Wavelet-based statistical parametric mapping (WSPM) is an alternative method to obtain parametric maps from non-smoothed data. It relies on adaptive thresholding of the parametric maps in the wavelet domain, followed by voxel-wise statistical testing. The procedure is conservative; it uses Bonferroni correction for strong type I error control. Yet its sensitivity is close to SPM's due to the excellent denoising properties of the wavelet transform. Here, we adapt the false discovery rate (FDR) principle to the WSPM framework. Although explicit p-values cannot be obtained, we show that it is possible to retrieve the FDR threshold by a simple iterative scheme. We then validate the approach with an event-related visual stimulation task. Our results show better sensitivity with preservation of spatial resolution; i.e., activation clusters align well with the gray matter structures in the visual cortex. The spatial resolution of the activation maps is even high enough to easily identify a voxel that is very likely to be caused by the draining-vein effect.
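For context, the FDR principle the paper adapts is the Benjamini-Hochberg step-up rule. A minimal sketch on explicit p-values follows; note that WSPM itself, as the abstract states, cannot compute explicit p-values and instead recovers the equivalent threshold through an iterative scheme not reproduced here:

```python
import numpy as np

def bh_threshold(pvals, q=0.05):
    """Benjamini-Hochberg step-up rule: sort the p-values, find the
    largest index i with p_(i) <= q * i / n, and return p_(i) as the
    rejection threshold (0.0 if no test passes). Rejecting all tests
    at or below this threshold controls the FDR at level q."""
    p = np.sort(np.asarray(pvals, float))
    n = p.size
    crit = q * np.arange(1, n + 1) / n
    passed = np.nonzero(p <= crit)[0]
    return p[passed[-1]] if passed.size else 0.0
```

Compared with Bonferroni's fixed cutoff of q/n on every test, the step-up rule adapts its cutoff to how many small p-values are present, which is the source of the sensitivity gain the paper reports.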