5 research outputs found

    Feature map size selection for fMRI classification on end-to-end deep convolutional neural networks

    The emergence of convolutional neural networks (CNNs) in various fields has also paved numerous ways for advancement in medical imaging. This paper focuses on functional magnetic resonance imaging (fMRI) in neuroimaging; fMRI offers high temporal resolution and is robust across control and non-control subjects. CNN analysis on structural magnetic resonance imaging (MRI) and fMRI datasets is compared to rule out one of the grey areas in building CNNs for medical imaging analysis. This study focuses on feature map size selection for CNNs on fMRI datasets, where the selected sizes are evaluated for their performance. Although few outstanding studies on fMRI have been published, the availability of diverse previous work on MRI motivates us to study the pattern of feature map sizes in CNN configuration. Six configurations are analyzed on a prominent public fMRI dataset, the Human Connectome Project (HCP), which is widely used for fMRI classification. With three sets of data divisions, the validation accuracy of fMRI classification is assessed and discussed. Although only one slice of each of the 118 subjects' temporal brain images is used in the study, the classification results on three training-excluded subjects (the validation set) demonstrate the need for feature map size selection. This paper emphasizes the indispensable step of selecting feature map sizes when designing CNNs for fMRI classification. In addition, we provide evidence that the validation set should consist of distinct subjects for a definitive evaluation of any model's performance.
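    To illustrate why feature map size depends on the layer configuration, the sketch below applies the standard convolution output-size formula to a few candidate kernel settings. The 64x64 input size and the kernel/stride/padding values are hypothetical, chosen only for illustration; the paper's six actual configurations are not reproduced here.

    ```python
    # Illustrative sketch: the standard formula for a conv layer's output
    # (feature map) spatial size. Input size and candidate kernels are
    # hypothetical, not the configurations evaluated in the paper.
    def feature_map_size(input_size, kernel, stride=1, padding=0):
        """Spatial size of the feature map produced by one conv layer."""
        return (input_size - kernel + 2 * padding) // stride + 1

    # Hypothetical candidate kernels over a 64x64 fMRI slice
    for kernel in (3, 5, 7):
        size = feature_map_size(64, kernel)
        print(f"kernel={kernel} -> feature map {size}x{size}")
    ```

    Evaluating the formula like this before training makes it easy to see how each candidate configuration shrinks the spatial resolution available to later layers, which is the quantity the paper's selection study varies.
    
    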

    Framing Apache Spark in life sciences

    Advances in high-throughput and digital technologies have required the adoption of big data for handling complex tasks in life sciences. However, the shift to big data has led researchers to face technical and infrastructural challenges in storing, sharing, and analysing it. Such tasks require distributed computing systems and algorithms that ensure efficient processing. Cutting-edge distributed programming frameworks make it possible to implement flexible algorithms that adapt the computation to the data, over on-premise HPC clusters or cloud architectures. In this context, Apache Spark is a very powerful engine for large-scale data processing on clusters. Thanks also to specialised libraries for working with structured and relational data, it supports machine learning, graph-based computation, and stream processing. This review article aims to help life sciences researchers ascertain the features of Apache Spark and assess whether it can be successfully used in their research activities.
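    The core idea Spark generalizes is map-reduce over partitioned data: each partition is processed independently and partial results are then combined. The toy sketch below shows that pattern in plain Python on made-up numbers; in PySpark the same shape of computation (a per-partition map followed by a reduce) runs across cluster nodes instead of a local list.

    ```python
    # Toy stand-in for Spark's programming model: data split into partitions,
    # work applied per partition, partial results combined by a reduce.
    # The partition contents are hypothetical example values.
    from functools import reduce
    from operator import add

    partitions = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]

    # Per-partition work (the "map" stage): sum of squares in each partition
    mapped = [sum(x * x for x in part) for part in partitions]

    # Combine partial results (the "reduce" stage)
    total = reduce(add, mapped)
    print(total)
    ```

    Because each partition's result is computed independently, the map stage parallelises trivially; only the final reduce requires communication, which is what makes this model efficient on clusters.
    
    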

    Fast and scalable distributed deep convolutional autoencoder for fMRI big data analytics

    © 2018 Elsevier B.V. In recent years, analyzing task-based fMRI (tfMRI) data has become an essential tool for understanding brain function and networks. However, due to the sheer size of tfMRI data, its intrinsically complex structure, and the lack of ground truth for the underlying neural activities, modeling tfMRI data is hard and challenging. Previously proposed modeling methods, including Independent Component Analysis (ICA) and Sparse Dictionary Learning, provided only shallow models based on blind source separation, under the strong assumption that original fMRI signals can be linearly decomposed into time series components with corresponding spatial maps. Given the successes of Convolutional Neural Networks (CNNs) in learning hierarchical abstractions from low-level data such as tfMRI time series, in this work we propose a novel scalable distributed deep CNN autoencoder model and apply it to fMRI big data analysis. This model aims both to learn the complex hierarchical structure of tfMRI big data and to leverage the processing power of multiple GPUs in a distributed fashion. To deploy such a model, we have created an enhanced processing pipeline on top of Apache Spark and TensorFlow, leveraging a large cluster of GPU nodes in the cloud. Experimental results from applying the model to Human Connectome Project (HCP) data show that the proposed model is efficient and scalable for tfMRI big data modeling and analytics, enabling data-driven extraction of hierarchical neuroscientific information from massive fMRI data.
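    To make the autoencoder idea concrete, the sketch below trains a tiny single-machine linear autoencoder in NumPy on synthetic "time series" signals: an encoder compresses each signal to a small code and a decoder reconstructs it, with both trained to minimise reconstruction error. This is only a minimal stand-in for the paper's distributed deep CNN autoencoder; the shapes, learning rate, and random data are all assumptions made for illustration.

    ```python
    # Minimal sketch of autoencoder training, assuming synthetic data.
    # Not the paper's distributed CNN model: one linear encode/decode pair,
    # trained by plain gradient descent on a single machine.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 16))          # 200 signals, 16 time points

    d_in, d_hid = 16, 4                         # compress 16 -> 4
    W_enc = rng.standard_normal((d_in, d_hid)) * 0.1
    W_dec = rng.standard_normal((d_hid, d_in)) * 0.1
    lr = 1.0

    def loss():
        return float(np.mean((X @ W_enc @ W_dec - X) ** 2))

    first = loss()
    for _ in range(500):
        H = X @ W_enc                           # encode to low-dim codes
        R = H @ W_dec                           # decode (reconstruction)
        G = 2.0 * (R - X) / X.size              # d(loss)/d(R)
        grad_dec = H.T @ G                      # gradients w.r.t. weights
        grad_enc = X.T @ (G @ W_dec.T)
        W_dec -= lr * grad_dec                  # gradient descent step
        W_enc -= lr * grad_enc
    last = loss()
    print(first, last)                          # reconstruction error drops
    ```

    The same objective, scaled up with convolutional layers and data-parallel gradient computation across GPU nodes, is what the paper's Spark/TensorFlow pipeline optimises.
    
    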