    Deep Cellular Recurrent Neural Architecture for Efficient Multidimensional Time-Series Data Processing

    Efficient processing of time series data is a fundamental yet challenging problem in pattern recognition. Though recent developments in machine learning and deep learning have enabled remarkable improvements in processing large scale datasets in many application domains, most models are designed to handle inputs that are static in time. Many real-world data, such as in biomedical, surveillance and security, financial, manufacturing, and engineering applications, are rarely static in time and demand models able to recognize patterns in both space and time. Current machine learning (ML) and deep learning (DL) models adapted for time series processing tend to grow in complexity and size to accommodate the additional dimensionality of time. Specifically, the biologically inspired learning models known as artificial neural networks, which have shown extraordinary success in pattern recognition, tend to grow prohibitively large and cumbersome in the presence of large scale multi-dimensional time series biomedical data such as EEG. Consequently, this work aims to develop representative ML and DL models for robust and efficient large scale time series processing. First, we design a novel ML pipeline with efficient feature engineering to process a large scale multi-channel scalp EEG dataset for automated detection of epileptic seizures. Using the harmonic wavelet packet transform, a sophisticated yet computationally efficient time-frequency analysis technique, together with an efficient fractal-dimension-based self-similarity measure, we achieve state-of-the-art performance for automated seizure detection in EEG data. Subsequently, we investigate the development of a novel efficient deep recurrent learning model for large scale time series processing. For this, we first study the functionality and training of a biologically inspired neural network architecture known as the cellular simultaneous recurrent neural network (CSRN). We obtain a generalization of this network for multiple topological image processing tasks and investigate the learning efficacy of the complex cellular architecture using several state-of-the-art training methods. Finally, we develop a novel deep cellular recurrent neural network (DCRNN) architecture, based on the biologically inspired distributed processing used in CSRN, for processing time series data. The proposed DCRNN leverages the cellular recurrent architecture to promote extensive weight sharing and efficient, individualized, synchronous processing of multi-source time series data. Experiments on a large scale multi-channel scalp EEG dataset and a machine fault detection dataset show that the proposed DCRNN offers state-of-the-art recognition performance while using substantially fewer trainable recurrent units.
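
    The efficiency claim above rests on weight sharing across channels. Below is a minimal PyTorch sketch of that idea, assuming a single shared GRU applied to every channel with mean aggregation; the layer sizes and the aggregation step are illustrative assumptions, not the dissertation's exact DCRNN architecture.

        import torch
        import torch.nn as nn

        class SharedChannelRNN(nn.Module):
            """One recurrent cell shared across all input channels, so the
            parameter count does not grow with the number of EEG channels."""
            def __init__(self, hidden=32, n_classes=2):
                super().__init__()
                self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_classes)

            def forward(self, x):                    # x: (batch, channels, time)
                b, c, t = x.shape
                x = x.reshape(b * c, t, 1)           # every channel uses the same GRU
                _, h = self.rnn(x)                   # h: (1, b*c, hidden)
                h = h.squeeze(0).reshape(b, c, -1).mean(dim=1)   # aggregate channels
                return self.head(h)

        logits = SharedChannelRNN()(torch.randn(4, 22, 256))     # 22-channel EEG window

    Because every channel passes through the same recurrent cell, adding channels adds no trainable recurrent units, which is the property the abstract credits for the model's efficiency.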

    Unsupervised Multivariate Time Series Clustering

    Clustering is widely used in unsupervised machine learning to partition a given set of data into non-overlapping groups. Many real-world applications require processing more complex multivariate time series data characterized by more than one dependent variable. A few works in the literature have reported multivariate classification using Shapelet learning. However, clustering of multivariate time series signals using Shapelet learning has not yet been explored. Shapelet learning is the process of discovering those Shapelets which contain the most informative features of the time series signal. Discovering suitable Shapelets from many candidate Shapelets has been broadly studied for classification and clustering of univariate time series signals, where Shapelet learning has shown promising results. The analysis of multivariate time series signals is not widely explored because of the dimensionality issue. This work proposes a generalized Shapelet learning method for unsupervised multivariate time series clustering. The proposed method utilizes spectral clustering and Shapelet similarity minimization with least-squares regularization to obtain the optimal Shapelets for unsupervised clustering. The proposed method is evaluated using an in-house multivariate time series dataset on detection of radio frequency (RF) faults at the Jefferson Lab Continuous Electron Beam Accelerator Facility (CEBAF). The dataset consists of three-dimensional time series recordings of three RF fault types. The proposed method achieves successful clustering performance with an average precision of 0.732, recall of 0.717, F-score of 0.732, Rand index (RI) of 0.812, and normalized mutual information (NMI) of 0.56, with less than 3% overall standard deviation in a five-fold cross-validation evaluation.
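
    The matching step at the heart of Shapelet learning is the minimum distance between a Shapelet and all subsequences of a series. A minimal sketch of that computation feeding a spectral clustering step is shown below, on synthetic data; the feature layout and clustering settings are assumptions for illustration, not the paper's optimization procedure.

        import numpy as np
        from sklearn.cluster import SpectralClustering

        def shapelet_distance(series, shapelet):
            """Minimum Euclidean distance between a shapelet and every
            equal-length subsequence of a longer 1-D series."""
            m = len(shapelet)
            windows = np.lib.stride_tricks.sliding_window_view(series, m)
            return np.sqrt(((windows - shapelet) ** 2).sum(axis=1)).min()

        rng = np.random.default_rng(0)
        X = rng.standard_normal((30, 3, 200))        # 30 series, 3 variables, 200 steps
        shapelets = rng.standard_normal((5, 20))     # 5 candidate shapelets of length 20

        # Shapelet-transformed representation: distance of every variable to every shapelet
        T = np.array([[shapelet_distance(x[v], s) for s in shapelets for v in range(3)]
                      for x in X])
        labels = SpectralClustering(n_clusters=3, affinity="nearest_neighbors",
                                    random_state=0).fit_predict(T)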

    Context Aware Deep Learning for Brain Tumor Segmentation, Subtype Classification, and Survival Prediction Using Radiology Images

    A brain tumor is an uncontrolled growth of cancerous cells in the brain. Accurate segmentation and classification of tumors are critical for subsequent prognosis and treatment planning. This work proposes context aware deep learning for brain tumor segmentation, subtype classification, and overall survival prediction using structural multimodal magnetic resonance images (mMRI). We first propose a 3D context aware deep learning method that considers the uncertainty of tumor location in the radiology mMRI image sub-regions to obtain tumor segmentation. We then apply a regular 3D convolutional neural network (CNN) to the tumor segments to achieve tumor subtype classification. Finally, we perform survival prediction using a hybrid method of deep learning and machine learning. To evaluate the performance, we apply the proposed methods to the Multimodal Brain Tumor Segmentation Challenge 2019 (BraTS 2019) dataset for tumor segmentation and overall survival prediction, and to the dataset of the Computational Precision Medicine Radiology-Pathology (CPM-RadPath) Challenge on Brain Tumor Classification 2019 for tumor classification. We also perform an extensive performance evaluation based on popular evaluation metrics, such as the Dice similarity coefficient, Hausdorff distance at percentile 95 (HD95), classification accuracy, and mean square error. The results suggest that the proposed method offers robust tumor segmentation and survival prediction. Furthermore, the tumor classification results of this work ranked second in the testing phase of the 2019 CPM-RadPath global challenge.
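
    For the subtype classification step, the abstract applies a regular 3D CNN to the segmented tumor volume. A minimal PyTorch sketch of such a network is given below; the channel counts, depth, and number of subtype classes are illustrative assumptions, not the paper's configuration.

        import torch
        import torch.nn as nn

        class Tiny3DCNN(nn.Module):
            """Small 3D CNN over a multimodal MRI tumor crop."""
            def __init__(self, in_ch=4, n_classes=3):    # 4 mMRI modalities, 3 subtypes
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(in_ch, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                    nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
                )
                self.classifier = nn.Linear(16, n_classes)

            def forward(self, x):                        # x: (batch, ch, D, H, W)
                return self.classifier(self.features(x).flatten(1))

        logits = Tiny3DCNN()(torch.randn(2, 4, 32, 32, 32))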

    Deep Learning with Context Encoding for Semantic Brain Tumor Segmentation and Patient Survival Prediction

    One of the most challenging problems encountered in deep learning-based brain tumor segmentation models is the misclassification of tumor tissue classes due to the inherent imbalance in the class representation. Consequently, strong regularization methods are typically considered when training large-scale deep learning models for brain tumor segmentation to overcome undue bias towards representative tissue types. However, these regularization methods tend to be computationally expensive and may not guarantee the learning of features representing all tumor tissue types that exist in the input MRI examples. Recent work in context encoding with deep CNN models has shown promise for semantic segmentation of natural scenes, with particular improvements in small object segmentation due to improved representative feature learning. Accordingly, we propose a novel, efficient 3D CNN-based deep learning framework with context encoding for semantic brain tumor segmentation using multimodal magnetic resonance imaging (mMRI). The context encoding module in the proposed model enforces rich, class-dependent feature learning to improve the overall multi-label segmentation performance. We subsequently utilize the context augmented features in a machine-learning based survival prediction pipeline to improve the prediction performance. The proposed method is evaluated using the publicly available 2019 Brain Tumor Segmentation (BraTS) and survival prediction challenge dataset. The results show that the proposed method significantly improves the tumor tissue segmentation performance and the overall survival prediction performance.
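
    The context encoding module's role is to predict, from a globally pooled feature code, which tissue classes are present in the volume, and to backpropagate that auxiliary loss into the backbone. The sketch below simplifies the published context encoding design (which learns a codebook) to plain global average pooling; the sizes and loss weighting are assumptions.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class ContextEncodingHead(nn.Module):
            """Auxiliary head: pool backbone features to a global code and
            predict per-class presence, enforcing class-dependent features."""
            def __init__(self, feat_ch=32, n_classes=4):
                super().__init__()
                self.fc = nn.Linear(feat_ch, n_classes)

            def forward(self, feats, class_present):      # feats: (b, c, D, H, W)
                code = feats.mean(dim=(2, 3, 4))           # global context vector
                return F.binary_cross_entropy_with_logits(
                    self.fc(code), class_present.float())  # semantic-encoding loss

        head = ContextEncodingHead()
        se_loss = head(torch.randn(2, 32, 16, 16, 16), torch.randint(0, 2, (2, 4)))
        # total_loss = seg_loss + lambda_se * se_loss     # combined with the Dice/CE term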

    Convolutional Neural Network Transfer Learning for Robust Face Recognition in NAO Humanoid Robot

    Transfer learning with convolutional neural networks (CNNs) has been shown to be an efficient alternative to designing and training a new neural network from scratch for recognition tasks. However, several popular CNN architectures are available for various recognition tasks, so choosing an appropriate network for a specific recognition task, particularly one deployed on a humanoid robotic platform, is often challenging. This study evaluates the performance of two well-known CNN architectures, AlexNet and VGG-Face, for a face recognition task. This is accomplished by applying the transfer learning concept to networks pre-trained for different recognition tasks. The proposed face recognition framework is then implemented on a humanoid robot known as NAO to demonstrate the practicality and flexibility of the algorithm. The results suggest that the proposed pipeline shows excellent performance in recognizing a new person from a single example image under the varying distance and resolution conditions typical of a mobile humanoid robotic platform.
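
    The transfer-learning recipe the study applies amounts to freezing a pre-trained backbone and retraining only the final layer on the new identities. A minimal sketch with torchvision's ImageNet-pretrained AlexNet is shown below (VGG-Face is not shipped with torchvision, so only AlexNet appears here; the gallery size is an illustrative assumption).

        import torch.nn as nn
        from torchvision import models

        model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
        for p in model.parameters():
            p.requires_grad = False                      # freeze pre-trained features
        n_people = 10                                    # illustrative gallery size
        model.classifier[6] = nn.Linear(4096, n_people)  # replace the final layer
        # Only model.classifier[6].parameters() are then fine-tuned on face images.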

    3D LIDAR Human Skeleton Estimation for Long Range Identification

    This research project is the continuation of an effort by the Vision Lab to understand human gait and motion using special-purpose imaging sensors and novel computer vision algorithms. The project began using a Motion Capture (MoCap) system, which measures 3D human skeleton information in real-time by attaching markers to a human subject and viewing the human motion with a set of cameras at different angles. We developed an algorithm to determine the gender of a subject wearing these sensors. The current phase of this project extends this work using a state-of-the-art flash Lidar sensor. This sensor scans the surface of objects and gives a 3D depth map of the object in real-time. We developed a computer vision system that can estimate a human skeleton from Lidar data, which resembles the MoCap data format. Using these computed 3D skeletons we can perform human identification. The poster outlines the current Lidar-based algorithm using a flow chart to explain the input and output for each component of the system and how each data modality is used to build the final skeleton. The poster compares the performance of our human identification with other methods for human identification using 3D sensors. Finally, future work using infrared sensing is discussed.
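
    The poster does not specify the identification descriptor, but a common skeleton-based choice is a vector of limb lengths compared by nearest neighbor; the toy sketch below illustrates that scheme under those assumptions (joint count, edges, and data are all hypothetical).

        import numpy as np

        def limb_lengths(skeleton, edges):
            """Distances between connected joints: a simple, pose-invariant
            descriptor for skeleton-based identification (assumed here)."""
            return np.array([np.linalg.norm(skeleton[i] - skeleton[j]) for i, j in edges])

        edges = [(0, 1), (1, 2), (2, 3)]                               # toy 4-joint chain
        gallery = {name: np.random.rand(4, 3) for name in ["a", "b"]}  # known subjects
        probe = np.random.rand(4, 3)                                   # skeleton from one Lidar frame

        # Nearest-neighbour identification over limb-length descriptors
        feats = {n: limb_lengths(s, edges) for n, s in gallery.items()}
        query = limb_lengths(probe, edges)
        identity = min(feats, key=lambda n: np.linalg.norm(feats[n] - query))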

    Superconducting radio-frequency cavity fault classification using machine learning at Jefferson Laboratory

    We report on the development of machine learning models for classifying C100 superconducting radio-frequency (SRF) cavity faults in the Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Lab. CEBAF is a continuous-wave recirculating linac utilizing 418 SRF cavities to accelerate electrons up to 12 GeV through five passes. Of these, 96 cavities (12 cryomodules) are designed with a digital low-level RF system configured such that a cavity fault triggers waveform recordings of 17 RF signals for each of the 8 cavities in the cryomodule. Subject matter experts (SMEs) are able to analyze the collected time-series data, identify which of the eight cavities faulted first, and classify the type of fault. This information is used to find trends and strategically deploy mitigations to problematic cryomodules. However, manually labeling the data is laborious and time-consuming. By leveraging machine learning, near real-time (rather than post-mortem) identification of the offending cavity and classification of the fault type has been implemented. We discuss the performance of the ML models during a recent physics run. Results show the cavity identification and fault classification models have accuracies of 84.9% and 78.2%, respectively.
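
    Operationally this is two supervised models over the same fault record: one predicts which cavity faulted first, the other the fault type. The sketch below uses random forests on simple statistical features of the recorded waveforms; the feature set and model family are assumptions for illustration, and the data here are synthetic stand-ins for the SME-labeled events.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(1)
        n_events = 100
        waveforms = rng.standard_normal((n_events, 8, 17, 256))  # cavities x signals x samples

        def summarize(w):
            """Per-event features: mean and std of each of the 8 x 17 signals."""
            return np.concatenate([w.mean(axis=-1).ravel(), w.std(axis=-1).ravel()])

        X = np.array([summarize(w) for w in waveforms])
        y_cavity = rng.integers(0, 8, n_events)                  # which cavity faulted first
        y_fault = rng.integers(0, 6, n_events)                   # fault type label

        cavity_model = RandomForestClassifier(random_state=0).fit(X, y_cavity)
        fault_model = RandomForestClassifier(random_state=0).fit(X, y_fault)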

    Feature-Guided Deep Radiomics for Glioblastoma Patient Survival Prediction

    Glioblastoma is recognized as a World Health Organization (WHO) grade IV glioma with an aggressive growth pattern. The current clinical practice in diagnosis and prognosis of glioblastoma using MRI involves multiple steps, including manual tumor sizing. Accurate identification and segmentation of multiple abnormal tissues within the tumor volume in MRI is essential for precise survival prediction. Manual tumor and abnormal tissue detection and sizing are tedious and subject to inter-observer variability. Consequently, this work proposes a fully automated MRI-based framework for glioblastoma and abnormal tissue segmentation and survival prediction. The framework includes radiomics feature-guided deep neural network methods for tumor tissue segmentation, followed by survival regression and classification using these abnormal tumor tissue segments and other relevant clinical features. The proposed multiple abnormal tumor tissue segmentation step effectively fuses feature-based and feature-guided deep radiomics information in structural MRI. The survival prediction step includes two representative survival prediction pipelines that combine different feature selection and regression approaches. The framework is evaluated using two recent widely used benchmark datasets from the Brain Tumor Segmentation (BraTS) global challenges in 2017 and 2018. The best overall survival pipeline in the proposed framework achieves a leave-one-out cross-validation (LOOCV) accuracy of 0.73 on the training datasets and 0.68 on the validation datasets. These training and validation accuracies for tumor patient survival prediction are among the highest reported in the literature. Finally, a critical analysis of the radiomics features and their efficacy in segmentation and survival prediction performance is presented as lessons learned.
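
    One of the two survival pipelines can be pictured as feature selection followed by a regressor, scored with LOOCV as reported above. The sketch below wires that pattern together in scikit-learn on synthetic radiomics features; the estimator choices are assumptions, not the paper's exact pipeline.

        import numpy as np
        from sklearn.pipeline import Pipeline
        from sklearn.feature_selection import SelectKBest, f_regression
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        rng = np.random.default_rng(0)
        X = rng.standard_normal((60, 120))     # 60 patients x 120 radiomics features
        y = rng.uniform(100, 1000, 60)         # overall survival in days (synthetic)

        pipe = Pipeline([("select", SelectKBest(f_regression, k=20)),
                         ("reg", RandomForestRegressor(random_state=0))])
        pred = cross_val_predict(pipe, X, y, cv=LeaveOneOut())   # LOOCV predictions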

    Initial Studies of Cavity Fault Prediction at Jefferson Laboratory

    The Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Laboratory is a CW recirculating linac that utilizes over 400 superconducting radio-frequency (SRF) cavities to accelerate electrons up to 12 GeV through five passes. Recent work has shown that, given RF signals from a cavity during a fault as input, machine learning approaches can accurately classify the fault type. In this paper we report on initial results of predicting a fault onset using only data prior to the failure event. A data set was constructed using time-series data immediately before a fault ('unstable') and 1.5 seconds prior to a fault ('stable') gathered from over 5,000 saved fault events. The data was used to train a binary classifier. The results gave key insights into the behavior of several fault types and provided motivation to investigate whether data prior to a failure event could also predict the type of fault. We discuss our method using a sliding window approach and report on initial results. Recent modifications to the low-level RF control system will provide access to streaming signals, and we outline a path forward for leveraging deep learning on streaming data.
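
    The stable/unstable labeling scheme described above can be sketched directly: for each saved event, take one window ending at the fault and one offset 1.5 seconds earlier. The sample rate and window length below are assumptions for illustration, not the paper's acquisition parameters.

        import numpy as np

        def make_windows(signal, fault_idx, win=1024, rate=5000):
            """Label one window right before the fault 'unstable' (1) and one
            1.5 s earlier 'stable' (0); rate is an assumed sample frequency."""
            gap = int(1.5 * rate)
            unstable = signal[fault_idx - win:fault_idx]
            stable = signal[fault_idx - gap - win:fault_idx - gap]
            return [(stable, 0), (unstable, 1)]

        sig = np.random.randn(100_000)        # one recorded RF signal (synthetic)
        pairs = make_windows(sig, fault_idx=60_000)
        X = np.array([w for w, _ in pairs])   # training inputs for the binary classifier
        y = np.array([lab for _, lab in pairs])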