13 research outputs found

    A Deep Learning Prediction Model Based on Extreme-Point Symmetric Mode Decomposition and Cluster Analysis

    Aiming at the irregularity of nonlinear signals and the difficulty of predicting them, a deep learning prediction model based on extreme-point symmetric mode decomposition (ESMD) and cluster analysis is proposed. First, the original data are decomposed by ESMD to obtain a finite number of intrinsic mode functions (IMFs) and a residual. Second, fuzzy c-means is used to cluster the decomposed components, and a deep belief network (DBN) is used to predict each cluster. Finally, the predicted IMFs and residual are reconstructed to produce the final prediction. Six prediction models are compared: the DBN, EMD-DBN, EEMD-DBN, CEEMD-DBN, and ESMD-DBN models, and the model proposed in this paper. The same sunspot time series is predicted with all six models. The experimental results show that the proposed model achieves better prediction accuracy and smaller error.
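    The cluster step of this pipeline can be sketched with a minimal fuzzy c-means in Python. ESMD and the DBN forecaster are not available in common libraries, so the sine components and the hand-picked features below (standard deviation and zero-crossing rate) are illustrative stand-ins for the decomposed IMFs, not the paper's actual data:

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means; returns cluster centers and the
    membership matrix U (n_clusters x n_samples, columns sum to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((n_clusters, X.shape[0]))
    U /= U.sum(axis=0, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # squared distance of every sample to every center
        d2 = np.maximum(((X[None] - centers[:, None]) ** 2).sum(-1), 1e-12)
        U = d2 ** (-1.0 / (m - 1))
        U /= U.sum(axis=0, keepdims=True)
    return centers, U

# Stand-ins for ESMD components: two fast, low-amplitude oscillations
# and two slow, high-amplitude ones.
t = np.linspace(0, 10, 500)
components = [a * np.sin(2 * np.pi * f * t)
              for f, a in [(5.0, 0.3), (4.5, 0.2), (1.0, 1.0), (0.2, 2.0)]]

# Describe each component by amplitude (std) and zero-crossing rate,
# then group similar components before per-group prediction.
feats = np.array([[c.std(), np.mean(np.diff(np.sign(c)) != 0)]
                  for c in components])
centers, U = fuzzy_c_means(feats, n_clusters=2)
labels = U.argmax(axis=0)  # hard assignment; similar components share a cluster
```

    After grouping, each cluster's components would be summed and forecast by one predictor, and the per-cluster forecasts summed to reconstruct the series.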

    Analyzing MRI scans to detect glioblastoma tumor using hybrid deep belief networks

    Glioblastoma (GBM) is a stage 4 malignant tumor in which a large portion of tumor cells are reproducing and dividing at any moment. These tumors are life-threatening and may result in partial or complete mental and physical disability. In this study, we propose a classification model using hybrid deep belief networks (DBN) to classify magnetic resonance imaging (MRI) scans for GBM tumors. A DBN is composed of stacked restricted Boltzmann machines (RBM). DBNs often require a large number of hidden layers, each consisting of a large number of neurons, to learn the best features from raw image data; computational and space complexity is therefore high, and training takes a long time. The proposed approach combines DTW with a DBN to improve the efficiency of the existing DBN model. The results are validated using several statistical parameters. Statistical validation verifies that the combination of DTW and DBN outperforms the other classifiers in terms of training time, space complexity, and classification accuracy.
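    The DBN building block can be illustrated with a single restricted Boltzmann machine trained by one-step contrastive divergence (CD-1) in NumPy. The toy data, layer sizes, and learning rate below are illustrative assumptions, and the DTW stage of the hybrid model is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

class RBM:
    """One restricted Boltzmann machine; a DBN stacks several RBMs,
    each trained on the hidden activations of the layer below."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def cd1(self, v0):
        """One contrastive-divergence (CD-1) update on a batch v0."""
        p_h0 = sigmoid(v0 @ self.W + self.b_h)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)  # sample hidden units
        p_v1 = sigmoid(h0 @ self.W.T + self.b_v)            # reconstruction
        p_h1 = sigmoid(p_v1 @ self.W + self.b_h)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / n
        self.b_v += self.lr * (v0 - p_v1).mean(axis=0)
        self.b_h += self.lr * (p_h0 - p_h1).mean(axis=0)
        return float(np.mean((v0 - p_v1) ** 2))             # reconstruction error

# Toy binary "patches" built from two prototype patterns plus bit flips.
protos = np.array([[1, 1, 1, 1, 0, 0, 0, 0] * 2,
                   [0, 0, 0, 0, 1, 1, 1, 1] * 2], dtype=float)
X = protos[rng.integers(0, 2, 200)]
X = np.abs(X - (rng.random(X.shape) < 0.05))   # 5% bit-flip noise

rbm = RBM(n_visible=16, n_hidden=8)
errors = [rbm.cd1(X) for _ in range(300)]      # error falls as W captures structure
```

    A full DBN would train a second RBM on this layer's hidden probabilities, then fine-tune the stack with a supervised output layer.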

    Deep invariant texture features for water image classification

    Detecting potential issues in naturally captured images of water is challenging due to visual similarities between clean and polluted water, as well as variations introduced by image acquisition with different camera angles and placements. This paper presents novel deep invariant texture features, along with a deep network, for detecting clean and polluted water images. The proposed method first divides an input image into H, S, and V components to extract finer details. For each color space, it generates two directional coherence images based on eigenvalue analysis and gradient distribution, which results in enhanced images. The method then extracts scale-invariant gradient orientations using Gaussian first-order derivative filters at different standard deviations to study the texture of each smoothed image. To strengthen these features, we explore a combined Gabor-wavelet binary pattern for extracting texture from the input water image. The proposed method integrates the merits of the aforementioned features with features extracted by the VGG16 deep learning model to obtain a single feature vector. The extracted feature vector is then fed to a gradient boosting decision tree for water image detection. Experimental results on a large dataset containing different types of clean and stagnant water images show that the proposed method outperforms existing methods in terms of classification rate and accuracy.
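    The eigenvalue-based directional coherence can be sketched with the classic 2x2 gradient structure tensor, smoothed over a local window, where coherence is (l1 - l2)/(l1 + l2) of its eigenvalues. The box-filter smoothing and the 5x5 window below are illustrative choices, not necessarily the paper's:

```python
import numpy as np

def box_filter(a, k):
    """Separable k x k box (mean) filter via 1-D convolutions."""
    ker = np.ones(k) / k
    a = np.apply_along_axis(lambda r: np.convolve(r, ker, mode="same"), 1, a)
    return np.apply_along_axis(lambda c: np.convolve(c, ker, mode="same"), 0, a)

def coherence_map(img, k=5, eps=1e-12):
    """Directional coherence (l1 - l2) / (l1 + l2), in [0, 1], from the
    eigenvalues of the smoothed 2x2 gradient structure tensor."""
    gy, gx = np.gradient(img.astype(float))
    Jxx = box_filter(gx * gx, k)
    Jyy = box_filter(gy * gy, k)
    Jxy = box_filter(gx * gy, k)
    # eigenvalues of [[Jxx, Jxy], [Jxy, Jyy]] are trace/2 +- root,
    # so (l1 - l2) = 2 * root and (l1 + l2) = trace
    root = np.sqrt(((Jxx - Jyy) / 2.0) ** 2 + Jxy ** 2)
    return 2.0 * root / (Jxx + Jyy + eps)

# Vertical stripes have one dominant gradient direction -> coherence near 1.
x = np.arange(64)
stripes = np.sin(2 * np.pi * x / 8.0) * np.ones((64, 1))
coh = coherence_map(stripes)
```

    High coherence marks pixels with one dominant gradient orientation (edges, ripples); isotropic regions score near zero, which is what makes the map useful as an enhanced image.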

    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, deep learning, a major breakthrough in the field, has proven to be an extremely powerful tool in many areas. Shall we embrace deep learning as the key to everything? Or should we resist a 'black-box' solution? Opinions in the remote sensing community are divided. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review recent advances, and provide resources that make deep learning in remote sensing ridiculously simple to start with. More importantly, we encourage remote sensing scientists to bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization. Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine.

    Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources

    Central to the looming paradigm shift toward data-intensive science, machine-learning techniques are becoming increasingly important. In particular, deep learning has proven to be both a major breakthrough and an extremely powerful tool in many fields. Shall we embrace deep learning as the key to everything? Or should we resist a black-box solution? These are controversial issues within the remote-sensing community. In this article, we analyze the challenges of using deep learning for remote-sensing data analysis, review recent advances, and provide resources we hope will make deep learning in remote sensing seem ridiculously simple. More importantly, we encourage remote-sensing scientists to bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges, such as climate change and urbanization.

    Convolutional Neural Networks for Land-cover Classification Using Multispectral Airborne Laser Scanning Data

    With the spread of urban culture, urbanisation is progressing rapidly and globally. Accurate and up-to-date land cover (LC) information is increasingly critical for protecting ecosystems, studying climate change, and sustaining human-environment development. It has been verified that combining spectral information from remotely sensed imagery with 3D spatial information from airborne laser scanning (ALS) point clouds achieves better LC classification accuracy than using either source alone. However, data fusion can introduce multiple errors. To address this problem, recently developed multispectral ALS can acquire point cloud data with multiple spectral channels simultaneously. Moreover, deep neural networks have proved to be a better option for LC classification than statistical classification approaches. This study develops a workflow for automated pixel-wise LC classification from multispectral ALS data using deep-learning methods. Six input datasets with a multi-tiered architecture and three deep-learning classification networks (1D, 2D, and 3D CNNs) were established to find the scheme that leads to the highest classification accuracy. The highest overall classification accuracy, 97.2%, was achieved using the proposed 3D CNN and the designed input dataset. Among the proposed CNNs, the overall accuracy (OA) of the 2D and 3D CNNs was, on average, 8.4% higher than that of the 1D CNN. Although the OA of the 2D CNN was at most 0.3% lower than that of the 3D CNN, the run time of the 3D CNN was five times longer than that of the 2D CNN; the 2D CNN was therefore the best choice for multispectral ALS LC classification when efficiency is considered. Across input datasets, the OA of the designed input datasets was, on average, 3.8% higher than that of the classic input datasets. Results also showed that multispectral ALS data are superior to both multispectral optical imagery and single-wavelength ALS data for LC classification. In conclusion, this thesis suggests that LC classification can be improved by using multispectral ALS data and deep-learning methods.
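    The practical difference between the 1D and 2D/3D network inputs can be sketched as a data-preparation step: a 1D CNN sees only each pixel's multichannel spectrum, while a 2D CNN sees a spatial patch around it. The raster size, the three channels (e.g. the three laser wavelengths of a multispectral ALS system), and the 5x5 patch size below are illustrative assumptions:

```python
import numpy as np

def extract_inputs(raster, coords, k=5):
    """For each labeled pixel (r, c): a k x k x C patch (2D-CNN input)
    and the centre pixel's C-channel spectrum (1D-CNN input)."""
    pad = k // 2
    # reflect-pad so patches centred on border pixels stay in bounds
    padded = np.pad(raster, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    patches = np.stack([padded[r:r + k, c:c + k, :] for r, c in coords])
    spectra = np.stack([raster[r, c, :] for r, c in coords])
    return patches, spectra

H, W, C = 64, 64, 3  # 3 channels, e.g. three laser wavelengths
raster = np.random.default_rng(0).random((H, W, C))
coords = [(0, 0), (10, 20), (63, 63)]
patches, spectra = extract_inputs(raster, coords, k=5)
```

    The 2D network convolves over the two spatial axes of each patch; a 3D variant would also convolve across the channel axis, which is where its extra run time comes from.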

    Brain Tumor Diagnosis Support System: A Decision Fusion Framework

    An important factor in providing effective and efficient therapy for brain tumors is early and accurate detection, which can increase survival rates. Current image-based tumor detection and diagnosis techniques depend heavily on interpretation by neuro-specialists and/or radiologists, making the evaluation process time-consuming and prone to human error and subjectivity. Moreover, widespread use of MR spectroscopy requires specialized processing and assessment of the data, along with a clear and rapid display of the results as images or maps for routine clinical interpretation of an exam. Automatic brain tumor detection and classification have the potential to offer greater efficiency and more accurate predictions. However, the accuracy of automatic detection and classification techniques tends to depend on the specific image modality and is well known to vary from technique to technique. For this reason, it is prudent to examine the variation in performance of these methods to obtain consistently high levels of accuracy. The goal of the proposed framework is to design, implement, and evaluate classification software for discerning various brain tumor types on magnetic resonance imaging (MRI) using textural features. This thesis introduces a brain tumor detection support system that draws on a variety of tumor classifiers. The system is designed as a decision fusion framework that enables these multiple classifiers to analyze medical images, such as those obtained from MRI. The fusion procedure is grounded in Dempster-Shafer evidence theory. Numerous experimental scenarios have been implemented to validate the efficiency of the proposed framework. Compared with alternative approaches, the outcomes show that the methodology developed in this thesis achieves higher accuracy and higher computational efficiency.
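    The fusion step can be sketched with Dempster's rule of combination, which multiplies the mass functions of two sources and renormalises away the conflicting mass. The two mass functions below, from hypothetical classifiers over a two-class frame, are illustrative; the thesis's actual classifiers and classes are not shown:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions whose focal elements
    are frozensets; mass on empty intersections (conflict) is renormalised away."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two hypothetical tumor classifiers over the frame {glioma, meningioma}.
G, M = frozenset({"glioma"}), frozenset({"meningioma"})
E = G | M                       # mass on "either" expresses ignorance
m1 = {G: 0.7, E: 0.3}           # classifier 1: fairly sure it is a glioma
m2 = {G: 0.6, M: 0.2, E: 0.2}   # classifier 2: leans glioma, some doubt
fused = dempster_combine(m1, m2)
```

    Agreement between the sources concentrates the fused mass on the glioma hypothesis, which is exactly the behaviour a decision-fusion framework exploits.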