
    Hyperspectral Image Classification

    Hyperspectral image (HSI) classification is a powerful technique for analyzing diversified land cover in remotely sensed hyperspectral images. HSI classification is an established research topic in remote sensing, and its primary inherent challenges are (i) the curse of dimensionality and (ii) an insufficient sample pool during training. Given a set of observations with known class labels, the basic goal of hyperspectral image classification is to assign a class label to each pixel. This chapter discusses recent progress in the classification of HS images with respect to kernel-based methods, supervised and unsupervised classifiers, classification based on sparse representation, and spectral-spatial classification. Further, classification methods based on machine learning and future directions are discussed.
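    The per-pixel formulation described above can be sketched in a few lines: treat each pixel's spectrum as one feature vector and fit any classifier to labeled pixels. This is a minimal illustration using an RBF SVM (one of the kernel-based methods the chapter surveys) on a synthetic cube; the cube dimensions and class layout are invented for the example.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical toy cube: 10x10 pixels, 50 spectral bands (real HSIs have far more pixels).
H, W, B = 10, 10, 50
cube = rng.normal(size=(H, W, B))
cube[:, :5, :] += 1.0          # make the left half spectrally distinct (class 1)
labels = np.zeros((H, W), dtype=int)
labels[:, :5] = 1

# Per-pixel classification: flatten the cube so each pixel's spectrum is a sample.
X = cube.reshape(-1, B)        # (n_pixels, n_bands)
y = labels.reshape(-1)

# A kernel-based classifier (RBF SVM) assigns a class label to every pixel.
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
pred_map = clf.predict(X).reshape(H, W)
print(pred_map.shape)  # (10, 10)
```

    In practice the two challenges the chapter names show up exactly here: `B` is large relative to the number of labeled pixels, which is what motivates dimensionality reduction and spectral-spatial methods.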

    Sparse Coding for Data Augmentation of Hyperspectral Medical Images

    Hyperspectral imaging presents detailed information about the electromagnetic spectrum of an object in three dimensions. The significant point about hyperspectral images is that they contain tens or hundreds of spectral layers, which provide precise data about the composition of the studied material. Therefore, hyperspectral images have become popular in many fields of study, such as medical diagnostic imaging. Speed and precision are key to saving human lives in disease diagnosis, and applying machine learning techniques to medical hyperspectral images helps answer this need. Convolutional neural networks (CNNs) are among the most popular machine learning methods for classifying medical images. However, training neural networks generally requires a large dataset, and the small size of medical imaging datasets is a problem; sparse coding algorithms can help solve it. In this thesis, we propose sparse coding algorithms to regenerate the hyperspectral data and feed it to the CNN model for training. We focus on a colon cancer hyperspectral image dataset and different sparse coding methods using K-SVD and A+ (with and without patching) as dictionary learning methods. The reconstructed images were added to the original image set, yielding three new training sets with a doubled number of images (246) for training the CNN. Using the augmented datasets, the test accuracy rose to 86.53%, which is 30.13% higher than with the original dataset (56.4%). We also generated another dataset that mixes the three reconstruction methods, increasing the number of training images to 266. Using the mixed dataset, the accuracy reached 94.23%, and the gap between test and training accuracy dropped by 15.42%. Moreover, the precision increased to 100%, meaning no non-malignant image was classified as a lesional image.
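    The augmentation idea above (learn a dictionary, sparse-code the data, reconstruct it, and append the reconstructions to the training set) can be sketched as follows. The thesis uses K-SVD and A+ as the dictionary learners; scikit-learn has no K-SVD, so this sketch substitutes `MiniBatchDictionaryLearning` as an analogous method, and all shapes (123 images, 64 features) are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(1)

# Stand-in for flattened hyperspectral image vectors: 123 images, 64 features each.
X_train = rng.normal(size=(123, 64))

# Learn a dictionary and sparse-code the data (K-SVD plays this role in the thesis).
dico = MiniBatchDictionaryLearning(n_components=32, alpha=0.5, random_state=1)
codes = dico.fit_transform(X_train)       # sparse codes per image
X_recon = codes @ dico.components_        # reconstructed ("regenerated") images

# Augment: originals plus reconstructions double the training set, as 123 -> 246.
X_aug = np.vstack([X_train, X_recon])
print(X_aug.shape)  # (246, 64)
```

    The doubled set would then be fed to the CNN; running several dictionary learners and pooling their reconstructions gives the larger mixed training set described in the abstract.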

    Deep feature fusion via two-stream convolutional neural network for hyperspectral image classification

    The representation power of convolutional neural network (CNN) models for hyperspectral image (HSI) analysis is in practice limited by the available amount of labeled samples, which is often insufficient to sustain deep networks with many parameters. We propose a novel approach to boost the network representation power with a two-stream 2-D CNN architecture. The proposed method simultaneously extracts spectral features and local and global spatial features with two 2-D CNN networks, and makes use of channel correlations to identify the most informative features. Moreover, we propose a layer-specific regularization and a smooth normalization fusion scheme to adaptively learn the fusion weights for the spectral-spatial features from the two parallel streams. An important asset of our model is the simultaneous training of the feature extraction, fusion, and classification processes with the same cost function. Experimental results on several hyperspectral data sets demonstrate the efficacy of the proposed method compared with state-of-the-art methods in the field.
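    The fusion step can be illustrated in isolation. One common reading of a "smooth normalization fusion scheme" is learnable logits passed through a softmax so the two streams' weights stay positive and sum to one; that interpretation, and every shape below, is an assumption of this sketch, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical outputs of the two parallel streams for a batch of 8 pixels:
spectral_feat = rng.normal(size=(8, 128))   # spectral-stream features
spatial_feat  = rng.normal(size=(8, 128))   # spatial-stream features

# Learnable fusion logits (trained jointly with the classifier in the real model);
# a softmax yields smooth, normalized fusion weights.
fusion_logits = np.array([0.3, -0.1])
w = np.exp(fusion_logits) / np.exp(fusion_logits).sum()

# Weighted combination of the two streams' spectral-spatial features.
fused = w[0] * spectral_feat + w[1] * spatial_feat
print(fused.shape)  # (8, 128)
```

    Because the weights are differentiable functions of the logits, the fusion can be trained end to end with the feature extractors and the classifier under one cost function, as the abstract describes.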

    Land Use and Land Cover Classification Using Deep Learning Techniques

    Large datasets of sub-meter aerial imagery, represented as orthophoto mosaics, are widely available today, and these datasets may hold a great deal of untapped information. This imagery has the potential to locate several types of features, for example forests, parking lots, airports, residential areas, or freeways. However, the appearance of these features varies with many factors, including the time the image was captured, the sensor settings, the processing done to rectify the image, and the geographical and cultural context of the captured region. This thesis explores the use of deep convolutional neural networks to classify land use from very high spatial resolution (VHR), orthorectified, visible-band multispectral imagery. Recent technological and commercial applications have driven the collection of a massive amount of VHR images in the visible red, green, blue (RGB) spectral bands; this work explores the potential for deep learning algorithms to exploit this imagery for automatic land use / land cover (LULC) classification. The benefits of automatic visible-band VHR LULC classification may include applications such as automatic change detection or mapping. Recent work has shown the potential of deep learning approaches for land use classification; this thesis improves on the state of the art by applying additional dataset augmentation approaches that are well suited for geospatial data. Furthermore, the generalizability of the classifiers is tested by extensively evaluating them on unseen datasets, and we report the classifiers' accuracy levels to show that the results generalize beyond the small benchmarks used in training. Deep networks have many parameters and are therefore often built with very large sets of labeled data.
    Suitably large datasets for LULC are not easy to come by, but techniques such as refinement learning allow networks trained for one task to be retrained to perform another recognition task. Contributions of this thesis include demonstrating that deep networks trained for image recognition on one task (ImageNet) can be efficiently transferred to remote sensing applications and perform as well as or better than manually crafted classifiers, without requiring massive training datasets. This is demonstrated on the UC Merced dataset, where 96% mean accuracy is achieved using a CNN (convolutional neural network) and 5-fold cross-validation. These results are further tested on unrelated VHR images at the same resolution as the training set.
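    The transfer-learning evaluation described above can be sketched in its simplest form: freeze a pretrained network, take its features for each aerial tile, train only a new classifier head, and score it with 5-fold cross-validation. Here the "ImageNet features" are random stand-ins with an injected class offset; the tile count, feature dimension, and two-class setup are assumptions of the sketch, not the thesis's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Stand-in for pretrained-CNN features of 100 aerial tiles (256-D, 2 LULC classes).
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 256)) + y[:, None] * 2.0   # offset makes classes separable

# Refinement learning in miniature: fit only a new head on the frozen features,
# scored with 5-fold cross-validation as in the UC Merced evaluation protocol.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.shape, scores.mean())
```

    Reporting the mean of the five fold accuracies is what "96% mean accuracy ... and 5-fold cross-validation" refers to; testing the same fitted head on tiles from an unrelated VHR source is the generalization check the thesis adds.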