8 research outputs found

    Survey on Faster Region Convolution Neural Network for Object Detection

    Get PDF
    Convolutional Neural Networks (CNNs) build on deep learning and have become the gold standard for image classification. They have been applied even in complicated scenes with multiple overlapping objects and varied backgrounds, successfully identifying and classifying objects along with their boundaries, differences, and relations to one another. Region-based Convolutional Neural Networks (R-CNN) followed, later refined into two variants, Fast R-CNN and Faster R-CNN. The original R-CNN uses selective search to extract roughly 2000 region proposals from each image and cannot run in real time, taking approximately 47 seconds per test image. Fast R-CNN overcomes this drawback: instead of feeding the 2000 region proposals to the CNN individually, the whole image is passed through the CNN once to generate a convolutional feature map. Faster R-CNN then eliminated the selective-search step entirely, reducing inference to approximately 0.2 seconds per test image and making real-time object detection practical. This paper surveys Faster R-CNN as used for object detection.
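One step shared by all the detectors the abstract compares (R-CNN, Fast R-CNN, and Faster R-CNN) is pruning overlapping candidate boxes with non-maximum suppression (NMS). A minimal NumPy sketch of NMS, with hypothetical boxes and scores for illustration:

```python
import numpy as np

def iou(box, boxes):
    # Intersection-over-union of one [x1, y1, x2, y2] box against many.
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    # Greedily keep the highest-scoring box, drop boxes that overlap it.
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thresh]
    return keep

# Two heavily overlapping detections plus one separate detection.
boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)  # the lower-scoring overlapping box is suppressed
```

The greedy loop is why proposal count matters for speed: R-CNN must score and suppress ~2000 boxes per image, whereas Faster R-CNN's learned proposals are far cheaper to generate.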

    Classification of Hyperspectral and LiDAR Data Using Coupled CNNs

    Get PDF
    In this paper, we propose an efficient and effective framework to fuse hyperspectral and Light Detection And Ranging (LiDAR) data using two coupled convolutional neural networks (CNNs). One CNN is designed to learn spectral-spatial features from hyperspectral data, and the other is used to capture the elevation information from LiDAR data. Both of them consist of three convolutional layers, and the last two convolutional layers are coupled together via a parameter-sharing strategy. In the fusion phase, feature-level and decision-level fusion methods are used simultaneously to integrate these heterogeneous features. For the feature-level fusion, three different fusion strategies are evaluated: the concatenation strategy, the maximization strategy, and the summation strategy. For the decision-level fusion, a weighted summation strategy is adopted, where the weights are determined by the classification accuracy of each output. The proposed model is evaluated on an urban data set acquired over Houston, USA, and a rural one captured over Trento, Italy. On the Houston data, our model achieves a new record overall accuracy of 96.03%. On the Trento data, it achieves an overall accuracy of 99.12%. These results clearly demonstrate the effectiveness of the proposed model.
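The decision-level rule described above, a weighted summation whose weights come from each branch's classification accuracy, can be sketched in a few lines of NumPy. The per-branch probabilities and accuracies here are hypothetical placeholders, not values from the paper:

```python
import numpy as np

# Hypothetical class-probability outputs of the two branches
# (3 samples, 4 classes); each row sums to 1.
p_hsi = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.2, 0.5, 0.2, 0.1],
                  [0.1, 0.1, 0.2, 0.6]])
p_lidar = np.array([[0.4, 0.3, 0.2, 0.1],
                    [0.1, 0.7, 0.1, 0.1],
                    [0.3, 0.3, 0.2, 0.2]])

# Weights proportional to each branch's (assumed) validation accuracy,
# normalized so the fused output is still a probability distribution.
acc_hsi, acc_lidar = 0.92, 0.85
w = np.array([acc_hsi, acc_lidar]) / (acc_hsi + acc_lidar)

fused = w[0] * p_hsi + w[1] * p_lidar   # weighted summation
pred = fused.argmax(axis=1)             # final class decision
```

Because the weights sum to 1, each fused row remains a valid distribution; the more accurate branch simply contributes more to each decision.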

    Spectral-Spatial Graph Reasoning Network for Hyperspectral Image Classification

    Full text link
    In this paper, we propose a spectral-spatial graph reasoning network (SSGRN) for hyperspectral image (HSI) classification. Concretely, this network contains two parts, a spatial graph reasoning subnetwork (SAGRN) and a spectral graph reasoning subnetwork (SEGRN), which capture the spatial and spectral graph contexts, respectively. Different from previous approaches, which apply superpixel segmentation to the original image or attempt to obtain category features under the guidance of the label image, we perform superpixel segmentation on intermediate features of the network to adaptively produce homogeneous regions and obtain effective descriptors. We then adopt a similar idea in the spectral part, aggregating channels to generate spectral descriptors for capturing spectral graph contexts. All graph reasoning in SAGRN and SEGRN is carried out through graph convolution. To guarantee the global perception ability of the proposed methods, all adjacency matrices in the graph reasoning are obtained with the help of a non-local self-attention mechanism. Finally, by combining the extracted spatial and spectral graph contexts, the SSGRN achieves high-accuracy classification. Extensive quantitative and qualitative experiments on three public HSI benchmarks demonstrate the competitiveness of the proposed methods compared with other state-of-the-art approaches.
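The core mechanism the abstract describes, building an adjacency matrix from non-local self-attention and then reasoning over it with graph convolution, can be illustrated generically. This is a minimal sketch with random descriptors and weights, not the SSGRN architecture itself:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n, d = 6, 8                       # n region/channel descriptors of dim d
X = rng.normal(size=(n, d))       # stand-in for superpixel/spectral descriptors
Wq = rng.normal(size=(d, d)) / np.sqrt(d)   # query projection
Wk = rng.normal(size=(d, d)) / np.sqrt(d)   # key projection
Wg = rng.normal(size=(d, d)) / np.sqrt(d)   # graph-convolution weights

# Non-local self-attention: adjacency from scaled pairwise
# query-key similarity, row-normalized with softmax.
A = softmax((X @ Wq) @ (X @ Wk).T / np.sqrt(d), axis=-1)

# One graph-convolution step: propagate descriptors along A,
# transform, then apply a ReLU nonlinearity.
H = np.maximum(A @ X @ Wg, 0.0)
```

Because every descriptor attends to every other, the adjacency is dense, which is what gives this style of graph reasoning its global receptive field.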

    Deep Learning Meets Hyperspectral Image Analysis: A Multidisciplinary Review

    Get PDF
    Modern hyperspectral imaging systems produce huge datasets potentially conveying a great abundance of information; such a resource, however, poses many challenges in the analysis and interpretation of these data. Deep learning approaches certainly offer a great variety of opportunities for solving classical imaging tasks and also for approaching new stimulating problems in the spatial–spectral domain. This is fundamental in the driving sector of Remote Sensing, where hyperspectral technology was born and has mostly developed, but it is perhaps even more true in the multitude of current and evolving application sectors that involve these imaging technologies. The present review develops on two fronts: on the one hand, it is aimed at domain professionals who want an updated overview of how hyperspectral acquisition techniques can combine with deep learning architectures to solve specific tasks in different application fields. On the other hand, we target machine learning and computer vision experts by giving them a picture of how deep learning technologies are applied to hyperspectral data from a multidisciplinary perspective. The presence of these two viewpoints and the inclusion of application fields other than Remote Sensing are the original contributions of this review, which also highlights some potentialities and critical issues related to the observed development trends.

    Feature Extraction for Classification of Hyperspectral and LiDAR Data Using Patch-to-Patch CNN

    No full text