
    Feature extraction and fusion for classification of remote sensing imagery


    Confidence Propagation through CNNs for Guided Sparse Depth Regression

    Generally, convolutional neural networks (CNNs) process data on a regular grid, e.g., data generated by ordinary cameras. Designing CNNs for sparse and irregularly spaced input data is still an open research problem with numerous applications in autonomous driving, robotics, and surveillance. In this paper, we propose an algebraically-constrained normalized convolution layer for CNNs with highly sparse input that has a smaller number of network parameters compared to related work. We propose novel strategies for determining the confidence from the convolution operation and propagating it to consecutive layers. We also propose an objective function that simultaneously minimizes the data error while maximizing the output confidence. To integrate structural information, we also investigate fusion strategies to combine depth and RGB information in our normalized convolution network framework. In addition, we introduce the use of output confidence as auxiliary information to improve the results. The capabilities of our normalized convolution network framework are demonstrated for the problem of scene depth completion. Comprehensive experiments are performed on the KITTI-Depth and the NYU-Depth-v2 datasets. The results clearly demonstrate that the proposed approach achieves superior performance while requiring only about 1-5% of the number of parameters compared to the state-of-the-art methods. (14 pages, 14 figures)
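The normalized-convolution idea can be illustrated with a minimal NumPy sketch (not the paper's actual layer): the sparse input is weighted by its confidence, convolved, and divided by the convolved confidence, so missing samples are ignored. The propagated-confidence rule used here (normalized applicability sum) and the function name are illustrative assumptions.

```python
import numpy as np

def normalized_conv2d(x, conf, kernel):
    """Single-channel normalized convolution sketch: convolve the
    confidence-weighted signal, divide by the convolved confidence, and
    propagate a confidence map to the next layer."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x * conf, ((ph, ph), (pw, pw)))   # confidence-weighted data
    cp = np.pad(conf, ((ph, ph), (pw, pw)))       # confidence itself
    out = np.zeros_like(x, dtype=float)
    out_conf = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            num = np.sum(kernel * xp[i:i + kh, j:j + kw])
            den = np.sum(kernel * cp[i:i + kh, j:j + kw])
            out[i, j] = num / den if den > 1e-8 else 0.0
            out_conf[i, j] = den / kernel.sum()   # assumed propagation rule
    return out, out_conf
```

Because the denominator only counts observed samples, a pixel whose neighborhood contains even one valid depth measurement gets a sensible interpolated value, while fully unobserved neighborhoods keep zero confidence.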

    Object-based Urban Building Footprint Extraction and 3D Building Reconstruction from Airborne LiDAR Data

    Buildings play an essential role in urban infrastructure, urban planning, climate studies, and disaster management. Precise knowledge of buildings not only serves as a primary source for interpreting complex urban characteristics, but also provides decision makers with more realistic and multidimensional scenarios for urban management. In this thesis, 2D extraction and 3D reconstruction methods are proposed to map and visualize urban buildings. Chapter 2 presents an object-based method for extracting building footprints using LiDAR-derived NDTI (Normalized Difference Tree Index) and intensity data. An overall accuracy of 94.0% and a commission error of 6.3% are achieved in building extraction, with a Kappa of 0.84. Chapter 3 presents a GIS-based 3D building reconstruction method. The results indicate that the method is effective for generating 3D building models: 91.4% completeness of roof plane identification is achieved, and the overall accuracy of flat and pitched roof plane classification is 88.81%, with user's accuracy of 97.75% for flat roof planes and 100% for pitched roof planes.
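The abstract does not reproduce the NDTI formula; assuming it follows the standard normalized-difference pattern used by indices such as NDVI, a generic sketch looks like the following. Which LiDAR-derived quantities play the roles of `a` and `b` is an assumption here, not stated in the abstract.

```python
import numpy as np

def normalized_difference(a, b, eps=1e-9):
    """Generic normalized-difference index (a - b) / (a + b), the
    NDVI-style pattern an index such as NDTI plausibly follows. The
    small eps guards against division by zero."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return (a - b) / (a + b + eps)
```

The result is bounded in [-1, 1], which is what makes such indices convenient thresholds for separating cover classes.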

    Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing

    Hyperspectral imaging provides increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each individual pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface material. This abundant spectral knowledge allows all available information in the data to be mined, and gives hyperspectral imaging wide applications such as mineral exploration, agriculture monitoring, and ecological surveillance. Processing massive high-dimensional HSI datasets is a challenge, since many data processing techniques have a computational complexity that grows exponentially with the dimension. Moreover, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, merely relying on the sampled spectrum of an individual HSI data point may produce inaccurate results due to the mixed nature of raw HSI data, such as mixed pixels and optical interference. Fusion strategies are widely adopted in data processing to achieve better performance, especially in classification and clustering. There are mainly three types of fusion strategies: low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that are expected to be complementary or cooperative. Intermediate-level feature fusion aims at the selection and combination of features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. These fusion strategies have wide applications, including HSI data processing. With the fast development of multiple remote sensing modalities, e.g. 
Very High Resolution (VHR) optical sensors and LiDAR, fusion of multi-source data can in principle produce more detailed information than any single source. Besides the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective. Feature fusion also includes the strategy of removing redundant and noisy features from the dataset. One of the major problems in machine learning and pattern recognition is developing appropriate representations for complex nonlinear data. In HSI processing, a particular data point is usually described as a vector whose coordinates correspond to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations from linear algebra to find an alternative representation of the data. More generally, HSI is multi-dimensional in nature, and the vector representation may lose contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization of linear subspace analysis. In graph theory, data points can be generalized as nodes with connectivities measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection, and data alignment. In this thesis, graph-based approaches to multi-source feature and data fusion in remote sensing are explored. We mainly investigate the fusion of spatial, spectral, and LiDAR information with linear and multilinear algebra under a graph-based framework for data clustering and classification problems.
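A common way to set up such a graph-based framework is a k-nearest-neighbour affinity graph over the (possibly fused) feature vectors. The sketch below, with assumed Gaussian edge weights, shows the construction and the unnormalized Laplacian that many clustering and feature-extraction methods then operate on; it is a generic illustration, not the thesis's specific model.

```python
import numpy as np

def knn_graph(X, k=2, sigma=1.0):
    """Build a symmetric k-nearest-neighbour affinity graph from feature
    vectors (rows of X) with Gaussian edge weights. Returns the affinity
    matrix W and the unnormalized graph Laplacian L = D - W."""
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]            # skip the point itself
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                           # symmetrize
    L = np.diag(W.sum(axis=1)) - W
    return W, L
```

Each row of L sums to zero by construction, which is the property spectral clustering and graph-embedding methods exploit.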

    Combining feature fusion and decision fusion for classification of hyperspectral and LiDAR data

    This paper proposes a method that combines feature fusion and decision fusion for multi-sensor data classification. First, morphological features, which contain elevation and spatial information, are generated on both the LiDAR data and the first few principal components (PCs) of the original hyperspectral (HS) image. The fused features are obtained by projecting the spectral (original HS image), spatial, and elevation features onto a lower-dimensional subspace through a graph-based feature fusion method. Then, four classification maps are obtained by using the spectral features, spatial features, elevation features, and graph-fused features individually as input to an SVM classifier. The final classification map is obtained by fusing the four classification maps through weighted majority voting. Experimental results on the fusion of HS and LiDAR data from the 2013 IEEE GRSS Data Fusion Contest demonstrate the effectiveness of the proposed method: compared to methods using a single data source or only feature fusion, the proposed method improves overall classification accuracy by 10% and 2%, respectively.
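The weighted majority voting step can be sketched as follows. The choice of weights (e.g., each classifier's overall accuracy) and the function name are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def weighted_majority_vote(maps, weights, n_classes):
    """Fuse several per-pixel classification maps by weighted majority
    voting: each map casts a vote for its predicted class at every
    pixel, scaled by that classifier's weight."""
    maps = np.asarray(maps)                       # shape (n_maps, H, W)
    votes = np.zeros((n_classes,) + maps.shape[1:])
    for m, w in zip(maps, weights):
        for c in range(n_classes):
            votes[c] += w * (m == c)              # accumulate weighted votes
    return votes.argmax(axis=0)                   # winning class per pixel
```

With four input maps (spectral, spatial, elevation, graph-fused), ties are broken toward the lower class index; in practice the weights would come from validation accuracy.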

    Vegetation Detection and Classification for Power Line Monitoring

    Electrical network maintenance inspections must be executed regularly to ensure a continuous distribution of electricity. In heavily forested countries, the electrical network is mostly located within the forest. During these inspections, it is therefore also necessary to ensure that vegetation growing close to the power line does not endanger it by provoking forest fires or power outages. Several remote sensing techniques have been studied in recent years to replace the labor-intensive and costly traditional approaches, whether field-based or airborne surveillance. Besides the previously mentioned disadvantages, these approaches are also prone to error, since they depend on a human operator's interpretation. In recent years, the applicability of Unmanned Aerial Vehicle (UAV) platforms for this purpose has been under debate, due to their flexibility and potential for customisation, as well as the fact that they can fly close to the power lines. The present study proposes a vegetation management and power line monitoring method using a UAV platform. The method starts with the collection of point cloud data in a forest environment composed of power line structures and nearby vegetation. Multiple steps follow, including: detection of objects in the working environment; classification of said objects into their respective class labels (vegetation or power line structures) using a feature-based classifier; and optimisation of the classification results using point cloud filtering or segmentation algorithms. The method is tested using both synthetic and real data of forested areas containing power line structures. The overall accuracy of the classification process is about 87% for synthetic data and 97-99% for real data. After the optimisation process, these values are refined to 92% for synthetic data and nearly 100% for real data. 
    A detailed comparison and discussion of results is presented, providing the most important evaluation metrics and a visual representation of the attained results.
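One geometric feature such a feature-based point cloud classifier could rely on is local linearity computed from covariance eigenvalues, since power line points form near-linear structures while vegetation is volumetric. The thesis's actual feature set is not given in the abstract, so the sketch below is illustrative, with an assumed threshold.

```python
import numpy as np

def linearity(points):
    """Linearity of a point neighbourhood from the eigenvalues of its
    3x3 covariance matrix: (l1 - l2) / l1 with l1 >= l2 >= l3. Values
    near 1 indicate wire-like geometry; low values indicate scattered,
    volumetric structures such as vegetation."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)
    ev = np.sort(np.linalg.eigvalsh(cov))[::-1]   # eigenvalues, descending
    return (ev[0] - ev[1]) / (ev[0] + 1e-12)

def classify_cluster(points, threshold=0.9):
    """Assumed decision rule: highly linear clusters -> power line."""
    return "power_line" if linearity(points) > threshold else "vegetation"
```

A real pipeline would combine several such descriptors (planarity, sphericity, height above ground) per segmented cluster before the final label is assigned.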

    Recent Advances in Image Restoration with Applications to Real World Problems

    In the past few decades, imaging hardware has improved tremendously in terms of resolution, enabling the widespread use of images in many diverse applications on Earth and in planetary missions. However, practical issues associated with image acquisition still affect image quality. Issues such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution can seriously affect the accuracy of the aforementioned applications. This book intends to provide the reader with a glimpse of the latest developments and recent advances in image restoration, including image super-resolution, image fusion to enhance spatial, spectral, and temporal resolution, and the generation of synthetic images using deep learning techniques. Some practical applications are also included.

    Low-Shot Learning for the Semantic Segmentation of Remote Sensing Imagery

    Deep-learning frameworks have made remarkable progress thanks to the creation of large annotated datasets such as ImageNet, which has over one million training images. Although this works well for color (RGB) imagery, labeled datasets for other sensor modalities (e.g., multispectral and hyperspectral) are minuscule in comparison. This is because annotated datasets are expensive and labor-intensive to produce; and since this would be impractical to accomplish for each type of sensor, current state-of-the-art approaches in computer vision are not ideal for remote sensing problems. The shortage of annotated remote sensing imagery beyond the visual spectrum has forced researchers to embrace unsupervised feature extraction frameworks. These features are learned on a per-image basis, so they tend not to generalize well across other datasets. In this dissertation, we propose three new strategies for learning feature extraction frameworks with only a small quantity of annotated image data: 1) self-taught feature learning, 2) domain adaptation with synthetic imagery, and 3) semi-supervised classification. "Self-taught" feature learning frameworks are trained with large quantities of unlabeled imagery, and these networks then extract spatial-spectral features from annotated data for supervised classification. Synthetic remote sensing imagery can be used to bootstrap a deep convolutional neural network, which we then fine-tune with real imagery. Semi-supervised classifiers prevent overfitting by jointly optimizing the supervised classification task alongside one or more unsupervised learning tasks (i.e., reconstruction). Although obtaining large quantities of annotated image data would be ideal, our work shows that we can make do with less cost-prohibitive methods that are more practical for the end user.
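The joint optimization behind the semi-supervised strategy can be sketched as a combined objective: supervised cross-entropy on the labeled subset plus a reconstruction error over all samples. The trade-off weight and names below are assumptions; the dissertation's actual loss is not given in the abstract.

```python
import numpy as np

def semi_supervised_loss(logits, x_rec, x, y_onehot, labeled_mask, alpha=0.5):
    """Combined objective sketch: cross-entropy on labeled samples plus
    reconstruction MSE on all samples, weighted by an assumed alpha."""
    # Numerically stable softmax over the class logits
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    p = e / e.sum(axis=1, keepdims=True)
    # Supervised term: only the labeled subset contributes
    ce = -np.sum(y_onehot[labeled_mask] * np.log(p[labeled_mask] + 1e-12),
                 axis=1).mean()
    # Unsupervised term: reconstruction error on every sample
    mse = np.mean((x_rec - x) ** 2)
    return ce + alpha * mse
```

Because the reconstruction term covers unlabeled samples too, it regularizes the shared features even when very few labels are available, which is the overfitting-prevention mechanism the abstract describes.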