
    Schroedinger Eigenmaps for Manifold Alignment of Multimodal Hyperspectral Images

    Multimodal remote sensing is an emerging field, as it provides many views of the same region of interest. Domain adaptation attempts to fuse these multimodal remotely sensed images by using transfer learning to relate data from different sources and learn a fused outcome. Semisupervised Manifold Alignment (SSMA) maps multiple hyperspectral images (HSIs) from high-dimensional source spaces to a low-dimensional latent space in which similar elements reside close together. SSMA preserves the original geometric structure of the respective HSIs while pulling similar data points together and pushing dissimilar data points apart. The SSMA algorithm comprises a geometric component, a similarity component, and a dissimilarity component. The geometric component has roots in the original Laplacian Eigenmaps (LE) dimensionality reduction algorithm, and the projection functions have roots in the original Locality Preserving Projections (LPP) dimensionality reduction framework. The similarity and dissimilarity components are semisupervised, allowing expert-labeled information to improve the image fusion process. Spatial-Spectral Schroedinger Eigenmaps (SSSE) was designed as a semisupervised enhancement to the LE algorithm that augments the Laplacian matrix with a user-defined potential function; however, this enhancement has not yet been explored in the LPP framework. The first part of this thesis proposes using the spatial-spectral potential within the LPP algorithm, creating a new algorithm we call Schroedinger Eigenmap Projections (SEP). Using publicly available data with expert-labeled ground truth, we perform experiments comparing the performance of the SEP algorithm with that of the LPP algorithm. The second part of this thesis proposes incorporating the spatial-spectral potential from SSSE into the SSMA framework. Using two multi-angled HSIs, we explore the impact of incorporating this potential into SSMA.
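
The abstract above describes augmenting a graph Laplacian with a user-defined potential and embedding the data in a low-dimensional space. As a hedged illustration of that core idea only (not the thesis implementation), the sketch below builds a k-nearest-neighbor graph over spectral vectors, adds a diagonal potential V weighted by an assumed parameter alpha, and solves the resulting generalized eigenproblem; the neighborhood size, potential values, and alpha are placeholders.

```python
# Minimal sketch (not the thesis code): Laplacian Eigenmaps augmented with a
# diagonal "Schroedinger" potential, the core idea behind SSSE/SEP.
import numpy as np
from scipy.sparse import csgraph
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def schroedinger_embedding(X, potential, n_components=2, n_neighbors=10, alpha=1.0):
    """X: (n_samples, n_bands) spectra; potential: nonnegative (n_samples,) vector."""
    # Symmetric k-NN affinity graph on the spectral vectors
    W = kneighbors_graph(X, n_neighbors, mode="connectivity", include_self=False)
    W = 0.5 * (W + W.T)
    L = csgraph.laplacian(W, normed=False).toarray()
    V = np.diag(potential)                                  # user-defined potential
    D = np.diag(np.asarray(W.sum(axis=1)).ravel())          # degree matrix
    # Generalized eigenproblem (L + alpha*V) f = lambda D f; skip the first eigenvector
    vals, vecs = eigh(L + alpha * V, D)
    return vecs[:, 1:n_components + 1]
```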

    Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing

    Hyperspectral imaging provides increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each individual pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface material. This abundance of spectral information allows all available information in the data to be mined and supports a wide range of applications, such as mineral exploration, agricultural monitoring, and ecological surveillance. Processing massive high-dimensional HSI datasets is a challenge, since many data processing techniques have a computational complexity that grows exponentially with the dimension. Moreover, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, relying solely on the sampled spectrum of an individual HSI data point may produce inaccurate results because of the mixed nature of raw HSI data, such as mixed pixels and optical interference.
Fusion strategies are widely adopted in data processing to achieve better performance, especially in classification and clustering. There are three main types of fusion strategies: low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that is expected to be complementary or cooperative. Intermediate-level feature fusion selects and combines features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. These strategies have wide applications, including HSI data processing. With the fast development of multiple remote sensing modalities, e.g. Very High Resolution (VHR) optical sensors and LiDAR, fusion of multi-source data can in principle produce more detailed information than any single source. Beyond the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective, and feature fusion also includes the strategy of removing redundant and noisy features from the dataset.
One of the major problems in machine learning and pattern recognition is to develop appropriate representations for complex nonlinear data. In HSI processing, a particular data point is usually described as a vector whose coordinates correspond to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations from linear algebra to find an alternative representation of the data. More generally, HSI is multi-dimensional in nature, and the vector representation may lose contextual correlations; tensor representation provides a more sophisticated modeling technique and a higher-order generalization of linear subspace analysis. In graph theory, data points can be generalized as nodes with connectivities measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection, and data alignment.
In this thesis, graph-based approaches to multi-source feature and data fusion in remote sensing are explored. We mainly investigate the fusion of spatial, spectral, and LiDAR information with linear and multilinear algebra within a graph-based framework for data clustering and classification problems.
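
As a rough, hedged illustration of the graph-based fusion idea described above (not the thesis pipeline), the sketch below stacks spectral bands with a LiDAR-derived height feature, builds a nearest-neighbor proximity graph, and groups the nodes with spectral clustering; the feature choice and all parameters are assumptions.

```python
# Illustrative sketch only: low-level fusion of spectral and LiDAR-derived
# features followed by graph-based clustering.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import SpectralClustering

def graph_fusion_clustering(spectral, lidar_height, n_clusters=6, n_neighbors=15):
    """spectral: (n_pixels, n_bands); lidar_height: (n_pixels,) e.g. nDSM values."""
    X = np.hstack([spectral, lidar_height[:, None]])   # stack the two sources
    X = StandardScaler().fit_transform(X)              # balance heterogeneous scales
    model = SpectralClustering(n_clusters=n_clusters,
                               affinity="nearest_neighbors",
                               n_neighbors=n_neighbors,
                               assign_labels="kmeans")
    return model.fit_predict(X)                        # cluster label per pixel
```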

    Segmentation and Classification of Multimodal Imagery

    Segmentation and classification are two important computer vision tasks that transform input data into a compact representation that allows fast and efficient analysis. Several challenges exist in generating accurate segmentation or classification results. In a video, for example, objects often change appearance and are partially occluded, making it difficult to delineate an object from its surroundings. This thesis proposes video segmentation and aerial image classification algorithms to address some of these problems and provide accurate results. We developed a gradient-driven three-dimensional segmentation technique that partitions a video into spatiotemporal objects. The algorithm utilizes the local gradient computed at each pixel location, together with a global boundary map acquired through deep learning methods, to generate initial pixel groups by traversing from low- to high-gradient regions. A local clustering method is then employed to refine these initial pixel groups. The refined sub-volumes in the homogeneous regions of the video are selected as initial seeds and iteratively combined with adjacent groups based on intensity similarities. The volume growth is terminated at the color boundaries of the video. The over-segments obtained from the above steps are then merged hierarchically by a multivariate approach, yielding a final segmentation map for each frame. In addition, we implemented a streaming version of the above algorithm that requires less computational memory. The results illustrate that our proposed methodology compares favorably, both qualitatively and quantitatively, with the latest state-of-the-art techniques in segmentation quality and computational efficiency. We also developed a convolutional neural network (CNN)-based method to efficiently combine information from multisensor remotely sensed images for pixel-wise semantic classification. The CNN features obtained from multiple spectral bands are fused at the initial layers of the deep neural network rather than at the final layers. The early-fusion architecture has fewer parameters and thereby reduces computational time and GPU memory during training and inference. We also introduce a composite architecture that fuses features throughout the network. The methods were validated on four different datasets: ISPRS Potsdam, ISPRS Vaihingen, IEEE Zeebruges, and a Sentinel-1/Sentinel-2 dataset. For the Sentinel-1/2 dataset, we obtain ground truth labels for three classes from OpenStreetMap. Results on all the images show that early fusion, specifically after the third layer of the network, achieves results similar to or better than a decision-level fusion mechanism. The performance of the proposed architecture is also on par with state-of-the-art results.
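
To make the early-fusion idea concrete, the following is a hedged sketch (not the authors' network): two sensor streams each pass through three convolutional layers, their feature maps are concatenated, and a shared trunk produces per-pixel class scores. Channel counts, layer sizes, and the two-stream split are illustrative assumptions.

```python
# Sketch of early fusion after the third convolutional layer, contrasted with
# fusing class scores at the end of separate networks (decision-level fusion).
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class EarlyFusionNet(nn.Module):
    def __init__(self, optical_bands=4, dsm_bands=1, n_classes=6):
        super().__init__()
        # Separate shallow encoders for each modality (layers 1-3)
        self.opt = nn.Sequential(conv_block(optical_bands, 32), conv_block(32, 32), conv_block(32, 64))
        self.dsm = nn.Sequential(conv_block(dsm_bands, 32), conv_block(32, 32), conv_block(32, 64))
        # Shared trunk after fusion, ending in per-pixel class scores
        self.trunk = nn.Sequential(conv_block(128, 128), conv_block(128, 128),
                                   nn.Conv2d(128, n_classes, 1))

    def forward(self, optical, dsm):
        fused = torch.cat([self.opt(optical), self.dsm(dsm)], dim=1)  # fuse after layer 3
        return self.trunk(fused)                                      # (N, n_classes, H, W)

# Example: logits = EarlyFusionNet()(torch.rand(1, 4, 256, 256), torch.rand(1, 1, 256, 256))
```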

    Development of Mining Sector Applications for Emerging Remote Sensing and Deep Learning Technologies

    This thesis uses neural networks and deep learning to address practical, real-world problems in the mining sector. The main focus is on developing novel applications in the area of object detection from remotely sensed data. This area has many potential mining applications and is an important part of moving towards data-driven strategic decision making across the mining sector. The scientific contributions of this research are twofold: first, each of the three case studies demonstrates a new application that couples remote sensing and neural-network-based technologies for improved data-driven decision making; second, the thesis presents a framework to guide implementation of these technologies in the mining sector, providing a guide for researchers and professionals undertaking further studies of this type. The first case study builds a fully connected neural network method to locate supporting rock bolts in 3D laser scan data. This method combines input features from the remote sensing and mobile robotics research communities, generating accuracy scores up to 22% higher than those obtained using either feature set in isolation. The neural network approach is also compared to the widely used random forest classifier and is shown to outperform it on the test datasets. Additionally, the algorithm's performance is enhanced by adding a confusion class to the training data and by grouping the output predictions using density-based spatial clustering. The method is tested on two datasets, gathered using different laser scanners in different types of underground mines with different rock bolting patterns. In both cases the method is found to be highly capable of detecting the rock bolts, with recall scores of 0.87-0.96. The second case study investigates modern deep learning for LiDAR data. Here, multiple transfer learning strategies and LiDAR data representations are examined for the task of identifying historic mining remains. A transfer learning approach based on a lunar crater detection model is used, due to the similarities between both the underlying data structures and the geometries of the objects to be detected. The relationship between dataset resolution and detection accuracy is also examined, with the results showing that the approach is capable of detecting pits and shafts to a high degree of accuracy, with precision and recall scores between 0.80 and 0.92, provided the input data is of sufficient quality and resolution. Alongside resolution, different LiDAR data representations are explored, showing that the precision-recall balance varies with the input LiDAR data representation. The third case study creates a deep convolutional neural network model to detect artisanal-scale mining from multispectral satellite data. This model is trained from initialisation without transfer learning and demonstrates that accurate multispectral models can be built from a smaller training dataset when appropriate design and data augmentation strategies are adopted. Alongside the deep learning model, novel mosaicing algorithms are developed both to improve cloud cover penetration and to decrease noise in the final prediction maps. When applied to the study area, the results from this model provide valuable information about the expansion, migration and forest encroachment of artisanal-scale mining in southwestern Ghana over the last four years.
Finally, this thesis presents an implementation framework for these neural-network-based object detection models, to generalise the findings from this research to new mining-sector deep learning tasks. This framework can be used to identify applications that would benefit from neural network approaches, to build the models, and to apply these algorithms in a real-world environment. The case study chapters confirm that the neural network models are capable of interpreting remotely sensed data to a high degree of accuracy on real-world mining problems, while the framework guides the development of new models to solve a wide range of related challenges.
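
As a hedged illustration of one post-processing step mentioned in the first case study (grouping per-point predictions with density-based spatial clustering), the sketch below thresholds per-point bolt probabilities and clusters the surviving points with DBSCAN to obtain one detection per bolt; the thresholds and scales are assumptions, not the thesis settings.

```python
# Rough sketch: turn point-wise "bolt" scores from a classifier into individual
# bolt detections by density-based spatial clustering of the candidate points.
import numpy as np
from sklearn.cluster import DBSCAN

def group_bolt_points(xyz, bolt_prob, prob_threshold=0.5, eps=0.10, min_samples=10):
    """xyz: (n_points, 3) scan coordinates in metres; bolt_prob: classifier scores."""
    candidates = xyz[bolt_prob > prob_threshold]            # keep likely bolt points
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(candidates)
    centres = [candidates[labels == k].mean(axis=0)         # one detection per cluster
               for k in set(labels) if k != -1]             # -1 is DBSCAN noise
    return np.array(centres)
```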

    Soil temperature investigations using satellite acquired thermal-infrared data in semi-arid regions

    Thermal-infrared data from the Heat Capacity Mapping Mission satellite were used to map the spatial distribution of diurnal surface temperatures and to estimate mean annual soil temperatures (MAST) and annual surface temperature amplitudes (AMP) in semi-arid east central Utah. Diurnal data with minimal snow and cloud cover were selected for five dates throughout a yearly period and geometrically co-registered. Rubber-sheet stretching was aided by the WARP program, which allowed preview of image transformations. Daytime maximum and nighttime minimum temperatures were averaged to generate an average daily temperature (ADT) data set for each of the five dates. The five ADT values for each pixel were used to fit a sine curve describing the theoretical annual surface temperature response, as defined by a solution of a one-dimensional heat flow equation. Linearization of the equation produced estimates of MAST and AMP plus associated confidence statistics. MAST values were grouped into classes and displayed on a color video screen. Diurnal surface temperatures and MAST were primarily correlated with elevation.
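
The annual sine fit described above becomes linear in its unknowns once the annual frequency is fixed, since T(t) = MAST + AMP*sin(wt + phi) can be rewritten as T(t) = MAST + a*sin(wt) + b*cos(wt) with AMP = sqrt(a^2 + b^2). The sketch below (an illustration, not the original software) solves this per pixel by least squares from the five ADT samples.

```python
# Linearized fit of the annual surface temperature cycle from a few ADT samples.
import numpy as np

def fit_annual_temperature(days_of_year, adt_values):
    """days_of_year, adt_values: arrays of equal length (e.g. the five ADT dates)."""
    w = 2.0 * np.pi / 365.0
    t = np.asarray(days_of_year, dtype=float)
    A = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
    (mast, a, b), *_ = np.linalg.lstsq(A, np.asarray(adt_values, float), rcond=None)
    amp = np.hypot(a, b)                    # annual surface temperature amplitude
    phase = np.arctan2(b, a)                # phase shift of the annual cycle
    return mast, amp, phase
```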

    A Class-Oriented Strategy for Features Extraction from Multidate ASTER Imagery

    In this paper we propose a hybrid classification method, adopting the best feature extraction strategy for each land cover class on multidate ASTER data. To enable an effective comparison among images, the Multivariate Alteration Detection (MAD) transformation was applied in the pre-processing phase, because of its high level of automation and reliability in enhancing change information among different images. Subsequently, different feature identification procedures, both spectral and object-based, were implemented to overcome problems of misclassification among classes with similar spectral responses. Lastly, a post-classification comparison was performed on the multidate ASTER-derived land cover (LC) maps to evaluate the effects of change in the study area.
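
As a hedged sketch of the MAD idea referenced above (a simplification that omits the iteratively reweighted variant and the usual ordering by canonical correlation), the snippet below runs canonical correlation analysis between two co-registered acquisitions and takes the differences of the canonical variates as change components; the component count is an assumption.

```python
# Simplified illustration of Multivariate Alteration Detection (MAD).
import numpy as np
from sklearn.cross_decomposition import CCA

def mad_variates(img_t1, img_t2, n_components=3):
    """img_t1, img_t2: (n_pixels, n_bands) arrays from two co-registered dates."""
    cca = CCA(n_components=n_components)
    U, V = cca.fit_transform(img_t1, img_t2)   # canonical variates of each date
    return U - V                               # MAD components highlight change
```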

    Automatic road network extraction in suburban areas from aerial images

    [no abstract]

    AOIPS water resources data management system

    A geocoded data management system was designed to demonstrate the utility of the Atmospheric and Oceanographic Information Processing System (AOIPS) for hydrological applications. Within that context, the geocoded hydrology data management system was designed to take advantage of the interactive capability of the AOIPS hardware. The portions of the Water Resource Data Management System that best demonstrate the interactive nature of the hydrology data management system were implemented on the AOIPS. A hydrological case study was prepared using all data supplied for the Bear River watershed, located in northwest Utah, southeast Idaho, and western Wyoming.

    Two and three dimensional segmentation of multimodal imagery

    The role of segmentation in image understanding/analysis, computer vision, pattern recognition, remote sensing, and medical imaging has been significantly augmented in recent years by accelerated scientific advances in the acquisition of image data. This low-level analysis protocol is critical to numerous applications, with the primary goal of expediting and improving the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of image information. In this research, we propose a novel unsupervised segmentation framework for facilitating meaningful segregation of 2-D/3-D image data across multiple modalities (color, remote sensing, and biomedical imaging) into non-overlapping partitions using several spatial-spectral attributes. Initially, our framework exploits the information obtained from detecting edges inherent in the data. To this end, using a vector gradient detection technique, pixels without edges are grouped and individually labeled to partition some initial portion of the input image content. Pixels that contain higher gradient densities are included through the dynamic generation of segments as the algorithm progresses, producing an initial region map. Subsequently, texture modeling is performed, and the obtained gradient, texture, and intensity information, together with the aforementioned initial partition map, are used in a multivariate refinement procedure that fuses groups with similar characteristics, yielding the final output segmentation. Experimental results, obtained in comparison with published state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery, demonstrate the advantages of the proposed method. Furthermore, to achieve improved computational efficiency, we propose an extension of the aforementioned methodology in a multi-resolution framework, demonstrated on color images. Finally, this research also encompasses a 3-D extension of the aforementioned algorithm, demonstrated on medical (Magnetic Resonance Imaging / Computed Tomography) volumes.
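
As a hedged illustration of the first stage described above (not the exact operators used in the thesis), the sketch below computes a multichannel vector gradient magnitude by accumulating per-band Sobel responses and keeps low-gradient pixels as seeds for the initial region map; the gradient operator and percentile threshold are assumed placeholders.

```python
# Sketch: multichannel (vector) gradient magnitude and low-gradient seed pixels.
import numpy as np
from scipy import ndimage

def low_gradient_seed_mask(image, seed_percentile=30):
    """image: (H, W, n_channels) array; returns a boolean mask of seed pixels."""
    grad_sq = np.zeros(image.shape[:2])
    for c in range(image.shape[2]):
        gx = ndimage.sobel(image[..., c], axis=1, output=float)
        gy = ndimage.sobel(image[..., c], axis=0, output=float)
        grad_sq += gx ** 2 + gy ** 2          # accumulate per-channel gradient energy
    grad = np.sqrt(grad_sq)
    return grad <= np.percentile(grad, seed_percentile)
```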

    Manifold learning based spectral unmixing of hyperspectral remote sensing data

    Nonlinear mixing effects inherent in hyperspectral data are not properly represented by linear spectral unmixing models. Although direct nonlinear unmixing models provide the capability to capture nonlinear phenomena, they are difficult to formulate and the results are not always generalizable. Manifold learning based spectral unmixing accommodates nonlinearity in the data in the feature extraction stage, followed by linear mixing, thereby incorporating some characteristics of nonlinearity while retaining the advantages of linear unmixing approaches. Since endmember selection is critical to successful spectral unmixing, it is important to select proper endmembers from the manifold space. However, excessive computational burden hinders the development of manifolds for large-scale remote sensing datasets. This dissertation addresses issues related to the high computational overhead of manifold learning for developing representative manifolds for the spectral unmixing task. Manifold approximations using landmarks are popular for mitigating the computational complexity of manifold learning. A new, computationally effective landmark selection method that exploits spatial redundancy in the imagery is proposed. A robust, less costly landmark set with low spectral and spatial redundancy is successfully incorporated into a hybrid manifold that shares properties of both global and local manifolds. While landmark methods reduce computational demand, the resulting manifolds may not represent subtle features of the manifold adequately. Active learning heuristics are introduced to increase the number of landmarks, with the goal of developing more representative manifolds for spectral unmixing. By communicating between the landmark set and query criteria relative to spectral unmixing, more representative and stable manifolds with fewer spectrally and spatially redundant landmarks are developed. A new ranking method, based on pixels with locally high spectral variability within image subsets and on convex geometry, finds a solution more quickly and precisely. Experiments were conducted to evaluate the proposed methods using the AVIRIS Cuprite hyperspectral reference dataset.
A case study of manifold learning based spectral unmixing in agricultural areas is included in the dissertation. Remotely sensed data collected by airborne or spaceborne sensors are utilized to quantify crop residue cover over an extensive area. Although remote sensing indices are popular for characterizing residue amounts, they are not effective with noisy Hyperion data because the effect of residual striping artifacts is amplified in ratios involving band differences. In this case study, spectral unmixing techniques are investigated for estimating crop residue as an alternative to empirical models developed using band-based indices. The spectral unmixing techniques, and especially the manifold learning approaches, provide more robust, lower-RMSE estimates of crop residue cover than the hyperspectral index based method for Hyperion data.
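
As a hedged sketch of the linear mixing stage that follows the manifold-based feature extraction and endmember selection described above (not the dissertation's solver), the snippet below estimates per-pixel abundances by nonnegative least squares given a set of endmember spectra and renormalizes them to approximately sum to one.

```python
# Simplified linear unmixing: nonnegative least squares abundances per pixel.
import numpy as np
from scipy.optimize import nnls

def unmix(pixels, endmembers):
    """pixels: (n_pixels, n_bands); endmembers: (n_endmembers, n_bands)."""
    E = endmembers.T                                     # bands x endmembers mixing matrix
    abundances = np.array([nnls(E, p)[0] for p in pixels])
    sums = abundances.sum(axis=1, keepdims=True)
    return abundances / np.where(sums > 0, sums, 1.0)    # approximate sum-to-one constraint
```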