
    Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing

    Hyperspectral imaging provides increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface materials. This abundant spectral knowledge allows all available information in the data to be mined and supports a wide range of applications, such as mineral exploration, agricultural monitoring, and ecological surveillance. Processing massive high-dimensional HSI datasets is a challenge, since many data processing techniques have a computational complexity that grows exponentially with the dimension. In addition, an HSI dataset may contain only a limited number of degrees of freedom because of the high correlations between data points and among the spectra. On the other hand, relying solely on the sampled spectrum of an individual HSI data point may produce inaccurate results because of the mixed nature of raw HSI data, such as mixed pixels and optical interference.

    Fusion strategies are widely adopted in data processing to achieve better performance, especially for classification and clustering. There are three main types of fusion strategies: low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that are expected to be complementary or cooperative. Intermediate-level feature fusion selects and combines features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. These fusion strategies have wide applications, including HSI data processing. With the rapid development of multiple remote sensing modalities, e.g. Very High Resolution (VHR) optical sensors and LiDAR, fusing multi-source data can in principle produce more detailed information than any single source. Moreover, besides the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective, and feature fusion also includes removing redundant and noisy features from the dataset.

    One of the major problems in machine learning and pattern recognition is to develop appropriate representations for complex nonlinear data. In HSI processing, a data point is usually described as a vector whose coordinates correspond to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations from linear algebra to find alternative representations of the data. More generally, HSI is multi-dimensional in nature, and the vector representation may lose contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization of linear subspace analysis. In graph theory, data points can be modeled as nodes whose connectivity is measured by the proximity within a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection, and data alignment.
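To make the graph-based framework concrete, the sketch below builds a k-nearest-neighbor affinity graph over HSI pixel spectra with Gaussian edge weights; the function name, the neighborhood size k, and the kernel width sigma are illustrative assumptions, not the thesis's exact construction.

```python
# Minimal sketch (assumed parameters): k-NN affinity graph over HSI pixel spectra.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from scipy.sparse import csr_matrix

def knn_affinity_graph(spectra, k=10, sigma=1.0):
    """spectra: (n_pixels, n_bands) array of per-pixel reflectance vectors."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(spectra)
    dist, idx = nn.kneighbors(spectra)           # column 0 is each point itself
    dist, idx = dist[:, 1:], idx[:, 1:]          # drop self-neighbors
    weights = np.exp(-dist**2 / (2 * sigma**2))  # Gaussian (heat-kernel) edge weights
    n = spectra.shape[0]
    rows = np.repeat(np.arange(n), k)
    W = csr_matrix((weights.ravel(), (rows, idx.ravel())), shape=(n, n))
    return W.maximum(W.T)                        # symmetrize: undirected graph

# Example: a 100x100-pixel HSI cube with 200 bands, flattened to pixel vectors
cube = np.random.rand(100, 100, 200)
W = knn_affinity_graph(cube.reshape(-1, 200), k=10, sigma=0.5)
```

The resulting sparse affinity matrix is the common starting point for graph-based clustering, feature extraction, and alignment methods mentioned above.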
In this thesis, graph-based approaches applied to multi-source feature and data fusion in remote sensing are explored. We mainly investigate the fusion of spatial, spectral, and LiDAR information with linear and multilinear algebra under a graph-based framework for data clustering and classification problems.
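As a simple illustration of low-level graph fusion across modalities, the sketch below combines one affinity matrix per source (spectral, spatial, LiDAR), each built for instance with a k-NN graph like the one above, by a weighted sum before spectral clustering; the weights, cluster count, and random placeholder matrices are assumptions for illustration only.

```python
# Minimal sketch (assumed weights and data): fusing per-modality affinity graphs.
import numpy as np
from sklearn.cluster import SpectralClustering

def fuse_affinities(affinities, weights):
    """Weighted sum of per-modality affinity matrices of identical shape."""
    fused = sum(w * A for w, A in zip(weights, affinities))
    return fused / sum(weights)

# Placeholder symmetric affinity matrices standing in for spectral, spatial (texture),
# and LiDAR-derived graphs over the same set of pixels.
def random_affinity(n):
    A = np.random.rand(n, n)
    return (A + A.T) / 2

W_spec, W_spat, W_lidar = (random_affinity(500) for _ in range(3))

W_fused = fuse_affinities([W_spec, W_spat, W_lidar], weights=[0.5, 0.3, 0.2])
labels = SpectralClustering(n_clusters=6, affinity="precomputed").fit_predict(W_fused)
```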

    Novel Multi-Scale Filter Profile-Based Framework for VHR Remote Sensing Image Classification

    Filtering is a well-known tool for noise reduction in very high spatial resolution (VHR) remote sensing images. However, a single-scale filter usually has limitations in covering the various targets with different sizes and shapes in a given image scene. A novel method, the multi-scale filter profile (MFP)-based framework (MFPF), is introduced in this study to improve the classification performance on VHR remote sensing images and address the aforementioned problem. First, an adaptive filter is extended with a series of parameters to construct the MFPs. Then, a layer-stacking technique is used to concatenate the MFPs and all the features into a stacked vector. Afterward, principal component analysis, a classical dimensionality reduction algorithm, is performed on the fused profiles to reduce the redundancy of the stacked vector. Finally, the spatially adaptive region of each filter in the MFPs is used for post-processing of the initial classification map obtained with a supervised classifier, in order to revise the initial classification map and generate a final classification map. Experimental results on three real VHR remote sensing images demonstrate the effectiveness of the proposed MFPF in comparison with state-of-the-art methods. Hard parameter tuning is unnecessary in the application of the proposed approach, so the method can be conveniently applied in real applications. This research was funded by the National Science Foundation China (61701396 and 41501378) and the Natural Science Foundation of Shaanxi Province (2018JQ4009).
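The pipeline described above (multi-scale filtering, layer stacking, PCA, supervised classification) can be sketched as follows. This is not the paper's implementation: a Gaussian filter stands in for the adaptive filter, and the scale set, number of principal components, and SVM settings are assumptions for illustration.

```python
# Minimal sketch (assumed filter and parameters) of an MFP-style classification pipeline.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def multiscale_filter_profile(image, scales=(1, 2, 4, 8)):
    """image: (H, W, bands). Returns (H, W, bands * len(scales)) filtered profile."""
    profiles = [gaussian_filter(image, sigma=(s, s, 0)) for s in scales]  # spatial filtering only
    return np.concatenate(profiles, axis=-1)

def classify_pixels(image, labels_mask, n_components=10):
    mfp = multiscale_filter_profile(image)
    stacked = np.concatenate([image, mfp], axis=-1)       # layer-stack raw bands + MFPs
    X = stacked.reshape(-1, stacked.shape[-1])
    X = PCA(n_components=n_components).fit_transform(X)   # reduce redundancy of the stack
    y = labels_mask.ravel()
    train = y > 0                                          # 0 marks unlabeled pixels
    clf = SVC(kernel="rbf").fit(X[train], y[train])        # supervised per-pixel classifier
    return clf.predict(X).reshape(image.shape[:2])         # initial classification map

# Example on a synthetic 4-band VHR-like patch with sparse training labels
img = np.random.rand(64, 64, 4)
mask = np.random.randint(0, 3, size=(64, 64))              # classes 1..2, 0 = unlabeled
cls_map = classify_pixels(img, mask)
```

The paper's additional step, post-processing the initial map with each filter's spatially adaptive region, is omitted here because its exact form is specific to the adaptive filter used.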

    Automatic Extraction of Number of Lanes from Aerial Images for Transportation Applications

    The number of lanes is a basic roadway attribute that is widely used in many transportation applications. Traditionally, the number of lanes is collected and updated through field surveys, which is expensive, especially for large coverage areas with a high volume of road segments. One alternative is manual data extraction from high-resolution aerial images; however, this is feasible only for smaller areas. For large areas that may involve tens of thousands of aerial images and millions of road segments, automatic extraction is a more feasible approach. This dissertation aims to improve the existing process of automatically extracting the number of lanes from aerial images by making improvements in three specific areas: (1) performance of the lane model, (2) automatic acquisition of external knowledge, and (3) automatic lane location identification and reliability estimation. In this dissertation, a framework was developed to automatically recognize and extract the number of lanes from geo-rectified aerial images. To address the external knowledge acquisition problem in this framework, a mapping technique was developed to automatically estimate the approximate pixel locations of road segments and the travel direction of the target roads in aerial images. A lane model was developed based on the typical appearance features of travel lanes in color aerial images; it provides more resistance to "noise" such as the presence of vehicle occlusions and sidewalks. Multi-class classification tests based on the k-nearest neighbor, logistic regression, and Support Vector Machine (SVM) classification algorithms showed that the new model provides a high level of prediction accuracy. Two optimization algorithms, based on fixed and flexible lane widths respectively, were then developed to extract the number of lanes from the lane model output. The flexible lane-width approach was recommended because it solved the problems of error-tolerant pixel mapping and reliability estimation. The approach was tested using a lane model with two SVM classifiers, i.e., the polynomial kernel and the Radial Basis Function (RBF) kernel. The results showed that the framework yielded good performance in a general test scenario with mixed types of road segments and in another test scenario with heavy plant occlusions.
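The classification step of the lane model can be illustrated with a small sketch comparing the two SVM kernels named in the abstract. The feature set, class definitions, and synthetic data below are illustrative assumptions, not the dissertation's actual features or results.

```python
# Minimal sketch (assumed features and data): multi-class SVM comparison for a lane model.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Hypothetical appearance features per road-segment window (e.g. mean color, edge density).
X = np.random.rand(2000, 6)
y = np.random.randint(1, 5, size=2000)         # classes: candidate number of lanes (1..4)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for kernel, params in [("poly", {"degree": 3}), ("rbf", {"gamma": "scale"})]:
    clf = SVC(kernel=kernel, **params).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"SVM ({kernel}) accuracy: {acc:.3f}")
```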