
    Algorithm theoretical basis document


    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, the spectra measured by HSCs are mixtures of the spectra of the materials in a scene. Accurate estimation therefore requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in the search for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are discussed first. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are then described, along with the associated mathematical problems and potential solutions. Algorithm characteristics are illustrated experimentally.
    Comment: This work has been accepted for publication in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
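    The linear mixing model underlying the abstract above can be sketched in a few lines of NumPy. The endmember spectra and abundances below are synthetic, and the unconstrained least-squares inversion followed by projection onto the non-negativity and sum-to-one constraints is only a crude baseline, not any of the specific algorithms the paper surveys.

```python
import numpy as np

# Linear mixing model: a pixel spectrum y is modeled as y = E @ a + noise,
# where E holds the endmember signatures (bands x endmembers) and a holds
# non-negative abundances that ideally sum to one. All values are synthetic.
rng = np.random.default_rng(0)
bands, n_endmembers = 50, 3
E = rng.random((bands, n_endmembers))           # synthetic endmember spectra
a_true = np.array([0.6, 0.3, 0.1])              # synthetic abundances
y = E @ a_true + rng.normal(0, 1e-3, size=bands)  # observed mixed pixel

# Crude inversion: unconstrained least squares, then project the estimate
# onto the non-negativity and sum-to-one constraints.
a_est, *_ = np.linalg.lstsq(E, y, rcond=None)
a_est = np.clip(a_est, 0, None)
a_est /= a_est.sum()
```

    With low noise and well-separated synthetic endmembers this recovers the abundances closely; real scenes require the more careful geometrical, statistical, or sparse approaches the paper discusses.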

    Estimating the crop leaf area index using hyperspectral remote sensing

    The leaf area index (LAI) is an important vegetation parameter that is used widely in many applications. Remote sensing techniques are known to be effective yet inexpensive methods for estimating the LAI of crop canopies. During the last two decades, hyperspectral remote sensing has been employed increasingly for crop LAI estimation, which requires unique technical procedures compared with conventional multispectral data, such as denoising and dimension reduction. Thus, we provide a comprehensive and intensive overview of crop LAI estimation based on hyperspectral remote sensing techniques. First, we compare hyperspectral data with multispectral data, highlighting their potential and limitations in LAI estimation. Second, we categorize the approaches used for crop LAI estimation based on hyperspectral data into three types: approaches based on statistical models, physical models (i.e., canopy reflectance models), and hybrid inversions. We summarize and evaluate the theoretical basis and the different methods employed by these approaches (e.g., the characteristic parameters of LAI, regression methods for constructing statistical predictive models, commonly applied physical models, and inversion strategies for physical models). Thus, numerous models and inversion strategies are organized into a clear conceptual framework. Moreover, we highlight the technical difficulties that may hinder crop LAI estimation, such as the “curse of dimensionality” and the ill-posed problem. Finally, we discuss the prospects for future research based on the studies described in this review.
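    A minimal sketch of the statistical-model type of approach mentioned above: regress LAI against a narrow-band vegetation index such as NDVI. The linear NDVI-LAI relation and all values below are synthetic toy data (real relations saturate at high LAI), and this is not any particular model from the review.

```python
import numpy as np

# Toy statistical predictive model: LAI ~ b0 + b1 * NDVI.
# The NDVI values, the linear relation, and the noise level are synthetic.
rng = np.random.default_rng(1)
n = 100
ndvi = rng.uniform(0.2, 0.9, n)                   # synthetic vegetation index
lai = 0.5 + 6.0 * ndvi + rng.normal(0, 0.3, n)    # synthetic LAI with noise

# Ordinary least-squares fit of the predictive model
X = np.column_stack([np.ones(n), ndvi])
b, *_ = np.linalg.lstsq(X, lai, rcond=None)
rmse = np.sqrt(np.mean((X @ b - lai) ** 2))
```

    In practice, hyperspectral data would first be denoised and reduced in dimension, and the regression method (e.g. partial least squares, machine-learning regressors) and index choice matter far more than this toy fit suggests.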

    Using mixed objects in the training of object-based image classifications

    Image classification for thematic mapping is a very common application in remote sensing, and it is sometimes realized through object-based image analysis. In such analyses, it is common for some of the objects to be mixed in their class composition, thus violating the assumption of object purity that is implicit in a conventional object-based image analysis. Mixed objects can be a problem throughout a classification analysis, but they are particularly challenging in the training stage, as they can degrade training statistics and reduce mapping accuracy. In this paper, the potential of using mixed objects in training object-based image classifications is evaluated. Remotely sensed data were submitted to a series of segmentation analyses from which a range of under- to over-segmented outputs were intentionally produced. Training objects were then selected from the segmentation outputs, resulting in training data sets that varied in terms of size (i.e. number of objects) and the proportion of mixed objects. These training data sets were then used with an artificial neural network and a generalized linear model, both of which can accommodate objects of mixed composition, to produce a series of land cover maps. The use of training statistics estimated from both pure and mixed objects often increased classification accuracy by around 25% compared with accuracies obtained from the use of only pure objects in training. Thus, rather than being a problem, mixed objects can be an asset in classification and facilitate land cover mapping from remote sensing. It is therefore desirable to recognize the nature of the objects and, where possible, accommodate mixed objects directly in training. The results obtained here may also have implications for the common practice of seeking an optimal segmentation output, and they challenge the widespread view that object-based classification is superior to pixel-based classification.
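    The idea that mixed objects can improve training statistics can be illustrated with a one-feature toy model: if each object's feature value is an area-weighted mixture of two class means and the class fractions are known, both class means can be estimated from the mixed objects by least squares instead of discarding them. All numbers below are synthetic, and this sketch is not the paper's neural network or generalized linear model.

```python
import numpy as np

# Each synthetic object has a single feature value that mixes two class
# means in proportion f (fraction of class A) and 1 - f (class B).
rng = np.random.default_rng(2)
mu_a, mu_b = 0.8, 0.2                          # true per-class mean feature values
n = 200
f = rng.uniform(0, 1, n)                       # class-A fraction of each object
x = f * mu_a + (1 - f) * mu_b + rng.normal(0, 0.02, n)

# Solve x ~ f * mu_a + (1 - f) * mu_b for the class means, using every
# object - pure and mixed alike - rather than only the nearly pure ones.
D = np.column_stack([f, 1 - f])
est, *_ = np.linalg.lstsq(D, x, rcond=None)
```

    Using all objects gives many more samples than restricting training to the few nearly pure ones, which is the intuition behind the accuracy gains reported in the abstract.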

    A Multi Views Approach for Remote Sensing Fusion Based on Spectral, Spatial and Temporal Information

    The objectives of this chapter are to contribute to the understanding of image fusion approaches, including the definition of concepts, the principles of the techniques, and the assessment of results. It is structured in five sections. Following this introduction, a definition of image fusion introduces the fundamental concepts involved, and we explain the cases in which image fusion can be useful. Most existing techniques and architectures are reviewed and classified in the third section. The fourth section focuses on algorithms based on the multi-view approach; we compare and analyse the process models and algorithms, including the advantages, limitations, and applicability of each view. The last part of the chapter summarizes the benefits and limitations of the multi-view approach to image fusion and gives some recommendations on the effectiveness and performance of these methods. These recommendations, based on a comprehensive study and meaningful quantitative metrics, evaluate the various proposed views by applying them to environmental applications with remotely sensed images from different sensors. In the concluding section, we close the chapter with a summary and recommendations for future research.