22 research outputs found

    Multisource Remote Sensing Imagery Fusion Scheme Based on Bidimensional Empirical Mode Decomposition (BEMD) and Its Application to the Extraction of Bamboo Forest

    Most bamboo forests grow in humid climates in low-latitude tropical or subtropical monsoon areas, and they are generally located in hilly areas. Bamboo trunks are very straight and smooth, which means that bamboo forests have low structural diversity. These features are beneficial to synthetic aperture radar (SAR) microwave penetration and they provide special information in SAR imagery. However, some factors (e.g., foreshortening) can compromise the interpretation of SAR imagery. The fusion of SAR and optical imagery is considered an effective method with which to obtain information on ground objects. However, most relevant research has been based on only two types of remote sensing image. This paper proposes a new fusion scheme, which combines three types of image simultaneously, based on two fusion methods: bidimensional empirical mode decomposition (BEMD) and the Gram-Schmidt transform. The fusion of panchromatic and multispectral images based on the Gram-Schmidt transform can enhance spatial resolution while retaining multispectral information. BEMD is an adaptive decomposition method that has been applied widely in the analysis of nonlinear signals and to the nonstationary signal of SAR. The fusion of SAR imagery with the fused panchromatic-multispectral imagery using BEMD is based on the frequency information of the images. It was established that the proposed fusion scheme is an effective remote sensing image interpretation method, and that the entropy and spatial frequency of the fused images were improved in comparison with other techniques such as the discrete wavelet, à-trous, and non-subsampled contourlet transform methods. Compared with the original image, the information entropy of the BEMD-based fused image improved by about 0.13–0.38; compared with the other three methods, it improved by about 0.06–0.12. The average gradient of BEMD was 4%–6% greater than that of the other methods, and BEMD maintained a spatial frequency 3.2–4.0 higher than that of the other methods.
The experimental results showed that the proposed fusion scheme could improve the accuracy of bamboo forest classification: accuracy increased by 12.1%, and classification error was reduced by 11.0%.
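The Gram-Schmidt pan-sharpening step described above can be illustrated with a simplified component-substitution sketch. This is not the paper's exact implementation: the synthetic intensity (band mean), the covariance-based injection gain, and the band layout are all assumptions made for illustration.

```python
import numpy as np

def gs_pansharpen(ms, pan):
    """Simplified Gram-Schmidt-style pan-sharpening (component substitution).
    A synthetic low-resolution intensity I is built from the multispectral
    bands, and the spatial detail (pan - I) is injected into each band with
    a gain proportional to that band's covariance with I.
    ms:  (bands, H, W) multispectral cube, already resampled to the pan grid
    pan: (H, W) panchromatic image
    """
    intensity = ms.mean(axis=0)          # assumed synthetic low-res pan
    detail = pan - intensity             # spatial detail to inject
    var_i = intensity.var() + 1e-12      # avoid division by zero
    fused = np.empty_like(ms, dtype=float)
    for b in range(ms.shape[0]):
        # population covariance of band b with the intensity component
        cov = ((ms[b] - ms[b].mean()) * (intensity - intensity.mean())).mean()
        fused[b] = ms[b] + (cov / var_i) * detail
    return fused
```

When the pan image equals the synthetic intensity, no detail is injected and the multispectral bands pass through unchanged, which is a quick sanity check on the gain logic.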

    Image fusion using multivariate and multidimensional EMD.

    We present a novel methodology for the fusion of multiple (two or more) images using the multivariate extension of empirical mode decomposition (MEMD). Empirical mode decomposition (EMD) is a data-driven method that decomposes input data into its intrinsic oscillatory modes, known as intrinsic mode functions (IMFs), without making a priori assumptions regarding the data. We show that the multivariate and multidimensional extensions of EMD are suitable for image fusion purposes. We further demonstrate that while multidimensional extensions, by design, may seem more appropriate for tasks related to image processing, the proposed multivariate extension outperforms these in image fusion applications owing to its mode-alignment property for IMFs. Case studies involving multi-focus image fusion and pan-sharpening of multi-spectral images are presented to demonstrate the effectiveness of the proposed method.
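The benefit of mode alignment can be sketched with a hedged IMF-domain fusion rule: assuming mode-aligned decompositions of the two source images are already available (the MEMD sifting itself is omitted), each scale is fused by keeping, per pixel, the coefficient with the larger energy, and the result is reconstructed by summing the fused modes. The pointwise-energy rule is an illustrative stand-in for the paper's actual fusion criterion.

```python
import numpy as np

def fuse_aligned_modes(imfs_a, imfs_b):
    """Fuse two mode-aligned decompositions (e.g. from MEMD).
    imfs_a, imfs_b: arrays of shape (scales, H, W), scale i of one image
    assumed to match scale i of the other (the mode-alignment property).
    At each scale and pixel, keep the coefficient with larger energy,
    then reconstruct by summing the fused modes.
    """
    fused = np.zeros_like(imfs_a[0], dtype=float)
    for ma, mb in zip(imfs_a, imfs_b):
        fused += np.where(ma**2 >= mb**2, ma, mb)  # max-energy selection
    return fused
```

Without the alignment guarantee, scale i of one image need not correspond to scale i of the other, which is exactly why the multivariate extension is preferred here.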

    Image Simulation in Remote Sensing

    Remote sensing is actively researched in environmental, military, and urban-planning applications through technologies such as the monitoring of natural climate phenomena on the Earth, land cover classification, and object detection. Recently, satellites equipped with observation cameras of various resolutions have been launched, and remote sensing images are acquired by various observation methods, including cluster satellites. However, the atmospheric and environmental conditions present in the observed scene degrade the quality of the images or interrupt the capture of the Earth's surface information. One way to overcome this is to generate synthetic images through image simulation. Synthetic images can be generated using statistical or knowledge-based models, or using spectral and optics-based models, to stand in for an image that could not be acquired at the required time. The proposed methodologies offer economical utility in the generation of image training materials and time series data through image simulation. The six published articles cover various topics and applications central to remote sensing image simulation. Although submission to this Special Issue is now closed, the need remains for further in-depth research and development on high spatial and spectral resolution image simulation, sensor fusion, and colorization. I would like to take this opportunity to express my most profound appreciation to the MDPI Book staff, the editorial team of the Applied Sciences journal, especially Ms. Nimo Lang, the assistant editor of this Special Issue, the talented authors, and the professional reviewers.

    Region-Based Image-Fusion Framework for Compressive Imaging

    A novel region-based image-fusion framework for compressive imaging (CI) and its implementation scheme are proposed. Unlike previous work on conventional image fusion, we consider both the compression capability on the sensor side and intelligent understanding of the image contents in the fusion process. First, compressed sensing theory and normalized cut theory are introduced. Then, the region-based image-fusion framework for compressive imaging is proposed and its corresponding fusion scheme is constructed. Experimental results demonstrate that the proposed scheme delivers superior performance over traditional compressive image-fusion schemes in terms of both objective metrics and visual quality.
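The region-based selection idea can be sketched as follows. The segmentation is assumed to be given (e.g., produced by normalized cuts), and the variance-based saliency rule is an illustrative stand-in for the paper's actual region-wise fusion criterion, not its implementation.

```python
import numpy as np

def region_fuse(img_a, img_b, labels):
    """Region-wise selection fusion sketch.
    labels: integer segmentation map (same shape as the images), e.g.
    from a normalized-cuts segmentation. For each region, keep the
    source image whose pixels in that region have higher variance,
    used here as a simple proxy for salient detail.
    """
    fused = np.empty_like(img_a, dtype=float)
    for r in np.unique(labels):
        mask = labels == r
        # pick the more "detailed" source for this region
        src = img_a if img_a[mask].var() >= img_b[mask].var() else img_b
        fused[mask] = src[mask]
    return fused
```

Operating on whole regions rather than isolated pixels is what lets the framework fold scene understanding into the fusion decision.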

    Kernel Feature Extraction Methods for Remote Sensing Data Analysis

    Technological advances in recent decades have improved our capabilities for collecting and storing high data volumes. However, in some fields, such as remote sensing, this generates problems in data processing due to the peculiar characteristics of the data. High data volume, high dimensionality, heterogeneity, and nonlinearity can make the analysis and extraction of relevant information from these images a bottleneck for many real applications. Research applying image processing and machine learning techniques, together with feature extraction, allows the dimensionality of the data to be reduced while retaining the maximum information. Consequently, developments and applications of feature extraction methodologies using these techniques have increased exponentially in remote sensing, improving data visualization and knowledge discovery. Several feature extraction methods have been addressed in the literature; depending on data availability, they can be classified as supervised, semisupervised, or unsupervised. In particular, feature extraction can be used in combination with (nonlinear) kernel methods. This combination facilitates obtaining a space that retains greater information content, and one of its most important properties is that the result can be used directly for general tasks including classification, regression, clustering, ranking, compression, or data visualization. In this Thesis, we address different nonlinear feature extraction approaches based on kernel methods for remote sensing data analysis. Several improvements to current feature extraction methods are proposed to transform the data so as to make high-dimensional data tasks, such as classification or biophysical parameter estimation, easier.
This Thesis focuses on three main objectives to achieve these improvements in current feature extraction methods. The first objective is to include invariances in supervised kernel feature extraction methods. Through these invariances it is possible to generate virtual samples that help mitigate the problem of the reduced number of samples in supervised methods. The proposed algorithm is a simple method that essentially generates new (synthetic) training samples from the available labeled samples. Using these samples along with the original ones in feature extraction methods yields features that are more independent of one another than those obtained without virtual samples. The introduction of prior knowledge by means of virtual samples can make classification and biophysical parameter estimation methods more robust. The second objective is to use generative kernels, i.e., probabilistic kernels, that learn directly from the original data by means of clustering techniques, finding local-to-global similarities along the manifold. The proposed kernel is useful for general feature extraction purposes. Furthermore, it attempts to improve on current methods because it contains not only labeled data information but also the unlabeled information of the manifold. Moreover, the proposed kernel is parameter-free, in contrast with parameterized functions such as the radial basis function (RBF). By using probabilistic kernels, we seek new unsupervised and semisupervised methods that reduce the number and cost of labeled data in remote sensing. The third objective is to develop new kernel feature extraction methods that improve the features obtained by current methods. Optimizing the functional can yield improved algorithms, for instance the Optimized Kernel Entropy Component Analysis (OKECA) method.
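The virtual-sample idea in the first objective can be sketched as follows. The perturbation used here (small additive noise) is an assumed stand-in for the Thesis's invariances; rotations or spectral shifts would play the same role, and the function name and parameters are illustrative only.

```python
import numpy as np

def add_virtual_samples(X, y, noise=0.05, n_virtual=2, seed=0):
    """Generate virtual (synthetic) training samples from labeled data.
    Each labeled sample spawns `n_virtual` perturbed copies encoding an
    assumed invariance (here, small additive Gaussian noise), enlarging
    the training set available to a supervised kernel feature extractor.
    X: (n_samples, n_features), y: (n_samples,) labels.
    """
    rng = np.random.default_rng(seed)
    Xv, yv = [X], [y]
    for _ in range(n_virtual):
        Xv.append(X + noise * rng.standard_normal(X.shape))
        yv.append(y)  # virtual samples keep the original labels
    return np.concatenate(Xv), np.concatenate(yv)
```

The enlarged labeled set is then fed to the feature extraction method in place of the original one.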
The method is based on the Independent Component Analysis (ICA) framework and is more efficient than the standard Kernel Entropy Component Analysis (KECA) method in terms of dimensionality reduction. In this Thesis, the methods are focused on remote sensing data analysis. Nevertheless, feature extraction methods can be used to analyze data from several research fields wherever the data are multidimensional. For these reasons, the results are presented as an experimental sequence: first, the projections are analyzed by means of toy examples; then the algorithms are tested on standard databases with supervised information; and finally, remote sensing images are analyzed with the proposed methods.
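A minimal sketch of the KECA baseline that OKECA improves on may help fix ideas: kernel principal axes are ranked not by eigenvalue alone but by their contribution to a Renyi entropy estimate. The RBF kernel and the parameter choices below are assumptions; the OKECA optimization itself is not shown.

```python
import numpy as np

def keca(X, n_components=2, sigma=1.0):
    """Minimal Kernel Entropy Component Analysis sketch.
    Builds an (assumed) RBF kernel matrix, eigendecomposes it, and keeps
    the axes with the largest entropy contribution
    (sqrt(lam_i) * sum(e_i))**2, rather than the largest eigenvalues as
    in kernel PCA. Returns the training-set projections.
    """
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    K = np.exp(-sq / (2 * sigma**2))                      # RBF kernel matrix
    lam, E = np.linalg.eigh(K)                            # ascending eigenvalues
    lam = np.clip(lam, 0.0, None)                         # guard tiny negatives
    entropy = (np.sqrt(lam) * E.sum(axis=0)) ** 2         # Renyi contribution
    top = np.argsort(entropy)[::-1][:n_components]
    return E[:, top] * np.sqrt(lam[top])                  # projected samples
```

Because the entropy ranking depends on the eigenvector sums as well as the eigenvalues, the selected axes can differ from those of kernel PCA on the same kernel.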

    Archaeological 3D GIS

    Archaeological 3D GIS provides archaeologists with a guide to explore and understand the unprecedented opportunities for collecting, visualising, and analysing archaeological datasets in three dimensions. With platforms allowing archaeologists to link, query, and analyse in a virtual, georeferenced space information collected by different specialists, the book highlights how it is possible to re-think aspects of theory and practice which relate to GIS. It explores which questions can be addressed in such a new environment and how they are going to impact the way we interpret the past. By using material from several international case studies such as Pompeii, Çatalhöyük, as well as prehistoric and protohistoric sites in Southern Scandinavia, this book discusses the use of the third dimension in support of archaeological practice. This book will be essential for researchers and scholars who focus on archaeology and spatial analysis, and is designed and structured to serve as a textbook for GIS and digital archaeology courses.
