
    A hybrid pan-sharpening approach using maximum local extrema

    Mixing or combining different elements to obtain an enhanced result is practiced across many areas of real life. Pan-sharpening is the digital counterpart: a process that combines two images into a fused image containing more detailed information. The images concerned are panchromatic (PAN) and multispectral (MS) images. This paper presents a pan-sharpening algorithm that integrates multispectral and panchromatic images to generate an improved multispectral image. The technique combines the discrete wavelet transform (DWT) and the Intensity-Hue-Saturation (IHS) transform, with separate fusion criteria for the approximation and detail sub-images: maximal local extrema are used to merge the detail sub-images, and the merged high-resolution image is reconstructed through the inverse wavelet and IHS transforms. The improvement delivered by the proposed fusion approach is demonstrated with quality measures such as CORR, RMSE, PFE, SSIM, SNR and PSNR on WorldView-II satellite images. Comparison with other fusion techniques through empirical results shows the superiority of the final merged image in terms of resolution.
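
    As a rough illustration of the scheme described above, the following sketch fuses the detail sub-bands of the PAN image and the MS intensity by keeping, coefficient by coefficient, whichever has the larger magnitude; this is a simplified stand-in for the paper's maximum-local-extrema rule. HSV is used here as a stand-in for the IHS transform, and the inputs are assumed to be co-registered float RGB/PAN arrays in [0, 1].

        import numpy as np
        import pywt
        from skimage.color import rgb2hsv, hsv2rgb

        def ihs_dwt_pansharpen(ms, pan, wavelet="db2"):
            # Move the MS image to HSV and take its intensity (value) channel.
            hsv = rgb2hsv(ms)
            intensity = hsv[..., 2]
            cA_i, (cH_i, cV_i, cD_i) = pywt.dwt2(intensity, wavelet)
            cA_p, (cH_p, cV_p, cD_p) = pywt.dwt2(pan, wavelet)
            # Keep the MS approximation; for each detail band pick the
            # coefficient with the larger magnitude (max-selection rule).
            details = tuple(np.where(np.abs(p) >= np.abs(i), p, i)
                            for p, i in ((cH_p, cH_i), (cV_p, cV_i), (cD_p, cD_i)))
            fused_i = pywt.idwt2((cA_i, details), wavelet)
            # Crop (idwt2 may pad odd sizes), write back, invert the color transform.
            hsv[..., 2] = np.clip(fused_i[:intensity.shape[0], :intensity.shape[1]], 0, 1)
            return hsv2rgb(hsv)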

    A Trous Wavelet and Image Fusion


    MASADA USER GUIDE

    This user guide accompanies the MASADA tool, a public tool for the detection of built-up areas from remote sensing data. MASADA stands for Massive Spatial Automatic Data Analytics. It has been developed in the frame of the “Global Human Settlement Layer” (GHSL) project of the European Commission’s Joint Research Centre, with the overall objective of supporting the production of settlement layers at regional scale by processing high and very high resolution satellite imagery. The tool builds on the Symbolic Machine Learning (SML) classifier, a supervised classification method for remotely sensed data that extracts built-up information using a coarse-resolution settlement map or land cover information to train the classifier. The image classification workflow incorporates radiometric, textural and morphological features as inputs for information extraction. Though originally developed for built-up area extraction, the SML classifier is a multi-purpose classifier that can be used for general land cover mapping, provided there is an appropriate training data set. The tool supports several types of multispectral optical imagery: it includes ready-to-use workflows for specific sensors while also allowing the user to parametrize and customize the workflow. Currently it includes predefined workflows for SPOT-5, SPOT-6/7, RapidEye and CBERS-4, but it has also been tested with various high and very high resolution sensors such as GeoEye-1, WorldView-2/3, Pléiades and QuickBird. (JRC.E.1 - Disaster Risk Management)
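
    The description of the SML classifier above suggests the following schematic sketch: quantize the input features into discrete symbols, then score each symbol by the normalized difference of its frequency inside versus outside the coarse built-up reference. The quantization, symbol construction and scoring formula here are illustrative assumptions, not the MASADA implementation.

        import numpy as np

        def sml_confidence(features, reference, bins=8):
            # Quantize each feature band into `bins` levels via quantiles.
            q = np.stack([np.digitize(f, np.quantile(f, np.linspace(0, 1, bins + 1)[1:-1]))
                          for f in features])
            # Fuse the per-band levels into one symbol per pixel.
            symbols = np.ravel_multi_index(tuple(q), (bins,) * len(features))
            # Symbol frequencies inside / outside the coarse reference mask.
            n = bins ** len(features)
            pos = np.bincount(symbols[reference], minlength=n).astype(float)
            neg = np.bincount(symbols[~reference], minlength=n).astype(float)
            pos, neg = pos / pos.sum(), neg / neg.sum()
            # Normalized differential index per symbol, mapped back to pixels.
            with np.errstate(invalid="ignore", divide="ignore"):
                score = (pos - neg) / (pos + neg)
            return np.nan_to_num(score)[symbols]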

    Multisource Remote Sensing Imagery Fusion Scheme Based on Bidimensional Empirical Mode Decomposition (BEMD) and Its Application to the Extraction of Bamboo Forest

    Most bamboo forests grow in humid climates in low-latitude tropical or subtropical monsoon areas, and they are generally located in hilly terrain. Bamboo trunks are very straight and smooth, which means that bamboo forests have low structural diversity. These features are beneficial to synthetic aperture radar (SAR) microwave penetration and provide distinctive information in SAR imagery. However, some factors (e.g., foreshortening) can compromise the interpretation of SAR imagery. The fusion of SAR and optical imagery is considered an effective method with which to obtain information on ground objects, but most relevant research has been based on only two types of remote sensing image. This paper proposes a new fusion scheme that combines three types of image simultaneously, based on two fusion methods: bidimensional empirical mode decomposition (BEMD) and the Gram-Schmidt transform. The fusion of panchromatic and multispectral images based on the Gram-Schmidt transform enhances spatial resolution while retaining multispectral information. BEMD is an adaptive decomposition method that has been applied widely to the analysis of nonlinear signals and to non-stationary signals such as SAR; the fusion of SAR imagery with the fused panchromatic-multispectral imagery using BEMD is based on the frequency content of the images. It was established that the proposed fusion scheme is an effective remote sensing image interpretation method: the entropy and spatial frequency of the fused images improved in comparison with other techniques such as the discrete wavelet, à-trous, and non-subsampled contourlet transform methods. Compared with the original image, the information entropy of the BEMD-based fused image improves by about 0.13–0.38, and compared with the other three methods by about 0.06–0.12; the average gradient of BEMD is 4%–6% greater than that of the other methods, and its spatial frequency is 3.2–4.0 higher. The experimental results showed the proposed scheme could improve the accuracy of bamboo forest classification: accuracy increased by 12.1% and inaccuracy was reduced by 11.0%.
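
    The quality measures quoted above (information entropy, average gradient, spatial frequency) have standard definitions; the minimal sketch below implements them for a single 8-bit band, independently of the BEMD fusion itself.

        import numpy as np

        def entropy(img):
            # Shannon entropy (bits) of the 8-bit gray-level histogram.
            p = np.bincount(img.ravel(), minlength=256) / img.size
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        def average_gradient(img):
            # Mean magnitude of the local intensity gradient.
            gx, gy = np.gradient(img.astype(float))
            return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

        def spatial_frequency(img):
            # RMS of horizontal and vertical first differences.
            img = img.astype(float)
            rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
            cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
            return np.sqrt(rf ** 2 + cf ** 2)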

    Pixel-level Image Fusion Algorithms for Multi-camera Imaging System

    This thesis is motivated by the potential and promise of image fusion technologies in multi-sensor imaging systems and applications. With specific focus on pixel-level image fusion, the stage that follows image registration, we develop a graphical user interface for multi-sensor image fusion software using Microsoft Visual Studio and the Microsoft Foundation Class library. In this thesis, we propose and present image fusion algorithms with low computational cost based upon spatial mixture analysis. The segment-weighted average image fusion combines several low spatial resolution data sources from different sensors to create a high-resolution fused image of large size. This research includes a segment-based step built on a stepwise divide-and-combine process; in the second stage, linear interpolation optimization is used to sharpen the image resolution. These image fusion algorithms are implemented on top of the graphical user interface we developed. Multiple-sensor image fusion is easily accommodated by the algorithm, and the results are demonstrated at multiple scales. Using quantitative estimates such as mutual information, we obtain quantifiable experimental results. We also use image morphing to generate fused image sequences that simulate the results of image fusion. While deploying our pixel-level image fusion approaches, we observed several challenges with popular image fusion methods: although the high computational cost and complex processing steps of such algorithms yield accurate fused results, they also make the algorithms hard to deploy in systems and applications that require real-time feedback, high flexibility and low computational cost.
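
    A minimal sketch of pixel-level weighted-average fusion in the spirit of this thesis is given below; local variance serves as an illustrative saliency weight and is not necessarily the thesis' exact segment-based weighting.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def weighted_average_fusion(a, b, win=7):
            # Weight each source by its local variance so that the more
            # detailed sensor dominates in each neighbourhood.
            a, b = a.astype(float), b.astype(float)

            def local_var(x):
                m = uniform_filter(x, win)
                return np.clip(uniform_filter(x * x, win) - m * m, 0, None)

            wa, wb = local_var(a), local_var(b)
            return (wa * a + wb * b) / (wa + wb + 1e-12)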

    Challenges and Opportunities of Multimodality and Data Fusion in Remote Sensing

    Remote sensing is one of the most common ways to extract relevant information about the Earth and our environment. Remote sensing acquisitions can be made by both active (synthetic aperture radar, LiDAR) and passive (optical and thermal range, multispectral and hyperspectral) devices. Depending on the sensor, a variety of information about the Earth's surface can be obtained. The data acquired by these sensors can provide information about the structure (optical, synthetic aperture radar), elevation (LiDAR) and material content (multi- and hyperspectral) of the objects in the image. Considered together, their complementarity can be helpful for characterizing land use (urban analysis, precision agriculture) and damage (e.g., in natural disasters such as floods, hurricanes, earthquakes, or oil spills at sea), and can give insights into the potential exploitation of resources (oil fields, minerals). In addition, repeated acquisitions of a scene at different times allow one to monitor natural resources and environmental variables (vegetation phenology, snow cover), anthropological effects (urban sprawl, deforestation) and climate changes (desertification, coastal erosion), among others. In this paper, we sketch the current opportunities and challenges related to the exploitation of multimodal data for Earth observation. This is done by leveraging the outcomes of the Data Fusion Contests organized by the IEEE Geoscience and Remote Sensing Society since 2006. We report on the outcomes of these contests, presenting the multimodal data sets made available to the community each year, the targeted applications, and an analysis of the submitted methods and results: How was multimodality considered and integrated in the processing chain? What were the improvements and new opportunities offered by the fusion? What were the objectives to be addressed and the reported solutions? And from this, what will be the next challenges?

    Signal processing algorithms for enhanced image fusion performance and assessment

    The dissertation presents several signal processing algorithms for image fusion in noisy multimodal conditions. It introduces a novel image fusion method which performs well for image sets heavily corrupted by noise. As opposed to current image fusion schemes, the method requires no a priori knowledge of the noise component. The image is decomposed using Chebyshev polynomials (CP) as basis functions to perform fusion at the feature level. The properties of CP, namely fast convergence and smooth approximation, render them ideal for heuristic and indiscriminate denoising fusion tasks. Quantitative evaluation using objective fusion assessment methods shows favourable performance of the proposed scheme compared to previous efforts on image fusion, notably on heavily corrupted images. The approach is further improved by combining the advantages of CP with a state-of-the-art fusion technique, independent component analysis (ICA), for joint fusion processing based on region saliency. Whilst CP fusion is robust under severe noise conditions, it is prone to eliminating high-frequency information from the images involved, thereby limiting image sharpness. Fusion using ICA, on the other hand, performs well in transferring edges and other salient features of the input images into the composite output. The combination of both methods, coupled with several mathematical morphological operations in an algorithm fusion framework, is considered a viable solution. Again, according to the quantitative metrics, the results of our proposed approach are very encouraging as far as joint fusion and denoising are concerned. Another focus of this dissertation is a novel metric for image fusion evaluation that is based on texture. The conservation of background textural detail is considered important in many fusion applications, as it helps define the image depth and structure, which may prove crucial in many surveillance and remote sensing applications. Our work aims to evaluate the performance of image fusion algorithms based on their ability to retain textural details through the fusion process. This is done by utilising the gray-level co-occurrence matrix (GLCM) model to extract second-order statistical features for the derivation of an image textural measure, which is then used to replace the edge-based calculations in an objective fusion metric. Performance evaluation on established fusion methods verifies that the proposed metric is viable, especially for multimodal scenarios.
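
    The texture-based metric described at the end of this abstract builds on second-order GLCM statistics; the sketch below extracts such features with scikit-image. The quantization level and the chosen properties are illustrative assumptions, not the dissertation's exact measure.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_texture_features(img, levels=64):
            # Quantize to `levels` gray levels to keep the co-occurrence matrix small.
            q = (img.astype(float) / max(img.max(), 1) * (levels - 1)).astype(np.uint8)
            glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                                levels=levels, symmetric=True, normed=True)
            # Average each second-order statistic over the two orientations.
            return {prop: graycoprops(glcm, prop).mean()
                    for prop in ("contrast", "homogeneity", "energy", "correlation")}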

    Adaptive chebyshev fusion of vegetation imagery based on SVM classifier

    A novel adaptive image fusion method using Chebyshev polynomial analysis (CPA), for applications in vegetation satellite imagery, is introduced in this paper. Fusion is a technique that merges the images of two satellite sensors, panchromatic and multispectral, to produce higher-quality satellite images for agricultural and vegetation issues such as soiling, floods and crop harvesting. Recent studies show Chebyshev polynomials to be effective in image fusion, mainly in medium- to high-noise conditions as found in real-life satellite imaging. However, their application has been limited to heuristics. In this research, we propose a way to adaptively select the optimal CPA parameters according to user specifications. A support vector machine (SVM) is used as a classification tool to estimate the noise parameters, from which the appropriate CPA degree is chosen via a look-up table to perform image fusion. Performance evaluation affirms the approach's ability to reduce the computational complexity of fusion. Overall, adaptive CPA fusion is able to optimize an image fusion system's resources and processing time; it may therefore be suitable for incorporation into real hardware for use on vegetation satellite imagery.
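
    A hedged sketch of the adaptive-selection idea follows: an SVM predicts a noise class from simple image statistics, and a look-up table maps that class to a CPA degree. The features, noise classes and table values are hypothetical placeholders, not the paper's.

        import numpy as np
        from sklearn.svm import SVC

        # Hypothetical look-up table: noise class (0=low, 1=medium, 2=high) -> CP degree.
        CPA_DEGREE = {0: 8, 1: 5, 2: 3}

        def noise_features(img):
            # Simple statistics that respond to noise level (illustrative choice).
            img = img.astype(float)
            lap = (4 * img[1:-1, 1:-1] - img[:-2, 1:-1] - img[2:, 1:-1]
                   - img[1:-1, :-2] - img[1:-1, 2:])
            return [np.std(lap), np.std(img), np.mean(np.abs(np.diff(img, axis=0)))]

        def pick_cpa_degree(svm, img):
            return CPA_DEGREE[int(svm.predict([noise_features(img)])[0])]

        # Training on example images whose noise class is known:
        # svm = SVC(kernel="rbf").fit([noise_features(i) for i in train_imgs], train_labels)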

    Sparse Coding Based Feature Representation Method for Remote Sensing Images

    In this dissertation, we study a sparse coding based feature representation method for the classification of multispectral and hyperspectral images (HSI). Existing feature representation systems based on the sparse signal model are computationally expensive, as they require solving a convex optimization problem to learn a dictionary. A sparse coding feature representation framework for the classification of HSI is presented that alleviates the complexity of sparse coding through sub-band construction, dictionary learning, and encoding steps. In the framework, we construct the dictionary from sub-bands extracted from the spectral representation of a pixel. In the encoding step, we utilize a soft threshold function to obtain sparse feature representations for HSI. Experimental results showed that a randomly selected dictionary could be as effective as a dictionary learned through optimization. The new representation usually has very high dimensionality, requiring substantial computational resources, and it does not include the spatial information of the HSI data. Thus, we modify the framework by incorporating the spatial information of the HSI pixels and reducing the dimension of the new sparse representations. The enhanced model, called sparse coding based dense feature representation (SC-DFR), is integrated with linear support vector machine (SVM) and composite kernels SVM (CKSVM) classifiers to discriminate different types of land cover. We evaluated the proposed algorithm on three well-known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit (SOMP), and image fusion and recursive filtering (IFRF). The results showed that the proposed method achieves better overall and average classification accuracies with a much more compact representation, leading to more efficient sparse models for HSI classification. To further verify the power of the new feature representation method, we applied it to a pan-sharpened image to detect seafloor scars in shallow waters. Propeller scars are formed when boat propellers strike and break apart seagrass beds, resulting in habitat loss. We developed a robust identification system by incorporating morphological filters to detect and map the scars. Our results showed that the proposed method can be applied on a regular basis to monitor changes in the habitat characteristics of coastal waters.
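
    The encoding step described above (a soft threshold applied to correlations with a possibly random dictionary) can be sketched in a few lines; the shapes, threshold value and random dictionary are illustrative assumptions.

        import numpy as np

        def soft_threshold_encode(x, D, t=0.1):
            # D: (n_atoms, n_bands) dictionary with unit-norm rows; x: one pixel's spectrum.
            a = D @ x                                          # correlation with each atom
            return np.sign(a) * np.maximum(np.abs(a) - t, 0)   # soft thresholding

        rng = np.random.default_rng(0)
        D = rng.standard_normal((128, 200))
        D /= np.linalg.norm(D, axis=1, keepdims=True)          # normalize the atoms
        code = soft_threshold_encode(rng.standard_normal(200), D)  # sparse feature vector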