
    Pixel-level Image Fusion Algorithms for Multi-camera Imaging System

    This thesis work is motivated by the potential and promise of image fusion technologies in multi-sensor imaging systems and applications. With a specific focus on pixel-level image fusion, the processing stage that follows image registration, we develop a graphical user interface for multi-sensor image fusion software using Microsoft Visual Studio and the Microsoft Foundation Class library. In this thesis, we propose and present image fusion algorithms with low computational cost, based upon spatial mixture analysis. The segment weighted average image fusion combines several low-spatial-resolution data sources from different sensors to create a large, high-resolution fused image. This research includes a segment-based step built on a stepwise divide-and-combine process. In the second stage of the process, linear interpolation optimization is used to sharpen the image resolution. These image fusion algorithms are implemented on top of the graphical user interface we developed. Fusion of images from multiple sensors is easily accommodated by the algorithm, and the results are demonstrated at multiple scales. Using quantitative measures such as mutual information, we obtain quantifiable experimental results. We also use an image morphing technique to generate fused image sequences and simulate the results of image fusion. While deploying our pixel-level image fusion algorithms, we observed several challenges with popular image fusion methods: although their high computational cost and complex processing steps provide accurate fused results, they also make these methods hard to deploy in systems and applications that require real-time feedback, high flexibility, and low computational cost.
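
    A minimal sketch of the quantitative estimation step mentioned above: mutual information between a source image and the fused image, computed from their joint histogram. This is not the thesis implementation; the bin count and the use of NumPy are assumptions.

    import numpy as np

    def mutual_information(img_a, img_b, bins=64):
        """Estimate mutual information between two same-sized grayscale images."""
        hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = hist_2d / hist_2d.sum()          # joint probability
        px = pxy.sum(axis=1, keepdims=True)    # marginal of img_a
        py = pxy.sum(axis=0, keepdims=True)    # marginal of img_b
        nz = pxy > 0                           # avoid log(0)
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    # Example: sum the MI between each source image and the fused result,
    # e.g. mutual_information(src1, fused) + mutual_information(src2, fused).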

    Water bodies' mapping from Sentinel-2 imagery with Modified Normalized Difference Water Index at 10-m spatial resolution produced by sharpening the SWIR band

    Monitoring open water bodies accurately is an important and basic application in remote sensing. Various water body mapping approaches have been developed to extract water bodies from multispectral images. The method based on the spectral water index, especially the Modified Normalized Difference Water Index (MNDWI) calculated from the green and Shortwave-Infrared (SWIR) bands, is one of the most popular methods. The recently launched Sentinel-2 satellite can provide fine spatial resolution multispectral images. This new dataset is potentially of important significance for regional water bodies' mapping, due to its free access and frequent revisit capabilities. It is noted that the green and SWIR bands of Sentinel-2 have different spatial resolutions of 10 m and 20 m, respectively. Straightforwardly, MNDWI can be produced from Sentinel-2 at the spatial resolution of 20 m, by upscaling the 10-m green band to 20 m correspondingly. This scheme, however, wastes the detailed information available at the 10-m resolution. In this paper, to take full advantage of the 10-m information provided by Sentinel-2 images, a novel 10-m spatial resolution MNDWI is produced from Sentinel-2 images by downscaling the 20-m resolution SWIR band to 10 m based on pan-sharpening. Four popular pan-sharpening algorithms, including Principal Component Analysis (PCA), Intensity Hue Saturation (IHS), High Pass Filter (HPF) and À Trous Wavelet Transform (ATWT), were applied in this study. The performance of the proposed method was assessed experimentally using a Sentinel-2 image located at the Venice coastland. In the experiment, six water indices, namely the 10-m NDWI, the 20-m MNDWI and the four 10-m MNDWIs produced by the pan-sharpening algorithms, were compared. Three levels of results, including the sharpened images, the produced MNDWI images and the finally mapped water bodies, were analysed quantitatively. The results showed that MNDWI can enhance water bodies and suppress built-up features more efficiently than NDWI. Moreover, 10-m MNDWIs produced by all four pan-sharpening algorithms can represent more detailed spatial information of water bodies than the 20-m MNDWI produced from the original image. Thus, MNDWIs at the 10-m resolution can extract more accurate water body maps than the 10-m NDWI and the 20-m MNDWI. In addition, although HPF can produce more accurate sharpened images and MNDWI images than the other three benchmark pan-sharpening algorithms, the ATWT algorithm leads to the best 10-m water bodies mapping results. This suggests there is no necessary positive connection between the accuracy of the sharpened MNDWI image and the map-level accuracy of the resultant water body maps.
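
    The core computation can be illustrated with a minimal sketch, assuming NumPy/SciPy arrays and a simple HPF-style sharpening; the paper's four pan-sharpening algorithms and their exact filters are not reproduced here, and the kernel size and band layout are assumptions.

    import numpy as np
    from scipy.ndimage import zoom, uniform_filter

    def mndwi_10m(green_10m, swir_20m):
        # Resample the 20-m SWIR band onto the 10-m grid (factor 2).
        swir_up = zoom(swir_20m, 2, order=1)
        # HPF-style sharpening: inject the high-frequency detail of the
        # 10-m green band into the upsampled SWIR band.
        detail = green_10m - uniform_filter(green_10m, size=5)
        swir_sharp = swir_up + detail
        # Modified Normalized Difference Water Index at 10 m.
        return (green_10m - swir_sharp) / (green_10m + swir_sharp + 1e-12)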

    Fused LISS IV Image Classification using Deep Convolution Neural Networks

    These days, earth observation systems provide a large volume of heterogeneous remote sensing data. How to manage such abundance while exploiting its complementarity is a key challenge in current remote sensing analysis. Considering optical Very High Spatial Resolution (VHSR) imagery, satellites acquire both Multi Spectral (MS) and panchromatic (PAN) images at different spatial resolutions. Data fusion techniques address this by proposing ways to combine the complementary information from the different sensors. Classification of remote sensing images by deep learning techniques using Convolutional Neural Networks (CNN) is gaining a solid footing because of promising outcomes. The most significant attribute of CNN-based strategies is that no prior feature extraction is required, which leads to good generalization capabilities. In this article, we propose a novel deep-learning-based SMDTR-CNN (Same Model with Different Training Round with Convolution Neural Network) approach for classifying the fused (LISS IV + PAN) image after image fusion. The fusion of remote sensing images from the CARTOSAT-1 (PAN image) and IRS P6 (LISS IV image) sensors is obtained by Quantization Index Modulation with Discrete Contourlet Transform (QIM-DCT). To enhance the image fusion performance, we remove specific noise using a Bayesian filter with an Adaptive Type-2 Fuzzy System. The outcomes of the proposed procedures are evaluated with respect to precision, classification accuracy and kappa coefficient. The results reveal that SMDTR-CNN with deep learning achieved the best overall precision and kappa coefficient. Likewise, the accuracy of each class of fused images in the LISS IV + PAN dataset is improved by 2% and 5%, respectively.
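
    The abstract does not fully specify the SMDTR-CNN architecture, so the following is only a generic, hypothetical sketch of a small CNN that classifies patches of a fused image into land-cover classes; the patch size, filter counts and the choice of the Keras API are assumptions.

    from tensorflow import keras
    from tensorflow.keras import layers

    def build_patch_classifier(patch_size=16, n_bands=3, n_classes=6):
        # Small CNN operating on patches cut from the fused LISS IV + PAN image.
        return keras.Sequential([
            layers.Input(shape=(patch_size, patch_size, n_bands)),
            layers.Conv2D(32, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu", padding="same"),
            layers.GlobalAveragePooling2D(),
            layers.Dense(n_classes, activation="softmax"),
        ])

    # model = build_patch_classifier()
    # model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
    #               metrics=["accuracy"])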

    Sparse Coding Based Feature Representation Method for Remote Sensing Images

    In this dissertation, we study a sparse coding based feature representation method for the classification of multispectral and hyperspectral images (HSI). Existing feature representation systems based on the sparse signal model are computationally expensive, as they require solving a convex optimization problem to learn a dictionary. A sparse coding feature representation framework for the classification of HSI is presented that alleviates the complexity of sparse coding through sub-band construction, dictionary learning, and encoding steps. In the framework, we construct the dictionary based upon the sub-bands extracted from the spectral representation of a pixel. In the encoding step, we utilize a soft threshold function to obtain sparse feature representations for HSI. Experimental results showed that a randomly selected dictionary could be as effective as a dictionary learned from optimization. The new representation usually has a very high dimensionality, requiring substantial computational resources. In addition, the spatial information of the HSI data is not included in the representation. Thus, we modify the framework by incorporating the spatial information of the HSI pixels and reducing the dimension of the new sparse representations. The enhanced model, called sparse coding based dense feature representation (SC-DFR), is integrated with a linear support vector machine (SVM) and a composite kernel SVM (CKSVM) classifier to discriminate different types of land cover. We evaluated the proposed algorithm on three well-known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit (SOMP) and image fusion and recursive filtering (IFRF). The results from the experiments showed that the proposed method can achieve better overall and average classification accuracies with a much more compact representation, leading to more efficient sparse models for HSI classification. To further verify the power of the new feature representation method, we applied it to a pan-sharpened image to detect seafloor scars in shallow waters. Propeller scars are formed when boat propellers strike and break apart seagrass beds, resulting in habitat loss. We developed a robust identification system by incorporating morphological filters to detect and map the scars. Our results showed that the proposed method can be implemented on a regular basis to monitor changes in the habitat characteristics of coastal waters.
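
    A minimal sketch of the encoding step described above: a pixel's spectral vector is correlated with a randomly selected dictionary and passed through a soft-threshold function to obtain a sparse feature. The dictionary size and threshold value are assumptions, not values from the dissertation.

    import numpy as np

    def soft_threshold(x, lam):
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    def encode_pixel(spectrum, dictionary, lam=0.1):
        """spectrum: (n_bands,); dictionary: (n_atoms, n_bands) with unit-norm rows."""
        activations = dictionary @ spectrum        # correlate with each atom
        return soft_threshold(activations, lam)    # sparse feature vector

    # A random dictionary, since the experiments suggest it can rival a learned one.
    rng = np.random.default_rng(0)
    D = rng.standard_normal((256, 200))
    D /= np.linalg.norm(D, axis=1, keepdims=True)
    # features = encode_pixel(hsi_pixel, D)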

    Comparison of Pansharpening Algorithms: Outcome of the 2006 GRS-S Data Fusion Contest

    In January 2006, the Data Fusion Committee of the IEEE Geoscience and Remote Sensing Society launched a public contest for pansharpening algorithms, which aimed to identify the ones that perform best. Seven research groups worldwide participated in the contest, testing eight algorithms following different philosophies [component substitution, multiresolution analysis (MRA), detail injection, etc.]. Several complete data sets from two different sensors, namely, QuickBird and simulated Pléiades, were delivered to all participants. The fusion results were collected and evaluated, both visually and objectively. Quantitative results of pansharpening were possible owing to the availability of reference originals obtained either by simulating the data collected from the satellite sensor by means of higher resolution data from an airborne platform, in the case of the Pléiades data, or by first degrading all the available data to a coarser resolution and saving the original as the reference, in the case of the QuickBird data. The evaluation results were presented during the special session on Data Fusion at the 2006 International Geoscience and Remote Sensing Symposium in Denver, and these are discussed in further detail in this paper. Two algorithms outperform all the others, the visual analysis being confirmed by the quantitative evaluation. These two methods share the same philosophy: they basically rely on MRA and employ adaptive models for the injection of high-pass details.
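
    For readers unfamiliar with the MRA philosophy shared by the two winning methods, the following is a minimal sketch of multiresolution detail injection, not either contest algorithm: a low-pass filter (a plain Gaussian here, standing in for an MTF-matched filter) extracts the PAN high-frequency detail, which is injected into each upsampled MS band with a simple per-band gain. The gain model and filter width are assumptions.

    import numpy as np
    from scipy.ndimage import zoom, gaussian_filter

    def mra_pansharpen(ms, pan, ratio=4, sigma=2.0):
        """ms: (bands, h, w) low-resolution MS; pan: (h*ratio, w*ratio) panchromatic."""
        pan_low = gaussian_filter(pan, sigma)              # PAN approximation at MS scale
        detail = pan - pan_low                             # high-frequency detail to inject
        sharpened = []
        for band in ms:
            band_up = zoom(band, ratio, order=3)           # upsample band to the PAN grid
            gain = band_up.std() / (pan_low.std() + 1e-12) # naive injection gain
            sharpened.append(band_up + gain * detail)
        return np.stack(sharpened)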

    Integration of Discrete Wavelet Transform and Singular Value Decomposition in Image Watermarking for Copyright Protection

    The current trend in watermarking is how to optimize the trade-off between the imperceptibility (visibility) of the watermarked image under distortion and the robustness of the embedded watermark. Problems with embedding strengths based on a Single Scaling Factor (SSF) or Multiple Scaling Factors (MSF) have also been reported. This study proposes a watermark embedding method for copyright protection of images, together with an extraction algorithm for the watermarked image, optimized by combining the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD). The singular values of the LL3 sub-band coefficients of the host image are modified using the singular values of a binary watermark image with MSFs. The main contribution of the proposed scheme is the application of DWT-SVD to identify several optimal scaling factors. The results show that the proposed scheme yields high Peak Signal to Noise Ratio (PSNR) values, indicating good visual quality of the watermarked images and an optimized trade-off between the imperceptibility (visibility) of the watermarked image under distortion and its robustness to image processing operations. The PSNR values obtained for the test images are: baboon = 53.184; boat = 53.328; cameraman = 53.700; lena = 53.668; man = 53.328; and pepper = 52.662. Eight attacks were applied to the watermarked images, after which the watermark was re-extracted: JPEG 5%, Noise 5%, Gaussian filter 3x3, Sharpening, Histogram Equalization, Scaling 512-256, Gray Quantization 1 bit, and Cropping 1/8. The Normalized Cross-Correlation (NC) measured after these attacks averaged 0.999 out of 1 across all images. The proposed method achieves higher PSNR and NC values than previous work. It can therefore be concluded that the DWT-SVD method produces watermarked images that are robust while maintaining a high level of imperceptibility.
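
    A minimal sketch of the DWT-SVD embedding idea, assuming a grayscale host image, the pywt package, and a single scaling factor alpha standing in for the optimized multiple scaling factors (MSFs) used in the paper.

    import numpy as np
    import pywt

    def embed_watermark(host, watermark, alpha=0.05):
        # Three-level DWT of the host image; coeffs[0] is the LL3 sub-band.
        coeffs = pywt.wavedec2(host, "haar", level=3)
        ll3 = coeffs[0]
        # SVD of the LL3 sub-band and of the binary watermark.
        u, s, vt = np.linalg.svd(ll3, full_matrices=False)
        _, sw, _ = np.linalg.svd(watermark.astype(float), full_matrices=False)
        n = min(len(s), len(sw))
        s_mod = s.copy()
        s_mod[:n] = s[:n] + alpha * sw[:n]               # modify singular values
        coeffs[0] = (u * s_mod) @ vt                     # rebuild LL3
        return pywt.waverec2(coeffs, "haar")             # watermarked image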

    Radiometrically-Accurate Hyperspectral Data Sharpening

    Improving the spatial resolution of hyperspectral images (HSI) has traditionally been an important topic in the field of remote sensing. Many approaches have been proposed based on various theories, including component substitution, multiresolution analysis, spectral unmixing, Bayesian probability, and tensor representation. However, these methods share some common disadvantages: they are not robust to different up-scale ratios and they pay little attention to the per-pixel radiometric accuracy of the sharpened image. Moreover, although many learning-based methods have been proposed through decades of innovation, most of them require a large set of training pairs, which is impractical for many real problems. To address these problems, we first propose an unsupervised Laplacian Pyramid Fusion Network (LPFNet) to generate a radiometrically-accurate high-resolution HSI. First, with the low-resolution hyperspectral image (LR-HSI) and the high-resolution multispectral image (HR-MSI), a preliminary high-resolution hyperspectral image (HR-HSI) is calculated via linear regression. Next, the high-frequency details of the preliminary HR-HSI are estimated by subtracting from it a CNN-generated blurry version. By injecting these details into the output of the generative CNN, which takes the LR-HSI as input, the final HR-HSI is obtained. LPFNet is designed for fusing an LR-HSI and an HR-MSI covering the same Visible-Near-Infrared (VNIR) bands, while the short-wave infrared (SWIR) bands of the HSI are ignored. SWIR bands are equally important to VNIR bands, but their spatial details are more challenging to enhance because the HR-MSI, used to provide the spatial details in the fusion process, usually has no SWIR coverage or only lower-spatial-resolution SWIR. To this end, we designed an unsupervised cascade fusion network (UCFNet) to sharpen the Vis-NIR-SWIR LR-HSI. First, a preliminary high-resolution VNIR hyperspectral image (HR-VNIR-HSI) is obtained with a conventional hyperspectral sharpening algorithm. Then, the HR-MSI, the preliminary HR-VNIR-HSI, and the LR-SWIR-HSI are passed to a generative convolutional neural network to produce an HR-HSI. In the training process, a cascade sharpening method is employed to improve stability. Furthermore, a self-supervising loss based on the cascade strategy is introduced to further improve spectral accuracy. Experiments are conducted on both LPFNet and UCFNet with different datasets and up-scale ratios. State-of-the-art baseline methods are also implemented and compared with the proposed methods using different quantitative metrics. Results demonstrate that the proposed methods outperform the competitors in all cases in terms of spectral and spatial accuracy.
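
    The first LPFNet step, a preliminary HR-HSI obtained via linear regression, can be sketched as below. The per-band least-squares formulation and the flattening of the spatial dimensions are assumptions about how that step might look, not the paper's code.

    import numpy as np

    def preliminary_hr_hsi(lr_hsi, lr_msi, hr_msi):
        """lr_hsi: (B_h, h, w); lr_msi: (B_m, h, w); hr_msi: (B_m, H, W)."""
        B_h = lr_hsi.shape[0]
        B_m, H, W = hr_msi.shape
        # Fit one linear model per HSI band at the low resolution.
        X = lr_msi.reshape(B_m, -1).T                    # (h*w, B_m) predictors
        X = np.hstack([X, np.ones((X.shape[0], 1))])     # intercept column
        Y = lr_hsi.reshape(B_h, -1).T                    # (h*w, B_h) targets
        coef, *_ = np.linalg.lstsq(X, Y, rcond=None)     # (B_m + 1, B_h)
        # Apply the fitted regression to the HR-MSI pixels.
        Xh = hr_msi.reshape(B_m, -1).T
        Xh = np.hstack([Xh, np.ones((Xh.shape[0], 1))])
        return (Xh @ coef).T.reshape(B_h, H, W)          # preliminary HR-HSI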

    Recent Advances in Image Restoration with Applications to Real World Problems

    In the past few decades, imaging hardware has improved tremendously in terms of resolution, enabling the widespread use of images in many diverse applications on Earth and in planetary missions. However, practical issues associated with image acquisition still affect image quality. Some of these issues, such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution, can seriously affect the accuracy of the aforementioned applications. This book intends to provide the reader with a glimpse of the latest developments and recent advances in image restoration, including image super-resolution, image fusion to enhance spatial, spectral, and temporal resolution, and the generation of synthetic images using deep learning techniques. Some practical applications are also included.