14 research outputs found

    A Novel Multimodal Image Fusion Method Using Hybrid Wavelet-based Contourlet Transform

    Various image fusion techniques have been studied to meet the requirements of applications such as concealed weapon detection, remote sensing, urban mapping, surveillance, and medical imaging. Combining two or more images of the same scene or object produces an image that is more useful for the target application. The conventional wavelet transform (WT) has been widely used in image fusion because of its multi-scale framework and its ability to isolate discontinuities at object edges. More recently, the contourlet transform (CT) has been applied to image fusion to overcome the drawbacks of the WT. The experimental studies in this dissertation show that the contourlet transform is better suited than the conventional wavelet transform for image fusion. However, the contourlet transform also has major drawbacks. First, the contourlet framework provides neither shift-invariance nor the structural information of the source images that is needed to enhance fusion performance. Second, unwanted artifacts are produced during image decomposition with the contourlet framework, caused by setting some transform coefficients to zero for nonlinear approximation. This dissertation proposes a novel fusion method using a hybrid wavelet-based contourlet transform (HWCT) that overcomes the drawbacks of both the conventional wavelet and contourlet transforms and enhances fusion performance. In the proposed method, the Daubechies Complex Wavelet Transform (DCxWT) provides both shift-invariance and structural information, and a Hybrid Directional Filter Bank (HDFB) yields fewer artifacts and richer directional information. Shift-invariance is desirable during fusion to avoid mis-registration: without it, the source images become mis-registered and misaligned, and the fusion results degrade significantly. The DCxWT also conveys structural information through the imaginary part of its wavelet coefficients, so more relevant information is preserved during fusion, giving a better representation of the fused image. Moreover, the HDFB is applied in the fusion framework where the source images are decomposed, providing abundant directional information with less complexity and fewer artifacts. The proposed method is applied to five categories of multimodal image fusion, and experiments evaluate its performance in each category using suitable quality metrics, with various datasets, fusion algorithms, pre-processing techniques, and quality metrics per category. In every experimental study and analysis, the proposed method produced better fusion results than the conventional wavelet and contourlet transforms; its usefulness as a fusion method has therefore been validated and its high performance verified.
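The multiscale fusion scheme described above can be illustrated with a minimal sketch. Here a single-level 2-D Haar transform stands in for the DCxWT/HDFB decomposition, with common textbook fusion rules (average the approximation bands, keep the larger-magnitude detail coefficients); the function names and the choice of Haar are illustrative assumptions, not the dissertation's actual implementation:

```python
import numpy as np

def haar_decompose(img):
    """One-level 2-D Haar transform: returns (approx, (h, v, d)) subbands."""
    p00, p01 = img[0::2, 0::2], img[0::2, 1::2]
    p10, p11 = img[1::2, 0::2], img[1::2, 1::2]
    a = (p00 + p01 + p10 + p11) / 4          # approximation (low-pass)
    h = (p00 + p01 - p10 - p11) / 4          # horizontal details
    v = (p00 - p01 + p10 - p11) / 4          # vertical details
    d = (p00 - p01 - p10 + p11) / 4          # diagonal details
    return a, (h, v, d)

def haar_reconstruct(a, details):
    """Exact inverse of haar_decompose."""
    h, v, d = details
    out = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a + h - v - d
    out[1::2, 0::2] = a - h + v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def fuse(img1, img2):
    """Average approximation bands; pick max-absolute detail coefficients."""
    a1, d1 = haar_decompose(img1)
    a2, d2 = haar_decompose(img2)
    a = (a1 + a2) / 2
    fused = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                  for x, y in zip(d1, d2))
    return haar_reconstruct(a, fused)
```

The max-absolute rule keeps the strongest edge response from either source at each location, which is why multiscale transforms suit fusion in the first place.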

    A Study on Image Enhancement Techniques using YCbCr Color Space Methods

    We propose an image enhancement scheme based on the YCbCr color space that brings out the features of the processed input image more clearly. The acquired images fall into three types: word-document images, MRI images, and scenery images. First, each input is converted to grayscale and plotted with its normalized histogram. Then, using color space methods, the images are converted to YCbCr and separated into their individual components (Y, Cb, Cr). This separates the luminance and chrominance features: in a grayscale image, Y is the luminance, also known as the single component; in a color image, Cb and Cr are the blue- and red-difference chroma components. We further derive Hue, Saturation, and Intensity components from the same samples. The proposed technique performs better than the other methods in enhancing images corrupted by Gaussian noise, and the experimental results show that it yields a clear improvement in visual quality.
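The luminance/chrominance separation described above follows standard colorimetry. A minimal sketch of the full-range ITU-R BT.601 RGB-to-YCbCr conversion (the variant used in JPEG) is shown below; the function name is ours, and the abstract does not say which YCbCr variant was used:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range ITU-R BT.601 conversion (JPEG variant), inputs in 0-255.

    Y carries luminance; Cb and Cr carry blue- and red-difference
    chrominance, offset by 128 so neutral gray maps to (Y, 128, 128).
    """
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr
```

Because the Cb and Cr coefficient rows sum to zero, any gray pixel (r = g = b) yields Cb = Cr = 128, which is exactly the "single component" behavior the abstract notes for grayscale images.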

    Radiometrically-Accurate Hyperspectral Data Sharpening

    Improving the spatial resolution of hyperspectral images (HSI) has long been an important topic in remote sensing. Many approaches have been proposed based on theories including component substitution, multiresolution analysis, spectral unmixing, Bayesian probability, and tensor representation. However, these methods share some disadvantages: they are not robust to different up-scale ratios, and they pay little attention to the per-pixel radiometric accuracy of the sharpened image. Moreover, although many learning-based methods have emerged over decades of innovation, most require a large set of training pairs, which is impractical for many real problems. To address these problems, we first propose an unsupervised Laplacian Pyramid Fusion Network (LPFNet) to generate a radiometrically accurate high-resolution HSI. First, given the low-resolution hyperspectral image (LR-HSI) and the high-resolution multispectral image (HR-MSI), a preliminary high-resolution hyperspectral image (HR-HSI) is calculated via linear regression. Next, the high-frequency details of the preliminary HR-HSI are estimated by subtracting a CNN-generated blurry version from it. The final HR-HSI is obtained by injecting these details into the output of a generative CNN that takes the LR-HSI as input. LPFNet is designed to fuse an LR-HSI and an HR-MSI covering the same visible-near-infrared (VNIR) bands, while the short-wave infrared (SWIR) bands of the HSI are ignored. SWIR bands are as important as VNIR bands, but their spatial details are more challenging to enhance because the HR-MSI that supplies the spatial details during fusion usually has no SWIR coverage, or only lower-spatial-resolution SWIR. To this end, we designed an unsupervised cascade fusion network (UCFNet) to sharpen Vis-NIR-SWIR LR-HSI. First, a preliminary high-resolution VNIR hyperspectral image (HR-VNIR-HSI) is obtained with a conventional hyperspectral sharpening algorithm. Then the HR-MSI, the preliminary HR-VNIR-HSI, and the LR-SWIR-HSI are passed to a generative convolutional neural network to produce an HR-HSI. During training, a cascade sharpening method is employed to improve stability, and a self-supervising loss based on the cascade strategy further improves spectral accuracy. Experiments are conducted on both LPFNet and UCFNet with different datasets and up-scale ratios, and state-of-the-art baseline methods are implemented and compared against the proposed methods with different quantitative metrics. The results demonstrate that the proposed methods outperform the competitors in all cases in terms of spectral and spatial accuracy.
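The detail-injection step described for LPFNet (subtract a blurry version to isolate high frequencies, then add those details to the generated output) can be sketched as plain array arithmetic. A simple box blur stands in for the CNN-generated blurry version, so this illustrates only the injection arithmetic, not the network itself; the function names are ours:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k-by-k box blur; a stand-in for the learned blurring operator."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def inject_details(preliminary_hr, generated_hr):
    """High frequencies = preliminary minus its blurred version;
    the details are then added to the generated image."""
    details = preliminary_hr - box_blur(preliminary_hr)
    return generated_hr + details
```

A smooth (low-frequency) preliminary image contributes no details, so the generated image passes through unchanged; only edges and texture are transferred.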

    UAV data modeling for geoinformation update

    The dissertation aims to assess the relevance and performance of data obtained by Unmanned Aerial Vehicles (UAVs) in updating geoinformation. The UAV data will be used either in conjunction with other data obtained by traditional remote sensing platforms, or on their own, using the Structure from Motion (SfM) technique, to generate high-precision digital surface models and orthomosaics at different points in time. For the accuracy assessment of the data, the digital terrain models will be compared. In addition, the data and information generated will make it possible to update geoinformation and quantify changes in land use and land cover. The results will feed the critical discussion of anthropic action in urban areas and the intervention proposals.
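The accuracy assessment by comparing digital terrain models can be sketched as a per-pixel elevation difference summarized by bias and RMSE; the function name and the choice of summary statistics are our assumptions, since the abstract does not specify the comparison method:

```python
import numpy as np

def dtm_error_stats(dtm_test, dtm_reference):
    """Compare two co-registered DTMs (same grid, same units).

    Returns (bias, rmse): mean elevation difference and root-mean-square
    error of dtm_test relative to dtm_reference.
    """
    diff = dtm_test - dtm_reference
    bias = float(diff.mean())
    rmse = float(np.sqrt((diff ** 2).mean()))
    return bias, rmse
```

Bias reveals a systematic vertical offset (e.g. from georeferencing), while RMSE captures the overall per-pixel disagreement between the two surveys.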

    a critical examination and new developments

    Remote sensing consists of measuring characteristics of an object from a distance. A key example is Earth observation from sensors mounted on satellites, which is a crucial aspect of space programs. The first satellite used for Earth observation was Explorer VII; it has been followed by thousands of satellites, many of which are still operating. Given the availability of a large number of different sensors and the resulting huge amount of collected data, the idea of obtaining improved products by means of fusion algorithms has become increasingly attractive. Data fusion commonly denotes the process of integrating multiple data sources and knowledge related to the same real-world scene into a consistent, accurate, and useful representation. The term is very generic and covers different levels of fusion. This dissertation focuses on low-level data fusion, which consists in combining several sources of raw data. In this field, one of the most relevant scientific applications is surely pansharpening: the fusion of a panchromatic image (a single band covering the visible and near-infrared spectrum) with a multispectral/hyperspectral image (tens/hundreds of bands) acquired over the same area. [edited by author]
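As a concrete illustration of pansharpening, a classical Brovey-type fusion scales each upsampled multispectral band by the ratio of the panchromatic image to the band-mean intensity. This is a generic textbook scheme, not a method from the dissertation, and the function name is ours:

```python
import numpy as np

def brovey_pansharpen(ms_up, pan, eps=1e-12):
    """Brovey-style fusion.

    ms_up: multispectral image upsampled to the pan grid, shape (bands, H, W).
    pan:   panchromatic image, shape (H, W).
    Each band is scaled by pan / intensity, so spatial detail from the
    panchromatic band is transferred while band ratios (colors) are kept.
    """
    intensity = ms_up.mean(axis=0)            # crude intensity estimate, (H, W)
    ratio = pan / (intensity + eps)           # eps guards against divide-by-zero
    return ms_up * ratio[None, :, :]
```

When the panchromatic band equals the intensity estimate, the ratio is 1 everywhere and the multispectral image passes through unchanged, which is a useful sanity check.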

    Fusion of data from different satellite sensors for monitoring water quality in coastal areas. Application to the coastline of the PACA region

    Monitoring coastal areas requires a good spatial resolution, a good spectral resolution associated with a good signal-to-noise ratio, and a good temporal resolution to visualize rapid changes in water color. The sensors available now, and even those planned for the near future, do not provide good spatial, spectral AND temporal resolution at the same time. In this study, we are interested in the fusion of images from two future sensors that are both part of the Copernicus program of the European Space Agency: MSI on Sentinel-2 and OLCI on Sentinel-3. Since MSI and OLCI do not yet provide images, it was necessary to simulate them; for this we used images from the hyperspectral imager HICO. We then proposed three methods: an adaptation of the ARSIS method to the fusion of multispectral images (ARSIS), a fusion method based on non-negative tensor factorization (Tensor), and a fusion method based on matrix inversion (Inversion). These three methods were first evaluated using statistical parameters computed between the fused images and the "perfect" image, as well as on the biophysical parameters estimated by minimizing the radiative transfer model in water.
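One statistical parameter commonly used to compare a fused image against a reference is the spectral angle, which measures per-pixel spectral distortion independently of brightness. The abstract does not specify its exact metrics, so this is a minimal sketch of one plausible choice; the function name is ours:

```python
import numpy as np

def spectral_angle_map(fused, reference, eps=1e-12):
    """Per-pixel spectral angle (radians) between two images of shape
    (bands, H, W); 0 means the spectra are identical up to scale."""
    dot = (fused * reference).sum(axis=0)
    norms = np.linalg.norm(fused, axis=0) * np.linalg.norm(reference, axis=0)
    # clip guards against arccos domain errors from floating-point round-off
    return np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0))
```

Averaging this map over all pixels gives the usual scalar SAM score reported in fusion benchmarks; lower is better.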