
    Satellite imagery fusion with an equalized trade-off between spectral and spatial quality

    This work proposes a strategy for obtaining fused images with balanced spatial and spectral quality. The strategy is based on a joint MultiDirection-MultiResolution (MDMR) representation, defined from a low-pass directional filter bank, complemented by a guided search methodology for the values of the design parameters of this filter bank. The search methodology is stochastic and optimizes an objective function associated with the measured spatial and spectral quality of the fused image. The results show that a small number of iterations of the proposed search algorithm yields filter-bank parameter values that produce fused images with spectral quality superior to that of the other methods investigated, while maintaining their spatial quality.
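The guided stochastic search described above can be sketched as a simple random search over a filter-bank design parameter, scoring each candidate with a combined spectral/spatial objective. The quality functions below are hypothetical stand-ins (the paper's actual objective is built from measured fusion quality indices), and the single `cutoff` parameter is an assumption for illustration:

```python
import numpy as np

# Hypothetical quality measures of a fused image as a function of a single
# low-pass cutoff parameter of the directional filter bank (stand-ins for
# the paper's measured spectral/spatial quality indices).
def spectral_quality(cutoff):
    return 1.0 - cutoff      # lower cutoff -> more spectral fidelity

def spatial_quality(cutoff):
    return cutoff            # higher cutoff -> more spatial detail

def objective(cutoff):
    # combined objective balancing both qualities
    return spectral_quality(cutoff) * spatial_quality(cutoff)

rng = np.random.default_rng(0)
best_cutoff, best_score = 0.0, -np.inf
for _ in range(50):          # a small number of iterations, as in the abstract
    c = rng.uniform(0.0, 1.0)
    s = objective(c)
    if s > best_score:
        best_cutoff, best_score = c, s
```

With a product objective like this, the search settles near the parameter value where neither quality dominates, which is the "equalized trade-off" the title refers to.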

    Multi-sensor Image Data Fusion based on Pixel-Level Weights of Wavelet and the PCA Transform

    Abstract: The goal of image fusion is to create new images that are more suitable for human visual perception, object detection and target recognition. For Automatic Target Recognition (ATR), multi-sensor data including visible and infrared images can be used to increase the recognition rate. In this paper, we propose a new multiresolution data fusion scheme based on the principal component analysis (PCA) transform and a pixel-level weighted wavelet transform incorporating thermal and visual weights. To obtain a better fusion result, a linear local mapping based on PCA is used to create a new "origin" image for the fusion. We use multiresolution decompositions to represent the input images at different scales and present a multiresolution/multimodal segmentation to partition the image domain at these scales. The crucial idea is to use this segmentation to guide the fusion process. Physical thermal weights and perceptive visual weights serve as the segmentation modalities, and the Daubechies wavelet is chosen as the wavelet basis. Experimental results confirm that the proposed algorithm performs best among the compared image sharpening methods and best maintains the spectral information of the original infrared image. The proposed technique also outperforms the other methods in the literature, being more robust and effective in both subjective visual inspection and objective statistical analysis.
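As a rough illustration of PCA-derived fusion weights, the sketch below weights a visible/infrared pair by the leading principal component of their joint pixel distribution. This is a simplified, global stand-in for the paper's pixel-level weighted wavelet scheme; the function name and the use of a single global weight per band are assumptions:

```python
import numpy as np

def pca_fuse(visible, infrared):
    """Fuse two co-registered images by weighting each with the leading
    principal component of their joint pixel distribution (a simplified,
    global stand-in for the paper's pixel-level weighted scheme)."""
    x = np.stack([visible.ravel(), infrared.ravel()])
    eigvals, eigvecs = np.linalg.eigh(np.cov(x))
    w = np.abs(eigvecs[:, -1])   # leading eigenvector -> fusion weights
    w = w / w.sum()              # normalize so the weights sum to 1
    return w[0] * visible + w[1] * infrared, w
```

The band with the greater variance receives the larger weight, so the fused image is dominated by the more informative sensor.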

    Target-adaptive CNN-based pansharpening

    We recently proposed a convolutional neural network (CNN) for remote sensing image pansharpening, obtaining a significant performance gain over the state of the art. In this paper, we explore a number of architectural and training variations to this baseline, achieving further performance gains with a lightweight network that trains very fast. Leveraging this latter property, we propose a target-adaptive usage modality that ensures very good performance even in the presence of a mismatch with the training set, and even across different sensors. The proposed method, published online as an off-the-shelf software tool, allows users to perform fast, high-quality CNN-based pansharpening of their own target images on general-purpose hardware.
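The target-adaptive idea (briefly fine-tuning a pretrained model on the target image itself before producing the output) can be illustrated with a toy one-parameter "network" trained by gradient descent. The actual method adapts a CNN, so everything below, including the least-squares consistency loss, is an assumed, simplified stand-in:

```python
import numpy as np

def finetune_on_target(weight, pan, target, lr=0.3, steps=100):
    """Briefly adapt a pretrained 'model' (here a single scalar weight,
    purely illustrative) to the target data by gradient descent on a
    least-squares consistency loss before producing the final output."""
    for _ in range(steps):
        pred = weight * pan
        grad = np.mean(2.0 * (pred - target) * pan)  # d/dw of mean((w*pan - target)^2)
        weight -= lr * grad
    return weight
```

A few such adaptation steps on the target image are cheap when the network is lightweight, which is why fast training makes this usage modality practical.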

    Image Fusion via Sparse Regularization with Non-Convex Penalties

    The L1-norm regularized least squares method is often used to find sparse approximate solutions and is widely applied in 1-D signal restoration; basis pursuit denoising (BPD) performs noise reduction in this way. However, L1-norm regularization has the shortcoming of underestimating the true solution. Recently, a class of non-convex penalties has been proposed to improve this situation: each penalty function is non-convex itself but preserves the convexity of the whole cost function. This approach has been confirmed to offer good performance in 1-D signal denoising. This paper extends the method to 2-D signals (images) and applies it to multisensor image fusion. The problem is posed as an inverse problem, and a corresponding cost function is judiciously designed to include two data attachment terms. The whole cost function is proved to be convex when the non-convex penalty is suitably chosen, so that its minimization can be tackled by convex optimization approaches involving simple computations. The performance of the proposed method is benchmarked against a number of state-of-the-art image fusion techniques, and superior performance is demonstrated both visually and in terms of various assessment measures.
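The contrast between L1 shrinkage and a non-convex penalty is visible in their proximal (thresholding) operators: soft thresholding shrinks every coefficient, which is the underestimation the abstract mentions, while a firm-type threshold leaves large coefficients unshrunk. The specific firm-threshold rule below is one standard choice, not necessarily the penalty used in the paper:

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of the L1 norm: every surviving value is
    # shrunk by t, so large (true) coefficients are underestimated
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def firm_threshold(x, t, mu):
    # proximal operator of a firm (non-convex) penalty, mu > t:
    # small values are still zeroed, but values above mu pass unshrunk
    a = np.abs(x)
    return np.where(a <= t, 0.0,
                    np.where(a >= mu, x,
                             np.sign(x) * mu * (a - t) / (mu - t)))
```

For a coefficient of 3.0 with t = 1.0, soft thresholding returns 2.0 while the firm rule (mu = 2.0) returns 3.0 unchanged, which is exactly the bias reduction the non-convex penalties aim for.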

    Quality assessment by region in spot images fused by means dual-tree complex wavelet transform

    This work presents and evaluates a fusion algorithm for remotely sensed images, i.e. the fusion of a high spatial resolution panchromatic image with a multi-spectral image (also known as pansharpening), using the dual-tree complex wavelet transform (DT-CWT), an effective approach for conducting an analytic and oversampled wavelet transform that reduces aliasing and, in turn, the shift dependence of the wavelet transform. The proposed scheme includes the definition of a model establishing how information is extracted from the PAN band and how that information is injected into the low spatial resolution MS bands. The approach was applied to SPOT 5 images, where some bands fall outside the PAN band's spectrum. We propose an optional step in the quality evaluation protocol: studying the quality of the fusion by region, where each region represents a specific feature of the image. The results show that the DT-CWT based approach offers good spatial quality while retaining the spectral information of the original images, in the case of SPOT 5. The additional step facilitates the identification of the regions most affected by the fusion process.
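The extraction/injection model the abstract refers to can be sketched generically: extract spatial detail as the difference between the PAN band and a low-pass version of it, then add the scaled detail to each MS band. A crude block-average low-pass stands in for the DT-CWT analysis/synthesis here, so this is an assumption-laden simplification, not the paper's transform:

```python
import numpy as np

def box_lowpass(img, k=4):
    # crude block-average low-pass (stand-in for the DT-CWT
    # analysis/synthesis used in the paper); assumes k divides both dims
    h, w = img.shape
    small = img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, k, axis=0), k, axis=1)

def inject_details(pan, ms_bands, gain=1.0):
    """Injection model: PAN spatial detail = PAN - lowpass(PAN);
    each low-resolution MS band receives the detail scaled by gain."""
    detail = pan - box_lowpass(pan)
    return [band + gain * detail for band in ms_bands]
```

Because the extracted detail is zero-mean, injection adds spatial structure without shifting each band's overall radiometry, which is how such schemes preserve spectral information.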

    Computer vision techniques for forest fire perception

    This paper presents computer vision techniques for forest fire perception, involving measurement of forest fire properties (fire front, flame height, flame inclination angle, fire base width) required for the implementation of advanced forest fire-fighting strategies. The system computes a 3D perception model of the fire and can also be used to visualize the fire's evolution on remote computer systems. The presented system integrates the processing of images from visual and infrared cameras, and applies sensor fusion techniques also involving telemetry sensors and GPS. The paper also includes some results of forest fire experiments.
    Funding: European Commission EVG1-CT-2001-00043; European Commission IST-2001-34304; Ministerio de Educación y Ciencia DPI2005-0229
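One of the fire properties listed, flame height, can be estimated from a segmented infrared image in a few lines. The threshold-based segmentation and pixel-unit measurement below are simplified assumptions, not the paper's calibrated 3D pipeline:

```python
import numpy as np

def flame_height_pixels(ir_image, threshold):
    # segment hot pixels in the infrared image and measure the vertical
    # extent of the fire region, in pixels (a simplified stand-in for the
    # paper's calibrated 3D measurements)
    mask = ir_image > threshold
    rows = np.where(mask.any(axis=1))[0]
    return 0 if rows.size == 0 else int(rows[-1] - rows[0] + 1)
```

Converting such pixel measurements to metric units would require the camera geometry and telemetry that the paper's sensor fusion provides.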