635 research outputs found

    Signal processing algorithms for enhanced image fusion performance and assessment

    The dissertation presents several signal processing algorithms for image fusion in noisy multimodal conditions. It introduces a novel image fusion method that performs well for image sets heavily corrupted by noise. Unlike current image fusion schemes, the method requires no a priori knowledge of the noise component. The image is decomposed with Chebyshev polynomials (CP) used as basis functions to perform fusion at the feature level. The properties of CP, namely fast convergence and smooth approximation, render it ideal for heuristic and indiscriminate denoising fusion tasks. Quantitative evaluation using objective fusion assessment methods shows favourable performance of the proposed scheme compared with previous image fusion efforts, notably on heavily corrupted images. The approach is further improved by combining the advantages of CP with a state-of-the-art fusion technique, independent component analysis (ICA), for joint fusion processing based on region saliency. Whilst CP fusion is robust under severe noise conditions, it is prone to eliminating high-frequency information from the input images, thereby limiting image sharpness. Fusion using ICA, on the other hand, performs well in transferring edges and other salient features of the input images into the composite output. The combination of both methods, coupled with several mathematical morphological operations in an algorithm fusion framework, is considered a viable solution. Again, according to the quantitative metrics, the results of our proposed approach are very encouraging as far as joint fusion and denoising are concerned. Another focus of this dissertation is a novel metric for image fusion evaluation based on texture. The conservation of background textural details is important in many fusion applications, as such details help define image depth and structure, which may prove crucial in surveillance and remote sensing applications. Our work aims to evaluate the performance of image fusion algorithms based on their ability to retain textural details through the fusion process. This is done by utilising the gray-level co-occurrence matrix (GLCM) model to extract second-order statistical features for the derivation of an image textural measure, which then replaces the edge-based calculations in an objective fusion metric. Performance evaluation on established fusion methods verifies that the proposed metric is viable, especially for multimodal scenarios.
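    As a concrete illustration of the second-order statistics such a metric builds on, the following is a minimal numpy sketch of a gray-level co-occurrence matrix and a few Haralick-style features derived from it. The feature set and the way it would replace the edge-based term in a fusion metric are simplified here; this is not the dissertation's exact formulation.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one displacement (dx, dy):
    counts pairs (img[y, x], img[y+dy, x+dx]) and normalises to sum 1."""
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=float)
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(p):
    """Second-order Haralick-style statistics from a normalised GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)              # local intensity variation
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))  # mass near the diagonal
    energy = np.sum(p ** 2)                          # textural uniformity
    return contrast, homogeneity, energy

# Toy 4-level image: a flat patch has zero contrast and maximal homogeneity.
flat = np.zeros((8, 8), dtype=int)
contrast, homogeneity, energy = texture_features(glcm(flat, levels=4))
```

    A textured patch would spread GLCM mass away from the diagonal, raising contrast and lowering energy, which is what lets such features score how much texture a fused image retains.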

    MASADA USER GUIDE

    This user guide accompanies the MASADA tool, a public tool for the detection of built-up areas from remote sensing data. MASADA stands for Massive Spatial Automatic Data Analytics. It has been developed in the frame of the “Global Human Settlement Layer” (GHSL) project of the European Commission’s Joint Research Centre, with the overall objective of supporting the production of settlement layers at regional scale by processing high and very high resolution satellite imagery. The tool builds on the Symbolic Machine Learning (SML) classifier, a supervised classification method for remotely sensed data that extracts built-up information using a coarse-resolution settlement map or land cover information to train the classifier. The image classification workflow incorporates radiometric, textural and morphological features as inputs for information extraction. Though originally developed for built-up area extraction, the SML classifier is a multi-purpose classifier that can be used for general land cover mapping, provided there is an appropriate training data set. The tool supports several types of multispectral optical imagery. It includes ready-to-use workflows for specific sensors, but at the same time it allows parametrization and customization of the workflow by the user. Currently it includes predefined workflows for SPOT-5, SPOT-6/7, RapidEye and CBERS-4, but it has also been tested with various high and very high resolution sensors such as GeoEye-1, WorldView-2/3, Pléiades and QuickBird.

    Satellite Image Fusion in Various Domains

    To determine which fusion algorithm is best suited to panchromatic and multispectral images, fusion algorithms such as PCA and wavelet-based methods have been employed and analysed. In this paper, performance evaluation criteria are also used for quantitative assessment of fusion performance. The spectral quality of the fused images is evaluated with the ERGAS and Q4 metrics. The analysis indicates that the DWT fusion scheme has the best definition as well as spectral fidelity, and performs better with regard to high textural information absorption; therefore, as far as the study area is concerned, it is best suited for panchromatic and multispectral image fusion. An image fusion algorithm based on the wavelet transform is proposed for multispectral and panchromatic satellite images, using fusion in both the spatial and transform domains. In the proposed scheme, the images to be processed are decomposed into sub-images with the same resolution at the same levels and different resolutions at different levels, and information fusion is then performed on the high-frequency sub-images. The multi-resolution image fusion scheme based on wavelets produces a better fused image than the MS or WA schemes.
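    The decompose–combine–reconstruct pattern described above can be sketched with a one-level Haar transform in numpy. Real pan-sharpening schemes use smoother wavelet filters and several decomposition levels, so this is only an illustrative sketch of a common rule: average the low-frequency sub-bands and keep the larger-magnitude coefficient in each high-frequency sub-band.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar DWT: approximation (LL) plus detail sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # rows: low-pass
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # rows: high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def ihaar2d(ll, details):
    """Exact inverse of haar2d."""
    lh, hl, hh = details
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def fuse(img1, img2):
    """Average LL sub-bands; absolute-maximum rule on detail sub-bands."""
    ll1, d1 = haar2d(img1)
    ll2, d2 = haar2d(img2)
    ll = (ll1 + ll2) / 2.0
    dets = tuple(np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(d1, d2))
    return ihaar2d(ll, dets)

# Sanity check: fusing an image with itself must return it unchanged.
x = np.arange(16.0).reshape(4, 4)
fused = fuse(x, x)
```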

    Low-Light Hyperspectral Image Enhancement

    Due to inadequate energy captured by the hyperspectral camera sensor in poor illumination conditions, low-light hyperspectral images (HSIs) usually suffer from low visibility, spectral distortion, and various noises. A range of HSI restoration methods have been developed, yet their effectiveness in enhancing low-light HSIs is limited. This work focuses on the low-light HSI enhancement task, which aims to reveal the spatial-spectral information hidden in darkened areas. To facilitate the development of low-light HSI processing, we collect a low-light HSI (LHSI) dataset of both indoor and outdoor scenes. Based on Laplacian pyramid decomposition and reconstruction, we develop an end-to-end data-driven low-light HSI enhancement (HSIE) approach trained on the LHSI dataset. Observing that illumination is related to the low-frequency component of an HSI, while textural details are closely correlated with the high-frequency component, the proposed HSIE is designed with two branches. The illumination enhancement branch enlightens the low-frequency component at reduced resolution. The high-frequency refinement branch refines the high-frequency component via a predicted mask. In addition, to improve information flow and boost performance, we introduce an effective channel attention block (CAB) with residual dense connections, which serves as the basic block of the illumination enhancement branch. Experimental results on the LHSI dataset demonstrate the effectiveness and efficiency of HSIE in both quantitative assessment measures and visual effects. Classification performance on the remote sensing Indian Pines dataset shows that downstream tasks benefit from the enhanced HSI. Datasets and code are available at https://github.com/guanguanboy/HSIE.
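    A minimal numpy sketch of the Laplacian pyramid split that the two branches operate on, using 2x2 block averaging as a stand-in for Gaussian filtering. The paper's actual decomposition and learned branches are more elaborate; the point here is only that the low-frequency residual (illumination) and the high-frequency levels (texture) can be processed separately and recombined exactly.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (stand-in for blur + decimate)."""
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def upsample(img):
    """Nearest-neighbour 2x upsampling."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels=2):
    """Split an image into high-frequency band-pass levels plus a
    low-frequency residual."""
    pyr, cur = [], img
    for _ in range(levels):
        down = downsample(cur)
        pyr.append(cur - upsample(down))  # high-frequency detail at this scale
        cur = down
    pyr.append(cur)                       # low-frequency residual (illumination)
    return pyr

def reconstruct(pyr):
    """Invert the decomposition: upsample and add details back, coarse to fine."""
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = upsample(cur) + detail
    return cur

x = np.linspace(0.0, 1.0, 256).reshape(16, 16)
pyr = laplacian_pyramid(x, levels=2)
recon = reconstruct(pyr)
```

    Because the residual stores exactly what each band-pass level removed, reconstruction is lossless; an enhancement network can therefore modify the residual and details independently without reconstruction artifacts from the split itself.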

    Multi-Sensor Image Registration, Fusion and Dimension Reduction

    With the development of future spacecraft formations come a number of complex challenges, such as maintaining precise relative positions and specified attitudes, as well as communicating with each other. More generally, with the advent of spacecraft formations, issues related to on-board, automatic data computing and analysis, as well as decision planning and scheduling, will figure among the most important requirements. Among these, automatic image registration, image fusion and dimension reduction represent intelligent technologies that would reduce mission costs, enable autonomous decisions to be taken on-board, and make formation flying adaptive, self-reliant, and cooperative. For both on-board and on-the-ground applications, the need for dimension reduction is two-fold: first, to reduce the communication bandwidth; second, as a pre-processing step to make computations feasible, simpler and faster.
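    Dimension reduction of the kind described, compressing the data before transmission and reconstructing it on the ground, is commonly done with PCA. The following numpy sketch is a generic illustration, not the specific method used in this work: the spacecraft would transmit only the scores, components and mean instead of the full data matrix.

```python
import numpy as np

def pca_reduce(X, k):
    """Project n samples x d features onto the top-k principal components.
    Returns scores (the reduced data), the components and the mean."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # SVD of the centred data: rows of Vt are the principal directions.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    comps = Vt[:k]
    scores = Xc @ comps.T       # reduced representation to transmit
    return scores, comps, mu

# Toy example: 100 samples in 10-D that actually live on a 2-D subspace,
# so 2 components reconstruct the data essentially exactly.
rng = np.random.default_rng(1)
latent = rng.random((100, 2))
basis = rng.random((2, 10))
X = latent @ basis
scores, comps, mu = pca_reduce(X, k=2)
X_hat = scores @ comps + mu     # ground-side reconstruction
err = np.abs(X_hat - X).max()
```

    Here the bandwidth saving is the ratio of transmitted numbers, (100*2 + 2*10 + 10) versus 100*10, and grows with the feature dimension.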

    Intelligent imaging systems for automotive applications

    In common with many other application areas, visual signals are becoming an increasingly important information source for many automotive applications. For several years CCD cameras have been used as research tools for a range of automotive applications. Infrared cameras, RADAR and LIDAR are other types of imaging sensors that have also been widely investigated for use in cars. This paper describes work in this field performed in C2VIP over the last decade, starting with night vision systems and looking at various other advanced driver assistance systems. From this experience, we make the following observations, which are crucial for "intelligent" imaging systems: 1. Careful arrangement of the sensor array. 2. Dynamic self-calibration. 3. Networking and processing. 4. Fusion with other imaging sensors, at both the image level and the feature level, which provides much more flexibility and reliability in complex situations. We discuss how these problems can be addressed and what the outstanding issues are.

    Blending of Images Using Discrete Wavelet Transform

    The project presents multi-focus image fusion using the discrete wavelet transform with local directional patterns and spatial frequency analysis. Multi-focus image fusion in wireless visual sensor networks is a process of blending two or more images to obtain a new one that describes the scene more accurately than any of the individual source images. In this project, the proposed model utilises the multi-scale decomposition performed by the discrete wavelet transform to fuse the images in the frequency domain. It decomposes an image into two different components, structural and textural information, and does not downsample the image while transforming it into the frequency domain, so it preserves edge and texture details when reconstructing the image from the frequency domain. This reduces problems such as the blocking and ringing artifacts that occur with DCT and conventional DWT. The low-frequency sub-band coefficients are fused by selecting the coefficient with the maximum spatial frequency, which indicates the overall activity level of an image. The high-frequency sub-band coefficients are fused by selecting the coefficients with the maximum LDP code value. LDP computes the edge response values in all eight directions at each pixel position and generates a code from the relative strength magnitudes. Finally, the two fused frequency sub-bands are inverse transformed to reconstruct the fused image. System performance is evaluated using parameters such as peak signal-to-noise ratio, correlation and entropy.
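    The spatial-frequency selection rule for the low-frequency coefficients can be sketched as follows. SF here is the usual root-mean-square of horizontal and vertical first differences, and the whole-block selection helper is an illustrative simplification of per-coefficient or per-region selection.

```python
import numpy as np

def spatial_frequency(img):
    """Overall activity level of an image block: RMS of row-wise (RF) and
    column-wise (CF) first differences, SF = sqrt(RF^2 + CF^2)."""
    rf2 = np.mean(np.diff(img, axis=1) ** 2)  # row frequency (horizontal changes)
    cf2 = np.mean(np.diff(img, axis=0) ** 2)  # column frequency (vertical changes)
    return np.sqrt(rf2 + cf2)

def fuse_blocks(b1, b2):
    """Pick the block with the higher spatial frequency, i.e. the one with
    more in-focus detail."""
    return b1 if spatial_frequency(b1) >= spatial_frequency(b2) else b2

sharp = np.tile([0.0, 1.0], (8, 4))  # high-contrast (in-focus) block
blurry = np.full((8, 8), 0.5)        # flat (out-of-focus) block
chosen = fuse_blocks(blurry, sharp)
```

    A perfectly flat block has SF = 0, so the rule always prefers the block that carries actual scene detail.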

    On Generative Adversarial Network Based Synthetic Iris Presentation Attack And Its Detection

    The human iris is considered a reliable and accurate modality for biometric recognition due to its unique texture information. The reliability and accuracy of the iris biometric modality have prompted its large-scale deployment for critical applications such as border control and national identification projects. The extensive growth of iris recognition systems has raised apprehensions about their susceptibility to various presentation attacks. In this thesis, a novel iris presentation attack using deep-learning-based synthetically generated iris images is presented. Utilising the generative capability of deep convolutional generative adversarial networks and iris quality metrics, a new framework, named iDCGAN, is proposed for creating realistic-appearing synthetic iris images. An in-depth analysis is performed using quality score distributions of real and synthetically generated iris images to understand the effectiveness of the proposed approach. We also demonstrate that synthetically generated iris images can be used to attack existing iris recognition systems. Since synthetically generated iris images can be effectively deployed in iris presentation attacks, it is important to develop accurate iris presentation attack detection algorithms that can distinguish such synthetic iris images from real ones. For this purpose, a novel structural and textural feature-based iris presentation attack detection framework (DESIST) is proposed. The key emphasis of DESIST is on a unified framework for detecting a medley of iris presentation attacks, including synthetic irises. Experimental evaluations showcase the efficacy of the proposed DESIST framework in detecting synthetic iris presentation attacks.
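    As an illustration of the kind of textural feature a presentation-attack detector can draw on, here is a minimal local-binary-pattern histogram in numpy. This is a hypothetical sketch only: the actual structural and textural features used by DESIST are richer than plain LBP.

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbour local binary pattern histogram: each interior pixel gets
    an 8-bit code recording which neighbours are >= the centre; the
    normalised histogram of codes is a simple textural descriptor."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(2)
noisy = rng.integers(0, 256, (32, 32))   # texture-rich patch
flat = np.full((32, 32), 128)            # textureless patch
h_noisy, h_flat = lbp_histogram(noisy), lbp_histogram(flat)
```

    A flat patch collapses to a single LBP code, while a textured iris region spreads mass across many codes; a classifier over such histograms can separate texture statistics of real captures from synthetic ones.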

    Datasets, Clues and State-of-the-Arts for Multimedia Forensics: An Extensive Review

    With large volumes of social media data being created daily and the parallel rise of realistic multimedia tampering methods, detecting and localising tampering in images and videos has become essential. This survey focuses on approaches for tampering detection in multimedia data using deep learning models. Specifically, it presents a detailed analysis of publicly available benchmark datasets for malicious manipulation detection. It also offers a comprehensive list of tampering clues and commonly used deep learning architectures. Next, it discusses the current state-of-the-art tampering detection methods, categorising them into meaningful types such as deepfake detection, splice tampering detection and copy-move tampering detection, and discussing their strengths and weaknesses. Top results achieved on benchmark datasets, comparisons of deep learning approaches against traditional methods, and critical insights from recent tampering detection methods are also discussed. Lastly, research gaps, future directions and conclusions are discussed to provide an in-depth understanding of the tampering detection research arena.