
    Multi-exposure microscopic image fusion-based detail enhancement algorithm

    Traditional microscope imaging techniques cannot capture the complete dynamic range of a diatom species with complex silica-based cell walls and multi-scale patterns. To extract details from the diatom, multi-exposure images are captured at variable exposure settings using microscopy techniques. Recent work shows that image fusion overcomes the limitations of standard digital cameras in capturing details from a high-dynamic-range scene or specimen photographed with microscopy imaging techniques. In this paper, we present a cell-region sensitive exposure fusion (CS-EF) approach to produce well-exposed fused images that can be displayed directly on conventional display devices. The aim is to preserve details in poorly and brightly illuminated regions of 3-D transparent diatom shells. This objective is achieved by taking into account local information measures, which select well-exposed regions across the input exposures. In addition, a modified histogram equalization is introduced to improve the uniformity of the input multi-exposure images prior to fusion. Quantitative and qualitative assessment of the proposed fusion results reveals better performance than several state-of-the-art algorithms, substantiating the method's validity. This work was supported in part by the Spanish Government, Spain under the AQUALITAS-retos project (Ref. CTM2014-51907-C2-2-R-MINECO) and by Junta de Comunidades de Castilla-La Mancha, Spain under project HIPERDEEP (Ref. SBPLY/19/180501/000273). The funding agencies had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
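    As a rough illustration of the exposure-fusion idea described in this abstract (not the CS-EF algorithm itself), the sketch below combines a bracketed stack using Mertens-style per-pixel weights built from local contrast and well-exposedness; the function names, weighting terms, and parameters are illustrative assumptions.

```python
# Minimal Mertens-style exposure-fusion sketch, NOT the paper's CS-EF method:
# per-pixel weights from local contrast and well-exposedness, followed by a
# normalized weighted average of the input exposures.
import cv2
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """images: list of float32 RGB arrays in [0, 1] with identical shapes."""
    weights = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
        # Local contrast: magnitude of the Laplacian response.
        contrast = np.abs(cv2.Laplacian(gray, cv2.CV_32F)) + 1e-6
        # Well-exposedness: Gaussian distance of each channel from mid-gray.
        well_exposed = np.prod(np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)), axis=2)
        weights.append(contrast * well_exposed)
    weights = np.stack(weights)                      # (N, H, W)
    weights /= weights.sum(axis=0, keepdims=True)    # normalize across exposures
    fused = np.zeros_like(images[0])
    for w, img in zip(weights, images):
        fused += w[..., None] * img
    return np.clip(fused, 0.0, 1.0)

# Usage: stack = [cv2.imread(p).astype(np.float32)[..., ::-1] / 255.0 for p in paths]
#        result = fuse_exposures(stack)
```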

    Holistic Dynamic Frequency Transformer for Image Fusion and Exposure Correction

    The correction of exposure-related issues is a pivotal component in enhancing the quality of images, with substantial implications for various computer vision tasks. Historically, most methodologies have relied predominantly on spatial-domain recovery, giving limited consideration to the potential of the frequency domain. Additionally, there has been a lack of a unified perspective on low-light enhancement, exposure correction, and multi-exposure fusion, complicating and impeding the optimization of image processing. In response to these challenges, this paper proposes a novel methodology that leverages the frequency domain to improve and unify the handling of exposure correction tasks. Our method introduces a Holistic Frequency Attention and a Dynamic Frequency Feed-Forward Network, which replace conventional correlation computation in the spatial domain. They form a foundational building block that facilitates a U-shaped Holistic Dynamic Frequency Transformer acting as a filter to extract global information and dynamically select important frequency bands for image restoration. Complementing this, we employ a Laplacian pyramid to decompose images into distinct frequency bands, followed by multiple restorers, each tuned to recover specific frequency-band information. The pyramid fusion allows a more detailed and nuanced image restoration process. Ultimately, our structure unifies the three tasks of low-light enhancement, exposure correction, and multi-exposure fusion, enabling comprehensive treatment of all classical exposure errors. Benchmarking on mainstream datasets for these tasks, our proposed method achieves state-of-the-art results, paving the way for more sophisticated and unified solutions in exposure correction.
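    The Laplacian-pyramid decomposition mentioned above can be sketched as follows. This is a generic decompose/recombine pipeline with the per-band restorers omitted; it is not the paper's Holistic Dynamic Frequency Transformer, and the level count is an illustrative choice.

```python
# Laplacian pyramid: split an image into frequency bands, then recombine.
# A band-specific restorer would be applied to each band before recombination.
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    bands, current = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        bands.append(current - up)   # high-frequency residual at this scale
        current = down
    bands.append(current)            # low-frequency base
    return bands

def reconstruct(bands):
    current = bands[-1]
    for band in reversed(bands[:-1]):
        current = cv2.pyrUp(current, dstsize=(band.shape[1], band.shape[0]))
        current = current + band     # a per-band restorer would act on `band` here
    return current
```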

    Enhancement of Single and Composite Images Based on Contourlet Transform Approach

    Image enhancement is an imperative step in almost every image processing algorithm. Numerous image enhancement algorithms have been developed for gray-scale images, despite the absence of such images from many recent applications. This thesis proposes new image enhancement techniques for 8-bit single and composite digital color images. Recently, it has become evident that wavelet transforms are not necessarily best suited for images. Therefore, the enhancement approaches are based on a new 'true' two-dimensional transform called the contourlet transform. The proposed enhancement techniques discussed in this thesis are developed based on an understanding of the working mechanisms of the new multiresolution property of the contourlet transform. This research also investigates the effects of using different color space representations for color image enhancement applications. Based on this investigation, an optimal color space is selected for both the single-image and composite-image enhancement approaches. The objective evaluation shows that the new enhancement methods are superior not only to the commonly used transform method (e.g., the wavelet transform) but also to various spatial models (e.g., histogram equalization). The results are encouraging, and the enhancement algorithms have proved to be more robust and reliable.
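    Since no standard Python contourlet implementation exists, the hedged sketch below illustrates the general multiresolution detail-enhancement idea using a wavelet transform (the baseline the thesis compares against) on the luminance channel; the wavelet family, gain factor, and colour space are illustrative assumptions, not the thesis's method.

```python
# Multiresolution detail enhancement via a wavelet transform (stand-in for
# the contourlet transform): amplify detail sub-bands of the luminance channel.
import cv2
import numpy as np
import pywt

def wavelet_detail_enhance(bgr, gain=1.5, levels=2):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y = ycrcb[..., 0]
    coeffs = pywt.wavedec2(y, "db2", level=levels)
    # Amplify detail sub-bands at every level, leave the approximation intact.
    enhanced = [coeffs[0]] + [tuple(gain * d for d in detail) for detail in coeffs[1:]]
    restored = pywt.waverec2(enhanced, "db2")[:y.shape[0], :y.shape[1]]
    ycrcb[..., 0] = np.clip(restored, 0, 255)
    return cv2.cvtColor(ycrcb.astype(np.uint8), cv2.COLOR_YCrCb2BGR)
```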

    Comparative study of Image Fusion Methods: A Review

    As the size and cost of sensors decrease, sensor networks are increasingly becoming an attractive way to collect information over a given area. However, a single sensor cannot provide all the required information, either because of its design or because of observational constraints. One possible solution for obtaining all the required information about a particular scene or subject is data fusion. The small number of metrics proposed so far provide only a rough numerical estimate of fusion performance, with limited understanding of the relative merits of different fusion schemes. This paper proposes a method for comprehensive, objective image fusion performance characterization, using a fusion evaluation framework based on gradient information representation. We give the framework of the overall system and explain how it is used. The system offers many functions: image denoising, image enhancement, image registration, image segmentation, image fusion, and fusion evaluation. The paper also presents a literature review of image fusion techniques such as Laplacian pyramid based fusion, discrete wavelet transform based fusion, and principal component analysis (PCA) based fusion. Comparing all of these techniques can suggest the better approach for future research.
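    A simplified gradient-information score in the spirit of the evaluation framework described above might look like the following: it measures how much of each source image's edge strength survives in the fused result. This is an illustrative variant under assumed conventions, not the paper's exact metric.

```python
# Gradient-based fusion-quality sketch: edge-strength preservation from a
# source image to the fused image, weighted by the source's edge strength.
import cv2
import numpy as np

def edge_strength(gray):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    return np.sqrt(gx ** 2 + gy ** 2)

def gradient_preservation(source_gray, fused_gray, eps=1e-6):
    gs, gf = edge_strength(source_gray), edge_strength(fused_gray)
    # Per-pixel preservation: ratio of the weaker to the stronger edge response.
    preservation = np.minimum(gs, gf) / (np.maximum(gs, gf) + eps)
    # Weight by the source edge strength so salient edges dominate the score.
    return float((preservation * gs).sum() / (gs.sum() + eps))

# An overall score for sources A, B and fused image F could average
# gradient_preservation(A, F) and gradient_preservation(B, F).
```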

    Pixel-level Image Fusion Algorithms for Multi-camera Imaging System

    This thesis work is motivated by the potential and promise of image fusion technologies in multi-sensor image fusion systems and applications. With specific focus on pixel-level image fusion, the step that follows image registration, we develop a graphical user interface for multi-sensor image fusion software using Microsoft Visual Studio and the Microsoft Foundation Class library. In this thesis, we propose and present image fusion algorithms with low computational cost, based upon spatial mixture analysis. The segment weighted average image fusion combines several low-spatial-resolution data sources from different sensors to create a large, high-resolution fused image. This research includes developing a segment-based step built upon a stepwise divide-and-combine process. In the second stage of the process, linear interpolation optimization is used to sharpen the image resolution. Implementation of these image fusion algorithms is completed on the graphical user interface we developed. Multi-sensor image fusion is easily accommodated by the algorithm, and the results are demonstrated at multiple scales. Using quantitative measures such as mutual information, we obtain quantifiable experimental results. We also use an image morphing technique to generate fused image sequences to simulate the results of image fusion. While deploying our pixel-level image fusion approaches, we observe several challenges in the popular image fusion methods: although their high computational cost and complex processing steps provide accurate fused results, they also make those methods hard to deploy in systems and applications that require real-time feedback, high flexibility, and low computational load.
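    The mutual-information measure mentioned above as a quantitative fusion metric can be sketched as follows; the bin count and the convention of summing MI over both source images are common choices assumed here for illustration.

```python
# Mutual information between a source image and the fused image, computed
# from a joint intensity histogram and its marginals.
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """img_a, img_b: grayscale arrays of the same shape, values in [0, 255]."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal of img_a
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal of img_b
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log2(p_ab[nz] / (p_a @ p_b)[nz])))

# Fusion quality is often reported as MI(source1, fused) + MI(source2, fused).
```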

    Multimodal enhancement-fusion technique for natural images.

    Masters Degree. University of KwaZulu-Natal, Durban. This dissertation presents a multimodal enhancement-fusion (MEF) technique for natural images. The MEF is expected to contribute value to machine vision applications and personal image collections for the human user. Image enhancement techniques, and the metrics used to assess their performance, are prolific, and each is usually optimised for a specific objective. The MEF proposes a framework that adaptively fuses multiple enhancement objectives into a seamless pipeline. Given a segmented input image and a set of enhancement methods, the MEF applies all the enhancers to the image in parallel. The most appropriate enhancement in each image segment is identified, and finally the differentially enhanced segments are seamlessly fused. To begin with, this dissertation studies targeted contrast enhancement methods and performance metrics that can be utilised in the proposed MEF. It addresses a selection of objective assessment metrics for contrast-enhanced images and determines their relationship with the subjective assessment of human visual systems, in order to identify which objective metrics best approximate human assessment and may therefore replace tedious human assessment surveys. A subsequent human visual assessment survey is conducted on the same dataset to ascertain image quality as perceived by a human observer. The interrelated concepts of naturalness and detail were found to be key motivators of human visual assessment. Findings show that when assessing the quality or accuracy of these methods, no single quantitative metric correlates well with human perception of naturalness and detail; however, a combination of two or more metrics may be used to approximate the complex human visual response. Thereafter, this dissertation proposes the multimodal enhancer that adaptively selects the optimal enhancer for each image segment. MEF focusses on improving chromatic irregularities such as poor contrast distribution. It deploys a concurrent enhancement pathway that subjects an image to multiple image enhancers in parallel, followed by a fusion algorithm that creates a composite image combining the strengths of each enhancement path. The study develops a framework for parallel image enhancement, followed by parallel image assessment and selection, leading to final merging of selected regions from the enhanced set. The output combines desirable attributes from each enhancement pathway to produce a result that is superior to each path taken alone. The study showed that the proposed MEF technique performs well for most image types: MEF is subjectively favourable to a human panel and achieves better performance on objective image quality assessment compared to other enhancement methods.
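    A minimal sketch of the parallel enhance-assess-select-fuse pipeline is given below, assuming two stand-in enhancers (global histogram equalization and CLAHE), a toy per-segment contrast score, and hard segment-wise selection; the dissertation's actual enhancer set, assessment metrics, and seamless blending are more elaborate.

```python
# Parallel enhancement, per-segment assessment, selection, and merging.
import cv2
import numpy as np

def enhancers(bgr):
    """Two example enhancement paths applied to the luminance channel."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    he = ycrcb.copy()
    he[..., 0] = cv2.equalizeHist(he[..., 0])
    clahe = ycrcb.copy()
    clahe[..., 0] = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(clahe[..., 0])
    return [cv2.cvtColor(x, cv2.COLOR_YCrCb2BGR) for x in (he, clahe)]

def multimodal_fuse(bgr, segments):
    """segments: integer label map (H, W) assigning each pixel to a segment id."""
    candidates = enhancers(bgr)
    grays = [cv2.cvtColor(c, cv2.COLOR_BGR2GRAY) for c in candidates]
    out = np.zeros_like(bgr)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        # Pick the enhancement path whose segment shows the highest local contrast.
        best = max(range(len(candidates)), key=lambda i: grays[i][mask].std())
        out[mask] = candidates[best][mask]
    return out
```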

    Single underwater image enhancement based on adaptive correction of channel differential and fusion

    Clear underwater images are necessary in many underwater applications, yet absorption, scattering, and varying water conditions lead to blurring and color deviations. To overcome the limitations of available color correction and deblurring algorithms, this paper proposes a fusion-based image enhancement method for various water areas. We propose two novel image processing methods, namely an adaptive channel deblurring method and a color correction method that limits the histogram mapping interval. Using these two methods, we derive two images from a single underwater image as inputs to the fusion framework and finally obtain a satisfactory underwater image. To validate the effectiveness of the approach, we tested our method on public datasets. The results show that the proposed method can adaptively correct color casts and significantly enhance the details and quality of attenuated underwater images.
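    A hedged sketch of the two-input fusion idea follows: derive a color-corrected image and a detail-oriented image from the same underwater frame, then blend them. The gray-world correction, CLAHE, and equal-weight blend are simple stand-ins for the paper's adaptive channel deblurring, histogram-interval color correction, and fusion framework.

```python
# Derive two complementary inputs from one underwater frame and blend them.
import cv2
import numpy as np

def gray_world(bgr):
    img = bgr.astype(np.float32)
    means = img.reshape(-1, 3).mean(axis=0)
    img *= means.mean() / (means + 1e-6)          # pull channel means together
    return np.clip(img, 0, 255).astype(np.uint8)

def local_contrast(bgr):
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab)
    lab[..., 0] = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(lab[..., 0])
    return cv2.cvtColor(lab, cv2.COLOR_Lab2BGR)

def enhance_underwater(bgr, alpha=0.5):
    color_corrected = gray_world(bgr)             # color-cast removal input
    detail_boosted = local_contrast(bgr)          # contrast/detail input
    return cv2.addWeighted(color_corrected, alpha, detail_boosted, 1 - alpha, 0)
```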