
    A comprehensive study of tone mapping of high dynamic range images with subjective tests

    A high dynamic range (HDR) image has a very wide range of luminance levels that traditional low dynamic range (LDR) displays cannot visualize. For this reason, HDR images are usually transformed to 8-bit representations in which the alpha channel of each pixel is used as an exponent value, sometimes referred to as exponential notation [43]. Tone mapping operators (TMOs) transform the high dynamic range domain to the low dynamic range domain by compressing pixel values so that traditional LDR displays can visualize them. The purpose of this thesis is to identify and analyse differences and similarities between the wide range of tone mapping operators available in the literature. Each TMO has been analysed through subjective studies under different conditions, including environment, luminance, and colour. Several inverse tone mapping operators, HDR mappings with exposure fusion, histogram adjustment, and retinex have also been analysed in this study. Nineteen different TMOs have been examined using a variety of HDR images. A mean opinion score (MOS) has been calculated for the selected TMOs by asking the opinion of 25 independent people, taking into account the candidates' age, vision, and colour blindness.
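
    As a concrete illustration of the scoring step described above, the sketch below averages per-participant ratings into a mean opinion score per operator. The operator names, the 1-5 rating scale, and the data layout are illustrative assumptions, not taken from the thesis.

```python
# Minimal sketch of a mean opinion score (MOS) computation for a TMO study.
# Assumes `ratings[tmo]` holds 1-5 opinion scores from each participant;
# the layout and scale are illustrative, not taken from the thesis.
import numpy as np

def mean_opinion_scores(ratings: dict[str, list[float]]) -> dict[str, float]:
    """Average each operator's ratings across all participants."""
    return {tmo: float(np.mean(scores)) for tmo, scores in ratings.items()}

# Example: three hypothetical operators rated by five participants.
ratings = {
    "reinhard": [4, 5, 4, 3, 4],
    "drago":    [3, 4, 3, 3, 4],
    "durand":   [4, 4, 5, 4, 4],
}
print(mean_opinion_scores(ratings))
```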

    Super resolution and dynamic range enhancement of image sequences

    Camera producers try to increase the spatial resolution of a camera by reducing the size of the sites on the sensor array. However, shot noise causes the signal-to-noise ratio to drop as sensor sites get smaller. This fact motivates resolution enhancement performed in software. Super resolution (SR) image reconstruction aims to combine degraded images of a scene in order to form an image with higher resolution than all of the observations. There is a demand for high resolution images in biomedical imaging, surveillance, aerial/satellite imaging, and high-definition TV (HDTV) technology. Although extensive research has been conducted in SR, little attention has been given to increasing the resolution of images under illumination changes. In this study, a unique framework is proposed to increase the spatial resolution and dynamic range of a video sequence using Bayesian and Projection onto Convex Sets (POCS) methods. Incorporating camera response function estimation into image reconstruction allows dynamic range enhancement along with spatial resolution improvement. Photometrically varying input images complicate the process of projecting observations onto a common grid by violating brightness constancy. A contrast invariant feature transform is proposed in this thesis to register input images with high illumination variation. The proposed algorithm increases the repeatability rate of detected features among frames of a video. The repeatability rate is increased by computing the autocorrelation matrix using the gradients of contrast stretched input images. The presented contrast invariant feature detection improves the repeatability rate of the Harris corner detector by around 25% on average. Joint multi-frame demosaicking and resolution enhancement is also investigated in this thesis. A color constancy constraint set is devised and incorporated into the POCS framework for increasing the resolution of color-filter array sampled images. The proposed method produces fewer demosaicking artifacts than the existing POCS method and higher visual quality in the final image.
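
    The repeatability improvement described above hinges on computing the gradient autocorrelation (Harris) matrix on contrast-stretched frames. The sketch below shows one generic way to do that, with a percentile stretch followed by a standard Harris response; the parameter values and the stretch itself are assumptions and do not reproduce the thesis's contrast invariant feature transform exactly.

```python
# Minimal sketch: Harris response computed on a contrast-stretched image.
# Parameter values (sigma, k) and the percentile stretch are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def contrast_stretch(img: np.ndarray, low_pct=1, high_pct=99) -> np.ndarray:
    """Linearly map the [low, high] percentile range to [0, 1]."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return np.clip((img - lo) / max(hi - lo, 1e-8), 0.0, 1.0)

def harris_response(img: np.ndarray, sigma=1.5, k=0.04) -> np.ndarray:
    """Harris corner response from the smoothed gradient autocorrelation matrix."""
    ix, iy = sobel(img, axis=1), sobel(img, axis=0)
    ixx = gaussian_filter(ix * ix, sigma)
    iyy = gaussian_filter(iy * iy, sigma)
    ixy = gaussian_filter(ix * iy, sigma)
    det = ixx * iyy - ixy ** 2
    trace = ixx + iyy
    return det - k * trace ** 2

# Usage: response = harris_response(contrast_stretch(frame))
```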

    Adaptive multi-scale retinex algorithm for contrast enhancement of real world scenes

    Contrast enhancement is a classic image restoration technique that has traditionally been performed using forms of histogram equalization. While effective, these techniques often introduce unrealistic tonal rendition in real-world scenes. This paper explores the use of Retinex theory to perform contrast enhancement of real-world scenes. We propose an improvement to the Multi-Scale Retinex algorithm which enhances its ability to perform dynamic range compression without introducing halo artifacts or greying. The algorithm is well suited to GPU implementation, through which real-time processing speeds are achieved.
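
    For reference, the sketch below implements the classic multi-scale Retinex on a luminance channel: the average over several scales of log(I) minus log(Gaussian-blurred I). The scale set and equal weights are conventional assumptions; the paper's adaptive, halo-suppressing, and GPU-accelerated refinements are not reproduced.

```python
# Minimal sketch of classic multi-scale Retinex (MSR) on a single channel.
# Scales and equal weights are conventional assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(lum: np.ndarray, sigmas=(15, 80, 250), eps=1e-6):
    """Average of single-scale Retinex outputs: log(I) - log(Gaussian * I)."""
    lum = lum.astype(np.float64) + eps
    out = np.zeros_like(lum)
    for sigma in sigmas:
        out += np.log(lum) - np.log(gaussian_filter(lum, sigma) + eps)
    return out / len(sigmas)
```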

    Inverse tone mapping

    The introduction of High Dynamic Range Imaging in computer graphics has produced a change in imaging comparable to, if not greater than, the introduction of colour photography. Light can now be captured, stored, processed, and finally visualised without losing information. Moreover, new applications that can exploit physical values of the light have been introduced, such as re-lighting of synthetic/real objects or enhanced visualisation of scenes. However, these new processing and visualisation techniques cannot be applied to the movies and pictures produced by photography and cinematography over more than one hundred years. This thesis introduces a general framework for expanding legacy content into High Dynamic Range content. The expansion avoids artefacts and produces images suitable for visualisation and for re-lighting of synthetic/real objects. Moreover, a methodology based on psychophysical experiments and computational metrics is presented to measure the performance of expansion algorithms. Finally, a compression scheme for High Dynamic Range Textures, inspired by the framework, is proposed and evaluated.
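
    A minimal example of global LDR-to-HDR expansion, in the spirit of inverse tone mapping, is sketched below using the analytic inverse of the simple Reinhard operator L_d = L_w / (1 + L_w). The target peak value is an assumption, and the thesis's artefact-avoiding expansion framework is considerably more elaborate than this.

```python
# Minimal sketch of a generic global LDR-to-HDR expansion: the inverse of the
# simple Reinhard operator L_d = L_w / (1 + L_w). Not the thesis's framework.
import numpy as np

def inverse_reinhard(ldr_lum: np.ndarray, peak_nits: float = 1000.0) -> np.ndarray:
    """Expand normalized LDR luminance in [0, 1) back to a wide range."""
    ld = np.clip(ldr_lum, 0.0, 0.999)        # avoid division by zero at 1.0
    lw = ld / (1.0 - ld)                     # analytic inverse of the operator
    return lw / lw.max() * peak_nits         # rescale to an assumed display peak
```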

    Exposure Fusion Using Boosting Laplacian Pyramid

    This paper proposes a new exposure fusion approach for producing a high quality image from multiple exposure images. A novel hybrid exposure weight measurement is developed from a local weight and a global weight, which consider the exposure quality across the different exposure images, together with a just-noticeable-distortion-based saliency weight. This new hybrid weight is guided not only by a single image's exposure level but also by the relative exposure level between different exposure images. The core of the approach is our novel boosting Laplacian pyramid, which boosts the detail and base signals separately, with the boosting process guided by the proposed exposure weight. Our approach can effectively blend multiple exposure images of static scenes while preserving both color appearance and texture structure. Our experimental results demonstrate that the proposed approach produces visually pleasing exposure fusion images with better color appearance and more texture detail than existing exposure fusion techniques and tone mapping operators. Index Terms: boosting Laplacian pyramid, exposure fusion, global and local exposure weight, gradient vector.
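
    The sketch below shows a baseline Laplacian-pyramid exposure fusion (in the style of Mertens et al.): per-image weight maps are blended with the images level by level and the pyramid is then collapsed. The simple well-exposedness weight stands in for the paper's hybrid JND/saliency weights, and the boosting step is not reproduced.

```python
# Minimal sketch of baseline Laplacian-pyramid exposure fusion for grayscale
# exposures in [0, 1]. The weight below is a generic well-exposedness measure.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def down(img):
    """Blur, then decimate by two in each dimension."""
    return gaussian_filter(img, 1.0)[::2, ::2]

def up(img, shape):
    """Bilinearly upsample back to a given shape."""
    return zoom(img, (shape[0] / img.shape[0], shape[1] / img.shape[1]), order=1)

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(down(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    return [gp[i] - up(gp[i + 1], gp[i].shape) for i in range(levels - 1)] + [gp[-1]]

def fuse_exposures(gray_stack, levels=4):
    """Blend a list of registered single-channel exposures into one image."""
    weights = [np.exp(-((g - 0.5) ** 2) / 0.08) + 1e-8 for g in gray_stack]
    norm = sum(weights)
    weights = [w / norm for w in weights]          # per-pixel normalization
    fused = [np.zeros_like(l) for l in laplacian_pyramid(gray_stack[0], levels)]
    for g, w in zip(gray_stack, weights):
        lp, wp = laplacian_pyramid(g, levels), gaussian_pyramid(w, levels)
        fused = [f + l * wgt for f, l, wgt in zip(fused, lp, wp)]
    out = fused[-1]                                # collapse coarse to fine
    for lvl in reversed(fused[:-1]):
        out = up(out, lvl.shape) + lvl
    return np.clip(out, 0.0, 1.0)
```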

    Multi-exposure microscopic image fusion-based detail enhancement algorithm

    Traditional microscope imaging techniques are unable to retrieve the complete dynamic range of a diatom species with complex silica-based cell walls and multi-scale patterns. In order to extract details from the diatom, multi-exposure images are captured at variable exposure settings using microscopy techniques. A recent innovation shows that image fusion overcomes the limitations of standard digital cameras in capturing details from a high dynamic range scene or specimen photographed using microscopy imaging techniques. In this paper, we present a cell-region sensitive exposure fusion (CS-EF) approach to produce well-exposed fused images that can be presented directly on conventional display devices. The aim is to preserve details in poorly and brightly illuminated regions of 3-D transparent diatom shells. This objective is achieved by taking into account local information measures, which select well-exposed regions across the input exposures. In addition, a modified histogram equalization is introduced to improve the uniformity of the input multi-exposure images prior to fusion. Quantitative and qualitative assessment of the proposed fusion results reveals better performance than several state-of-the-art algorithms, which substantiates the method's validity. This work was supported in part by the Spanish Government under the AQUALITAS-retos project (Ref. CTM2014-51907-C2-2-R-MINECO) and by Junta de Comunidades de Castilla-La Mancha, Spain, under project HIPERDEEP (Ref. SBPLY/19/180501/000273). The funding agencies had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
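
    The sketch below illustrates, under generic assumptions, the two preprocessing ingredients mentioned above: a plain histogram equalization of each exposure and a local information map (here, local variance) that favours detailed, well-exposed regions. The paper's cell-region sensitive weighting and modified equalization differ from these stand-ins.

```python
# Minimal sketch: generic histogram equalization and a local-variance "detail"
# map, as simple stand-ins for the paper's preprocessing and weighting steps.
import numpy as np
from scipy.ndimage import uniform_filter

def equalize(img: np.ndarray, bins: int = 256) -> np.ndarray:
    """Map intensities through the image's own CDF (values assumed in [0, 1])."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]
    return np.interp(img, edges[:-1], cdf)

def local_information(img: np.ndarray, size: int = 9) -> np.ndarray:
    """Local variance as a simple measure of detail around each pixel."""
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.maximum(mean_sq - mean * mean, 0.0)
```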

    FluoRender: An application of 2D image space methods for 3D and 4D confocal microscopy data visualization in neurobiology research

    Get PDF
    2D image space methods are processing methods applied after the volumetric data are projected and rendered into the 2D image space, such as 2D filtering, tone mapping, and compositing. In the application domain of volume visualization, most 2D image space methods can be carried out more efficiently than their 3D counterparts. Most importantly, 2D image space methods can be used to enhance volume visualization quality when applied together with volume rendering methods. In this paper, we present and discuss the applications of a series of 2D image space methods as enhancements to confocal microscopy visualizations, including 2D tone mapping, 2D compositing, and 2D color mapping. These methods are easily integrated with our existing confocal visualization tool, FluoRender, and the outcome is a full-featured visualization system that meets neurobiologists' demands for qualitative analysis of confocal microscopy data.
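
    As a small illustration of 2D image space processing, the sketch below applies a simple global tone map and an "over" composite to already-rendered frames. These are generic stand-ins, not FluoRender's actual implementation.

```python
# Minimal sketch of 2D image-space post-processing on rendered frames:
# a simple global tone map plus "over" compositing of two RGBA layers.
import numpy as np

def tone_map_2d(frame: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Compress a high-range rendered frame, then gamma-encode for display."""
    compressed = frame / (1.0 + frame)        # simple global compression
    return np.power(compressed, 1.0 / gamma)

def composite_over(front_rgba: np.ndarray, back_rgba: np.ndarray) -> np.ndarray:
    """Standard 'over' operator on premultiplied RGBA images in image space."""
    a_front = front_rgba[..., 3:4]
    return front_rgba + back_rgba * (1.0 - a_front)
```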