973 research outputs found

    Evaluation of tone-mapping algorithms for focal-plane implementation

    Get PDF
    Scenes in the real world may simultaneously contain very bright and very dark regions, caused by different illumination conditions, and therefore span a wide range of light intensity values. Attempting to show a picture of such a scene on a conventional display device, such as a computer monitor, leads to a (possibly large) loss of detail in the displayed scene, since conventional display devices can only represent a limited number of light intensity values, which span a smaller range. To mitigate this loss of detail, before it is shown on the display device, the picture of the scene must be processed by a tone-mapping algorithm, which maps the original light intensities into the light intensities representable by the display, thereby accommodating the high dynamic range of the input values within a smaller range. In this work, a comparison between different tone-mapping algorithms is presented. More specifically, the performance (regarding processing time and overall quality of the processed image) of a digital version of the tone-mapping operator originally proposed by Fernández-Berni et al. [11], which is implemented in the focal plane of the camera, is compared with that of different tone-mapping operators originally implemented in software. Furthermore, a second digital version of the focal-plane operator, which simulates a modified version of the original hardware implementation, is considered and its performance is analyzed. The modified hardware implementation is less complex and requires less space than the original implementation and, subjectively, keeps the overall image quality approximately equal to that achieved by the digital operators. Issues regarding the colors of the tone-mapped images are also addressed, especially the processing that must be performed by the focal-plane operator after the tone mapping in order to yield images without color distortions.
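    For reference, the sketch below shows what a global tone-mapping operator does: it compresses scene luminance into the displayable [0, 1] range. This is a minimal illustration in the spirit of Reinhard's L/(1+L) compression, not the focal-plane operator of Fernández-Berni et al. [11] compared in the work; the function name and parameters are assumptions for illustration only.

```python
import numpy as np

def global_tone_map(hdr_luminance, key=0.18, eps=1e-6):
    """Illustrative global tone mapping: scale scene luminance to a target
    'key' value, then compress with L/(1+L) into the displayable [0, 1) range.
    Generic Reinhard-style operator, not the focal-plane operator of [11]."""
    L = np.asarray(hdr_luminance, dtype=np.float64)
    # Log-average ("key") of the scene luminance.
    L_avg = np.exp(np.mean(np.log(L + eps)))
    # Scale so that the scene average maps to the chosen key value.
    L_scaled = key * L / (L_avg + eps)
    # Compress the (possibly unbounded) scaled luminance into [0, 1).
    return L_scaled / (1.0 + L_scaled)

# Example: a synthetic scene spanning several orders of magnitude of luminance.
hdr = np.concatenate([np.full(100, 0.01), np.full(100, 500.0)])
ldr = global_tone_map(hdr)
print(ldr.min(), ldr.max())  # both extremes now fit in [0, 1)
```

    After such luminance compression, a common way to avoid color distortions is to scale each RGB channel by the ratio of tone-mapped to original luminance, which is related to the post-tone-mapping color processing discussed above.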

    High Dynamic Range Visual Content Compression

    Get PDF
    This thesis addresses the research questions of High Dynamic Range (HDR) visual content compression. HDR representations are intended to represent the actual physical value of the light rather than its exposed value. Current HDR compression schemes are extensions of legacy Low Dynamic Range (LDR) compression, using Tone-Mapping Operators (TMO) to reduce the dynamic range of the HDR contents. However, introducing a TMO increases the overall computational complexity and causes temporal artifacts. Furthermore, these compression schemes fail to compress non-salient regions differently from salient regions, even though the Human Visual System (HVS) perceives them differently. The main contribution of this thesis is to propose a novel mapping-free, visual-saliency-guided HDR content compression scheme. Firstly, the relationship between Discrete Wavelet Transform (DWT) lifting steps and TMOs is explored. A novel approach to compress HDR images with the Joint Photographic Experts Group (JPEG) 2000 codec, while remaining backward compatible with LDR, is proposed. This approach exploits the reversibility of tone mapping and the scalability of the DWT. Secondly, the importance of the TMO in HDR compression is evaluated. A mapping-free HDR image compression scheme based on the standard JPEG and JPEG 2000 codecs for current HDR image formats is proposed. This approach exploits the structure of HDR formats. It achieves equivalent compression performance with the lowest computational complexity among existing lossy HDR compression schemes (50% lower than the state of the art). Finally, the shortcomings of current HDR visual saliency models and of HDR visual-saliency-guided compression are explored. A spatial saliency model for HDR visual content is proposed that outperforms others by 10% on the spatial visual prediction task, with 70% lower computational complexity. Furthermore, the experiments suggest that more than 90% of temporal saliency is predicted by the proposed spatial model. Moreover, the proposed saliency model can be used to guide HDR compression by applying different quantization factors according to the intensity of the predicted saliency map
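    To make the last point concrete, the sketch below illustrates the general idea of saliency-guided quantization: less salient coefficients receive a coarser quantization step. The helper name, parameter values, and element-wise formulation are hypothetical and are not taken from the thesis or from any standard codec.

```python
import numpy as np

def saliency_guided_quantize(coeffs, saliency, base_step=8.0, max_step=32.0):
    """Illustrative saliency-guided quantization (hypothetical helper, not the
    thesis's codec integration): regions with low predicted saliency get a
    coarser quantization step, salient regions keep the fine base step."""
    # saliency is assumed to be in [0, 1] and to have the same shape as coeffs.
    step = max_step - (max_step - base_step) * np.clip(saliency, 0.0, 1.0)
    quantized = np.round(coeffs / step)
    dequantized = quantized * step
    return quantized, dequantized

# Example: coarse quantization where saliency is low, fine where it is high.
coeffs = np.array([[37.0, 41.0], [37.0, 41.0]])
saliency = np.array([[0.0, 0.0], [1.0, 1.0]])
q, dq = saliency_guided_quantize(coeffs, saliency)
print(dq)  # the non-salient row is reconstructed more coarsely than the salient row
```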

    On Using and Improving Gradient Domain Processing for Image Enhancement

    Get PDF
    Ph.D. (Doctor of Philosophy)

    Efficient and effective objective image quality assessment metrics

    Get PDF
    Acquisition, transmission, and storage of images and videos have increased greatly in recent years. At the same time, there has been an increasing demand for high-quality images and videos to provide a satisfactory quality of experience for viewers. In this respect, high dynamic range (HDR) imaging with bit depths higher than 8 bits has been an interesting approach to capturing more realistic images and videos. Objective image and video quality assessment plays a significant role in monitoring and enhancing image and video quality in several applications such as image acquisition, image compression, multimedia streaming, image restoration, image enhancement and display. The main contributions of this work are to propose efficient features and similarity maps that can be used to design perceptually consistent image quality assessment tools. In this thesis, perceptually consistent full-reference image quality assessment (FR-IQA) metrics are proposed to assess the quality of natural, synthetic, photo-retouched and tone-mapped images. In addition, efficient no-reference image quality metrics are proposed to assess JPEG-compressed and contrast-distorted images. Finally, we propose a perceptually consistent color-to-gray conversion method, perform a subjective rating and evaluate existing color-to-gray assessment metrics. Existing FR-IQA metrics may have the following limitations. First, their performance is not consistent across different distortions and datasets. Second, better-performing metrics usually have high complexity. We propose in this thesis an efficient and reliable full-reference image quality evaluator based on new gradient and color similarities. We derive a general deviation pooling formulation and use it to compute a final quality score from the similarity maps. Extensive experimental results verify the high accuracy and consistent performance of the proposed metric on natural, synthetic and photo-retouched datasets, as well as its low complexity. In order to visualize HDR images on standard low dynamic range (LDR) displays, tone-mapping operators are used to convert HDR into LDR. Given the different bit depths of HDR and LDR, traditional FR-IQA metrics are not able to assess the quality of tone-mapped images. The existing full-reference metric for tone-mapped images, called TMQI, converts both HDR and LDR to an intermediate color space and measures their similarity in the spatial domain. We propose in this thesis a feature-similarity full-reference metric in which the local phase of the HDR image is compared with the local phase of the LDR image. Phase carries important image information, and previous studies have shown that the human visual system responds strongly to points in an image where the phase information is ordered. Experimental results on two available datasets show the very promising performance of the proposed metric. No-reference image quality assessment (NR-IQA) metrics are of high interest because in most present and emerging practical real-world applications the reference signals are not available. In this thesis, we propose two perceptually consistent distortion-specific NR-IQA metrics for JPEG-compressed and contrast-distorted images. Based on edge statistics of JPEG-compressed images, an efficient NR-IQA metric for the blockiness artifact is proposed, which is robust to block size and misalignment. Then, we consider the quality assessment of contrast-distorted images, a common type of distortion. Higher orders of the Minkowski distance and a power transformation are used to train a low-complexity model that is able to assess contrast distortion with high accuracy. For the first time, the proposed model is used to classify the type of contrast distortion, which is very useful additional information for image contrast enhancement. Beyond its traditional use in the assessment of distortions, objective IQA can be used in other applications, such as the quality assessment of image fusion, color-to-gray image conversion, inpainting, and background subtraction. In the last part of this thesis, a real-time and perceptually consistent color-to-gray image conversion methodology is proposed. The proposed correlation-based method and state-of-the-art methods are compared by subjective and objective evaluation. Then, a conclusion is made on the choice of the objective quality assessment metric for color-to-gray image conversion. The conducted subjective ratings can be used in the development process of quality assessment metrics for color-to-gray image conversion and to test their performance
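    The deviation pooling idea mentioned above can be illustrated with a minimal sketch: rather than simply averaging a similarity map, the pooled score is driven by how much the map deviates from its mean, so localized degradations are not washed out. The specific formula below is an assumed generic form, not the general formulation derived in the thesis.

```python
import numpy as np

def deviation_pooling(similarity_map, rho=1.0, q=1.0):
    """Illustrative deviation pooling of a similarity map in [0, 1].
    Generic form assumed for illustration: the pooled score is the mean
    absolute deviation of the (power-transformed) map, so a perfectly
    uniform map yields 0 and localized distortions raise the score."""
    s = np.asarray(similarity_map, dtype=np.float64) ** rho
    deviation = np.abs(s - s.mean()) ** q
    return deviation.mean() ** (1.0 / q)

# Example: a uniformly good map vs. one with a small, severe local distortion.
uniform = np.full((64, 64), 0.95)
local = uniform.copy()
local[:8, :8] = 0.2
print(deviation_pooling(uniform))  # ~0.0: no deviation from the mean
print(deviation_pooling(local))    # > 0: the localized degradation is flagged
```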

    Psychophysiology-based QoE assessment : a survey

    Get PDF
    We present a survey of psychophysiology-based assessment for quality of experience (QoE) in advanced multimedia technologies. We provide a classification of methods relevant to QoE and describe related psychological processes, experimental design considerations, and signal analysis techniques. We summarize multimodal techniques and discuss several important aspects of psychophysiology-based QoE assessment, including the synergies with psychophysical assessment and the need for standardized experimental design. This survey is not intended to be exhaustive but serves as a guideline for those interested in further exploring this emerging field of research

    A Global Human Settlement Layer from optical high resolution imagery - Concept and first results

    Get PDF
    A general framework for processing high and very-high resolution imagery to create a Global Human Settlement Layer (GHSL) is presented, together with a discussion of the results of the first operational test of the production workflow. The test involved the mapping of 24.3 million square kilometres of the Earth's surface spread over four continents, corresponding to an estimated population of 1.3 billion people in 2010. The resolution of the input image data ranges from 0.5 to 10 meters, collected by a heterogeneous set of platforms including the satellites SPOT (2 and 5), CBERS-2B, RapidEye (2 and 4), WorldView (1 and 2), GeoEye-1, QuickBird-2 and Ikonos-2, as well as airborne sensors. Several imaging modes were tested, including panchromatic, multispectral and pan-sharpened images. A new fully automatic image information extraction, generalization and mosaicking workflow is presented that is based on multiscale textural and morphological image feature extraction. New image feature compression and optimization techniques are introduced, together with new learning and classification techniques allowing for the processing of HR/VHR image data using low-resolution thematic layers as reference. A new systematic approach for quality control and validation, allowing global spatial and thematic consistency checking, is proposed and applied. The quality of the results is discussed by sensor, band, resolution, and eco-region. Critical points, lessons learned and next steps are highlighted.
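    As an illustration of what multiscale morphological image features can look like (a generic sketch using SciPy grey-scale morphology; the actual GHSL feature set, scales, and parameters are not reproduced here), differences between the image and its openings/closings at increasing structuring-element sizes highlight bright and dark structures, such as built-up objects, smaller than each scale.

```python
import numpy as np
from scipy import ndimage

def multiscale_morphological_features(image, scales=(3, 7, 15)):
    """Illustrative multiscale morphological profile (generic sketch, not the
    GHSL implementation): for each scale, the difference between the image and
    its grey-scale opening/closing highlights bright/dark structures smaller
    than the structuring element, e.g. buildings against their surroundings."""
    img = np.asarray(image, dtype=np.float64)
    features = []
    for s in scales:
        opened = ndimage.grey_opening(img, size=(s, s))
        closed = ndimage.grey_closing(img, size=(s, s))
        features.append(img - opened)   # bright structures smaller than s x s
        features.append(closed - img)   # dark structures smaller than s x s
    return np.stack(features, axis=0)   # shape: (2 * len(scales), H, W)

# Example on a toy panchromatic patch with one small bright "building".
patch = np.zeros((32, 32))
patch[14:18, 14:18] = 1.0
feats = multiscale_morphological_features(patch)
print(feats.shape)  # (6, 32, 32)
```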

    An underwater image quality assessment metric

    Get PDF
    Various image enhancement algorithms are adopted to improve underwater images that often suffer from visual distortions. It is critical to assess the output quality of underwater images undergoing enhancement algorithms, and use the results to optimise underwater imaging systems. In our previous study, we created a benchmark for quality assessment of underwater image enhancement via subjective experiments. Building on the benchmark, this paper proposes a new objective metric that can automatically assess the output quality of image enhancement, namely UWEQM. By characterising specific underwater physics and relevant properties of the human visual system, image quality attributes are computed and combined to yield an overall metric. Experimental results show that the proposed UWEQM metric yields good performance in predicting image quality as perceived by human subjects
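    The combination step can be pictured with a minimal sketch: several quality attributes are computed from the enhanced image and fused by a weighted sum into an overall score. The attributes (contrast, colorfulness, sharpness) and weights below are hypothetical stand-ins, not the attributes or the trained combination used by UWEQM.

```python
import numpy as np

def underwater_quality_score(image_rgb, weights=(0.4, 0.3, 0.3)):
    """Illustrative fusion of image quality attributes into one overall score
    (hypothetical attributes and weights, not the published UWEQM model)."""
    img = np.asarray(image_rgb, dtype=np.float64) / 255.0
    gray = img.mean(axis=2)

    # Attribute 1: global contrast (standard deviation of luminance).
    contrast = gray.std()
    # Attribute 2: colorfulness (spread of two opponent color components).
    rg = img[..., 0] - img[..., 1]
    yb = 0.5 * (img[..., 0] + img[..., 1]) - img[..., 2]
    colorfulness = np.hypot(rg.std(), yb.std())
    # Attribute 3: sharpness (mean gradient magnitude).
    gy, gx = np.gradient(gray)
    sharpness = np.hypot(gx, gy).mean()

    attrs = np.array([contrast, colorfulness, sharpness])
    return float(np.dot(np.asarray(weights), attrs))

# Example usage on a random test image.
test = np.random.randint(0, 256, size=(64, 64, 3))
print(underwater_quality_score(test))
```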