
    A comprehensive study of tone mapping of high dynamic range images with subjective tests

    A high dynamic range (HDR) image has a very wide range of luminance levels that traditional low dynamic range (LDR) displays cannot visualize. For this reason, HDR images are usually stored in 8-bit representations in which the alpha channel of each pixel is used as an exponent value, sometimes referred to as exponential notation [43]. Tone mapping operators (TMOs) transform the high dynamic range into the low dynamic range domain by compressing pixel values so that a traditional LDR display can visualize them. The purpose of this thesis is to identify and analyse the differences and similarities between the wide range of tone mapping operators available in the literature. Each TMO has been analysed through subjective studies under different conditions, including environment, luminance, and colour. Several inverse tone mapping operators, HDR mappings with exposure fusion, histogram adjustment, and retinex have also been analysed in this study. In total, 19 different TMOs have been examined using a variety of HDR images. A mean opinion score (MOS) is calculated for the selected TMOs by asking the opinion of 25 independent people, taking into account the candidates' age, vision, and colour blindness.
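    To make the role of a TMO concrete, the following is a minimal Python sketch of a global, Reinhard-style operator that compresses linear HDR radiance into 8-bit values; the function name, the key value a, and the Rec. 709 luminance weights are illustrative assumptions and do not correspond to any particular operator examined in the thesis.

        # Minimal sketch of a global tone mapping operator (Reinhard-style).
        # Assumes `hdr` is a float32 numpy array of linear RGB radiance, shape (H, W, 3).
        import numpy as np

        def reinhard_tonemap(hdr, a=0.18, eps=1e-6):
            # Per-pixel luminance from linear RGB (Rec. 709 weights).
            lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
            # Scale by the log-average ("key") luminance of the scene.
            log_avg = np.exp(np.mean(np.log(lum + eps)))
            scaled = a * lum / (log_avg + eps)
            # Compress luminance into [0, 1) and reapply the colour ratios.
            mapped = scaled / (1.0 + scaled)
            ldr = hdr * (mapped / (lum + eps))[..., None]
            # Quantize to 8 bits for a traditional LDR display.
            return np.clip(ldr * 255.0, 0.0, 255.0).astype(np.uint8)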

    AN INTEGER TONE MAPPING OPERATION FOR HDR IMAGES IN OPENEXR WITH DENORMALIZED NUMBERS

    We propose an integer tone mapping operator (TMO) for high dynamic range (HDR) images expressed in a floating-point data format. The proposed TMO achieves two purposes. The first is to implement a TMO with less memory space. The second is to take an important step towards realizing a fixed-point TMO. The proposed TMO is applicable to HDR images in the OpenEXR format. The OpenEXR format has two numerical representations (normalized numbers and denormalized numbers) which are not present in other HDR formats such as RGBE. These two numerical representations cause a problem when applying an integer TMO. The proposed method avoids this problem by using an intermediate format. Moreover, the exponent part and the mantissa part are processed separately as two integer numbers. As a result, an integer TMO with a smaller numerical range is achieved by our method. The experimental results show that the proposed method can generate high-quality low dynamic range (LDR) images with less memory space. Index Terms: high dynamic range, tone mapping, OpenEXR, denormalized number, integer operation.
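    As a rough illustration of the representation the integer TMO works on, the sketch below unpacks OpenEXR half-precision values into integer sign, exponent, and mantissa fields and flags denormalized numbers (exponent field equal to zero); the function name and the uniform handling of denormals are assumptions for illustration, not the paper's intermediate format.

        # Split half-precision (OpenEXR "half") values into integer parts.
        import numpy as np

        def split_half(values):
            bits = np.asarray(values, dtype=np.float16).view(np.uint16)
            sign = (bits >> 15) & 0x1
            exponent = (bits >> 10) & 0x1F   # 5-bit exponent field
            mantissa = bits & 0x3FF          # 10-bit mantissa field
            # Denormalized numbers (exponent field == 0) carry no implicit
            # leading 1, which is why an integer TMO needs special handling.
            is_denormal = exponent == 0
            return sign, exponent, mantissa, is_denormal

        sign, exp, man, denorm = split_half([1.0, 6.0e-5, 0.5])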

    A simplified HDR image processing pipeline for digital photography

    High Dynamic Range (HDR) imaging has revolutionized digital imaging. It allows capture, storage, manipulation, and display of the full dynamic range of the captured scene. As a result, it has spawned whole new possibilities for digital photography, from the photorealistic to the hyper-real. With all these advantages, the technique is expected to replace conventional 8-bit Low Dynamic Range (LDR) imaging in the future. However, HDR results in an even more complex imaging pipeline, including new techniques for capturing, encoding, and displaying images. The goal of this thesis is to bridge the gap between the conventional imaging pipeline and the HDR one in as simple a way as possible. We make three contributions. First, we show that a simple extension of gamma encoding suffices as a representation to store HDR images. Second, gamma, as a control for image contrast, can be 'optimally' tuned on a per-image basis. Lastly, we show that a general tone curve, with detail preservation, suffices to tone map an image (there is only a limited need for expensive spatially varying tone mappers). All three of our contributions are evaluated psychophysically. Together they support our general thesis that an HDR workflow similar to that already used in photography might be adopted. This said, we believe the adoption of HDR into photography is, perhaps, less difficult than it is sometimes portrayed to be.
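    A minimal sketch of the first contribution, storing HDR data with a per-image gamma curve, might look as follows in Python; the max-based normalization, the example gamma of 2.2, and the function names are assumptions for illustration rather than the thesis' actual encoding.

        # Gamma-encode linear HDR radiance into 8 bits and decode it back.
        import numpy as np

        def gamma_encode(hdr, gamma=2.2):
            # Normalize linear radiance to [0, 1] by the image maximum,
            # then apply a power-law (gamma) curve and quantize to 8 bits.
            norm = hdr / hdr.max()
            return np.round(np.power(norm, 1.0 / gamma) * 255.0).astype(np.uint8)

        def gamma_decode(ldr, gamma=2.2, peak=1.0):
            # Invert the curve to recover an approximation of the linear signal.
            return np.power(ldr.astype(np.float32) / 255.0, gamma) * peak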

    Algorithms for compression of high dynamic range images and video

    Recent advances in sensor and display technologies have brought about High Dynamic Range (HDR) imaging capability. Modern multiple-exposure HDR sensors can achieve a dynamic range of 100-120 dB, and LED and OLED display devices have contrast ratios of 10^5:1 to 10^6:1. Despite these advances in technology, image/video compression algorithms and the associated hardware are still based on Standard Dynamic Range (SDR) technology, i.e. they operate within an effective dynamic range of up to 70 dB for 8-bit gamma-corrected images. Further, the existing infrastructure for content distribution is also designed for SDR, which creates interoperability problems with true HDR capture and display equipment. Current solutions to this problem include tone mapping the HDR content to fit SDR. However, this approach leads to image quality problems when strong dynamic range compression is applied. Even though some HDR-only solutions have been proposed in the literature, they are not interoperable with the current SDR infrastructure and are thus typically used in closed systems. Given the above observations, a research gap was identified: the need for efficient algorithms for the compression of still images and video that are capable of storing the full dynamic range and colour gamut of HDR images while remaining backward compatible with the existing SDR infrastructure. To improve the usability of the SDR content, it is vital that any such algorithms accommodate different tone mapping operators, including those that are spatially non-uniform. In the course of the research presented in this thesis, a novel two-layer CODEC architecture is introduced for both HDR image and video coding. Further, a universal and computationally efficient approximation of the tone mapping operator is developed and presented. It is shown that the use of perceptually uniform colourspaces for the internal representation of pixel data improves the compression efficiency of the algorithms. The proposed novel approaches to the compression of tone mapping operator metadata are also shown to improve compression performance for low-bitrate video content. Multiple compression algorithms are designed, implemented, and compared, and quality-complexity trade-offs are identified. Finally, practical aspects of implementing the developed algorithms are explored by automating the design space exploration flow and integrating the high-level systems design framework with domain-specific tools for the synthesis and simulation of multiprocessor systems. Directions for further work are also presented.
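    The two-layer idea can be sketched as follows: a tone-mapped 8-bit base layer that legacy SDR decoders use directly, plus a residual layer from which an HDR decoder reconstructs the full range. The logarithmic tone curve, the ratio residual, and the absence of any entropy coding are simplifying assumptions; the thesis' CODEC supports arbitrary, including spatially non-uniform, tone mapping operators.

        # Backward-compatible two-layer encoding sketch (no entropy coding).
        import numpy as np

        def encode_two_layer(hdr, eps=1e-6):
            # Base layer: tone map to 8 bits (an SDR decoder stops here).
            hdr_max = float(hdr.max())
            base = np.round(np.log1p(hdr) / np.log1p(hdr_max) * 255.0).astype(np.uint8)
            # Prediction: the inverse tone mapping the HDR decoder will apply.
            predicted = np.expm1(base / 255.0 * np.log1p(hdr_max))
            # Residual layer: ratio between the true HDR signal and the prediction.
            residual = hdr / (predicted + eps)
            return base, residual, hdr_max

        def decode_hdr(base, residual, hdr_max, eps=1e-6):
            predicted = np.expm1(base / 255.0 * np.log1p(hdr_max))
            return (predicted + eps) * residual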

    JPEG XT: A Compression Standard for HDR and WCG Images [Standards in a Nutshell]

    High bit depth data acquisition and manipulation have been studied extensively at the academic level over the last 15 years and are rapidly attracting interest at the industrial level. An example of the increasing interest in high-dynamic range (HDR) imaging is the use of 32-bit floating point data for video and image acquisition and manipulation, which allows a variety of visual effects that closely mimic the real-world visual experience of the end user [1] (see Figure 1). At the industrial level, we are witnessing increasing traction toward supporting HDR and wide color gamut (WCG). WCG leverages HDR for each color channel to display a wider range of colors. Consumer cameras are currently available with 14- or 16-bit analog-to-digital converters. Rendering devices are also appearing with the capability to display HDR images and video with a peak brightness of up to 4,000 nits and to support WCG (ITU-R Rec. BT.2020 [2]) rather than the historical ITU-R Rec. BT.709 [3]. This trend calls for a widely accepted standard for higher bit depth support that can be seamlessly integrated into existing products and applications. While standard formats such as Joint Photographic Experts Group (JPEG) 2000 [5] and JPEG XR [6] offer support for high bit depth image representations, their adoption requires a nonnegligible investment that may not always be affordable in existing imaging ecosystems, and it induces a difficult transition, as they are not backward-compatible with the popular JPEG image format.

    A proposal of architecture and circuits for improving the dynamic range of vision systems-on-chip designed in deep-submicron CMOS technologies

    The work presented in this thesis proposes new techniques for expanding the dynamic range of electronic image sensors. In particular, we have directed our studies towards providing this functionality on a single chip, that is, without any external hardware or software support, forming a type of system known as a Vision System-on-Chip (VSoC). The dynamic range of an electronic image sensor is defined as the ratio between the maximum and minimum measurable illumination. Two options arise to improve this figure: first, reducing the minimum measurable light by lowering the noise of the image sensor; second, increasing the maximum measurable light by extending the saturation limit of the sensor. Chronologically, our first approach to improving the dynamic range was based on reducing noise. Several options exist to improve the noise figure of merit of the system: reducing noise by using a CIS technology, or using dedicated circuits such as calibration or auto-zeroing. However, circuit techniques come with limitations that can only be overcome by using non-standard technologies specifically designed for this purpose. The CIS technology employed aims to improve the quality and possibilities of the photosensing process, such as sensitivity, noise, colour imaging, and so on. To study the characteristics of this technology in more detail, a test chip was designed, which allowed the best options for future pixels to be identified. However, despite satisfactory overall behaviour, the dynamic range measurements indicated that improvement through CIS technology alone is very limited; that is, improving the dark current of the sensor is not sufficient for our purpose. A larger improvement of the dynamic range requires circuits inside the pixel. However, CIS technologies usually allow nothing but NMOS transistors next to the photosensor, which seriously restricts the circuits that can be used. As a result, the design of an image sensor with extended dynamic range in CIS technology was discarded in favour of a standard technology, which gives more flexibility to the pixel design. In standard technologies, rich functionality can be introduced using in-pixel circuits, which enables advanced techniques to extend the saturation limit of image sensors. Two options arise for this goal: linear or compressive acquisition. Linear acquisition generates a large amount of data per pixel; for example, if the dynamic range of the scene is 120 dB, at least 20 bits/pixel, log2(10^(120/20)) = 19.93, would be required for a binary representation of that dynamic range. This would demand large resources to process such an amount of data and a large bandwidth to move it to the processing circuitry. To avoid these problems, high dynamic range image sensors usually opt for compressive acquisition of light. This implies two tasks: capturing and compressing the image. Image capture is performed at the pixel level, in the photosensing device, while image compression can be performed at the pixel level, at the system level, or by external post-processing.
On the post-processing side, there is a field of research that studies the compression of high dynamic range scenes while preserving detail, producing a result suited to human perception on conventional low dynamic range monitors. This is called tone mapping and usually employs only 8 bits/pixel for image representations, since this is the standard for low dynamic range images. Compressive acquisition pixels, on the other hand, apply a compression that does not depend on the high dynamic range scene being captured, which leads to weak compression or to loss of detail and contrast. To avoid these drawbacks, this work presents a compressive acquisition pixel that applies a tone mapping technique, allowing the capture of images that are already compressed in a way optimized to preserve detail and contrast while producing a very small amount of data. Tone mapping techniques normally run as software post-processing on a computer over uncompressed captured images, which contain a large amount of data. These techniques have traditionally belonged to the field of computer graphics because of the large computational effort they require. We have, however, developed a new tone mapping algorithm specially adapted to exploit in-pixel circuits and requiring little computation outside the pixel array, which enables a vision system on a single chip. The new tone mapping algorithm, which is a mathematical concept that can be simulated in software, has also been implemented on a chip. This hardware implementation, however, requires adaptations and advanced design techniques that constitute another contribution of this work. Moreover, because of the new functionality, modifications of the usual methods for characterization and image capture have been developed.
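    The bit-depth estimate quoted above can be reproduced with a short calculation; the helper name is only illustrative.

        # Bits needed for a linear binary representation of a given dynamic range.
        import math

        def bits_for_dynamic_range(db):
            ratio = 10 ** (db / 20.0)     # ratio of maximum to minimum measurable light
            return math.log2(ratio)

        print(bits_for_dynamic_range(120))   # ~19.93, i.e. at least 20 bits/pixel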

    Multi-Camera Platform for Panoramic Real-Time HDR Video Construction and Rendering

    High dynamic range (HDR) images are usually obtained by capturing several images of the scene at different exposures. Previous HDR video techniques adopted the same principle by stacking HDR frames in the time domain. We designed a new multi-camera platform which is able to construct and render HDR panoramic video in real time, with 1024 × 256 resolution and a frame rate of 25 fps. We exploit the overlapping fields of view between cameras with different exposures to create an HDR radiance map. We propose a method for HDR frame reconstruction which merges previous HDR imaging techniques with algorithms for panorama reconstruction. The developed FPGA-based processing system is able to reconstruct the HDR frame using the proposed method and tone map the resulting image using a hardware-adapted global operator. The measured throughput of the system is 245 MB/s, which, to our knowledge, makes it one of the fastest HDR video processing systems.
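    The per-pixel merge of differently exposed views into an HDR radiance map could be sketched as below; the hat-shaped weighting and the assumption of a linear camera response are illustrative simplifications of what the FPGA pipeline and the cited HDR imaging techniques actually do.

        # Merge differently exposed 8-bit frames of the same scene region into radiance.
        import numpy as np

        def merge_exposures(frames, exposure_times, eps=1e-6):
            num = np.zeros(frames[0].shape, dtype=np.float64)
            den = np.zeros(frames[0].shape, dtype=np.float64)
            for img, t in zip(frames, exposure_times):
                z = img.astype(np.float64) / 255.0
                # Hat weight: trust mid-range pixels, distrust under/over-exposure.
                w = 1.0 - np.abs(2.0 * z - 1.0)
                num += w * z / t          # linear response assumed: radiance ~ z / t
                den += w
            return num / (den + eps)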

    Compression, Modeling, and Real-Time Rendering of Realistic Materials and Objects

    The realism of a scene essentially depends on the quality of the geometry, the illumination, and the materials that are used. Whereas many sources for the creation of three-dimensional geometry exist and numerous algorithms for the approximation of global illumination have been presented, the acquisition and rendering of realistic materials remains a challenging problem. Realistic materials are very important in computer graphics because they describe the reflectance properties of surfaces, which are based on the interaction of light and matter. In the real world an enormous diversity of materials can be found, with very different properties. One important objective in computer graphics is to understand these processes, to formalize them, and finally to simulate them. Various analytical models already exist for this purpose, but their parameterization remains difficult, as the number of parameters is usually very high, and they fail for very complex materials that occur in the real world. Measured materials, on the other hand, suffer from long acquisition times and huge input data sizes. Although very efficient statistical compression algorithms have been presented, most of them do not allow for editability, such as altering the diffuse color or mesostructure. In this thesis, a material representation is introduced that makes it possible to edit these features, so the acquisition results can be re-used to easily and quickly create variations of the original material. These variations may be subtle but also substantial, allowing for a wide spectrum of material appearances. The approach presented in this thesis is not based on compression but on a decomposition of the surface into several materials with different reflection properties. Based on a microfacet model, the light-matter interaction is represented by a function that can be stored in an ordinary two-dimensional texture. Additionally, depth information, local rotations, and the diffuse color are stored in these textures. As a result of the decomposition, some of the original information is inevitably lost; therefore, an algorithm for the efficient simulation of subsurface scattering is presented as well. Another contribution of this work is a novel perception-based simplification metric that includes the material of an object. This metric incorporates features of the human visual system, for example trichromatic color perception or reduced resolution. The proposed metric allows for a more aggressive simplification in regions where geometric metrics do not simplify.
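    For context, the kind of analytic microfacet model such a decomposition builds on can be as compact as the GGX normal distribution term sketched below; this is a generic textbook formulation, not the reflectance function or texture layout used in the thesis.

        # GGX (Trowbridge-Reitz) microfacet normal distribution term.
        import math

        def ggx_ndf(cos_theta_h, roughness):
            # cos_theta_h: cosine between the surface normal and the half vector.
            a2 = roughness ** 4           # alpha = roughness^2, squared
            denom = cos_theta_h ** 2 * (a2 - 1.0) + 1.0
            return a2 / (math.pi * denom ** 2)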