12 research outputs found

    A simplified HDR image processing pipeline for digital photography

    High Dynamic Range (HDR) imaging has revolutionized digital imaging. It allows the capture, storage, manipulation, and display of the full dynamic range of a captured scene. As a result, it has opened whole new possibilities for digital photography, from the photorealistic to the hyper-real. With all these advantages, the technique is expected to replace conventional 8-bit Low Dynamic Range (LDR) imaging in the future. However, HDR entails an even more complex imaging pipeline, including new techniques for capturing, encoding, and displaying images. The goal of this thesis is to bridge the gap between the conventional imaging pipeline and the HDR pipeline in as simple a way as possible. We make three contributions. First, we show that a simple extension of gamma encoding suffices as a representation for storing HDR images. Second, gamma, as a control for image contrast, can be 'optimally' tuned on a per-image basis. Lastly, we show that a general tone curve, with detail preservation, suffices to tone map an image (there is only a limited need for expensive spatially varying tone mappers). All three of our contributions are evaluated psychophysically. Together they support our general thesis that an HDR workflow similar to that already used in photography might be used. This said, we believe the adoption of HDR into photography is, perhaps, less difficult than it is sometimes posed to be.
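The gamma-encoding idea behind the first contribution can be sketched as follows: compress linear HDR luminance through a power law, quantise to an integer range, and invert on decode. This is only an illustrative sketch, not the thesis's exact scheme; the normalisation to the scene peak and the 16-bit target are assumptions.

```python
import numpy as np

def gamma_encode(luminance, gamma=2.2, bits=16):
    """Encode linear HDR luminance into integer codes via a gamma curve.

    Illustrative sketch: normalise to the scene's peak luminance,
    apply the power-law companding, and quantise to the integer range.
    """
    peak = luminance.max()
    v = (luminance / peak) ** (1.0 / gamma)          # gamma companding
    codes = np.round(v * (2**bits - 1)).astype(np.uint16)
    return codes, peak

def gamma_decode(codes, peak, gamma=2.2, bits=16):
    """Invert the encoding back to linear luminance."""
    v = codes.astype(np.float64) / (2**bits - 1)
    return (v ** gamma) * peak

# Six orders of magnitude survive the 16-bit round trip with small error.
hdr = np.array([0.01, 1.0, 100.0, 10000.0])
codes, peak = gamma_encode(hdr)
recovered = gamma_decode(codes, peak)
```

Because the gamma curve allocates codes non-linearly, the relative quantisation error stays small even for the darkest values, which is what makes a simple integer format viable for HDR storage.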

    Methods for Improving the Tone Mapping for Backward Compatible High Dynamic Range Image and Video Coding

    Backward compatibility for high dynamic range image and video compression is one of the essential requirements in the transition phase from low dynamic range (LDR) displays to high dynamic range (HDR) displays. In a recent work [1], the problems of tone mapping and HDR video coding were first fused together in the same mathematical framework, and an optimized solution for tone mapping was derived in terms of the mean square error (MSE) of the logarithm of luminance values. In this paper, we improve on this pioneering study in three respects, addressing three of its shortcomings. First, the method of [1] works on the logarithms of luminance values, which are not uniform with respect to Human Visual System (HVS) sensitivity. We propose instead to use perceptually uniform luminance values for the optimization of the tone mapping curve. Second, the method of [1] does not take the quality of the resulting tone-mapped images into account during the formulation, contrary to the main goal of tone mapping research. We include LDR image quality as a constraint in the optimization problem and develop a generic methodology to balance the trade-off between HDR and LDR image quality for coding. Third, the method of [1] simply applies a low-pass filter to the generated tone curves of the video frames to avoid flickering when the method is adapted to video. We instead include an HVS-based flickering constraint in the optimization and derive a methodology to balance the trade-off between rate-distortion performance and flickering distortion. The superiority of the proposed methodologies is verified by experiments on HDR images and video sequences.
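The baseline optimization in [1] admits a closed-form solution: a piecewise-linear tone curve whose segment slopes are proportional to the cube root of the log-luminance histogram. The sketch below illustrates that construction; the bin count, LDR range, and normalisation details are assumptions, not the paper's exact implementation.

```python
import numpy as np

def optimal_tone_curve(log_lum, n_bins=64, ldr_range=255.0):
    """Piecewise-linear tone curve with segment slopes proportional to the
    cube root of the log-luminance histogram -- a sketch of the closed-form
    MSE-optimal solution reported in [1]."""
    p, edges = np.histogram(log_lum, bins=n_bins)
    p = p / p.sum()
    s = np.cbrt(p)                                   # slope allocation
    width = edges[1] - edges[0]
    s = s / s.sum() * ldr_range / width              # normalise to LDR range
    # Integrate the slopes to obtain the tone-curve node values.
    nodes = np.concatenate([[0.0], np.cumsum(s * width)])
    return edges, nodes

def apply_curve(log_lum, edges, nodes):
    """Map log luminance to LDR values through the piecewise-linear curve."""
    return np.interp(log_lum, edges, nodes)
```

Populated luminance bins receive steeper slopes (more LDR codes), which is what minimises the MSE of the inverse-tone-mapped reconstruction; the paper's contributions then add perceptual uniformity, an LDR-quality constraint, and a flickering constraint on top of this scheme.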

    A comparative survey on high dynamic range video compression

    High dynamic range (HDR) video compression has until now been approached either by using the high profile of the existing state-of-the-art H.264/AVC (Advanced Video Coding) codec, or by separately encoding low dynamic range (LDR) video and the residue resulting from the estimation of the HDR video from the LDR video. Although the latter approach has the distinctive advantage of providing backward compatibility with 8-bit LDR displays, the superiority of one approach over the other in terms of the rate-distortion trade-off has not yet been verified. In this paper, we first give a detailed overview of the methods in these two approaches. Then, we experimentally compare the two approaches with respect to different objective and perceptual metrics, such as HDR mean square error (HDR MSE), perceptually uniform peak signal-to-noise ratio (PU PSNR), and the HDR visible difference predictor (HDR VDP). We first conclude that the methods optimized for backward compatibility with 8-bit LDR displays are superior to the method designed for the high profile encoder, for both 8-bit and 12-bit mappings, in terms of all metrics. Second, using higher bit depths with a high profile encoder gives better rate-distortion performance than employing an 8-bit mapping with an 8-bit encoder for the same method, in particular when the dynamic range of the video sequence is high. Third, rather than encoding the residue signal in backward compatible methods, changing the quantization step size of the LDR layer encoder would be sufficient to achieve a required quality. In other words, the quality of tone mapping matters more than residue encoding for the performance of HDR image and video coding.
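The PU PSNR metric mentioned above computes an ordinary PSNR, but on luminance values passed through a perceptually uniform encoding first, so equal numeric errors correspond to roughly equal visibility. The sketch below shows the structure of such a metric; the real PU encoding is a tabulated curve fitted to contrast-sensitivity data, and the `log10` used here is only a stand-in placeholder for that mapping.

```python
import numpy as np

def pu_psnr(ref, test, pu=np.log10, floor=1e-4):
    """PSNR computed on perceptually-uniform-encoded luminance (sketch).

    `pu` stands in for the tabulated perceptually uniform encoding;
    log10 is a placeholder, not the actual PU curve.
    """
    r = pu(np.maximum(ref, floor))       # encode reference luminance
    t = pu(np.maximum(test, floor))      # encode test luminance
    peak = r.max() - r.min()             # peak signal in encoded units
    mse = np.mean((r - t) ** 2)
    return 10.0 * np.log10(peak**2 / mse)
```

The point of the encoding step is that plain PSNR on linear HDR luminance over-weights errors in bright regions; measuring in (approximately) perceptually uniform units makes scores comparable across dynamic ranges.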

    High dynamic range video compression exploiting luminance masking


    Stereoscopic high dynamic range imaging

    Two modern technologies show promise to dramatically increase immersion in virtual environments. Stereoscopic imaging captures two images representing the views of the two eyes and allows for better depth perception. High dynamic range (HDR) imaging accurately represents real-world lighting, as opposed to traditional low dynamic range (LDR) imaging, providing better contrast and more natural-looking scenes. The combination of the two technologies, to gain the advantages of both, has until now been mostly unexplored due to current limitations in the imaging pipeline. This thesis reviews both fields, proposes a stereoscopic high dynamic range (SHDR) imaging pipeline outlining the challenges that need to be resolved to enable SHDR, and focuses on the capture and compression aspects of that pipeline. The problems of capturing SHDR images, which would potentially require two HDR cameras and introduce ghosting, are mitigated by capturing an HDR-LDR pair and using it to generate SHDR images. A detailed user study compared four different methods of generating SHDR images. Results demonstrated that one of the methods may produce images perceptually indistinguishable from the ground truth. Insights obtained while developing the static image operators guided the design of the SHDR video techniques. Three methods for generating SHDR video from an HDR-LDR video pair are proposed and compared to ground-truth SHDR videos. Results showed little overall error and identified the method with the least error. Once captured, SHDR content needs to be efficiently compressed. Five backward-compatible SHDR compression methods are presented. The proposed methods can encode SHDR content at little more than the cost of a traditional single LDR image (18% larger for one method), and the backward compatibility property encourages early adoption of the format. The work presented in this thesis has introduced and advanced capture and compression methods for the adoption of SHDR imaging. In general, this research paves the way for the novel field of SHDR imaging, which should lead to improved and more realistic representation of captured scenes.

    Changing Object Appearance by Adding Fur

    The aim of this thesis is to demonstrate the feasibility of rendering fur directly into existing images without the need to either painstakingly paint over all pixels or to supply complete 3D geometry and lighting. The fur is added to objects depicted in images by first recovering approximate depth and lighting information, and then re-rendering the resulting 2.5D geometry with fur. The novelty of this approach lies in the fact that complex high-level image edits, such as the addition of fur, can successfully yield perceptually plausible results, even when constrained by imperfect depth and lighting information. The relatively large set of techniques involved in this work includes HDR imaging, shape-from-shading methods, research on shape and lighting perception in images, and photorealistic rendering techniques. The main purpose of this thesis is to prove the concept of the described approach. The main implementation language was C++, using the wxWidgets, OpenGL, and libTIFF libraries; rendering was realised in 3Delight, a Renderman-compatible renderer, with the help of a set of custom shaders written in the Renderman shading language.

    Facial recognition system applied to multipurpose assistance robot for social human-robot interaction (MASHI)

    Face recognition is one of the key areas in the field of pattern recognition and artificial intelligence (AI). It has been used in a wide range of applications, such as identity authentication, biometrics, and surveillance. Image data in face recognition is high dimensional, so recognition requires a considerable amount of computing resources and time. Much research effort has been devoted to this problem, and many algorithms are now available in computer vision. The main goal of this project is to improve the capabilities of the MASHI robot, enabling more interaction with humans and adding new functionality using the components the robot already has. Fisherfaces, a popular technique for facial recognition, is the one chosen for our application. This work studies the mathematical fundamentals of this technique to understand how information is processed to perform face recognition. Then, tests were performed to check the reliability of the application against several databases of facial images, making it possible to determine the strengths and weaknesses of the algorithm to be implemented in our robot. This work introduces an implementation based on Python using the OpenCV library. The characterization of the hardware and a description of the software are presented, followed by results, limitations, future work, and conclusions.
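The mathematical core of Fisherfaces is PCA followed by LDA: PCA reduces the image vectors to at most (samples − classes) dimensions so the within-class scatter matrix is invertible, then LDA finds the projection maximizing between-class versus within-class scatter. A minimal NumPy sketch of that pipeline (not the project's OpenCV implementation; nearest-centroid classification and the synthetic data are assumptions):

```python
import numpy as np

def fisherfaces_fit(X, y):
    """Minimal Fisherfaces sketch: PCA, then LDA, then class centroids."""
    classes = np.unique(y)
    n, d = X.shape
    c = len(classes)
    mean = X.mean(axis=0)
    Xc = X - mean
    # PCA keeping n - c components so the within-class scatter is non-singular.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    Wpca = Vt[: n - c].T
    P = Xc @ Wpca
    # LDA scatter matrices in the PCA-reduced space.
    Sw = np.zeros((n - c, n - c))
    Sb = np.zeros((n - c, n - c))
    gm = P.mean(axis=0)
    for cls in classes:
        Pc = P[y == cls]
        mc = Pc.mean(axis=0)
        Sw += (Pc - mc).T @ (Pc - mc)
        Sb += len(Pc) * np.outer(mc - gm, mc - gm)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(-evals.real)[: c - 1]      # keep c-1 discriminants
    W = Wpca @ evecs[:, order].real               # combined projection
    centroids = {cls: (X[y == cls] - mean).mean(axis=0) @ W for cls in classes}
    return mean, W, centroids

def fisherfaces_predict(x, mean, W, centroids):
    """Classify a sample by the nearest class centroid in Fisher space."""
    z = (x - mean) @ W
    return min(centroids, key=lambda cls: np.linalg.norm(z - centroids[cls]))
```

In practice the project uses OpenCV's ready-made Fisherfaces recognizer (in the contrib `face` module); the sketch above only makes the underlying linear algebra explicit.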

    Enhancing Mesh Deformation Realism: Dynamic Mesostructure Detailing and Procedural Microstructure Synthesis

    We propose a solution for generating dynamic heightmap data to simulate deformations of soft surfaces, with a focus on human skin. The solution incorporates mesostructure-level wrinkles and utilizes procedural textures to add static microstructure details. It offers flexibility beyond human skin, enabling the generation of patterns mimicking deformations in other soft materials, such as leather, during animation. Existing solutions for simulating wrinkles and deformation cues often rely on specialized hardware, which is costly and not easily accessible. Moreover, relying solely on captured data limits artistic direction and hinders adaptability to changes. In contrast, our proposed solution provides dynamic texture synthesis that adapts to the underlying mesh deformations in a physically plausible way. Various methods have been explored to synthesize wrinkles directly in the geometry, but they suffer from limitations such as self-intersections and increased storage requirements. Manual creation of wrinkle maps and tension maps by artists provides control, but may be limited for complex deformations or where greater realism is required. Our research presents the potential of procedural methods to enhance the generation of dynamic deformation patterns, including wrinkles, with greater creative control and without reliance on captured data. Incorporating static procedural patterns improves realism, and the approach can be extended to other soft materials beyond skin.
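The core idea of tension-driven dynamic heightmaps can be illustrated with a toy example: a periodic mesostructure pattern is modulated per texel by how much the surface is compressed, so wrinkles appear only in compressed regions. This is an illustrative sketch, not the thesis's method; the sinusoidal pattern and all parameter values are hypothetical.

```python
import numpy as np

def wrinkle_heightmap(tension, frequency=12.0, amplitude=0.02):
    """Illustrative sketch: modulate a periodic ridge pattern by a per-texel
    tension map so wrinkle height grows only where the surface is compressed.

    `tension` is a 2-D map where negative values mean compression; the
    sinusoidal ridge pattern stands in for a richer procedural texture.
    """
    h, w = tension.shape
    ys, xs = np.mgrid[0:h, 0:w] / max(h, w)
    pattern = 0.5 + 0.5 * np.sin(2.0 * np.pi * frequency * xs)  # ridge pattern
    compression = np.clip(-tension, 0.0, 1.0)   # only compression raises wrinkles
    return amplitude * compression * pattern
```

A real implementation would evaluate this per frame in a shader, derive the tension map from the animated mesh's edge-length changes, and replace the sine with a directional procedural wrinkle pattern.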

    Realistic Visualization of Animated Virtual Cloth

    Photo-realistic rendering of real-world objects is a broad research area with applications in various fields, such as computer-generated films, entertainment, and e-commerce. Within photo-realistic rendering, the rendering of cloth is a subarea involving many important aspects, ranging from material surface reflection properties and macroscopic self-shadowing to animation sequence generation and compression. In this thesis, besides an introduction to the topic and a broad overview of related work, different methods to handle the major aspects of cloth rendering are described. Material surface reflection properties play an important part in reproducing the look & feel of materials, that is, in identifying a material only by looking at it. The BTF (bidirectional texture function), as a function of viewing and illumination direction, is an appropriate representation of reflection properties. It captures effects caused by the mesostructure of a surface, such as roughness, self-shadowing, occlusion, inter-reflections, subsurface scattering, and color bleeding. Unfortunately, a BTF data set of a material consists of hundreds to thousands of images, which far exceeds the memory size of current personal computers. This work describes the first usable method to efficiently compress and decompress BTF data for rendering at interactive to real-time frame rates. It is based on PCA (principal component analysis) of the BTF data set. While preserving the important visual aspects of the BTF, the achieved compression rates allow the storage of several different data sets in the main memory of consumer hardware, while maintaining high rendering quality. Correct handling of complex illumination conditions plays another key role in the realistic appearance of cloth. Therefore, an extension of the BTF compression and rendering algorithm is described, which supports distant direct HDR (high dynamic range) illumination stored in environment maps. To further enhance the appearance, macroscopic self-shadowing has to be taken into account. For the visualization of folds and a life-like 3D impression, these kinds of shadows are absolutely necessary. This work describes two methods to compute these shadows. The first is seamlessly integrated into the illumination part of the rendering algorithm and optimized for static meshes. A second method is proposed that handles dynamic objects, using hardware-accelerated occlusion queries for visibility determination. Despite its simplicity, the presented algorithm is fast and produces fewer artifacts than other methods; as a plus, it incorporates changeable distant direct high dynamic range illumination. The human perception system is the ultimate target of any computer graphics application and can itself be treated as part of the rendering pipeline. Therefore, rendering can be optimized by analyzing human perception of certain visual aspects of the image. As part of this thesis, an experiment is introduced that evaluates human shadow perception to speed up shadow rendering and provides optimization approaches. Another subarea of cloth visualization in computer graphics is the animation of cloth and avatars for presentations. This work also describes two new methods for the automatic generation and compression of animation sequences. The first method, which generates completely new, customizable animation sequences, is based on the concept of finding similarities among the animation frames of a given basis sequence. Identifying these similarities allows jumps within the basis sequence to generate endless new sequences. Transmission of animated 3D data over bandwidth-limited channels, such as extended networks, or to less powerful clients, requires efficient compression schemes. The second method presented in the animation field is a geometry data compression scheme.
    Similar to the BTF compression, it uses PCA in combination with clustering algorithms to segment similarly moving parts of the animated objects, achieving high compression rates together with very exact reconstruction quality.
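The PCA idea underlying both the BTF codec and the geometry compression can be sketched in a few lines: keep only the first k principal components of a (samples × measurements) matrix, storing per-sample weights plus a small shared basis. This is a generic illustration of the technique, not the thesis's clustered implementation; the rank-5 test data is an assumption.

```python
import numpy as np

def pca_compress(data, k):
    """Compress a (samples x measurements) matrix by keeping the first k
    principal components -- the idea behind the BTF and geometry codecs."""
    mean = data.mean(axis=0)
    U, S, Vt = np.linalg.svd(data - mean, full_matrices=False)
    coeffs = U[:, :k] * S[:k]      # per-sample weights (e.g. per texel)
    basis = Vt[:k]                 # k shared basis vectors ("eigen-textures")
    return mean, coeffs, basis

def pca_decompress(mean, coeffs, basis):
    """Reconstruct the data matrix from the compressed representation."""
    return mean + coeffs @ basis
```

For n samples of dimension d, storage drops from n·d values to roughly k·(n + d), which is how hundreds of BTF images fit in consumer main memory; the thesis additionally clusters the data first so each cluster needs fewer components.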

    RGBE vs Modified TIFF for encoding high dynamic range images

    High Dynamic Range (HDR) imaging has become more widespread in consumer imaging in the past few years, due to the emergence of methods for recovering HDR radiance maps from multiple photographs [10]. In the domain of HDR encoding, the RGBE radiance format (.hdr) is one of the most widely used. However, conventional image editing applications do not always support this encoding, and those that do take considerable time to read or write HDR images (compared with more conventional formats), which hinders workflow productivity. In this paper we propose a simple, fast, and practical framework that extends the conventional 12- and 16-bit/channel integer TIFF gamma-encoded image format to store such a wide dynamic range. We evaluate the potential of our framework for tone-mapping applications both by measuring the ΔE S-CIELAB color difference between the original and encoded images, and by conducting a psychophysical experiment to evaluate the perceptual image quality of the proposed framework against the RGBE radiance encoding. Preliminary results show that our encoding framework works well for all images in a 65-image dataset and gives results equivalent to the RGBE radiance format, while consuming much less computational cost and removing the need for a separate image coding format. The results suggest that our method, used in the normal tone mapping workflow, is a good candidate for HDR encoding and could easily be integrated with the existing TIFF image library.
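For context, the RGBE encoding being compared against stores each pixel in four bytes: three 8-bit mantissas sharing one 8-bit exponent taken from the largest channel. A minimal sketch of the pack/unpack logic (the zero threshold and truncation behaviour follow the common Radiance-style implementation, but treat the details as illustrative):

```python
import math

def float_to_rgbe(r, g, b):
    """Pack one linear RGB triple into the 4-byte shared-exponent RGBE
    encoding used by the Radiance .hdr format (sketch)."""
    v = max(r, g, b)
    if v < 1e-32:
        return (0, 0, 0, 0)                  # special case: black pixel
    m, e = math.frexp(v)                     # v = m * 2**e with m in [0.5, 1)
    scale = m * 256.0 / v                    # maps the max channel to [128, 256)
    return (int(r * scale), int(g * scale), int(b * scale), e + 128)

def rgbe_to_float(r, g, b, e):
    """Unpack a 4-byte RGBE value back to linear RGB."""
    if e == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e - 136)             # 2**(e - 128) / 256
    return (r * f, g * f, b * f)
```

The shared exponent covers an enormous dynamic range in four bytes, but each channel keeps only 8 bits of mantissa, so the relative precision is coarser than the 12- and 16-bit/channel TIFF extension proposed above.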