348 research outputs found

    Dynamic Range Reduction of Infrared Images Based on Block-Priority Histogram Equalization and Compression

    Get PDF
    We consider the problem of reducing the dynamic range of infrared images so that they can be reproduced on display devices with a narrow dynamic range. A method of adaptive histogram equalization based on the cumulative brightness distribution function is investigated. To transform a pixel's brightness, this method interpolates the local equalization values of the nearest pixel blocks into which the source image is divided. This increases the local contrast of the image but entails a high computational cost, which grows as the block size decreases. The goal of this work is to reduce the computational complexity of adaptive equalization and compression of infrared image histograms during dynamic range reduction.
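    Block-wise equalization with interpolation of neighbouring blocks' mappings, as the abstract describes, is essentially adaptive histogram equalization with bilinear blending of per-tile CDF lookup tables. A minimal NumPy sketch under our own simplifying assumptions (8-bit input, square tiles, function names ours):

```python
import numpy as np

def tile_cdfs(img, ts, bins=256):
    """Compute a normalized CDF (the equalization LUT) for each ts x ts tile."""
    h, w = img.shape
    gh, gw = h // ts, w // ts
    luts = np.empty((gh, gw, bins))
    for i in range(gh):
        for j in range(gw):
            tile = img[i * ts:(i + 1) * ts, j * ts:(j + 1) * ts]
            hist = np.bincount(tile.ravel(), minlength=bins)
            cdf = np.cumsum(hist) / tile.size
            luts[i, j] = cdf * (bins - 1)
    return luts

def adaptive_equalize(img, ts=8):
    """Map each pixel by bilinearly interpolating the LUTs of the four
    nearest tile centres -- the per-pixel approximation of local
    equalization values that drives the cost discussed in the abstract."""
    h, w = img.shape
    luts = tile_cdfs(img, ts)
    gh, gw = luts.shape[:2]
    out = np.empty_like(img, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            fy = np.clip((y - ts / 2) / ts, 0, gh - 1)  # continuous tile coords
            fx = np.clip((x - ts / 2) / ts, 0, gw - 1)
            y0, x0 = int(fy), int(fx)
            y1, x1 = min(y0 + 1, gh - 1), min(x0 + 1, gw - 1)
            wy, wx = fy - y0, fx - x0
            v = img[y, x]
            out[y, x] = ((1 - wy) * (1 - wx) * luts[y0, x0, v]
                         + (1 - wy) * wx * luts[y0, x1, v]
                         + wy * (1 - wx) * luts[y1, x0, v]
                         + wy * wx * luts[y1, x1, v])
    return out.astype(np.uint8)
```

    Note how the inner loop touches four LUTs per pixel: halving the block size quadruples the number of tiles whose histograms must be built, which is the complexity growth the paper targets.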

    GPU-Based Local Tone Mapping in the Context of Virtual Night Driving

    Get PDF
    Virtual prototyping of automotive headlights requires a realistic illumination model, capable of rendering scenes of high contrast in fine detail. Due to the high dynamic range (HDR) nature of headlight beam-pattern data, which is projected onto the virtual road, high dynamic range illumination models are required. These are used as the basis for illumination in simulations for automotive headlight virtual prototyping. Since high dynamic range illumination models operate on brightness ranges commensurate with the real world, a postprocessing operation, known as tone mapping, is required to map each frame into the device-specific range of the display hardware. Algorithms for tone mapping, called tone-mapping operators, can be classified as global or local. Global operators are efficient to compute at the expense of scene quality. Local operators preserve scene detail but, due to their additional computational complexity, are rarely used in interactive applications, even though local tone-mapping methods produce more usable visualization results for engineering tasks. This paper proposes a local tone-mapping method suitable for use with interactive applications. To develop a suitable tone-mapping operator, a state-of-the-art local tone-mapping method was accelerated using modern, work-efficient GPU (graphics processing unit) algorithms. Optimal performance, both in terms of memory and speed, was achieved by means of general-purpose GPU programming with CUDA (compute unified device architecture). A prototype implementation has shown that the method works well with high dynamic range OpenGL applications. In the near future, the tone mapper will be integrated into the virtual night driving simulator at our institute.
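    The global/local distinction is easy to make concrete. A global operator applies one curve to every pixel; a classic example (the global photographic operator of Reinhard et al., shown here for illustration, not the local method the paper accelerates) fits in a few lines:

```python
import numpy as np

def reinhard_global(lum, a=0.18, eps=1e-6):
    """Classic global tone curve: normalize by the log-average (geometric
    mean) luminance, then compress with L / (1 + L).  Every pixel goes
    through the same curve, which is why global operators are cheap."""
    log_avg = np.exp(np.mean(np.log(lum + eps)))  # geometric mean luminance
    L = a * lum / log_avg                         # key-value scaling
    return L / (1.0 + L)                          # maps [0, inf) into [0, 1)
```

    A local operator would instead adapt the curve per pixel using a neighbourhood estimate of luminance, which is where the extra cost (and the case for GPU acceleration) comes from.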

    Gain compensation across LIDAR scans

    Get PDF
    High-end Terrestrial Lidar Scanners are often equipped with RGB cameras that are used to colorize the point samples. Some of these scanners produce panoramic HDR images by encompassing the information of multiple pictures with different exposures. Unfortunately, exported RGB color values are not in an absolute color space, and thus point samples with similar reflectivity values might exhibit strong color differences depending on the scan the sample comes from. These color differences produce severe visual artifacts if, as usual, multiple point clouds colorized independently are combined into a single point cloud. In this paper we propose an automatic algorithm to minimize color differences among a collection of registered scans. The basic idea is to find correspondences between pairs of scans, i.e. surface patches that have been captured by both scans. If the patches meet certain requirements, their colors should match in both scans. We build a graph from such pair-wise correspondences, and solve for the gain compensation factors that better uniformize color across scans. The resulting panoramas can be used to colorize the point clouds consistently. We discuss the characterization of good candidate matches, and how to find such correspondences directly on the panorama images instead of in 3D space. 
    We have tested this approach to uniformize color across scans acquired with a Leica RTC360 scanner, with very good results. This work has been partially supported by the project TIN2017-88515-C2-1-R funded by MCIN/AEI/10.13039/501100011033/FEDER "A way to make Europe", by the EU Horizon 2020 JPICH Conservation, Protection and Use initiative (JPICH-0127) and the Spanish Agencia Estatal de Investigación (grant PCI2020-111979), by the Universidad Rey Juan Carlos through the Distinguished Researcher position INVESDIST-04 under the call from 17/12/2020, and by a María Zambrano research fellowship at Universitat Politècnica de Catalunya funded by the Ministerio de Universidades.
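    The pair-wise gain solve can be illustrated with a toy least-squares system. This is a sketch under our own simplified model (one scalar gain per scan, squared-error data terms, and a unit-gain regularizer to avoid the trivial all-zero solution); the paper's exact energy may differ:

```python
import numpy as np

def gain_compensation(pairs, n, reg=0.01):
    """pairs: list of (i, j, ci, cj) -- mean patch colour ci seen in scan i
    corresponds to colour cj in scan j.  Solve the normal equations of
    E = sum (g_i*ci - g_j*cj)^2 + reg * sum (g_k - 1)^2
    for the per-scan gains g; the regularizer pins gains near 1."""
    A = reg * np.eye(n)
    b = reg * np.ones(n)
    for i, j, ci, cj in pairs:
        A[i, i] += ci * ci
        A[j, j] += cj * cj
        A[i, j] -= ci * cj
        A[j, i] -= ci * cj
    return np.linalg.solve(A, b)
```

    With one correspondence where scan 1 recorded a surface twice as bright as scan 0, the solver pushes the gains toward the 2:1 ratio that makes the two observations agree.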

    Real-time noise-aware tone mapping

    Get PDF
    Real-time high quality video tone mapping is needed for many applications, such as digital viewfinders in cameras, display algorithms which adapt to ambient light, in-camera processing, rendering engines for video games and video post-processing. We propose a viable solution for these applications by designing a video tone-mapping operator that controls the visibility of the noise, adapts to display and viewing environment, minimizes contrast distortions, preserves or enhances image details, and can be run in real-time on an incoming sequence without any preprocessing. To our knowledge, no existing solution offers all these features. Our novel contributions are: a fast procedure for computing local display-adaptive tone-curves which minimize contrast distortions, a fast method for detail enhancement free from ringing artifacts, and an integrated video tone-mapping solution combining all the above features. This project was funded by the Swedish Foundation for Strategic Research (SSF) through grant IIS11-0081, Linköping University Center for Industrial Information Technology (CENIIT), the Swedish Research Council through the Linnaeus Environment CADICS, and through COST Action IC1005.

    Evaluation of the color image and video processing chain and visual quality management for consumer systems

    Get PDF
    With the advent of novel digital display technologies, color processing is increasingly becoming a key aspect in consumer video applications. Today’s state-of-the-art displays require sophisticated color and image reproduction techniques in order to achieve larger screen size, higher luminance and higher resolution than ever before. However, from a color science perspective, there are clearly opportunities for improvement in the color reproduction capabilities of various emerging and conventional display technologies. This research seeks to identify potential areas for improvement in color processing in a video processing chain. As part of this research, various processes involved in a typical video processing chain in consumer video applications were reviewed. Several published color and contrast enhancement algorithms were evaluated, and a novel algorithm was developed to enhance color and contrast in images and videos in an effective and coordinated manner. Further, a psychophysical technique was developed and implemented for performing visual evaluation of color image and consumer video quality. Based on the performance analysis and visual experiments involving various algorithms, guidelines were proposed for the development of an effective color and contrast enhancement method for images and video applications. It is hoped that the knowledge gained from this research will help build a better understanding of color processing and color quality management methods in consumer video.

    High Dynamic Range Image Compression On Commodity Hardware For Real-Time Mapping Applications

    Get PDF
    This paper describes a lossy compression scheme for high dynamic range graylevel and color imagery, intended for data transmission in real-time mapping scenarios. The five stages of the implemented non-standard transform coder are written in portable C++ code and do not require specialized hardware to run. Storage space occupied by the bitmaps is reduced via a color space change, a 2D integer discrete cosine transform (DCT) approximation, coefficient quantization, two-size run-length encoding and dictionary matching based on the LZ4 algorithm. Quantization matrices that eliminate insignificant DCT coefficients are derived from a representative image set through genetic optimization. The underlying fitness function incorporates the obtained output size, classic image quality metrics and the unique color count. Together with a zone-based adaptation mechanism, this makes it possible to specify target bitrates instead of percentage values or abstract quality factors, so that the reduction rate can be matched directly to the available communication channel capacities. Results on a camera control unit of a fixed-wing unmanned aircraft system built around entry-level PC hardware revealed single-thread compression and decompression throughputs of several hundred mebibytes per second for full-swing 16 and 32 bit RGB imagery at medium compression ratios. Compared with popular compression libraries, some degradation in image quality was identified, but at statistically and visually acceptable levels.
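    The core of such a transform coder (a forward DCT followed by division by a quantization matrix, which zeroes out the insignificant coefficients) fits in a few lines. This sketch uses an orthonormal floating-point DCT for clarity, whereas the paper uses an integer DCT approximation:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1.0 / np.sqrt(2.0)
    return C * np.sqrt(2.0 / n)

def quantize_block(block, Q):
    """Transform a square block (2D DCT via two matrix products), divide
    elementwise by the quantization matrix Q, and round.  Large entries
    in Q discard the corresponding (high-frequency) coefficients."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T
    return np.round(coeffs / Q).astype(int)
```

    In the paper, the entries of Q are not hand-tuned but evolved by genetic optimization against the size/quality/color-count fitness function, and the quantized coefficients then feed the run-length and LZ4 stages.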

    Weighted Least Squares Based Detail Enhanced Exposure Fusion

    Get PDF

    Edge-enhancing image smoothing.

    Get PDF
    Xu, Yi. Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. Includes bibliographical references (p. 62-69). Abstracts in English and Chinese.
    Contents: Chapter 1, Introduction (1.1 Organization); Chapter 2, Background and Motivation (2.1 1D Mondrian Smoothing; 2.2 2D Formulation); Chapter 3, Solver (3.1 More Analysis); Chapter 4, Edge Extraction (4.1 Related Work; 4.2 Method and Results; 4.3 Summary); Chapter 5, Image Abstraction and Pencil Sketching (5.1 Related Work; 5.2 Method and Results; 5.3 Summary); Chapter 6, Clip-Art Compression Artifact Removal (6.1 Related Work; 6.2 Method and Results; 6.3 Summary); Chapter 7, Layer-Based Contrast Manipulation (7.1 Related Work; 7.2 Method and Results, covering 7.2.1 Edge Adjustment, 7.2.2 Detail Magnification and 7.2.3 Tone Mapping; 7.3 Summary); Chapter 8, Conclusion and Discussion; Bibliography.
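    Edge-enhancing smoothers of this family are typically posed as a least-squares energy: stay close to the input, but penalize gradients with weights that vanish at strong edges. A 1D toy version under our own parameter choices (not the thesis's exact 2D objective or solver):

```python
import numpy as np

def wls_smooth_1d(f, lam=1.0, alpha=1.2, eps=1e-4):
    """Minimize sum (u_i - f_i)^2 + lam * sum w_i (u_{i+1} - u_i)^2,
    where w_i = 1 / (|f_{i+1} - f_i|^alpha + eps) is small across large
    input gradients, so edges are preserved while flat regions are
    smoothed hard.  Solved directly as a (tridiagonal) linear system."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    w = 1.0 / (np.abs(np.diff(f)) ** alpha + eps)
    A = np.eye(n)
    for i in range(n - 1):
        A[i, i]     += lam * w[i]
        A[i + 1, i + 1] += lam * w[i]
        A[i, i + 1] -= lam * w[i]
        A[i + 1, i] -= lam * w[i]
    return np.linalg.solve(A, f)
```

    On a step signal, the smoother flattens each side almost completely while leaving the step itself nearly intact, which is the behaviour that makes such filters useful for detail/base layer decompositions and tone mapping.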

    Doctor of Philosophy

    Get PDF
    Interactive editing and manipulation of digital media is a fundamental component in digital content creation. One medium in particular, digital imagery, has seen a recent increase in popularity of its large or even massive image formats. Unfortunately, current systems and techniques are rarely concerned with scalability or usability with these large images. Moreover, processing massive (or even large) imagery is assumed to be an off-line, automatic process, although many problems associated with these datasets require human intervention for high quality results. This dissertation details how to design interactive image techniques that scale. In particular, massive imagery is typically constructed as a seamless mosaic of many smaller images. The focus of this work is the creation of new technologies to enable user interaction in the formation of these large mosaics. While an interactive system for all stages of the mosaic creation pipeline is a long-term research goal, this dissertation concentrates on the last phase of the mosaic creation pipeline: the composition of registered images into a seamless composite. The work detailed in this dissertation provides the technologies to fully realize interactive editing in mosaic composition on image collections ranging from very small to massive in scale.
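    A standard building block for composing registered images into a seamless mosaic is gradient-domain (Poisson) blending: keep the source gradients, fix the boundary values, and solve a linear system. A 1D sketch for illustration (the dissertation's contribution is making this kind of solve interactive at massive scale, not this textbook formulation):

```python
import numpy as np

def poisson_blend_1d(grad, left, right):
    """Reconstruct a signal from target gradients with fixed boundary
    values by solving the 1D Poisson equation:
        u[i-1] - 2*u[i] + u[i+1] = grad[i] - grad[i-1]
    for the interior samples, a tridiagonal linear system."""
    n = len(grad) + 1                      # number of samples
    m = n - 2                              # interior unknowns
    A = -2.0 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)
    b = np.diff(grad).astype(float)        # divergence of the guidance field
    b[0] -= left                           # fold fixed boundaries into rhs
    b[-1] -= right
    u = np.linalg.solve(A, b)
    return np.concatenate(([left], u, [right]))
```

    When the guidance gradients are consistent with the boundary values, the solve reproduces the original signal exactly; when they come from a different source image, the mismatch is spread smoothly over the interior, which is what hides the seam.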

    Blickpunktabhängige Computergraphik (Gaze-Contingent Computer Graphics)

    Get PDF
    Contemporary digital displays feature multi-million pixels at ever-increasing refresh rates. Reality, on the other hand, provides us with a view of the world that is continuous in space and time. The discrepancy between viewing the physical world and its sampled depiction on digital displays gives rise to perceptual quality degradations. By measuring or estimating where we look, gaze-contingent algorithms aim at exploiting the way we visually perceive to remedy visible artifacts. This dissertation presents a variety of novel gaze-contingent algorithms and respective perceptual studies. Chapters 4 and 5 present methods to boost perceived visual quality of conventional video footage when viewed on commodity monitors or projectors. In Chapter 6 a novel head-mounted display with real-time gaze tracking is described. The device enables a large variety of applications in the context of Virtual Reality and Augmented Reality. Using the gaze-tracking VR headset, a novel gaze-contingent render method is described in Chapter 7. The gaze-aware approach greatly reduces computational efforts for shading virtual worlds. The described methods and studies show that gaze-contingent algorithms are able to improve the quality of displayed images and videos or reduce the computational effort for image generation, while display quality perceived by the user does not change.
    (German abstract.) Modern digital displays offer ever higher resolutions at likewise increasing refresh rates. Reality, by contrast, is continuous in space and time. This fundamental difference leads to perceptual discrepancies for the viewer. Tracking the gaze direction enables gaze-contingent rendering methods that can prevent visible artifacts. This dissertation contributes to four areas of gaze-contingent and perceptually faithful rendering methods. The methods in Chapters 4 and 5 aim to increase the perceived visual quality of videos for the viewer, where the videos are shown on ordinary output hardware such as a television or projector. Chapter 6 describes the development of a novel head-mounted display with support for real-time gaze tracking. The combination of these capabilities enables a number of interesting applications in Virtual Reality (VR) and Augmented Reality (AR). The fourth and final contribution, in Chapter 7, is a new algorithm that uses the developed eye-tracking head-mounted display for gaze-contingent rendering. Shading quality is analyzed and adjusted in real time for every image pixel on the basis of a perceptual model. The method has the potential to reduce the computational cost of shading a virtual scene to a fraction. The methods and studies described in this dissertation show that gaze-contingent algorithms can effectively improve the display quality of images and videos, or substantially reduce the computational cost of image synthesis at unchanged perceived quality.