
    Use of an Occlusion Mask for Veiling Glare Removal in HDR Images

    Optical systems in digital cameras impose a limit on the acquisition of both standard and High Dynamic Range Images (HDRI) because of veiling glare, an artifact caused by unwanted spreading of light from bright sources. In this paper, we analyze the state of the art of veiling glare removal in HDRI, with particular attention to the method presented by Talvala. We then describe an algorithm for veiling glare removal based on the same occlusion mask, in order to study the benefits it provides in the HDRI acquisition process. Finally, we demonstrate the efficiency of the occlusion-mask method for veiling glare removal without any post-production estimation and subtraction.
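
    The occlusion-mask idea can be illustrated with a rough sketch: if the scene is photographed several times while a structured occluder is shifted, only the pixels seen through open cells need to be kept and composited. The function below is a heavily simplified, hypothetical illustration of that compositing step; the capture geometry, mask shifting, and HDR assembly of the actual method are not reproduced here.

```python
import numpy as np

def composite_occlusion_mask_captures(captures, masks):
    """Hedged sketch of occlusion-mask based glare reduction.

    captures: list of (H, W) images taken while a structured occluder is
    shifted between shots; masks: list of boolean arrays marking which
    pixels see the scene through an open cell in each shot.
    Because most of the scene is blocked in every shot, little stray light
    is available to scatter inside the lens, so compositing only the
    unoccluded pixels yields a glare-reduced image without estimating and
    subtracting a glare field afterwards.  Details are assumptions.
    """
    out = np.zeros_like(captures[0])
    weight = np.zeros_like(captures[0])
    for img, mask in zip(captures, masks):
        out[mask] += img[mask]      # keep only low-glare, unoccluded pixels
        weight[mask] += 1.0
    return out / np.maximum(weight, 1.0)
```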

    Veiling glare removal: synthetic dataset generation, metrics and neural network architecture

    In photography, the presence of a bright light source often reduces the quality and readability of the resulting image. Light rays reflect and bounce off lens elements, the sensor, or the diaphragm, causing unwanted artifacts. These artifacts are generally known as "lens flare" and can affect the photo in different ways: reducing image contrast (veiling glare), adding circular or circular-like effects (ghosting flare), appearing as bright rays spreading from the light source (starburst pattern), or causing aberrations. All these effects are generally undesirable, as they reduce the legibility and aesthetics of the image. In this paper we address the problem of removing or reducing the effect of veiling glare on an image. There are no large-scale datasets available for this problem and no established metrics, so we start by (i) proposing a simple and fast algorithm for generating the synthetic veiling glare images necessary for training and (ii) studying metrics used in related image enhancement tasks (dehazing and underwater image enhancement). We select three such no-reference metrics (UCIQE, UIQM and CCF) and show that their improvement indicates better veil removal. Finally, we experiment with neural network architectures and propose a two-branched architecture and a training procedure that uses a structural similarity measure.
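
    As an illustration of what a synthetic veiling-glare generator can look like, the sketch below blurs the bright regions of a clean image with a wide Gaussian and adds the result back as a low-contrast veil, producing a (glared, clean) training pair. The threshold, kernel width, and veil strength are assumed values chosen for illustration only, not the parameters of the algorithm proposed in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def add_synthetic_veiling_glare(img, threshold=0.8, sigma=50.0, strength=0.4):
    """Create a (glared, clean) training pair from a clean image.

    img: float array in [0, 1], shape (H, W, 3).
    threshold, sigma, strength: illustrative assumptions, not the values
    used in the paper.
    """
    # Isolate the bright regions that act as glare sources.
    sources = np.clip(img - threshold, 0.0, None)
    # Spread them with a very wide Gaussian to mimic light scattered
    # inside the lens; a real glare PSF has broader tails than a Gaussian.
    veil = np.stack([gaussian_filter(sources[..., c], sigma) for c in range(3)],
                    axis=-1)
    # Add the veil on top of the clean image and clamp to the valid range.
    glared = np.clip(img + strength * veil / (veil.max() + 1e-8), 0.0, 1.0)
    return glared, img
```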

    Spatial–Spectral Evidence of Glare Influence on Hyperspectral Acquisitions

    Glare is an unwanted optical phenomenon that affects any imaging system containing optics. This paper presents, for the first time, a set of hyperspectral image (HSI) acquisitions and measurements to verify how glare affects acquired HSI data under standard conditions. We acquired two ColorCheckers (CCs) under three different lighting conditions, with different backgrounds, different exposure times, and different orientations. The reflectance spectra obtained from the imaging system were compared to pointwise reference measurements obtained with contact spectrophotometers. To assess and identify the influence of glare, we present the Glare Effect (GE) index, which compares the contrast of the grayscale patches of the CC in the hyperspectral images with the contrast of the reference spectra of the same patches. We evaluate, in both the spatial and spectral domains, the amount of glare affecting every hyperspectral image in each acquisition scenario, clearly evidencing an unwanted light contribution to the reflectance spectra of each point, which increases especially for darker pixels and for pixels close to light sources or bright patches.
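
    The exact formulation of the GE index is not given in the abstract; the sketch below shows one plausible, hypothetical way such a contrast comparison could be computed from the grayscale-patch spectra, using a Michelson-style contrast per band compared between camera and spectrophotometer data.

```python
import numpy as np

def glare_effect_index(measured, reference):
    """Hedged sketch of a Glare-Effect-style index (hypothetical form).

    measured, reference: arrays of shape (n_gray_patches, n_bands) holding
    the reflectance spectra of the ColorChecker grayscale patches, as
    acquired by the hyperspectral camera and as measured by a contact
    spectrophotometer.  The paper's actual definition is not reproduced.
    """
    def michelson(spectra):
        lo, hi = spectra.min(axis=0), spectra.max(axis=0)
        return (hi - lo) / (hi + lo + 1e-12)

    c_meas = michelson(measured)   # contrast observed through the optics
    c_ref = michelson(reference)   # contrast of the glare-free reference
    # Glare adds a roughly constant veil that lowers the observed contrast,
    # so the per-band contrast loss is used here as the glare indicator.
    return 1.0 - c_meas / (c_ref + 1e-12)
```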

    Perceptually-motivated, interactive rendering and editing of global illumination

    This thesis proposes several new perceptually-motivated techniques to synthesize, edit and enhance the depiction of three-dimensional virtual scenes. The challenge taken up in this work is to find algorithms that occupy the perceptually economic middle ground between artistic depiction and full physical simulation. First, we present three interactive global illumination rendering approaches that are inspired by perception to efficiently depict important light transport. These methods have in common that they compute global illumination in large and fully dynamic scenes, allowing for light, geometry, and material changes at interactive or real-time rates. Further, this thesis proposes a tool for editing reflections that exploits perception to bend physical laws in order to match artistic goals. Finally, this work contributes a post-processing operator that depicts high-contrast scenes in the same way artists do, by simulating the image as seen through a dynamic virtual human eye in real time.

    New 3D scanning techniques for complex scenes

    This thesis presents new 3D scanning methods for complex scenes, such as surfaces with fine-scale geometric details, translucent objects, low-albedo objects, glossy objects, scenes with interreflection, and discontinuous scenes. Starting from the observation that specular reflection is a reliable visual cue for surface mesostructure perception, we propose a progressive acquisition system that captures a dense specularity field as the only information for mesostructure reconstruction. Our method can efficiently recover surfaces with fine-scale geometric details from complex real-world objects. Translucent objects pose a difficult problem for traditional optical 3D scanning techniques. We analyze and compare two descattering methods, phase-shifting and polarization, and further present several phase-shifting and polarization based methods for high-quality 3D scanning of translucent objects. We introduce the concept of modulation-based separation, in which a high-frequency signal is multiplied on top of another signal. The modulated signal inherits the separation properties of the high-frequency signal and allows us to remove artifacts due to global illumination. This method can be used for efficient 3D scanning of scenes with significant subsurface scattering and interreflections.
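
    Modulation-based separation builds on the classic observation that, under shifted high-frequency illumination patterns, the direct component varies with the pattern while the global component (subsurface scattering, interreflections) stays nearly constant. The sketch below shows only that underlying max/min separation for patterns with a 50% duty cycle; it is not the thesis's full modulation scheme.

```python
import numpy as np

def separate_direct_global(images):
    """Hedged sketch of direct/global separation under shifted
    high-frequency illumination (50% of projector pixels lit per pattern).

    images: array of shape (N, H, W), one capture per shifted pattern.
    Direct light reaches a pixel only when its illuminating pattern pixel
    is on, while global light is roughly constant across patterns, so the
    per-pixel maximum and minimum separate the two components.
    """
    l_max = images.max(axis=0)
    l_min = images.min(axis=0)
    direct = l_max - l_min        # direct component
    global_ = 2.0 * l_min         # global component (50% duty cycle)
    return direct, global_
```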

    Stereoscopic high dynamic range imaging

    Two modern technologies show promise to dramatically increase immersion in virtual environments. Stereoscopic imaging captures two images representing the views of both eyes and allows for better depth perception. High dynamic range (HDR) imaging accurately represents real-world lighting, as opposed to traditional low dynamic range (LDR) imaging; HDR provides better contrast and more natural-looking scenes. The combination of the two technologies in order to gain the advantages of both has been, until now, mostly unexplored due to current limitations in the imaging pipeline. This thesis reviews both fields, proposes a stereoscopic high dynamic range (SHDR) imaging pipeline outlining the challenges that need to be resolved to enable SHDR, and focuses on the capture and compression aspects of that pipeline. The problems of capturing SHDR images, which would potentially require two HDR cameras and introduce ghosting, are mitigated by capturing an HDR and LDR pair and using it to generate SHDR images. A detailed user study compared four different methods of generating SHDR images; the results demonstrated that one of the methods can produce images perceptually indistinguishable from the ground truth. Insights obtained while developing the static-image operators guided the design of the SHDR video techniques. Three methods for generating SHDR video from an HDR-LDR video pair are proposed and compared to ground-truth SHDR videos; the results showed little overall error and identified the method with the least error. Once captured, SHDR content needs to be efficiently compressed. Five backward-compatible SHDR compression methods are presented. The proposed methods can encode SHDR content to little more than the size of a traditional single LDR image (18% larger for one method), and the backward-compatibility property encourages early adoption of the format. The work presented in this thesis has introduced and advanced capture and compression methods for the adoption of SHDR imaging. In general, this research paves the way for the novel field of SHDR imaging, which should lead to an improved and more realistic representation of captured scenes.
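
    One naive way to obtain the second HDR view from the LDR capture is to linearize it and match its exposure to the true HDR view. The sketch below shows only this baseline idea, with an assumed display gamma and a simple mean-based exposure match; it is not one of the operators evaluated in the thesis.

```python
import numpy as np

def naive_second_hdr_view(ldr_view, hdr_view, gamma=2.2):
    """Hedged sketch: expand one eye's LDR image so the stereo pair shares
    a common radiance scale.

    ldr_view: display-referred image in [0, 1] for one eye.
    hdr_view: linear HDR radiance map captured for the other eye.
    The gamma value and the mean-based exposure matching are assumptions
    made purely for illustration.
    """
    linear = np.power(np.clip(ldr_view, 0.0, 1.0), gamma)   # undo display gamma
    # Match the overall exposure to the true HDR view of the other eye.
    scale = hdr_view.mean() / (linear.mean() + 1e-8)
    return linear * scale
```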

    Daylight simulation with photon maps

    Physically based image synthesis remains one of the most demanding tasks in the computer graphics field, whose applications have evolved along with the techniques in recent years, particularly with the decline in cost of powerful computing hardware. Physically based rendering is essentially a niche, since it goes beyond the photorealistic look required by mainstream applications with the goal of computing actual lighting levels, in physical quantities, within a complex 3D scene. Unlike mainstream applications, which merely demand visually convincing images and short rendering times, physically based rendering emphasises accuracy at the cost of increased computational overhead. Among the more specialised applications of physically based rendering is lighting simulation, particularly in conjunction with daylight. The aim of this thesis is to investigate the applicability of a novel image synthesis technique based on Monte Carlo particle transport to daylight simulation. Many materials used in daylight simulation are specifically designed to redirect light, and as such give rise to complex effects such as caustics. The photon map technique was chosen for its efficient handling of these effects. To assess its ability to produce physically correct results which can be applied to lighting simulation, a validation was carried out based on analytical case studies and on simple experimental setups. As a prerequisite to validation, the photon map's inherent bias/noise tradeoff is investigated. This tradeoff depends on the density-estimate bandwidth used in the reconstruction of the illumination. The error analysis leads to the development of a bias-compensating operator which adapts the bandwidth according to the estimated bias in the reconstructed illumination. The work presented here was developed at the Fraunhofer Institute for Solar Energy Systems (ISE) as part of the FARESYS project sponsored by the German national research foundation (DFG), and embedded into the RADIANCE rendering system.
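
    The bias/noise tradeoff mentioned above is governed by the bandwidth of the photon density estimate. For context, the sketch below shows a standard fixed-bandwidth (k-nearest-photon) irradiance estimate, where the bandwidth is the radius of the disc spanned by the k nearest photons; the thesis's bias-compensating operator, which adapts this bandwidth, is not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def irradiance_estimate(photon_positions, photon_flux, query_point, k=50):
    """Hedged sketch of a fixed-bandwidth photon-map density estimate.

    photon_positions: (N, 3) photon hit points; photon_flux: (N,) flux
    carried by each photon; k: number of nearest photons (the bandwidth).
    Irradiance is approximated as the flux of the k nearest photons divided
    by the disc area pi * r^2 they span.  A small k is noisy, a large k is
    biased; choosing k adaptively is the thesis's contribution and is not
    attempted in this fixed-k sketch.
    """
    tree = cKDTree(photon_positions)
    dists, idx = tree.query(query_point, k=k)
    r = dists.max()                               # bandwidth: enclosing radius
    return photon_flux[idx].sum() / (np.pi * r * r + 1e-12)
```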