12 research outputs found

    Simultaneous Image Registration and Monocular Volumetric Reconstruction of a fluid flow

    We propose to combine image registration and volumetric reconstruction from a monocular video of the draining of a Hele-Shaw cell filled with water. A Hele-Shaw cell is a tank whose depth is small (e.g. 1 mm) compared to its other dimensions (e.g. 400 × 800 mm²). We use a technique known as molecular tagging, which consists in marking a pattern in the fluid by photobleaching and then tracking its deformations. The evolution of the pattern is filmed with a camera whose principal axis coincides with the depth of the cell. The velocity of the fluid along this direction is not constant. Consequently, tracking the pattern cannot be achieved with classical methods, because what is observed is the integration of the marked particles over the entire depth of the cell. The proposed approach is built on top of classical direct image registration, into which we incorporate a volumetric image formation model. It allows us to accurately measure the motion and the velocity profiles for the entire volume (including along the depth of the cell), which is usually hard to achieve. The results we obtain are consistent with the theoretical hydrodynamic behaviour for this flow, known as laminar Poiseuille flow.
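    The laminar Poiseuille flow mentioned above has a closed-form parabolic velocity profile across the cell's gap, which is what the depth-integrated camera observation mixes together. A minimal sketch of that profile (the gap width and mean velocity below are illustrative values, not measurements from the paper):

```python
import numpy as np

def poiseuille_profile(z, h, u_mean):
    """Parabolic Poiseuille velocity profile across a gap of width h.
    z ranges over [-h/2, h/2]; u_mean is the depth-averaged velocity.
    The profile peaks at 1.5 * u_mean in the centre and vanishes at the walls."""
    return 1.5 * u_mean * (1.0 - (2.0 * z / h) ** 2)

h, u_mean = 1e-3, 0.02                    # 1 mm gap, 2 cm/s mean velocity (illustrative)
z = np.linspace(-h / 2, h / 2, 1001)      # positions across the gap
u = poiseuille_profile(z, h, u_mean)      # velocity at each depth
```

    Averaging `u` over the gap recovers `u_mean`, which is exactly why a single camera integrating over the depth cannot separate the per-depth motion without a volumetric image formation model.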

    Image registration algorithm for molecular tagging velocimetry applied to unsteady flow in Hele-Shaw cell

    In order to develop velocimetry methods for confined geometries, we propose to combine image registration and volumetric reconstruction from a monocular video of the draining of a Hele-Shaw cell filled with water. The cell's thickness is small compared to its other two dimensions (e.g. 1 × 400 × 800 mm³). We use a technique known as molecular tagging, which consists in marking a pattern in the fluid by photobleaching and then tracking its deformations. The evolution of the pattern is filmed with a camera whose principal axis coincides with the cell's gap. The velocity of the fluid along this direction is not constant. Consequently, tracking the pattern cannot be achieved with classical methods, because what is observed is the integral of the marked molecules over the entire cell's gap. The proposed approach is built on top of direct image registration, which we extend to specifically model the volumetric image formation. It allows us to accurately measure the motion and the velocity profiles for the entire volume (including the cell's gap), which is usually hard to achieve. The results we obtained are consistent with the theoretical hydrodynamic behaviour for this flow, known as Poiseuille flow.

    Neural Relightable Participating Media Rendering

    Learning neural radiance fields of a scene has recently enabled realistic novel view synthesis, but such fields can only synthesize images under the original, fixed lighting condition. They are therefore not flexible enough for highly desired tasks such as relighting, scene editing and scene composition. To tackle this problem, several recent methods propose to disentangle reflectance and illumination from the radiance field. These methods can cope with solid objects with opaque surfaces, but participating media are neglected. Moreover, they take into account only direct illumination, or at most one-bounce indirect illumination, and thus suffer from energy loss due to ignoring higher-order indirect illumination. We propose to learn neural representations for participating media with a complete simulation of global illumination. We estimate direct illumination via ray tracing and compute indirect illumination with spherical harmonics. Our approach avoids computing the lengthy indirect bounces and does not suffer from energy loss. Our experiments on multiple scenes show that our approach achieves superior visual quality and numerical performance compared to state-of-the-art methods, and it can generalize to solid objects with opaque surfaces as well.
    Comment: Accepted to NeurIPS 202
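    Representing smooth directional illumination with low-order spherical harmonics, as the approach above does for indirect lighting, amounts to projecting a spherical function onto a small orthonormal basis and evaluating the resulting coefficients per direction. A generic sketch of that idea (this is not the paper's implementation; the test function and sample counts are arbitrary):

```python
import numpy as np

def sh_basis(d):
    """Real spherical harmonics up to order l = 1 for unit direction(s) d
    of shape (..., 3). Returns the 4 basis values along the last axis."""
    x, y, z = d[..., 0], d[..., 1], d[..., 2]
    c1 = np.sqrt(3.0 / (4.0 * np.pi))
    return np.stack([np.full_like(x, 0.5 / np.sqrt(np.pi)),  # Y_0^0
                     c1 * y, c1 * z, c1 * x],                # Y_1^{-1,0,1}
                    axis=-1)

def project(f, n_samples=100_000, seed=0):
    """Monte Carlo projection of a spherical function f onto the SH basis."""
    rng = np.random.default_rng(seed)
    d = rng.normal(size=(n_samples, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)       # uniform directions
    return (f(d)[:, None] * sh_basis(d)).mean(axis=0) * 4.0 * np.pi

# A smooth "incoming radiance" and its SH reconstruction in a given direction.
f = lambda d: 0.5 + 0.3 * d[..., 2]
coeffs = project(f)
recon = lambda d: coeffs @ sh_basis(d)
```

    Because the test function is band-limited (constant plus a linear term in z), the 4-coefficient reconstruction recovers it up to Monte Carlo noise; real indirect illumination is only approximated, which is the usual trade-off of low-order SH.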

    Impacts of Fog Characteristics, Forward Illumination, and Warning Beacon Intensity Distribution on Roadway Hazard Visibility

    Warning beacons are critical for the safety of transportation, construction, and utility workers. These devices need to produce sufficient luminous intensity to be visible without creating glare for drivers. Published standards for the photometric performance of warning beacons do not address their performance in conditions of reduced visibility such as fog. Under such conditions, light emitted in directions other than toward approaching drivers can create scattered light that makes workers and other hazards less visible. Simulations of hazard visibility under varying conditions of fog density, forward vehicle lighting, and warning beacon luminous intensity and intensity distribution were performed to assess their impacts on drivers' visual performance. Each of these factors can influence the ability of drivers to detect and identify workers and hazards along the roadway in work zones. Based on the results, it would be reasonable to specify maximum limits on the luminous intensity of warning beacons in directions that are unlikely to be seen by drivers along the roadway, limits which are not included in published performance specifications.
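    The contrast loss that fog imposes on targets such as workers is commonly modelled with Koschmieder's law, C(d) = C₀·exp(−σd), which also ties the extinction coefficient σ to the meteorological optical range (the distance at which contrast falls to 5%). A small illustrative sketch, not taken from the study above (the fog density and distances are hypothetical):

```python
import math

def apparent_contrast(c0, sigma, distance):
    """Koschmieder's law: apparent contrast of a target of inherent
    contrast c0 seen through fog with extinction coefficient sigma [1/m]."""
    return c0 * math.exp(-sigma * distance)

def extinction_from_mor(mor):
    """Extinction coefficient from the meteorological optical range,
    defined by a 5% contrast threshold: exp(-sigma * MOR) = 0.05."""
    return -math.log(0.05) / mor          # approx. 3.0 / MOR

sigma = extinction_from_mor(200.0)        # dense fog: 200 m visual range
c = apparent_contrast(1.0, sigma, 100.0)  # contrast halfway to that range
```

    Scattered beacon light adds a veiling luminance on top of this exponential attenuation, which is why intensity emitted away from approaching drivers can further reduce hazard visibility.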

    Optimization techniques for computationally expensive rendering algorithms

    Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting, including solid surfaces and participating media, have been described, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance; however, they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we will focus on rendering methods that require large amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only limited implications for the quality of the results. The first part of this work will study the rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases and any material that, unlike a vacuum, scatters and absorbs the light that travels through it. We will focus on a subset of algorithms that approximate realistic illumination using images of real-world scenes. Starting from the traditional ray marching algorithm, we will suggest and implement different optimizations that allow performing the computation at interactive frame rates. This thesis will also analyze two different aspects of the generation of anti-aliased images. One is targeted at the rendering of screen-space anti-aliased images and the reduction of the artifacts generated in rasterized lines and edges. We expect to describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with reduced performance impact. A third method will take advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly. This process is known to be one of the most important burdens for every rendering pipeline. Motivated by this, we plan to run a series of psychophysical experiments targeted at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions of the rendering budgets.
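    The traditional ray marching baseline that the thesis takes as its starting point can be sketched in a few lines: step along each ray, accumulate emitted radiance, and attenuate it by the transmittance of the medium traversed so far. This is a generic emission-absorption marcher, not the thesis's optimized implementation:

```python
import math

def ray_march(sigma_t, emission, t0, t1, n_steps):
    """Emission-absorption ray marching along a single ray through a
    participating medium. sigma_t(t) is the extinction coefficient and
    emission(t) the emitted radiance density at ray parameter t."""
    dt = (t1 - t0) / n_steps
    radiance, transmittance = 0.0, 1.0
    for i in range(n_steps):
        t = t0 + (i + 0.5) * dt                 # midpoint of the i-th segment
        radiance += transmittance * emission(t) * dt
        transmittance *= math.exp(-sigma_t(t) * dt)
    return radiance, transmittance

# Homogeneous absorbing medium: transmittance follows the Beer-Lambert law.
L, sigma = 2.0, 0.7
rad, tr = ray_march(lambda t: sigma, lambda t: 0.0, 0.0, L, 256)
```

    The per-pixel cost grows linearly with the step count, which is why optimizations such as coarser or adaptive stepping are needed to reach the interactive frame rates mentioned above.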

    Example-Based Water Animation


    Reconsidering light transport : acquisition and display of real-world reflectance and geometry

    In this thesis, we cover three scenarios that violate common simplifying assumptions about the nature of light transport. We begin with the first ingredient of any 3D rendering: a geometry model. Most 3D scanners require the object of interest to show diffuse reflectance. The further a material deviates from the Lambertian model, the more likely these setups are to produce corrupted results. By placing a traditional laser scanning setup in a participating (in particular, fluorescent) medium, we have built a light sheet scanner that delivers robust results for a wide range of materials, including glass. Further investigating the phenomenon of fluorescence, we notice that, despite its ubiquity, it has received moderate attention in computer graphics. In particular, to date no data-driven reflectance models of fluorescent materials have been available. To describe the wavelength-shifting reflectance of fluorescent materials, we define the bispectral bidirectional reflectance and reradiation distribution function (BRRDF), for which we introduce an image-based measurement setup as well as an efficient acquisition scheme. Finally, we envision a computer display that shows materials instead of colours, and present a prototypical device that can exhibit anisotropic reflectance distributions similar to common models in computer graphics.

    In computer graphics and computer vision, it is essential to make simplifying assumptions about the propagation of light. In this dissertation we present three cases in which these assumptions do not hold. The three-dimensional geometry of objects is often measured with laser scanners under the assumption that their surfaces reflect diffusely. Most materials do not satisfy this assumption, so the results are often corrupted. By embedding the object in a fluorescent medium, a classical 3D scanning setup can be modified to deliver reliable geometry data for objects made of the most diverse materials, including glass. Accurately reproducing the appearance of materials is also important for photorealistic image synthesis. Again we turn to fluorescence, this time to its characteristic appearance, which has so far received little attention in computer graphics. We present an image-based setup for measuring the angle- and wavelength-dependent reflectance of fluorescent surfaces, together with a strategy for carrying out such measurements efficiently. Finally, we pursue the idea of dynamically displaying not only colours but also materials, whose appearance varies with the directions of illumination and observation. A general description of the problem is followed by its concrete realization in the form of two prototypes that can display different reflectance distributions on a surface.
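    The BRRDF defined above generalizes the ordinary BRDF by coupling the incident and reradiated wavelengths, so outgoing radiance integrates over incident directions and incident wavelengths. A sketch of the corresponding reflection-and-reradiation integral (the symbols here are chosen for illustration, not taken from the thesis):

```latex
% Outgoing radiance at wavelength \lambda_o collects light arriving from
% all directions \omega_i at all incident wavelengths \lambda_i:
L_o(\omega_o, \lambda_o) = \int_{\Lambda} \int_{\Omega}
    f_r(\omega_i, \omega_o, \lambda_i, \lambda_o)\,
    L_i(\omega_i, \lambda_i)\, \cos\theta_i\,
    \mathrm{d}\omega_i\, \mathrm{d}\lambda_i
```

    For a non-fluorescent material, f_r is nonzero only on the diagonal λ_i = λ_o, and the expression collapses to the familiar per-wavelength reflection integral.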

    Reconstruction and rendering of time-varying natural phenomena

    While computer performance increases and computer-generated images get ever more realistic, the need for modeling computer graphics content is becoming stronger. To achieve photo-realism, detailed scenes have to be modeled, often with a significant amount of manual labour. Interdisciplinary research combining the fields of computer graphics, computer vision and scientific computing has led to the development of (semi-)automatic modeling tools freeing the user of labour-intensive modeling tasks. The modeling of animated content is especially challenging. Realistic motion is necessary to convince the audience of computer games, movies with mixed-reality content and augmented-reality applications. The goal of this thesis is to investigate automated modeling techniques for time-varying natural phenomena. The results of the presented methods are animated, three-dimensional computer models of fire, smoke and fluid flows.

    The increasing computational power of modern computers makes it possible to generate ever more realistic images. This creates a growing demand for modeling work to describe the required objects virtually. To produce photorealistic images, very detailed scenes must be modeled, often in tedious manual work. An interdisciplinary research field combining computer graphics, image processing and scientific computing has in recent years driven the development of (semi-)automatic methods for modeling computer graphics content. Modeling dynamic content is a particularly demanding task, since realistic motion is essential for a convincing presentation of computer graphics content in films, computer games or augmented-reality applications. The goal of this thesis is to develop automatic modeling methods for dynamic natural phenomena such as water flow, fire, smoke and the motion of heated air. The results of the developed methods are dynamic, three-dimensional computer graphics models.