
    Many-Light Real-Time Global Illumination using Sparse Voxel Octree

    Global illumination (GI) rendering simulates the propagation of light through a 3D volume and its interaction with surfaces, dramatically increasing the fidelity of computer-generated images. While off-line GI algorithms such as ray tracing and radiosity can generate physically accurate images, their rendering speeds are too slow for real-time applications. The many-light method is one of several emerging real-time GI algorithms, but it requires many shadow maps to be generated for Virtual Point Light (VPL) visibility tests, which reduces its efficiency. Prior solutions restrict either the number or the accuracy of shadow map updates, which may lower the accuracy of indirect illumination or prevent the rendering of fully dynamic scenes. In this thesis, we propose a hybrid real-time GI algorithm that replaces the shadow map generation step of the many-light algorithm with an efficient Sparse Voxel Octree (SVO) ray marching algorithm for visibility tests. Our technique achieves high rendering fidelity at about 50 FPS, is highly scalable, and can support thousands of VPLs generated on the fly. A survey of current real-time GI techniques as well as details of our implementation using OpenGL and Shader Model 5 are also presented.
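The core idea of the abstract above, replacing per-VPL shadow maps with an SVO ray march, can be sketched on the CPU as follows. This is an illustrative sketch only: the thesis implementation runs in GPU shaders over a linearized octree, and all names and the octree depth here are hypothetical.

```python
# Illustrative sketch: VPL visibility via ray marching through a sparse
# voxel octree over the unit cube, instead of rendering a shadow map.
import numpy as np

class SVONode:
    """Octree node: empty, a solid leaf, or subdivided into 8 children."""
    def __init__(self):
        self.children = None   # list of 8 SVONode, or None
        self.occupied = False  # True if any geometry below this node

def _child(p):
    """Octant index of point p in [0,1)^3, plus p remapped into that octant."""
    idx, q = 0, np.empty(3)
    for a in range(3):
        half = p[a] >= 0.5
        idx |= int(half) << a
        q[a] = p[a] * 2 - (1.0 if half else 0.0)
    return idx, q

def insert_point(node, p, depth, max_depth=4):
    """Mark the leaf voxel containing p as occupied (builds the SVO)."""
    node.occupied = True
    if depth == max_depth:
        return
    if node.children is None:
        node.children = [SVONode() for _ in range(8)]
    idx, q = _child(p)
    insert_point(node.children[idx], q, depth + 1, max_depth)

def occupied_at(node, p, depth, max_depth=4):
    """Descend the octree; early-out on empty subtrees."""
    if not node.occupied:
        return False
    if depth == max_depth or node.children is None:
        return node.occupied
    idx, q = _child(p)
    return occupied_at(node.children[idx], q, depth + 1, max_depth)

def vpl_visible(root, surface_pt, vpl_pos, steps=64):
    """March from the shaded point toward the VPL; any occupied voxel
    along the segment means the VPL is occluded."""
    surface_pt, vpl_pos = np.asarray(surface_pt), np.asarray(vpl_pos)
    for i in range(1, steps):
        p = surface_pt + (i / steps) * (vpl_pos - surface_pt)
        if np.all(p >= 0) and np.all(p < 1) and occupied_at(root, p, 0):
            return False
    return True
```

A blocker voxel inserted between a surface point and a VPL makes `vpl_visible` return `False`, while an unobstructed segment passes; on the GPU the same loop runs per fragment per VPL.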

    Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect

    Recently, the new Kinect One has been released by Microsoft, providing the next generation of real-time range sensing devices based on the Time-of-Flight (ToF) principle. Since the first Kinect version used a structured light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed and in-depth comparison between both devices. To conduct the comparison, we propose a framework of seven different experimental setups, which forms a generic basis for evaluating range cameras such as the Kinect. The experiments have been designed to capture the individual effects of the Kinect devices in as isolated a manner as possible, and in such a way that they can be adapted to any other range sensing device. The overall goal of this paper is to provide a solid insight into the pros and cons of either device, so that scientists interested in using Kinect range sensing cameras in their specific application scenario can directly assess the expected benefits and potential problems of either device. Comment: 58 pages, 23 figures. Accepted for publication in Computer Vision and Image Understanding (CVIU).

    A directional occlusion shading model for interactive direct volume rendering

    Volumetric rendering is widely used to examine 3D scalar fields from CT/MRI scanners and numerical simulation datasets. One key aspect of volumetric rendering is the ability to provide perceptual cues that aid in understanding the structure contained in the data. While shading models that reproduce natural lighting conditions have been shown to better convey depth information and spatial relationships, they traditionally require considerable (pre)computation. In this paper, a shading model for interactive direct volume rendering is proposed that provides perceptual cues similar to those of ambient occlusion, for both solid and transparent surface-like features. An image-space occlusion factor is derived from the radiative transport equation based on a specialized phase function. The method does not rely on any precomputation and thus allows for interactive exploration of volumetric data sets via on-the-fly editing of the shading model parameters or (multi-dimensional) transfer functions, while modifications to the volume via clipping planes are incorporated into the resulting occlusion-based shading.
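The flavor of such an image-space occlusion factor can be illustrated with a slice-based sketch: during front-to-back compositing, each slice is darkened by the blurred opacity accumulated in front of it. This is not the paper's exact formulation; the box blur here merely stands in for the cone-shaped phase-function kernel, and all names are illustrative.

```python
# Sketch: slice-based front-to-back compositing with an image-space
# occlusion buffer that darkens features lying behind accumulated opacity.
import numpy as np

def blur(buf, radius=1):
    """Box blur standing in for the cone-shaped phase-function kernel."""
    out = np.copy(buf)
    for _ in range(radius):
        out = (out
               + np.roll(out, 1, 0) + np.roll(out, -1, 0)
               + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5.0
    return out

def render_with_occlusion(alpha_slices, color_slices):
    """Composite slices front to back; each slice's color is attenuated
    by the blurred opacity accumulated in front of it."""
    h, w = alpha_slices[0].shape
    occlusion = np.zeros((h, w))     # accumulated, blurred opacity
    out_color = np.zeros((h, w))
    out_alpha = np.zeros((h, w))
    for a, c in zip(alpha_slices, color_slices):
        lit = c * (1.0 - occlusion)              # darker where occluded
        out_color += (1.0 - out_alpha) * a * lit # front-to-back "over"
        out_alpha += (1.0 - out_alpha) * a
        occlusion = np.clip(blur(occlusion + a), 0.0, 1.0)
    return out_color
```

With a semi-opaque patch in the front slice, pixels of the back slice behind the patch come out darker than unoccluded ones, which is the depth cue the abstract describes.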

    The delta radiance field

    The wide availability of mobile devices capable of computing high-fidelity graphics in real time has sparked renewed interest in the development and research of Augmented Reality applications. Within the large spectrum of mixed real and virtual elements, one specific area is dedicated to producing realistic augmentations with the aim of presenting virtual copies of real existing objects or soon-to-be-produced products. Surprisingly, the current state of this area leaves much to be desired: augmented objects in current systems are often presented without any reconstructed lighting whatsoever and therefore give the impression of being glued over a camera image rather than augmenting reality. In light of the advances in the movie industry, which has handled cases of mixed realities from one extreme to the other, it is legitimate to ask why such advances have not fully carried over to Augmented Reality simulations as well. Generally understood to be real-time applications that reconstruct the spatial relation of real-world elements and virtual objects, Augmented Reality has to deal with several uncertainties, among which unknown illumination and real scene conditions are the most important. Any reconstruction of real-world properties performed in an ad-hoc manner must likewise be incorporated, in an ad-hoc fashion, into an algorithm responsible for shading virtual objects and transferring virtual light to real surfaces. The immersiveness of an Augmented Reality simulation is, next to its realism and accuracy, primarily dependent on its responsiveness: any computation affecting the final image must be performed in real time. This condition rules out many of the methods used for movie production.
The remaining real-time options face three problems: the shading of virtual surfaces under real natural illumination, the relighting of real surfaces according to the change in illumination caused by introducing a new object into the scene, and the believable global interaction of real and virtual light. This dissertation presents contributions that address these problems. Current state-of-the-art methods build on Differential Rendering techniques to fuse global illumination algorithms into AR environments. This simple approach has a computationally costly downside, which limits the options for believable light transfer even further. This dissertation explores new shading and relighting algorithms built on a mathematical foundation that replaces Differential Rendering. The result is not only a more efficient competitor to the current state of the art in global illumination relighting, but also advances the field with the ability to simulate effects that had not been demonstrated in contemporary publications.
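The Differential Rendering baseline that the dissertation sets out to replace can be sketched in a few lines: two global-illumination renders of the reconstructed scene, with and without the virtual object, are differenced and added to the camera image, so real surfaces pick up virtual shadows and bounce light. The function names are illustrative.

```python
# Sketch of the standard Differential Rendering composite:
#   final = camera + (render_with_object - render_without_object),
# with the virtual object's own pixels taken directly from render_with.
import numpy as np

def differential_composite(camera, render_with, render_without, obj_mask):
    """camera, render_with, render_without: float images in [0, 1];
    obj_mask: 1 where the virtual object covers the pixel."""
    delta = render_with - render_without        # change caused by the object
    relit = np.clip(camera + delta, 0.0, 1.0)   # real pixels, relit
    return np.where(obj_mask > 0, render_with, relit)
```

The computational cost the abstract alludes to is visible here: every frame needs two full GI solutions of the reconstructed scene before the cheap composite can run.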

    Artistic Path Space Editing of Physically Based Light Transport

    Generating realistic images is a central goal of computer graphics, with applications in the film industry, architecture, and medicine, among others. Physically based image synthesis, which has recently found broad acceptance across applications, relies on the numerical simulation of light transport along propagation paths prescribed by geometric optics, a model that suffices to achieve photorealism for typical scenes. Overall, the computer-assisted creation of images and animations with well-designed and theoretically well-founded shading has become much simpler. In practice, however, attention to details such as the structure of the output device is also important, and the subproblem of efficient physically based image synthesis in participating media, for example, is still far from being considered solved. Furthermore, image synthesis must be seen as part of a wider context: the effective communication of ideas and information. Whether it is the form and function of a building, the medical visualization of a CT scan, or the mood of a film sequence, messages in the form of digital images are omnipresent today. Unfortunately, the spread of the simulation-oriented methodology of physically based image synthesis has generally led to a loss of the intuitive, finely crafted, and local artistic control over the final image content that was available in earlier, less strict paradigms. The contributions of this dissertation cover different aspects of image synthesis: fundamental sub-pixel image synthesis as well as efficient rendering methods for participating media.
At the core of the work, however, are approaches to the effective visual understanding of light transport that enable local artistic intervention while achieving globally consistent and plausible results. The key idea is to visualize and edit light directly in "path space", the space of all possible light paths. This contrasts with state-of-the-art methods that either operate in image space or are tailored to specific, isolated lighting effects such as perfect mirror reflections, shadows, or caustics. Evaluation of the presented methods has shown that they solve real-world image-generation problems arising in film production.

    Polarization imaging reflectometry in the wild

    We present a novel approach for on-site acquisition of surface reflectance for planar, spatially varying, isotropic materials in uncontrolled outdoor environments. Our method exploits the naturally occurring linear polarization of incident illumination: by rotating a linear polarizing filter in front of a camera to 3 different orientations, we measure the linear polarization reflected off the sample and combine this information with multiview analysis and inverse rendering in order to recover per-pixel, high resolution reflectance maps. We exploit polarization both for diffuse/specular separation and for surface normal estimation by combining polarization measurements from at least two near-orthogonal views close to the Brewster angle of incidence. We then use our estimates of surface normals and albedos in an inverse rendering framework to recover specular roughness. To the best of our knowledge, our method is the first to successfully extract a complete set of reflectance parameters with passive capture in completely uncontrolled outdoor environments.
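The three-orientation measurement step can be sketched per pixel as follows, under the common polarization-imaging assumption (which this sketch adopts, not necessarily the paper's exact model) that the specular reflection is linearly polarized while the diffuse reflection is unpolarized. Intensity through the filter follows a sinusoid I(θ) = a + b·cos 2θ + c·sin 2θ, which three samples at 0°, 45°, and 90° determine exactly.

```python
# Sketch: per-pixel diffuse/specular separation from three captures
# with a linear polarizer at 0, 45 and 90 degrees.
import numpy as np

def separate(i0, i45, i90):
    """Return (diffuse, specular) intensities from the three captures."""
    a = (i0 + i90) / 2.0           # mean intensity over filter angle
    b = (i0 - i90) / 2.0           # cos(2*theta) coefficient
    c = i45 - a                    # sin(2*theta) coefficient
    amp = np.sqrt(b * b + c * c)   # amplitude of the polarized sinusoid
    i_min, i_max = a - amp, a + amp
    diffuse = 2.0 * i_min          # unpolarized part passes half the filter
    specular = i_max - i_min       # fully polarized part
    return diffuse, specular
```

Feeding the sketch synthetic measurements I(θ) = D/2 + S·cos²(θ − φ) recovers D and S exactly, independent of the polarization phase φ.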

    Daylight simulation: validation, sky models and daylight coefficients

    The application of lighting simulation techniques for daylight illuminance modelling in architectural spaces is described in this thesis. The prediction tool used for all the work described here is the Radiance lighting simulation system. An overview of the features and capabilities of the Radiance system is presented. Daylight simulation using the Radiance system is described in some detail. The relation between physical quantities and the lighting simulation parameters is made clear in a series of progressively more complex examples. Effective use of the interreflection calculation is described. The illuminance calculation is validated under real sky conditions for a full-size office space. The simulation model used sky luminance patterns that were based directly on measurements. Internal illuminance predictions are compared with measurements for 754 skies that cover a wide range of naturally occurring conditions. The processing of the sky luminance measurements for the lighting simulation is described. The accuracy of the illuminance predictions is shown to be, in the main, comparable with the accuracy of the model input data. There were a number of predictions with low accuracy. Evidence is presented to show that these result from imprecision in the model specification, such as uncertainty in the circumsolar luminance, rather than from the prediction algorithms themselves. Procedures to visualise and reduce illuminance and lighting-related data are presented. The ability of sky models to reproduce measured sky luminance patterns for the purpose of predicting internal illuminance is investigated. Four sky models and two sky model blends are assessed. Predictions of internal illuminance using sky models/blends are compared against those using measured sky luminance patterns. The sky model blends and the Perez All-weather model are shown to perform comparably well.
Illuminance predictions using measured skies however were invariably better than those using sky models/blends. Several formulations of the daylight coefficient approach for predicting time-varying illuminances are presented. Radiance is used to predict the daylight coefficients from which internal illuminances are derived. The form and magnitude of the daylight coefficients are related to the scene geometry and the discretisation scheme. Internal illuminances are derived for four daylight coefficient formulations based on the measured luminance patterns for the 754 skies. For the best of the formulations, the accuracy of the daylight coefficient derived illuminances is shown to be comparable to that using the standard Radiance calculation method. The use of the daylight coefficient approach to both accurately and efficiently predict hourly internal daylight illuminance levels for an entire year is described. Daylight coefficients are invariant to building orientation for a fixed building configuration. This property of daylight coefficients is exploited to yield hourly internal illuminances for a full year as a function of building orientation. Visual data analysis techniques are used to display and process the massive number of derived illuminances.
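The daylight coefficient idea described above reduces to a weighted sum once the coefficients have been computed: each sky patch i contributes D_i · L_i · ΔΩ_i to the internal illuminance, and building orientation becomes a re-indexing of the sky patches. A minimal sketch, with an assumed (altitude × azimuth) patch grid and illustrative names:

```python
# Sketch: internal illuminance from precomputed daylight coefficients,
# E = sum_i D_i * L_i * dOmega_i, plus azimuthal rotation of the sky
# to emulate a rotated building with fixed coefficients.
import numpy as np

def internal_illuminance(coeffs, patch_luminance, patch_solid_angle):
    """E (lux) from per-patch coefficients D_i, luminances L_i (cd/m^2)
    and patch solid angles dOmega_i (sr)."""
    return float(np.sum(coeffs * patch_luminance * patch_solid_angle))

def rotate_sky(patch_luminance, n_azimuth, steps):
    """Rotate a flattened (n_alt, n_az) patch grid in azimuth; rotating
    the sky by one step equals rotating the building the other way."""
    grid = patch_luminance.reshape(-1, n_azimuth)
    return np.roll(grid, steps, axis=1).reshape(-1)
```

The expensive Radiance runs happen once per building configuration to obtain `coeffs`; evaluating a full year of measured skies, or all orientations, is then just repeated dot products.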

    Characteristics of large Martian dust devils using Mars Odyssey Thermal Emission Imaging System visual and infrared images

    A search for Martian dust devils has been carried out, using Mars Odyssey Thermal Emission Imaging System (THEMIS) visible-wavelength images. Simultaneous THEMIS thermal infrared wavelength images were then processed and analyzed to investigate the thermal properties of the dust devils observed; 3079 images were checked, concentrating on northern spring, summer, and autumn (Ls from 0° to 270°, 20°S to 50°N). Mars Express High Resolution Stereo Camera, Mars Global Surveyor Mars Orbiter Camera, and other THEMIS visible images were used for comparison to potentially rule out any ambiguous geological features. Eight clear examples of dust devils have been found in five separate images, with a comparable number of unconfirmed possible devils. The rarity of dust devils observed is believed to result from a combination of the difficulty in identifying dust devils in medium resolution THEMIS data and the fact that the Mars Odyssey orbit flyover local time is later in the afternoon than would be optimum for dust devil searching. The temporal distribution of dust devil activity appears to be weighted more toward later afternoon, compared to Earth, but this may be a sampling effect due to size variation with time of sol, greater coverage later in the sol, or the small-number statistics. The thermal infrared images indicate that the lofted dust in the column is cooler than the surrounding surface and must be equilibrating with the atmosphere in the dust devil. This energy transfer is estimated to be about 10% of the heat flux energy that is available to drive the systems. The ground shadowed by the dust column also appears colder than the surroundings, because of reduced solar illumination. From the visible-wavelength images, the shadows of the dust columns were used to estimate the column opacity, which in turn gave estimates of the dust loadings, which ranged from 1.9 × 10⁻⁵ to 1.5 × 10⁻⁴ kg m⁻³, similar to lander-based observations.
No thermal or visible trails are associated with the dust devils, indicating that the surface equilibrates quickly after the devil has passed and that track counting as a dust devil survey technique must underestimate dust devil populations and consequently dust loading calculations, confirming previous work.
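The shadow-based opacity estimate mentioned above follows a simple chain: the brightness drop inside the column's shadow gives an optical depth via Beer-Lambert, and an assumed dust extinction coefficient converts optical depth to mass loading. The sketch below is illustrative only; the extinction value `kappa_m2_per_kg` is a placeholder assumption, not the paper's.

```python
# Illustrative sketch: dust loading of a devil's column from the
# shadow/ground radiance ratio, via tau = -ln(I_shadow / I_ground)
# and loading = tau / (kappa * column_width).
import math

def dust_loading(i_shadow, i_ground, column_width_m,
                 kappa_m2_per_kg=3000.0):
    """Column dust loading (kg m^-3). kappa is the dust mass extinction
    coefficient (m^2/kg) -- an assumed value here, not a measured one."""
    tau = -math.log(i_shadow / i_ground)   # optical depth of the column
    return tau / (kappa_m2_per_kg * column_width_m)
```

For example, a shadow at e^-0.3 of the unshadowed ground brightness through a 100 m wide column yields a loading of τ/(κ·w) = 0.3/300000 = 10⁻⁶ kg m⁻³ under these assumed values.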