4 research outputs found

    Theory and algorithms for efficient physically-based illumination

    Realistic image synthesis is one of the central fields of study within computer graphics. This thesis treats efficient methods for simulating light transport in situations where the incident illumination is produced by non-pointlike area light sources and by distant illumination described by environment maps. We describe novel theory and algorithms for physically-based lighting computations, and expose the design choices and tradeoffs on which the techniques are based.

    Two publications included in this thesis deal with precomputed light transport. These techniques produce interactive renderings of static scenes under dynamic illumination with full global illumination effects. This is achieved by sacrificing the ability to freely deform and move the objects in the scene. We present a comprehensive mathematical framework for precomputed light transport. The framework, which is given as an abstract operator equation that extends the well-known rendering equation, encompasses a significant amount of prior work as its special cases. We also present a particular method for rendering objects in low-frequency lighting environments, where increased efficiency is gained through the use of compactly supported function bases.

    Physically-based shadows from area and environmental light sources are an important factor in perceived image realism. We present two algorithms for shadow computation. The first technique computes shadows cast by low-frequency environmental illumination on animated objects at interactive rates, without requiring difficult precomputation or a priori knowledge of the animations. Here the capability to animate is gained by forfeiting indirect illumination. The second algorithm, a novel shadow technique for off-line rendering, significantly enhances a previous physically-based soft shadow method by introducing an improved spatial hierarchy that reduces redundant computation at the cost of using more memory.

    This thesis advances the state of the art in realistic image synthesis by introducing several algorithms that are more efficient than their predecessors. Furthermore, the theoretical contributions should enable the transfer of ideas from one particular application to others through abstract generalization of the underlying mathematical concepts.

    This thesis addresses the synthesis of realistic computer images in situations where the light sources of the virtual environment are physically meaningful. Physical meaningfulness here means that the light sources are not idealized, i.e., pointlike, but are either ordinary area lights or distant environment light fields (environment maps). The thesis presents new algorithms suited to computing mathematically justified illumination approximations in a variety of usage situations.

    Precomputed light transport is a general term for real-time methods that produce images of static environments in which the illumination may change freely at run time within predetermined limits. This work presents a comprehensive mathematical framework for precomputed light transport that explains a large body of prior research as its special cases. The framework is given in the form of an abstract linear operator equation, and it generalizes the well-known rendering equation. The work also presents a precomputed light transport algorithm that improves on the efficiency of earlier comparable methods by representing the illumination in a function basis whose properties substantially reduce the run-time computation.

    Physically meaningful light sources produce soft-edged shadows. The work presents a new algorithm for computing soft shadows for moving and deforming objects illuminated by a low-frequency environment light field. Unlike most earlier methods, the algorithm requires no prior knowledge of how an object may deform at run time. Because of the heavy computational load caused by the deformation, indirect illumination is not taken into account. The work also presents another new soft shadow algorithm, in which the efficiency of an earlier technique based on shadow volumes is improved significantly by means of a new hierarchical spatial search structure. The new structure reduces irrelevant computation at the cost of memory consumption.

    The thesis thus presents algorithms for computing physically-based illumination that are more efficient than their predecessors. In addition, the theoretical results on precomputed light transport generalize a large body of prior research and thereby enable the transfer of ideas from one specialty to another.
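    For orientation, the rendering equation that the operator framework above generalizes is conventionally written as (standard textbook notation, not necessarily the thesis's own):

        L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (n \cdot \omega_i)\, \mathrm{d}\omega_i

    In operator form this reads L = L_e + T L, with solution L = (I - T)^{-1} L_e, which is linear in the emitted lighting. Precomputed light transport exploits this linearity: for a static scene, the linear map from source lighting to outgoing radiance can be precomputed once and then applied at run time to freely changing illumination.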

    Spatial integration in computer-augmented realities

    In contrast to virtual reality, which immerses the user in a wholly computer-generated perceptual environment, augmented reality systems superimpose virtual entities on the user's view of the real world. This concept promises to enable new applications in a wide range of fields, but some challenging issues remain to be resolved. One issue is achieving accurate registration of the virtual and real worlds: accurate spatial registration is required not only in lateral positioning but also in depth. A limiting problem with existing optical see-through displays, typically used for augmenting reality, is that they are incapable of displaying a full range of depth cues. Most significantly, they are unable to occlude the real background and hence cannot produce interposition depth cueing. Neither are they able to modify the real-world view in the ways required to produce convincing common illumination effects, such as virtual shadows across real surfaces. In addition, at present there are no wholly satisfactory ways of establishing suitable common illumination models with which to determine the real-virtual light interactions necessary for producing such depth cues.

    This thesis establishes that interposition is essential for appropriate estimation of depth in augmented realities, and that the presence of shadows provides an important refining cue. It also extends the concept of a transparency alpha-channel to allow optical see-through systems to display appropriate depth cues. The generalised theory of the approach is described mathematically, and algorithms are developed to automate the generation of display-surface images. Three practical physical display strategies are presented: a transmissive mask, selective lighting using digital projection, and selective reflection using digital micromirror devices. With respect to obtaining a common illumination model, all current approaches either require prior knowledge of the light sources illuminating the real scene or involve inserting some kind of probe into the scene with which to determine real light-source position, shape, and intensity. This thesis presents an alternative approach that infers a plausible illumination from a limited view of the scene.
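    As a simplified illustration of the occlusion problem described above (generic symbols, not the thesis's notation): a conventional optical see-through display only adds light, so the eye receives approximately

        L_{eye}(x) = L_{real}(x) + L_{display}(x)

    and a virtual object can never darken or hide the real background behind it. Extending the display with a per-pixel transmission term t(x), in the spirit of a transparency alpha-channel, gives

        L_{eye}(x) = t(x)\, L_{real}(x) + L_{display}(x), \qquad 0 \le t(x) \le 1

    so that t(x) \approx 0 where a virtual object lies in front of the real scene produces interposition, while intermediate values allow attenuation effects such as virtual shadows falling across real surfaces. The three physical display strategies listed above can be read as different ways of realizing the t(x) term.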

    Towards Predictive Rendering in Virtual Reality

    Generating predictive images, i.e., images that represent radiometrically correct renditions of reality, has been a longstanding goal in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users must make decisions impacting large investments based on the simulated images. Unfortunately, the generation of predictive imagery remains an unsolved problem for several reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis contributes to that task.

    The thesis first briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Existing techniques targeting these steps are then presented and their limitations pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling through efficient data representation to high-quality, real-time rendering.

    A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations of spatially varying surface materials. The proposed techniques enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying BTFs to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, the thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming the problems that remain to be solved to achieve truly predictive image generation.
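    As a rough sketch of the factorization-style data reduction commonly applied to BTFs (illustrative only; the compression schemes proposed in the thesis are not reproduced here), the measured data can be arranged as a matrix with one row per texel and one column per sampled (view, light) direction pair and approximated by a truncated SVD:

        import numpy as np

        # Illustrative sizes only: T texels, D sampled (view, light) direction pairs.
        T, D, RANK = 1024, 2048, 8

        # btf[t, d] = measured reflectance of texel t under direction pair d.
        # Random stand-in here; real BTF data is strongly correlated, so a small
        # RANK captures it far better than it captures this noise.
        btf = np.random.rand(T, D).astype(np.float32)

        # Truncated SVD of the mean-subtracted data: keep only the first RANK modes.
        mean = btf.mean(axis=0, keepdims=True)
        u, s, vt = np.linalg.svd(btf - mean, full_matrices=False)
        weights = u[:, :RANK] * s[:RANK]   # per-texel coefficients, shape (T, RANK)
        basis = vt[:RANK]                  # direction-dependent basis, shape (RANK, D)

        # A texel's appearance for direction pair d is reconstructed from only RANK
        # coefficients: mean[0, d] + weights[t] @ basis[:, d].
        approx = mean + weights @ basis
        print("storage ratio :", (weights.size + basis.size + mean.size) / btf.size)
        print("relative error:", np.linalg.norm(btf - approx) / np.linalg.norm(btf))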

    NSB 2023 - Book of Technical Papers - 13th Nordic Symposium on Building Physics
