
    Real-time 3D rendering of water using CUDA

    This thesis addresses the real-time simulation of 3D water on both the CPU and the GPU. The stable fluids method is extended to 3D and implemented on both platforms. The GPU-based implementation uses NVIDIA's Compute Unified Device Architecture (CUDA) API. The stable fluids method requires an iterative sparse linear system solver; therefore, three solvers were implemented on both CPU and GPU, namely Jacobi, Gauss-Seidel, and Conjugate Gradient solvers. Rendering of the water (or its velocities), the moving obstacles, the static obstacles, and the world is done using Vertex Buffer Objects (VBOs). The CPU-based version uses standard OpenGL VBOs, while the GPU-based version uses both OpenGL-CUDA interoperability VBOs and standard OpenGL VBOs.
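
    As a point of reference for the kind of GPU solver the thesis describes, the sketch below shows one Jacobi relaxation step for the pressure Poisson equation used in the stable fluids projection stage, written as a CUDA kernel. The flat grid layout, the kernel name, and the handling of boundary cells are illustrative assumptions, not the thesis's actual code.

```cuda
// Minimal sketch (assumed layout): one Jacobi step on an N x N x N grid with
// unit cell spacing. Each thread updates one interior cell from its six
// neighbours in the previous iterate, which is why Jacobi parallelizes so
// cleanly on the GPU compared to Gauss-Seidel.
__device__ __forceinline__ int cellIndex(int x, int y, int z, int N) {
    return x + N * (y + N * z);
}

__global__ void jacobiPressureStep(const float* p_in, const float* divergence,
                                   float* p_out, int N) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    int z = blockIdx.z * blockDim.z + threadIdx.z;
    if (x <= 0 || y <= 0 || z <= 0 || x >= N - 1 || y >= N - 1 || z >= N - 1)
        return;                             // boundary cells handled elsewhere

    float sum = p_in[cellIndex(x - 1, y, z, N)] + p_in[cellIndex(x + 1, y, z, N)]
              + p_in[cellIndex(x, y - 1, z, N)] + p_in[cellIndex(x, y + 1, z, N)]
              + p_in[cellIndex(x, y, z - 1, N)] + p_in[cellIndex(x, y, z + 1, N)];
    // New pressure depends only on old neighbour values, so all cells update
    // independently; Conjugate Gradient would add global reductions on top.
    p_out[cellIndex(x, y, z, N)] =
        (sum - divergence[cellIndex(x, y, z, N)]) / 6.0f;
}
```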

    Output-Sensitive Rendering of Detailed Animated Characters for Crowd Simulation

    High-quality, detailed animated characters are often represented as textured polygonal meshes. The problem with this technique is the high cost of rendering and animating each of these characters, which has become a major limiting factor in crowd simulation. Since we want to render a huge number of characters in real time, the purpose of this thesis is to study existing approaches to crowd rendering and to derive a novel approach from them. The main limitations we have found when using impostors are (1) the large amount of memory needed to store them, which also has to be sent to the graphics card, (2) the lack of visual quality in close-up views, and (3) some visibility problems. To overcome these limitations and improve performance, we present a new representation for 3D animated characters based on relief mapping, thus supporting output-sensitive rendering. The basic idea of our approach is to encode each character through a small collection of textured boxes storing color and depth values. At runtime, each box is animated according to the rigid transformation of its associated bone in the animated skeleton, and a fragment shader recovers the original geometry using an adapted version of relief mapping. Unlike competing output-sensitive approaches, our compact representation is able to recover high-frequency surface details and reproduces view-motion parallax effects. Furthermore, the proposed approach ensures correct visibility among different animated parts, and it requires us neither to predefine the animation sequences nor to select a subset of discrete views. Finally, a user study demonstrates that our approach allows for a large number of simulated agents with negligible visual artifacts.
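
    The geometry-recovery step at the heart of this representation is a per-fragment ray search through a box's depth texture. A minimal sketch of that search (a linear march followed by binary refinement) is given below in CUDA-style device code; the thesis performs it in a fragment shader, and the depth-fetch helper, parameter names, and nearest-neighbour sampling here are assumptions for illustration.

```cuda
// Minimal relief-mapping intersection sketch in texture space:
// (x, y) are UV coordinates, z is normalized depth into the box.
__device__ float sampleDepth(const float* depthMap, int W, int H, float u, float v) {
    int x = min(max((int)(u * (W - 1)), 0), W - 1);
    int y = min(max((int)(v * (H - 1)), 0), H - 1);
    return depthMap[y * W + x];        // real shaders would filter the texture
}

// entry:    ray entry point on the box face (u, v, depth = 0)
// rayDirTS: texture-space ray direction scaled to span the full depth range
__device__ float3 reliefIntersect(const float* depthMap, int W, int H,
                                  float3 entry, float3 rayDirTS,
                                  int linearSteps, int binarySteps) {
    float3 p    = entry;
    float3 step = make_float3(rayDirTS.x / linearSteps,
                              rayDirTS.y / linearSteps,
                              rayDirTS.z / linearSteps);

    // Linear search: march until the ray falls below the stored surface depth.
    for (int i = 0; i < linearSteps; ++i) {
        if (p.z >= sampleDepth(depthMap, W, H, p.x, p.y)) break;
        p.x += step.x; p.y += step.y; p.z += step.z;
    }
    // Binary search: refine the hit point between the last two samples.
    for (int i = 0; i < binarySteps; ++i) {
        step.x *= 0.5f; step.y *= 0.5f; step.z *= 0.5f;
        bool below = p.z >= sampleDepth(depthMap, W, H, p.x, p.y);
        p.x += below ? -step.x : step.x;
        p.y += below ? -step.y : step.y;
        p.z += below ? -step.z : step.z;
    }
    return p;   // recovered texture-space hit point (u, v, depth)
}
```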

    Doctor of Philosophy in Computing

    The aim of direct volume rendering is to facilitate exploration and understanding of three-dimensional scalar fields, referred to as volume datasets. Understanding is improved by improving depth perception, whereas exploration is facilitated by speeding up volume rendering. In this dissertation, both improved depth perception and improved rendering speed are considered. The impact of depth of field (DoF) on depth perception in direct volume rendering is evaluated by conducting a user study in which the test subjects had to choose which of two features, located at different depths, appeared to be in front in a volume-rendered image. Whereas DoF was expected to improve perception in all cases, the user study revealed that applying DoF to the back feature reduced depth perception, whereas applying it to the front feature produced a marked improvement. We then worked on improving the speed of volume rendering on distributed-memory machines. Distributed volume rendering has three stages: loading, rendering, and compositing. This dissertation focuses on image compositing, more specifically on optimizing communication in image compositing algorithms. To that end, we developed the Task Overlapped Direct Send Tree image compositing algorithm, which works on both CPU- and GPU-accelerated supercomputers and focuses on communication avoidance and on overlapping communication with computation; the Dynamically Scheduled Region-Based image compositing algorithm, which uses spatial and temporal awareness to efficiently schedule communication among compositing nodes; and a rendering and compositing pipeline that allows both rendering and image compositing to be done on the GPUs of GPU-accelerated supercomputers. We tested these on CPU- and GPU-accelerated supercomputers and explain how these improvements yield better performance than image compositing algorithms that focus only on load balancing and algorithms that have no spatial and temporal awareness of the rendering and compositing stages.
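
    At the pixel level, the compositing algorithms mentioned above all reduce to blending partial images in depth order with the "over" operator; what they optimize is which node sends which image region to whom, and when. A minimal sketch of that per-pixel blend for premultiplied RGBA buffers is shown below; the buffer layout and kernel name are assumptions, not the dissertation's code.

```cuda
// Minimal sketch: blend a farther partial image under a nearer one using the
// front-to-back "over" operator on premultiplied RGBA (float4 per pixel).
__global__ void compositeOver(float4* accum,          // nearer partial image (in/out)
                              const float4* incoming, // farther partial image
                              int pixelCount) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= pixelCount) return;

    float4 front = accum[i];
    float4 back  = incoming[i];
    float  t     = 1.0f - front.w;  // remaining transparency of the front layer

    // The farther fragment only contributes through whatever the nearer
    // fragment has not yet made opaque.
    accum[i] = make_float4(front.x + t * back.x,
                           front.y + t * back.y,
                           front.z + t * back.z,
                           front.w + t * back.w);
}
```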

    Three-dimensional medical imaging: Algorithms and computer systems

    This paper presents an introduction to the field of three-dimensional medical imaging. It presents medical imaging terms and concepts, summarizes the basic operations performed in three-dimensional medical imaging, and describes sample algorithms for accomplishing these operations. The paper contains a synopsis of the architectures and algorithms used in eight machines to render three-dimensional medical images, with particular emphasis on their distinctive contributions. It compares the performance of the machines along several dimensions, including image resolution, elapsed time to form an image, imaging algorithms used in the machine, and the degree of parallelism used in the architecture. The paper concludes with general trends for future developments in this field and references on three-dimensional medical imaging.

    Coherent and Holographic Imaging Methods for Immersive Near-Eye Displays

    Near-eye displays are designed to provide realistic 3D viewing experiences, strongly demanded in applications such as remote machine operation, entertainment, and 3D design. However, contemporary near-eye displays still generate conflicting visual cues, which degrade the immersive experience and hinder their comfortable use. Approaches using coherent light, e.g., laser light, for display illumination are considered a prominent route for tackling current near-eye display deficiencies. In particular, coherent illumination enables holographic imaging, whereby holographic displays can accurately recreate the true light waves of a desired 3D scene. However, using coherent light to drive displays introduces additional high-contrast noise in the form of speckle patterns, which has to be dealt with. Furthermore, imaging methods for holographic displays are computationally demanding and pose new challenges in analysis, speckle noise, and light modelling. This thesis examines computational methods for near-eye displays in the coherent imaging regime using signal processing, machine learning, and geometrical (ray) and physical (wave) optics modelling. In the first part of the thesis, we concentrate on the analysis of holographic imaging modalities and develop corresponding computational methods. To tackle the high computational demands of holography, we adopt holographic stereograms as an approximate holographic data representation. We address the visual correctness of this representation by developing a framework for analyzing the accuracy of the accommodation cues provided by a holographic stereogram in relation to its design parameters. Additionally, we propose a signal processing solution for speckle noise reduction that overcomes issues in existing light-modelling methods that cause visual artefacts. We also develop a novel holographic imaging method that accurately models lighting effects in challenging conditions, such as mirror reflections. In the second part of the thesis, we approach the computational complexity of coherent display imaging through deep learning. We develop a coherent accommodation-invariant near-eye display framework that jointly optimizes static display optics and a display image pre-processing network. Finally, we accelerate the proposed holographic imaging method via deep learning for real-time applications. This includes developing an efficient procedure for generating functional random 3D scenes to form a large synthetic data set of multiperspective images, and training a neural network to approximate the holographic imaging method under real-time processing constraints. Altogether, the methods developed in this thesis are shown to be highly competitive with state-of-the-art computational methods for coherent-light near-eye displays. The results demonstrate two alternative approaches for resolving the existing near-eye display problems of conflicting visual cues, using either static or dynamic optics together with computational methods suitable for real-time use. The presented results are therefore instrumental for the next generation of immersive near-eye displays.
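
    For context on why holographic imaging is so computationally demanding, the sketch below shows the textbook point-source model of computer-generated holography, in which every scene point contributes a spherical wave to every hologram pixel. This brute-force model is not the thesis's method (the thesis instead relies on holographic stereograms and learned approximations); the data layout, pixel pitch, and wavelength parameters here are assumptions for illustration.

```cuda
// Minimal sketch: accumulate the complex field of a point cloud on a hologram
// plane at z = 0. Each point adds amplitude * exp(i * 2*pi * r / lambda).
// Single precision is shown for brevity; real pipelines track phase carefully.
struct ScenePoint { float x, y, z, amplitude; };

__global__ void accumulatePointSources(float2* hologram,      // complex field (re, im)
                                       const ScenePoint* pts, int numPoints,
                                       int width, int height,
                                       float pitch, float lambda) {
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= width || py >= height) return;

    // Hologram pixel position, centred on the optical axis.
    float hx = (px - 0.5f * width)  * pitch;
    float hy = (py - 0.5f * height) * pitch;

    float re = 0.0f, im = 0.0f;
    for (int k = 0; k < numPoints; ++k) {
        float dx = pts[k].x - hx, dy = pts[k].y - hy, dz = pts[k].z;
        float r  = sqrtf(dx * dx + dy * dy + dz * dz);   // point-to-pixel distance
        float phase = 2.0f * 3.14159265f * r / lambda;   // propagation phase
        re += pts[k].amplitude * cosf(phase);
        im += pts[k].amplitude * sinf(phase);
    }
    int i = py * width + px;
    hologram[i].x += re;   // every pixel visits every point: O(pixels * points)
    hologram[i].y += im;
}
```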

    Logarithmic perspective shadow maps

    The shadow map algorithm is a popular approach for generating shadows in real-time applications. Shadow maps are flexible and easy to implement, but they are prone to aliasing artifacts. To reduce aliasing artifacts we introduce logarithmic perspective shadow maps (LogPSMs). LogPSMs are based on a novel shadow map parameterization that consists of a perspective projection followed by a logarithmic transformation. They can be used for both point and directional light sources to produce hard shadows. To establish the benefits of LogPSMs, we perform an in-depth analysis of shadow map aliasing error and the error characteristics of existing algorithms. Using this analysis we compute a parameterization that produces near-optimal perspective aliasing error. This parameterization has high arithmetic complexity, which makes it less practical than existing methods. We show, however, that over all light positions the simpler LogPSM parameterization produces the same maximum error as the near-optimal parameterization. We also show that, compared with competing algorithms, LogPSMs produce significantly less aliasing error. Equivalently, for the same error as competing algorithms, LogPSMs require significantly less storage and bandwidth. We demonstrate the difference in shadow quality achieved with LogPSMs on several models of varying complexity. LogPSMs are rendered using logarithmic rasterization. We show how current GPU architectures can be modified incrementally to perform logarithmic rasterization at current GPU fill rates. Specifically, we modify the rasterizer to support rendering to a nonuniform grid with the same watertight rasterization properties as current rasterizers. We also describe a novel depth compression scheme to handle the nonlinear primitives produced by logarithmic rasterization. Our proposed architecture enhancements align with current trends of decreasing cost for on-chip computation relative to off-chip bandwidth and storage. For only a modest increase in computation, logarithmic rasterization can greatly reduce shadow map bandwidth and storage costs.
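
    To make the parameterization concrete, the sketch below shows the general flavour of the logarithmic warp: after the perspective projection into light space, one shadow-map coordinate is remapped logarithmically so that resolution follows the growth of perspective aliasing error with depth. The exact LogPSM parameterization in the dissertation differs; the function name and the near/far bounds here are assumptions for illustration.

```cuda
// Minimal sketch: logarithmically remap a post-perspective light-space
// coordinate y in [nearY, farY] to [0, 1]. Equal steps of the warped
// coordinate then cover exponentially growing spans of y, distributing
// shadow-map resolution more evenly across the view frustum's depth range.
__device__ inline float logWarp(float y, float nearY, float farY) {
    return logf(y / nearY) / logf(farY / nearY);
}
// A renderer would apply this warp to one coordinate of each vertex after the
// perspective part of the shadow-map transform, before rasterization.
```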