349 research outputs found

    Parallel √3-Subdivision with Animation in Consideration of Geometric Complexity

    Get PDF
    We look at the broader field of geometric subdivision and the emerging field of parallel computing for the purpose of creating higher visual fidelity at an efficient pace. Primarily, we present a parallel algorithm for √3-Subdivision. When considering animation, we find that it is possible to perform subdivision by providing only one variable input, with the rest treated as static. This reduces the amount of data transfer required to continually update a subdividing mesh. We can support recursive subdivision by applying the technique in passes. As a basis for analysis, we evaluate the performance of an OpenCL implementation that utilizes a local graphics processing unit (GPU) and a parallel CPU. By overcoming current hardware limitations, we present an environment where general GPU computation of √3-Subdivision can be practical.
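
    As a concrete illustration of the face split at the heart of √3-subdivision, the following serial C++ sketch replaces each triangle with three triangles sharing the face centroid. All names are ours; the edge flips, the old-vertex smoothing, and the paper's parallel OpenCL formulation are deliberately omitted.

```cpp
// Minimal serial sketch of the face-split step of sqrt(3)-subdivision:
// each triangle is replaced by three triangles sharing the face centroid.
#include <array>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
using Tri = std::array<int, 3>;

void sqrt3FaceSplit(std::vector<Vec3>& verts, std::vector<Tri>& tris) {
    std::vector<Tri> out;
    out.reserve(tris.size() * 3);
    for (const Tri& t : tris) {
        const Vec3 a = verts[t[0]], b = verts[t[1]], c = verts[t[2]];
        // The centroid becomes the single new vertex for this face.
        verts.push_back({(a.x + b.x + c.x) / 3.0f,
                         (a.y + b.y + c.y) / 3.0f,
                         (a.z + b.z + c.z) / 3.0f});
        int m = static_cast<int>(verts.size()) - 1;
        out.push_back(Tri{t[0], t[1], m});
        out.push_back(Tri{t[1], t[2], m});
        out.push_back(Tri{t[2], t[0], m});
    }
    tris = std::move(out);
}

int main() {
    std::vector<Vec3> verts = {{0, 0, 0}, {1, 0, 0}, {0, 1, 0}};
    std::vector<Tri> tris = {{0, 1, 2}};
    sqrt3FaceSplit(verts, tris);
    std::printf("%zu verts, %zu tris\n", verts.size(), tris.size());
}
```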

    AlSub: Fully Parallel and Modular Subdivision

    Full text link
    In recent years, mesh subdivision, the process of forging smooth free-form surfaces from coarse polygonal meshes, has become an indispensable production instrument. Although subdivision performance is crucial during simulation, animation and rendering, state-of-the-art approaches still rely on serial implementations for complex parts of the subdivision process. Therefore, they often fail to harness the power of modern parallel devices, like the graphics processing unit (GPU), for large parts of the algorithm and must resort to time-consuming serial preprocessing. In this paper, we show that a complete parallelization of the subdivision process for modern architectures is possible. Building on sparse matrix linear algebra, we show how to structure the complete subdivision process into a sequence of algebra operations. By restructuring and grouping these operations, we adapt the process for different use cases, such as regular subdivision of dynamic meshes, uniform subdivision for immutable topology, and feature-adaptive subdivision for efficient rendering of animated models. As the same machinery is used for all use cases, identical subdivision results are achieved in all parts of the production pipeline. As a second contribution, we show how these linear algebra formulations can effectively be translated into efficient GPU kernels. Applying our strategies to √3, Loop and Catmull-Clark subdivision shows significant speedups of our approach compared to state-of-the-art solutions, while we completely avoid serial preprocessing.
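
    The core algebraic idea can be illustrated with the following toy C++ sketch: one subdivision step expressed as a sparse matrix-vector product V' = S V, where each row of S holds the stencil weights of one refined vertex. The CSR layout and the midpoint stencil below are illustrative assumptions, not the paper's actual kernels.

```cpp
// One subdivision step as a sparse matrix-vector product V' = S * V.
#include <cstdio>
#include <vector>

struct Csr {
    std::vector<int> rowPtr, col;  // CSR sparsity pattern
    std::vector<float> val;        // stencil weights
};

// Apply S to each coordinate channel of an interleaved xyz vertex buffer.
std::vector<float> applySubdivision(const Csr& S, const std::vector<float>& V) {
    int rows = static_cast<int>(S.rowPtr.size()) - 1;
    std::vector<float> out(rows * 3, 0.0f);
    for (int r = 0; r < rows; ++r)
        for (int k = S.rowPtr[r]; k < S.rowPtr[r + 1]; ++k)
            for (int c = 0; c < 3; ++c)
                out[3 * r + c] += S.val[k] * V[3 * S.col[k] + c];
    return out;
}

int main() {
    // Toy stencil: keep two vertices and insert their edge midpoint.
    Csr S{{0, 1, 3, 4}, {0, 0, 1, 1}, {1.0f, 0.5f, 0.5f, 1.0f}};
    std::vector<float> V = {0, 0, 0, 2, 0, 0}; // two input vertices
    std::vector<float> V2 = applySubdivision(S, V);
    std::printf("midpoint x = %g\n", V2[3]); // expect 1
}
```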

    QAS: Real-time Quadratic Approximation of Subdivision Surfaces

    Get PDF
    We introduce QAS, an efficient quadratic approximation of subdivision surfaces which offers an appearance very close to the true subdivision surface but avoids recursion, providing rendering that is at least one order of magnitude faster. QAS uses enriched polygons, equipped with edge vertices, and replaces them on the fly with low-degree polynomials for interpolating positions and normals. By systematically projecting the vertices of the input coarse mesh to their limit positions on the subdivision surface, the visual quality of the approximation is good enough to require only a single subdivision step, followed by our patch fitting, which allows real-time performance for outputs of a million polygons. Additionally, the parametric nature of the approximation offers efficient adaptive sampling for rendering and displacement mapping. Last, the hexagonal support associated with each coarse triangle is well adapted to geometry processors.
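
    The kind of patch evaluation such an approximation relies on can be sketched as a quadratic Bézier triangle built from three corner vertices and three edge vertices, evaluated at barycentric coordinates. The control-point naming below is ours, and the paper's actual fitting of the control points to the limit surface is not shown.

```cpp
// Evaluate a quadratic Bezier triangle at barycentric coordinates (u, v, w):
// p = b200 u^2 + b020 v^2 + b002 w^2 + 2 b110 uv + 2 b011 vw + 2 b101 wu.
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

// b200, b020, b002: corners; b110, b011, b101: edge control points.
Vec3 evalQuadraticPatch(Vec3 b200, Vec3 b020, Vec3 b002,
                        Vec3 b110, Vec3 b011, Vec3 b101,
                        float u, float v) {
    float w = 1.0f - u - v;
    Vec3 p = scale(b200, u * u);
    p = add(p, scale(b020, v * v));
    p = add(p, scale(b002, w * w));
    p = add(p, scale(b110, 2 * u * v));
    p = add(p, scale(b011, 2 * v * w));
    p = add(p, scale(b101, 2 * w * u));
    return p;
}

int main() {
    Vec3 c0{0, 0, 0}, c1{1, 0, 0}, c2{0, 1, 0};
    Vec3 e01{0.5f, 0, 0.2f}, e12{0.5f, 0.5f, 0.2f}, e20{0, 0.5f, 0.2f};
    Vec3 p = evalQuadraticPatch(c0, c1, c2, e01, e12, e20, 1 / 3.0f, 1 / 3.0f);
    std::printf("patch center: %g %g %g\n", p.x, p.y, p.z);
}
```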

    Ptex system for advanced texture mapping

    Get PDF
    Ptex is a texture mapping method. Its main advantage is the elimination of the UV mapping process, achieved by storing a separate texture for each face. Ptex deals with the filtering across face boundaries, which makes all the little separate textures look seamless, as if they were one big texture. The Ptex file format can efficiently store hundreds of thousands of images in a single file. The Ptex API is released as open source and written in C++, which enables the implementation of Ptex in various programs. This paper analyses the advantages and drawbacks of the system and examines whether Ptex is usable today outside of big production studios. Our analysis leads to the conclusion that Ptex can already be used by individuals outside of big production studios.
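
    To make the per-face idea concrete, the sketch below uses a hypothetical data layout, not the actual Ptex C++ API: each face owns its own small texture and is addressed by (faceId, u, v) instead of coordinates in a global UV atlas. Real Ptex additionally filters across face boundaries so the seams stay invisible.

```cpp
// Hypothetical per-face texture storage and lookup (not the real Ptex API).
#include <cstdio>
#include <vector>

struct FaceTexture {
    int width, height;
    std::vector<float> texels; // RGB triples, row-major
};

// Nearest-neighbor lookup within one face's own texture. A real system
// would filter, including across neighboring faces' textures.
void sampleFace(const std::vector<FaceTexture>& faces,
                int faceId, float u, float v, float rgb[3]) {
    const FaceTexture& f = faces[faceId];
    int x = static_cast<int>(u * (f.width - 1) + 0.5f);
    int y = static_cast<int>(v * (f.height - 1) + 0.5f);
    const float* t = &f.texels[3 * (y * f.width + x)];
    rgb[0] = t[0]; rgb[1] = t[1]; rgb[2] = t[2];
}

int main() {
    // One face with a 2x2 texture; no UV atlas needed.
    std::vector<FaceTexture> faces = {
        {2, 2, {0, 0, 0,  1, 0, 0,  0, 1, 0,  1, 1, 0}}};
    float rgb[3];
    sampleFace(faces, 0, 1.0f, 0.0f, rgb);
    std::printf("%g %g %g\n", rgb[0], rgb[1], rgb[2]);
}
```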

    A parallel algorithm for construction of uniform grids

    Full text link

    Interactive simulation and rendering of fluids on graphics hardware

    Get PDF
    Computational fluid dynamics can be used to reproduce the complex motion of fluids for use in computer graphics, but the simulation and rendering are both highly computationally intensive. In the past, performing these tasks on the CPU could take many minutes per frame, especially for large scale scenes at high levels of detail, which limited their usage to offline applications such as in film and media. However, using the massive parallelism of GPUs, it is nowadays possible to produce fluid visual effects in real time for interactive applications such as games. We present such an interactive simulation using the CUDA GPU computing environment and the OpenGL graphics API. Smoothed Particle Hydrodynamics (SPH) is a popular particle-based fluid simulation technique that has been shown to be well suited to acceleration on the GPU. Our work extends an existing GPU-based SPH implementation by incorporating rigid body interaction and rendering. Solid objects are represented using particles to accumulate hydrodynamic forces from surrounding fluid, while motion and collision handling are handled by the Bullet Physics library on the CPU. Our system demonstrates two-way coupling with multiple objects floating, displacing fluid and colliding with each other. For rendering we compare the performance and memory consumption of two approaches, splatting and raycasting, and we describe the visual characteristics of each. In our evaluation we consider a target of between 24 and 30 fps to be sufficient for smooth interaction and aim to determine the performance impact of our new features. We begin by establishing a performance baseline and find that the original system runs smoothly up to 216,000 fluid particles, but after introducing rendering this drops to 27,000 particles, with the rendering taking up the majority of the frame time in both techniques. We find that the most significant limiting factor to splatting performance is the onscreen area occupied by fluid, while raycasting performance is primarily determined by the resolution of the 3D texture used for sampling. Finally, we find that performing solid interaction on the CPU is a viable approach that does not introduce significant overhead unless solid particles vastly outnumber fluid ones.
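
    The heart of any SPH solver of this kind is the per-particle density estimate ρ_i = Σ_j m_j W(|x_i − x_j|, h). The serial C++ sketch below shows it with the standard poly6 kernel; the thesis maps this loop onto CUDA and uses spatial hashing instead of the O(n²) neighbor loop shown here, and the constants are illustrative rather than tuned.

```cpp
// SPH density estimate with the standard poly6 smoothing kernel.
#include <cmath>
#include <cstdio>
#include <vector>

struct Particle { float x, y, z, mass; };

const float kPi = 3.14159265f;

// Poly6 kernel: W(r, h) = 315 / (64 pi h^9) * (h^2 - r^2)^3 for r < h.
float poly6(float r2, float h) {
    if (r2 >= h * h) return 0.0f;
    float d = h * h - r2;
    return 315.0f / (64.0f * kPi * std::pow(h, 9)) * d * d * d;
}

std::vector<float> densities(const std::vector<Particle>& ps, float h) {
    std::vector<float> rho(ps.size(), 0.0f);
    for (size_t i = 0; i < ps.size(); ++i)
        for (size_t j = 0; j < ps.size(); ++j) { // O(n^2); real code uses a grid
            float dx = ps[i].x - ps[j].x, dy = ps[i].y - ps[j].y,
                  dz = ps[i].z - ps[j].z;
            rho[i] += ps[j].mass * poly6(dx * dx + dy * dy + dz * dz, h);
        }
    return rho;
}

int main() {
    std::vector<Particle> ps = {{0, 0, 0, 1}, {0.05f, 0, 0, 1}};
    std::printf("rho0 = %g\n", densities(ps, 0.1f)[0]);
}
```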

    Efficient multi-bounce lightmap creation using GPU forward mapping

    Get PDF
    Computer graphics can nowadays produce images in realtime that are hard to distinguish from photos of a real scene. One of the most important aspects in achieving this is the interaction of light with materials in the virtual scene. The lighting computation can be separated into two parts. The first part is concerned with the direct illumination that is applied to all surfaces lit by a light source; algorithms for this have been greatly improved over the last decades and, together with improvements in graphics hardware, can now produce realistic effects. The second part is the indirect illumination, which describes the multiple reflections of light from each surface. In reality, light that hits a surface is never fully absorbed, but instead reflected back into the scene. This reflected light is then reflected again and again until its energy is depleted. These multiple reflections make indirect illumination very computationally expensive. The first problem regarding indirect illumination is therefore how to simplify it so it can be computed faster. Another question is where to compute it. It can either be computed in the fixed image that is created when rendering the scene, or it can be stored in a light map. The drawback of the first approach is that the results must be recomputed for every frame in which the camera changed. The second approach, on the other hand, has been in use for a long time: once a static scene has been set up, the lighting situation is computed regardless of the time it takes, and the result is then stored in a light map. This is a texture atlas for the scene in which each surface point in the virtual scene has exactly one corresponding point in the 2D texture atlas. When displaying the scene with this approach, the indirect illumination does not need to be recomputed but is simply sampled from the light map. The main contribution of this thesis is a technique that computes the indirect illumination solution for a scene at interactive rates and stores the result in a light atlas for visualization. To achieve this, we overcome two main obstacles. First, we need to be able to quickly project data from any given camera configuration into the parts of the texture that are currently used for visualizing the 3D scene. Since our approach for computing and storing indirect illumination requires a huge number of these projections, each needs to be as fast as possible. We therefore introduce a technique that performs this projection entirely on the graphics card with a single draw call. Second, the reflections of light into the scene need to be computed quickly. We therefore separate the computation into two steps: one that quickly approximates the spreading of the light into the scene, and a second that computes the visually smooth final result using the aforementioned projection technique. The final technique computes the indirect illumination at interactive rates even for big scenes. It is furthermore flexible enough to let the user choose between high-quality results and fast computations. This allows the method to be used for quickly editing the lighting situation with high-speed previews and then computing the final result in perfect quality, still at interactive rates.
    The technique introduced for projecting data into the texture atlas is in itself highly flexible; it also allows for fast painting onto objects and projecting data onto them, considering all perspective distortions and self-occlusions.
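
    The projection step can be pictured with the following serial sketch, under our own naming: each lightmap texel stores the world position of its surface point, which is transformed by the camera's view-projection matrix to find the image pixel it corresponds to. Only the mapping itself is shown; the thesis performs it on the GPU in a single draw call and additionally resolves occlusions.

```cpp
// Map a lightmap texel's world position to a camera image pixel.
#include <cstdio>

struct Vec4 { float x, y, z, w; };

// Column-major 4x4 matrix times vector, as in OpenGL conventions.
Vec4 mul(const float m[16], Vec4 v) {
    return {m[0] * v.x + m[4] * v.y + m[8]  * v.z + m[12] * v.w,
            m[1] * v.x + m[5] * v.y + m[9]  * v.z + m[13] * v.w,
            m[2] * v.x + m[6] * v.y + m[10] * v.z + m[14] * v.w,
            m[3] * v.x + m[7] * v.y + m[11] * v.z + m[15] * v.w};
}

// Returns true and the pixel coordinate if the texel's world position
// falls inside the camera frustum (occlusion testing omitted).
bool projectTexel(const float viewProj[16], Vec4 worldPos,
                  int imgW, int imgH, int& px, int& py) {
    Vec4 c = mul(viewProj, worldPos);
    if (c.w <= 0.0f) return false;
    float ndcX = c.x / c.w, ndcY = c.y / c.w;
    if (ndcX < -1 || ndcX > 1 || ndcY < -1 || ndcY > 1) return false;
    px = static_cast<int>((ndcX * 0.5f + 0.5f) * (imgW - 1));
    py = static_cast<int>((ndcY * 0.5f + 0.5f) * (imgH - 1));
    return true;
}

int main() {
    const float identity[16] = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
    int px, py;
    if (projectTexel(identity, {0, 0, 0.5f, 1}, 640, 480, px, py))
        std::printf("texel maps to pixel (%d, %d)\n", px, py);
}
```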

    Approximation of Subdivision Surfaces for Interactive Applications

    Get PDF
    In this sketch, we propose a visually plausible approximation of subdivision surfaces for interactive applications. The complete idea is discussed in the full paper "QAS: Realtime Quadratic Approximation of Subdivision Surfaces", published in the proceedings of Pacific Graphics 2007 and available online (http://iparla.labri.fr/publications/2007/BS07c/).

    Realtime ray tracing and interactive global illumination

    Get PDF
    One of the most sought-after goals in computer graphics is to generate "realism in real time", i.e. realistically looking images at realtime frame rates. Today, virtually all approaches to realtime rendering use graphics hardware, which is based almost exclusively on triangle rasterization. Unfortunately, though this technology has seen tremendous progress over the last few years, for many applications it is currently reaching its limits in model complexity, supported features, and achievable realism. An alternative to triangle rasterization is the ray tracing algorithm, which is well known for its higher flexibility, its generally higher achievable realism, and its superior scalability in both model size and compute power. However, ray tracing is also computationally demanding and has thus far been used almost exclusively for high-quality offline rendering tasks. This dissertation focuses on the question of why ray tracing is likely to soon play a larger role for interactive applications, and how this scenario can be reached. To this end, we discuss the RTRT/OpenRT realtime ray tracing system, a software-based ray tracing system that achieves interactive to realtime frame rates on today's commodity CPUs. In particular, we discuss the overall system design, the efficient implementation of the core ray tracing algorithms, techniques for handling dynamic scenes, an efficient parallelization framework, and an OpenGL-like low-level API. Taken together, these techniques form a complete realtime rendering engine that supports massively complex scenes, highly realistic and physically correct shading, and even physically based lighting simulation at interactive rates. In the last part of this thesis we then discuss the implications and potential of realtime ray tracing for global illumination, and how the availability of this new technology can be leveraged to finally achieve interactive global illumination: the physically correct simulation of light transport at interactive rates.
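
    As a minimal illustration of the inner loop every such ray tracer builds on, the sketch below implements the well-known Möller-Trumbore ray/triangle intersection test. The RTRT/OpenRT system adds SIMD ray packets, optimized acceleration structures and much more; none of that is reproduced here.

```cpp
// Moller-Trumbore ray/triangle intersection.
#include <cstdio>

struct Vec3 { float x, y, z; };
Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true and the hit distance t if the ray (orig, dir) hits triangle abc.
bool intersect(Vec3 orig, Vec3 dir, Vec3 a, Vec3 b, Vec3 c, float& t) {
    Vec3 e1 = sub(b, a), e2 = sub(c, a);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (det > -1e-7f && det < 1e-7f) return false; // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, a);
    float u = dot(s, p) * inv;             // first barycentric coordinate
    if (u < 0 || u > 1) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;           // second barycentric coordinate
    if (v < 0 || u + v > 1) return false;
    t = dot(e2, q) * inv;
    return t > 1e-7f;
}

int main() {
    float t;
    if (intersect({0.25f, 0.25f, -1}, {0, 0, 1},
                  {0, 0, 0}, {1, 0, 0}, {0, 1, 0}, t))
        std::printf("hit at t = %g\n", t);
}
```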

    Interactive Sound Propagation for Massive Multi-user and Dynamic Virtual Environments

    Get PDF
    Hearing is an important sense, and it is known that rendering sound effects can enhance the level of immersion in virtual environments. Modeling sound waves is a complex problem, requiring vast computing resources to solve accurately. Prior methods are restricted to static scenes or limited acoustic effects. In this thesis, we present methods to improve the quality and performance of interactive geometric sound propagation in dynamic scenes, as well as precomputation algorithms for acoustic propagation in enormous multi-user virtual environments. We present a method for finding edge diffraction propagation paths on arbitrary 3D scenes for dynamic sources and receivers. Using this algorithm, we present a unified framework for interactive simulation of specular reflections, diffuse reflections, diffraction scattering, and reverberation effects. We also define a guidance algorithm for ray tracing that responds to dynamic environments and reorders queries to minimize simulation time. Our approach works well on modern GPUs and can achieve more than an order of magnitude performance improvement over prior methods. Modern multi-user virtual environments support many types of client devices, and current phones and mobile devices may lack the resources to run acoustic simulations. To provide such devices with the benefits of sound simulation, we have developed a precomputation algorithm that efficiently computes and stores acoustic data on a server in the cloud. Using novel algorithms, the server can render enhanced spatial audio in scenes spanning several square kilometers for hundreds of clients in realtime. Our method provides the benefits of immersive audio to collaborative telephony, video games, and multi-user virtual environments.
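
    A first-order flavor of geometric sound propagation can be given with the classic image-source construction: a specular reflection path off a plane is found by mirroring the source across the plane and intersecting the segment from the mirrored source to the listener with that plane. The sketch below shows only this toy case, under our own naming; the thesis's diffraction, diffuse-reflection and reverberation models and its GPU ray-tracing guidance go far beyond it.

```cpp
// Image-source construction for a first-order specular reflection.
#include <cstdio>

struct Vec3 { float x, y, z; };
Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Plane given by unit normal n and offset d (points p with dot(n, p) = d).
Vec3 mirrorAcrossPlane(Vec3 p, Vec3 n, float d) {
    return sub(p, scale(n, 2.0f * (dot(n, p) - d)));
}

// Reflection point: where the segment image->listener crosses the plane.
bool reflectionPoint(Vec3 src, Vec3 listener, Vec3 n, float d, Vec3& hit) {
    Vec3 image = mirrorAcrossPlane(src, n, d);
    Vec3 dir = sub(listener, image);
    float denom = dot(n, dir);
    if (denom == 0.0f) return false;        // segment parallel to plane
    float t = (d - dot(n, image)) / denom;
    if (t < 0.0f || t > 1.0f) return false; // crossing outside the segment
    hit = add(image, scale(dir, t));
    return true;
}

int main() {
    Vec3 hit;
    // Source and listener 1m above the floor plane y = 0.
    if (reflectionPoint({0, 1, 0}, {2, 1, 0}, {0, 1, 0}, 0.0f, hit))
        std::printf("reflection at (%g, %g, %g)\n", hit.x, hit.y, hit.z);
}
```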