
    DCT Implementation on GPU

    There has been great progress in the field of graphics processors. Since the speed of conventional single-core CPUs is no longer rising, designers are turning to multi-core, parallel processors. Because of their strength in parallel processing, GPUs are becoming more and more attractive for many applications. With the increasing demand for GPU computing, there is also a great need to develop operating systems that exploit the GPU to its full capacity. GPUs offer a very efficient environment for many image processing applications. This thesis explores the processing power of GPUs for digital image compression using the discrete cosine transform (DCT).
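
    A minimal CPU reference sketch of the 8x8 block DCT that underlies JPEG-style image compression may help make the transform concrete; the orthonormal DCT-II matrix formulation and the 8x8 block size below are standard background, not details taken from this thesis (which maps the computation to the GPU).

```python
# 8x8 block DCT-II and its inverse, built from the orthonormal DCT matrix.
import numpy as np

N = 8

def dct_matrix(n=N):
    """Orthonormal DCT-II matrix: C[k, x] = a(k) * cos((2x+1) k pi / 2n)."""
    k = np.arange(n).reshape(-1, 1)
    x = np.arange(n).reshape(1, -1)
    c = np.cos((2 * x + 1) * k * np.pi / (2 * n))
    c[0, :] *= 1 / np.sqrt(2)
    return c * np.sqrt(2 / n)

C = dct_matrix()

def block_dct2(block):
    """2D DCT of one 8x8 block via two matrix multiplies: C @ B @ C^T."""
    return C @ block @ C.T

def block_idct2(coeffs):
    """Inverse transform (C is orthonormal, so its inverse is its transpose)."""
    return C.T @ coeffs @ C

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, size=(N, N)).astype(np.float64)
    coeffs = block_dct2(block - 128.0)          # level shift as in JPEG
    restored = block_idct2(coeffs) + 128.0
    print("max reconstruction error:", np.abs(block - restored).max())
```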

    Procedural Generation and Rendering of Large-Scale Open-World Environments

    Open-world video games give players a large environment to explore along with increased freedom to navigate and manipulate that environment. These requirements pose several problems that must be addressed by a game's graphics engine. Often there are a large number of visible objects, such as all of the trees in a forest, as well as objects comprised of large amounts of geometry, such as terrain. An open-world graphics engine must be able to render large environments at varying levels of detail and smoothly transition between detail levels to provide a believable experience. Often this involves finding a way to both store and generate the large amounts of geometry that represent the environment. In this thesis we present a system for generating and rendering large exterior environments, with a focus on terrain and vegetation. We use a region-based procedural generation algorithm to create environments of varying types. This algorithm produces content that can be rendered at multiple levels of detail. The terrain is rendered volumetrically to support caves, overhangs, and cliffs, but is also rendered using heightmaps to allow for large view distances. Vegetation is implemented using procedurally generated meshes and impostors. The volumetric terrain is editable in real time, which limits our ability to pre-generate or cache large amounts of geometry, and also limits the number of assumptions we can make with regard to visibility. We support a view distance of at least 25 miles in each direction, though distant objects are rendered at low resolution. The heightmap terrain used to achieve this view distance consists of over 360,000 triangles. Our system runs at 180 frames per second on commodity desktop hardware.
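
    As an illustration of the kind of heightmap terrain described above, the following sketch builds a heightmap from several octaves of value noise (fractional Brownian motion). The lattice size, octave count and persistence are illustrative assumptions; the thesis uses its own region-based generation algorithm.

```python
# Heightmap generation from fractal value noise (fBm), CPU reference sketch.
import numpy as np

def value_noise(size, cells, rng):
    """Bilinearly interpolated random-lattice noise on a size x size grid."""
    lattice = rng.random((cells + 1, cells + 1))
    u = np.linspace(0, cells, size, endpoint=False)
    i = u.astype(int)
    f = u - i
    f = f * f * (3 - 2 * f)                       # smoothstep fade
    iy, ix = i[:, None], i[None, :]
    fy, fx = f[:, None], f[None, :]
    top = lattice[iy, ix] * (1 - fx) + lattice[iy, ix + 1] * fx
    bot = lattice[iy + 1, ix] * (1 - fx) + lattice[iy + 1, ix + 1] * fx
    return top * (1 - fy) + bot * fy

def fbm_heightmap(size=512, octaves=5, base_cells=4, persistence=0.5, seed=0):
    """Sum several octaves of value noise into a single normalized heightmap."""
    rng = np.random.default_rng(seed)
    height = np.zeros((size, size))
    amplitude, cells = 1.0, base_cells
    for _ in range(octaves):
        height += amplitude * value_noise(size, cells, rng)
        amplitude *= persistence
        cells *= 2                                # double lattice frequency
    return height / height.max()

if __name__ == "__main__":
    heightmap = fbm_heightmap()
    print(heightmap.shape, float(heightmap.min()), float(heightmap.max()))
```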

    GPU-friendly marching cubes.

    Xie, Yongming. Thesis (M.Phil.), Chinese University of Hong Kong, 2008. Includes bibliographical references (leaves 77-85). Abstracts in English and Chinese.
    Contents: 1. Introduction (Isosurfaces; Graphics Processing Unit; Objective; Contribution; Thesis Organization). 2. Marching Cubes (Introduction; Marching Cubes Algorithm; Triangulated Cube Configuration Table; Summary). 3. Graphics Processing Unit (Introduction; History of Graphics Processing Unit: First to Fourth Generation GPUs; The Graphics Pipeline: Standard Graphics Pipeline, Programmable Graphics Pipeline, Vertex Processors, Fragment Processors, Frame Buffer Operations; GPU-CPU Analogy: Memory Architecture, Processing Model, Limitations of the GPU, Input and Output, Data Readback, Framebuffer; Summary). 4. Volume Rendering (Introduction; History of Volume Rendering; Hardware Accelerated Volume Rendering: Methods, Proxy Geometry, Object-Aligned Slicing, View-Aligned Slicing; Summary). 5. GPU-Friendly Marching Cubes (Introduction; Previous Work; Traditional Method: Scalar Volume Data, Isosurface Extraction, Flow Chart, Transparent Isosurfaces; Our Method: Cell Selection, Vertex Labeling, Cell Indexing, Interpolation; Rendering Translucent Isosurfaces; Implementation and Results; Summary). 6. Conclusion. Bibliography.
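
    The "Vertex Labeling" and "Cell Indexing" steps listed in Chapter 5 correspond to the standard marching cubes case-index computation, sketched below on the CPU: each cell's eight corner samples are compared against the isovalue and packed into an 8-bit index into the 256-entry triangulation table. The corner ordering is one common convention and not necessarily the thesis's.

```python
# Marching cubes cell classification: per-cell case index from corner samples.
import numpy as np

# Offsets of the eight cube corners relative to the cell origin (x, y, z).
CORNERS = np.array([
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),
])

def cell_case_indices(volume, isovalue):
    """Return an array of 8-bit case indices, one per cell of a scalar volume."""
    nx, ny, nz = volume.shape
    cases = np.zeros((nx - 1, ny - 1, nz - 1), dtype=np.uint8)
    for bit, (dx, dy, dz) in enumerate(CORNERS):
        inside = volume[dx:dx + nx - 1, dy:dy + ny - 1, dz:dz + nz - 1] < isovalue
        cases |= (inside.astype(np.uint8) << bit)
    return cases

if __name__ == "__main__":
    # Distance field of a sphere on a 32^3 grid; cells with case index 0 or 255
    # are entirely outside or inside the isosurface and can be culled early,
    # which is the purpose of a GPU cell-selection pass.
    g = np.linspace(-0.5, 0.5, 32)
    x, y, z = np.meshgrid(g, g, g, indexing="ij")
    volume = np.sqrt(x**2 + y**2 + z**2)
    cases = cell_case_indices(volume, isovalue=0.4)
    active = np.count_nonzero((cases != 0) & (cases != 255))
    print("active cells:", active, "of", cases.size)
```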

    Hardware accelerated volume texturing.

    The emergence of volume graphics, a subfield of computer graphics, has been evident for the last 15 years. Growing out of scientific visualization problems, volume graphics has established itself as an important field in general computer graphics. However, the general graphics community still favours the established surface graphics techniques. This is due to well-founded and established techniques and a complete pipeline through software onto display hardware. This enables real-time applications to be constructed with ease and used by a wide range of end users, thanks to the readily available graphics hardware adopted by many computer manufacturers. Volume graphics has traditionally been restricted to high-end systems due to the complexity involved in rendering volume datasets. Either specialised graphics hardware or powerful computers were required to generate images, often not in real time. Although there have been specialised hardware solutions to the volume rendering problem, the adoption of the volume dataset as a primitive relies on end users with commodity hardware being able to display images at interactive rates. The recent emergence of programmable consumer-level graphics hardware now allows these platforms to compute volume rendering at interactive rates. Most of the work in this field is directed towards scientific visualisation. The work in this thesis addresses the issues in providing real-time volume graphics techniques to the general graphics community using commodity graphics hardware. Real-time texturing of volumetric data is explored as an important set of techniques for delivering volume datasets as a general graphics primitive. The main contributions of this work are: the introduction of efficient acceleration techniques; interactive display of amorphous phenomena modelled outside an object defined in a volume dataset; interactive procedural texture synthesis for volume data; 2D texturing techniques and extensions for volume data in real time; and a flexible surface detail mapping algorithm that removes many previous restrictions. Parts of this work have been presented at the 4th International Workshop on Volume Graphics and also published in Volume Graphics 2005.
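
    As background for the volume texturing techniques listed above, the sketch below shows the trilinear 3D texture lookup that hardware volume texturing performs for each sample along a ray or slice; it is generic reference code, not an implementation from the thesis.

```python
# Trilinearly filtered lookup into a 3D texture, CPU reference sketch.
import numpy as np

def sample_volume(tex, u, v, w):
    """Trilinearly filtered sample of a 3D texture at coordinates (u, v, w) in [0, 1]^3."""
    nx, ny, nz = tex.shape
    # Map [0, 1] texture space onto voxel-centre space and clamp to the border.
    x = np.clip(u * nx - 0.5, 0, nx - 1)
    y = np.clip(v * ny - 0.5, 0, ny - 1)
    z = np.clip(w * nz - 0.5, 0, nz - 1)
    x0, y0, z0 = int(x), int(y), int(z)
    x1, y1, z1 = min(x0 + 1, nx - 1), min(y0 + 1, ny - 1), min(z0 + 1, nz - 1)
    fx, fy, fz = x - x0, y - y0, z - z0
    # Blend the eight surrounding voxels: along x, then y, then z.
    c00 = tex[x0, y0, z0] * (1 - fx) + tex[x1, y0, z0] * fx
    c10 = tex[x0, y1, z0] * (1 - fx) + tex[x1, y1, z0] * fx
    c01 = tex[x0, y0, z1] * (1 - fx) + tex[x1, y0, z1] * fx
    c11 = tex[x0, y1, z1] * (1 - fx) + tex[x1, y1, z1] * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    tex = rng.random((16, 16, 16))
    print(sample_volume(tex, 0.25, 0.5, 0.75))
```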

    Video Processing Acceleration using Reconfigurable Logic and Graphics Processors

    A vexing question is 'which architecture will prevail as the core feature of the next state-of-the-art video processing system?' This thesis examines the substitutive and collaborative use of two alternatives: the reconfigurable logic and graphics processor architectures. A structured approach to executing an architecture comparison is presented; this includes a proposed 'Three Axes of Algorithm Characterisation' scheme and a formulation of performance drivers. The approach is an appealing platform for clearly defining the problem, assumptions and results of a comparison. In this work it is used to resolve the advantageous factors of the graphics processor and reconfigurable logic for video processing, and the conditions determining which one is superior. The comparison results prompt the exploration of the customisable options for the graphics processor architecture. To clearly define the architectural design space, the graphics processor is first identified as part of a wider scope of homogeneous multi-processing element (HoMPE) architectures. A novel exploration tool is described which is suited to the investigation of the customisable options of HoMPE architectures. The tool adopts a systematic exploration approach and a high-level parameterisable system model, and is used to explore pre- and post-fabrication customisable options for the graphics processor. A positive result of the exploration is the proposal of a reconfigurable engine for data access (REDA) to optimise graphics processor performance for video processing-specific memory access patterns. REDA demonstrates the viability of using reconfigurable logic as collaborative 'glue logic' in the graphics processor architecture.
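
    As a generic illustration of what a formulation of performance drivers can look like, the sketch below bounds a video kernel's throughput on two hypothetical devices using a simple roofline-style model (peak arithmetic rate versus memory bandwidth). The device numbers and the roofline formulation are assumptions for illustration only; they are not the thesis's 'Three Axes of Algorithm Characterisation' scheme or its measured results.

```python
# Roofline-style throughput bound for comparing two hypothetical devices.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    peak_gflops: float       # peak arithmetic throughput, GFLOP/s
    bandwidth_gbs: float     # peak off-chip memory bandwidth, GB/s

def attainable_gflops(device, flops_per_byte):
    """Roofline bound: min(peak compute, arithmetic intensity x bandwidth)."""
    return min(device.peak_gflops, flops_per_byte * device.bandwidth_gbs)

def frames_per_second(device, flops_per_frame, bytes_per_frame):
    """Upper bound on frame rate for a kernel with the given per-frame costs."""
    intensity = flops_per_frame / bytes_per_frame
    gflops = attainable_gflops(device, intensity)
    return gflops * 1e9 / flops_per_frame

if __name__ == "__main__":
    # Hypothetical devices and a 3x3 convolution on a 1920x1080 frame.
    gpu  = Device("graphics processor", peak_gflops=500.0, bandwidth_gbs=100.0)
    fpga = Device("reconfigurable logic", peak_gflops=200.0, bandwidth_gbs=25.0)
    flops = 1920 * 1080 * 9 * 2        # 9 multiply-adds per pixel
    bytes_moved = 1920 * 1080 * 4 * 2  # read + write one float per pixel
    for dev in (gpu, fpga):
        print(dev.name, round(frames_per_second(dev, flops, bytes_moved)), "fps bound")
```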

    Towards a High Quality Real-Time Graphics Pipeline

    Modern graphics hardware pipelines create photorealistic images with high geometric complexity in real time. The quality is constantly improving and advanced techniques from feature film visual effects, such as high dynamic range images and support for higher-order surface primitives, have recently been adopted. Visual effect techniques have large computational costs and significant memory bandwidth usage. In this thesis, we identify three problem areas and propose new algorithms that increase the performance of a set of computer graphics techniques. Our main focus is on efficient algorithms for the real-time graphics pipeline, but parts of our research are equally applicable to offline rendering. Our first focus is texture compression, which is a technique to reduce the memory bandwidth usage. The core idea is to store images in small compressed blocks which are sent over the memory bus and are decompressed on-the-fly when accessed. We present compression algorithms for two types of texture formats. High dynamic range images capture environment lighting with luminance differences over a wide intensity range. Normal maps store perturbation vectors for local surface normals, and give the illusion of high geometric surface detail. Our compression formats are tailored to these texture types and have compression ratios of 6:1, high visual fidelity, and low-cost decompression logic. Our second focus is tessellation culling. Culling is a commonly used technique in computer graphics for removing work that does not contribute to the final image, such as completely hidden geometry. By discarding rendering primitives from further processing, substantial arithmetic computations and memory bandwidth can be saved. Modern graphics processing units include flexible tessellation stages, where rendering primitives are subdivided for increased geometric detail. Images with highly detailed models can be synthesized, but the incurred cost is significant. We have devised a simple remapping technique that allows for better tessellation distribution in screen space. Furthermore, we present programmable tessellation culling, where bounding volumes for displaced geometry are computed and used to conservatively test if a primitive can be discarded before tessellation. We introduce a general tessellation culling framework, and an optimized algorithm for rendering of displaced BĂ©zier patches, which is expected to be a common use case for graphics hardware tessellation. Our third and final focus is forward-looking, and relates to efficient algorithms for stochastic rasterization, a rendering technique where camera effects such as depth of field and motion blur can be faithfully simulated. We extend a graphics pipeline with stochastic rasterization in spatio-temporal space and show that stochastic motion blur can be rendered with rather modest pipeline modifications. Furthermore, backface culling algorithms for motion blur and depth of field rendering are presented, which are directly applicable to stochastic rasterization. Hopefully, our work in this field brings us closer to high-quality real-time stochastic rendering.
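
    The programmable tessellation culling idea can be illustrated with a small sketch: bound a patch by the axis-aligned box of its control points, inflate it by the maximum displacement the shader may apply, and discard the patch before tessellation only if that box lies entirely outside a frustum plane. The plane representation and displacement bound below are illustrative assumptions, not the thesis's exact formulation.

```python
# Conservative culling of a displaced patch via an inflated AABB vs. frustum planes.
import numpy as np

def patch_aabb(control_points, max_displacement):
    """Inflated axis-aligned bounding box of a patch: (min_corner, max_corner)."""
    pts = np.asarray(control_points, dtype=np.float64)
    return pts.min(axis=0) - max_displacement, pts.max(axis=0) + max_displacement

def aabb_outside_plane(box_min, box_max, plane):
    """True if the whole box lies on the negative side of plane (nx, ny, nz, d)."""
    normal, d = np.asarray(plane[:3], dtype=np.float64), plane[3]
    # Pick the box corner farthest along the plane normal (the "p-vertex").
    p_vertex = np.where(normal >= 0.0, box_max, box_min)
    return normal @ p_vertex + d < 0.0

def cull_patch(control_points, max_displacement, frustum_planes):
    """Conservative test: True means the patch can safely be discarded."""
    box_min, box_max = patch_aabb(control_points, max_displacement)
    return any(aabb_outside_plane(box_min, box_max, p) for p in frustum_planes)

if __name__ == "__main__":
    # A flat 4x4 control grid displaced by at most 0.2, tested against a single
    # plane x <= 5 (inside half-space -x + 5 >= 0), so the patch is kept.
    grid = [(x, 0.0, z) for x in range(4) for z in range(4)]
    planes = [(-1.0, 0.0, 0.0, 5.0)]
    print("cull:", cull_patch(grid, 0.2, planes))
```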

    Efficient and High-Quality Rendering of Higher-Order Geometric Data Representations

    In computer-aided design (CAD), industrial products are designed using a virtual 3D model. A CAD model typically consists of curves and surfaces in a parametric representation, in most cases non-uniform rational B-splines (NURBS). The same representation is also used for the analysis, optimization and presentation of the model. In each phase of this process, different visualizations are required to provide appropriate user feedback. Designers work with illustrative and realistic renderings, engineers need a comprehensible visualization of the simulation results, and usability studies or product presentations benefit from using a 3D display. However, the interactive visualization of NURBS models and corresponding physical simulations is a challenging task because of the computational complexity and the limited graphics hardware support. This thesis proposes four novel rendering approaches that improve the interactive visualization of CAD models and their analysis. The presented algorithms exploit the latest graphics hardware capabilities to advance the state of the art in terms of quality, efficiency and performance. In particular, two approaches describe the direct rendering of the parametric representation without precomputed approximations or time-consuming pre-processing steps. New data structures and algorithms are presented for the efficient partition, classification, tessellation, and rendering of trimmed NURBS surfaces, as well as the first direct isosurface ray-casting approach for NURBS-based isogeometric analysis. The other two approaches introduce the versatile concept of programmable order-independent semi-transparency for the illustrative and comprehensible visualization of depth-complex CAD models, and a novel method for the hybrid reprojection of opaque and semi-transparent image information to accelerate stereoscopic rendering. Both approaches are also applicable to standard polygonal geometry, which makes them relevant to the wider computer graphics and virtual reality research communities. The evaluation is based on real-world NURBS-based models and simulation data. The results show that rendering can be performed directly on the underlying parametric representation with interactive frame rates and subpixel-precise image results. The computational costs of additional visualization effects, such as semi-transparency and stereoscopic rendering, are reduced enough to maintain interactive frame rates. The benefit of this performance gain was confirmed by quantitative measurements and a pilot user study.
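
    As background for rendering directly from the parametric representation, the sketch below evaluates a point on a tensor-product BĂ©zier patch with de Casteljau's algorithm, the basic operation behind such direct approaches. It deliberately omits trimming, rational weights and knot handling, so it is a simplification rather than the thesis's method.

```python
# De Casteljau evaluation of a (non-rational) bicubic Bezier patch.
import numpy as np

def de_casteljau(points, t):
    """Evaluate a Bezier curve defined by its control points at parameter t."""
    pts = np.array(points, dtype=np.float64)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]   # one round of linear blends
    return pts[0]

def eval_bezier_patch(control_grid, u, v):
    """Evaluate a tensor-product Bezier patch at (u, v) in [0, 1]^2."""
    grid = np.asarray(control_grid, dtype=np.float64)        # shape (4, 4, 3)
    curve = np.array([de_casteljau(row, u) for row in grid])  # reduce rows at u
    return de_casteljau(curve, v)                             # reduce result at v

if __name__ == "__main__":
    # A 4x4 control grid forming a bump over the unit square.
    ctrl = np.zeros((4, 4, 3))
    for i in range(4):
        for j in range(4):
            ctrl[i, j] = (i / 3.0, j / 3.0, 1.0 if (i, j) == (1, 1) else 0.0)
    print(eval_bezier_patch(ctrl, 0.5, 0.5))
```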

    NPC AI System Based on Gameplay Recordings

    A well-optimized Non-Player Character (NPC), as an opponent or a teammate, is a major part of multiplayer games. Most game bots are built upon a rigid system with a limited number of decisions and animations. Experienced players can distinguish bots from human players and can predict bot movements and strategies, which reduces the quality of the gameplay experience. Therefore, players of multiplayer games favour playing against human players rather than NPCs. The VR game market and VR gamers are still a small fraction of the game industry, and multiplayer VR games suffer from the loss of their player base when game owners cannot find other players to play with. This study demonstrates the applicability of an Artificial Intelligence (AI) system based on gameplay recordings for a Virtual Reality (VR) first-person shooter (FPS) game called Vrena. The subject game has an uncommon way of movement, in which players use grappling hooks to navigate. To imitate VR players' movements and gestures, an AI system was developed that uses gameplay recordings as navigation data. The system comprises three major functionalities: gameplay recording, data refinement, and navigation. The game environment is sliced into cubic sectors to reduce the number of positional states, and gameplay is recorded by time intervals and actions. The produced game logs are segmented into log sections, which are used to create a look-up table. The look-up table is used for navigating the NPC agent, and the decision mechanism follows an approach similar to the state-action-reward concept. The success of the developed tool was evaluated via a survey, which provided substantial feedback for improving the system.
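
    A minimal sketch of the recording-to-navigation pipeline described above: positions are quantized into cubic sectors, recordings are cut into log sections keyed by their starting sector, and the NPC replays a recorded section chosen for its current sector. The sector size, log format and selection policy are illustrative assumptions, not the ones used for Vrena.

```python
# Cubic-sector quantization and a look-up table built from gameplay recordings.
import random
from collections import defaultdict

SECTOR_SIZE = 2.0   # edge length of one cubic sector, in world units (assumed)

def to_sector(position):
    """Quantize a world-space (x, y, z) position into a cubic sector id."""
    x, y, z = position
    return (int(x // SECTOR_SIZE), int(y // SECTOR_SIZE), int(z // SECTOR_SIZE))

def build_lookup_table(recordings, section_length=8):
    """Cut each recording (a list of positions) into fixed-length log sections
    and index them by the sector in which they start."""
    table = defaultdict(list)
    for track in recordings:
        for start in range(0, len(track) - section_length, section_length):
            section = track[start:start + section_length]
            table[to_sector(section[0])].append(section)
    return table

def next_moves(table, npc_position, rng=random):
    """Pick a recorded section starting in the NPC's current sector, if any."""
    candidates = table.get(to_sector(npc_position), [])
    return rng.choice(candidates) if candidates else None

if __name__ == "__main__":
    # One fake recording: a player drifting along +x while bobbing in y.
    track = [(0.5 * i, (i % 4) * 0.3, 0.0) for i in range(40)]
    table = build_lookup_table([track])
    print(next_moves(table, (0.2, 0.1, 0.0)))
```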
    • 

    corecore