14 research outputs found

    Hierarchical N-Body problem on graphics processor unit

    Galactic simulation is an important cosmological computation and represents a classical N-body problem suitable for implementation on vector processors. The Barnes-Hut algorithm is a hierarchical N-body method used to simulate such galactic evolution systems. Stream processing architectures expose the data locality and concurrency available in multimedia applications. On the other hand, there are numerous compute-intensive scientific and engineering applications that can potentially benefit from such computational and communication models; these applications are traditionally implemented on vector processors. Stream-architecture-based graphics processing units (GPUs) present a novel computational alternative for efficiently implementing such high-performance applications. Rendering on a stream architecture sustains high performance, while user-programmable modules allow complex algorithms to be implemented efficiently. GPUs have evolved over the years from fixed-function pipelines to user-programmable processors. In this thesis, we focus on the implementation of the Barnes-Hut algorithm on typical current-generation programmable GPUs. We exploit the computation and communication requirements of the Barnes-Hut algorithm to expose its suitability for user-programmable GPUs. Our implementation of the Barnes-Hut algorithm is formulated as a fragment shader targeting the selected GPU. We discuss implementation details, design issues, results, and challenges encountered in programming the fragment shader.
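
    The core of the method is the Barnes-Hut acceptance test: a tree cell whose apparent size, seen from the evaluation point, falls below an opening-angle threshold is treated as a single point mass. A minimal CPU sketch of that test follows; the Node class, field names, and THETA value are illustrative assumptions, not taken from the thesis, which maps this traversal onto a fragment shader.

        # Minimal CPU sketch of the Barnes-Hut far-field approximation;
        # names (Node, THETA) are illustrative, not from the thesis.
        import math

        G = 1.0       # gravitational constant in simulation units
        THETA = 0.5   # opening angle: smaller = more accurate, more work

        class Node:
            """A quadtree cell: a leaf holding one body, or an internal
            cell summarised by total mass and centre of mass."""
            def __init__(self, size, mass, com, children=()):
                self.size = size          # side length of the cell
                self.mass = mass          # total mass inside the cell
                self.com = com            # (x, y) centre of mass
                self.children = children  # empty tuple for leaves

        def accel(node, pos, eps=1e-3):
            """Acceleration at `pos` due to `node`, recursing only where
            a cell looks too large relative to the opening angle."""
            dx, dy = node.com[0] - pos[0], node.com[1] - pos[1]
            r = math.hypot(dx, dy) + eps
            # Far enough away (or a leaf): treat the cell as one body.
            if not node.children or node.size / r < THETA:
                f = G * node.mass / r**3
                return (f * dx, f * dy)
            # Otherwise open the cell and sum its children.
            ax = ay = 0.0
            for child in node.children:
                cx, cy = accel(child, pos, eps)
                ax += cx
                ay += cy
            return (ax, ay)

        # Example: one internal cell summarising two leaf bodies.
        leaves = (Node(0.0, 1.0, (0.0, 0.0)), Node(0.0, 1.0, (1.0, 1.0)))
        root = Node(size=2.0, mass=2.0, com=(0.5, 0.5), children=leaves)
        print(accel(root, (10.0, 0.0)))  # distant point: cell not opened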

    Submitted by

    Prof. Dr. N. Navab. To my family. Acknowledgements: I am deeply grateful that I had the opportunity to write this thesis while working at the Chair for Pattern Recognition within project B6 of Sonderforschungsbereich 603 (funded by the Deutsche Forschungsgemeinschaft). Many people contributed to this work, and I want to express my gratitude to all of them.

    Scalable Interactive Volume Rendering Using Off-the-shelf Components

    This paper describes an application of a second-generation implementation of the Sepia architecture (Sepia-2) to interactive volumetric visualization of large rectilinear scalar fields. By employing pipelined associative blending operators in a sort-last configuration, a demonstration system with 8 rendering computers sustains 24 to 28 frames per second while interactively rendering large data volumes (1024x256x256 voxels, and 512x512x512 voxels). We believe interactive performance at these frame rates and data sizes is unprecedented. We also believe these results can be extended to other types of structured and unstructured grids and a variety of GL rendering techniques, including surface rendering and shadow mapping. We show how to extend our single-stage crossbar demonstration system to multi-stage networks in order to support much larger data sizes and higher image resolutions. This requires solving a dynamic mapping problem for a class of blending operators that includes the Porter-Duff compositing operators.
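
    The pipelining claim rests on the associativity of the blending operators. A small sketch of the Porter-Duff "over" operator on premultiplied-alpha RGBA pixels (hypothetical names, values in [0, 1]) illustrates why fragments arriving from the rendering nodes in depth order can be merged in any grouping by a multi-stage network:

        # Sketch of the associative Porter-Duff "over" operator that
        # sort-last compositing pipelines rely on; illustrative only.
        from functools import reduce

        def over(front, back):
            """Composite premultiplied-alpha pixel `front` over `back`."""
            fr, fg, fb, fa = front
            br, bg, bb, ba = back
            k = 1.0 - fa
            return (fr + k * br, fg + k * bg, fb + k * bb, fa + k * ba)

        def composite(fragments):
            """Blend per-pixel fragments sorted front to back. Because
            `over` is associative, adjacent pairs may be merged in any
            grouping -- exactly what lets a multi-stage network
            pipeline the blending across compositing stages."""
            return reduce(over, fragments)

        # Two half-transparent fragments from two rendering nodes:
        print(composite([(0.5, 0.0, 0.0, 0.5), (0.0, 0.25, 0.0, 0.25)]))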

    Parallel Rendering and Large Data Visualization

    We are living in the big data age: an ever-increasing amount of data is being produced through data acquisition and computer simulations. While large-scale analysis and simulations have received significant attention for cloud and high-performance computing, software to efficiently visualise large data sets is struggling to keep up. Visualization has proven to be an efficient tool for understanding data; in particular, visual analysis is a powerful tool for gaining intuitive insight into the spatial structure and relations of 3D data sets. Large-scale visualization setups are becoming ever more affordable, and high-resolution tiled display walls are in reach even for small institutions. Virtual reality has arrived in the consumer space, making it accessible to a large audience. This thesis addresses these developments by advancing the field of parallel rendering. We formalise the design of system software for large data visualization through parallel rendering, provide a reference implementation of a parallel rendering framework, introduce novel algorithms to accelerate the rendering of large amounts of data, and validate this research and development with new applications for large data visualization. Applications built using our framework enable domain scientists and large data engineers to better extract meaning from their data, making it feasible to explore more data and enabling the use of high-fidelity visualization installations to see more detail of the data.
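
    As a concrete, purely hypothetical illustration of the kind of work such a framework automates, the sketch below performs a simple sort-first decomposition, assigning each render client one vertical stripe of the window; the function name and stripe policy are assumptions for illustration, not the thesis's API:

        # Hypothetical screen-space (sort-first) decomposition: split the
        # viewport into one vertical stripe per render client.
        def decompose(width, height, n_clients):
            """Return per-client viewports (x, y, w, h) tiling the window."""
            stripes = []
            x = 0
            for i in range(n_clients):
                # Spread the remainder so stripes differ by at most 1 px.
                w = (width - x) // (n_clients - i)
                stripes.append((x, 0, w, height))
                x += w
            return stripes

        print(decompose(1920, 1080, 3))
        # [(0, 0, 640, 1080), (640, 0, 640, 1080), (1280, 0, 640, 1080)]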

    Exploiting the GPU power for intensive geometric and imaging data computation.

    Wang Jianqing. Thesis (M.Phil.)--Chinese University of Hong Kong, 2004. Includes bibliographical references (leaves 81-86). Abstracts in English and Chinese.

    Contents:
        Chapter 1 -- Introduction
            1.1 Overview
            1.2 Thesis
            1.3 Contributions
            1.4 Organization
        Chapter 2 -- Programmable Graphics Hardware
            2.1 Introduction
            2.2 Why Use GPU?
            2.3 Programmable Graphics Hardware Architecture
            2.4 Previous Work on GPU Computation
        Chapter 3 -- Multilingual Virtual Performer
            3.1 Overview
            3.2 Previous Work
            3.3 System Overview
            3.4 Facial Animation
                3.4.1 Facial Animation using Face Space
                3.4.2 Face Set Selection for Lip Synchronization
                3.4.3 The Blending Weight Function Generation and Coarticulation
                3.4.4 Expression Overlay
                3.4.5 GPU Algorithm
            3.5 Character Animation
                3.5.1 Skeletal Animation Primer
                3.5.2 Mathematics of Kinematics
                3.5.3 Animating with Motion Capture Data
                3.5.4 Skeletal Subspace Deformation
                3.5.5 GPU Algorithm
            3.6 Integration of Skeletal and Facial Animation
            3.7 Result
                3.7.1 Summary
        Chapter 4 -- Discrete Wavelet Transform On GPU
            4.1 Introduction
                4.1.1 Previous Works
                4.1.2 Our Solution
            4.2 Multiresolution Analysis with Wavelets
            4.3 Fragment Processor for Pixel Processing
            4.4 DWT Pipeline
                4.4.1 Convolution Versus Lifting
                4.4.2 DWT Pipeline
            4.5 Forward DWT
            4.6 Inverse DWT
            4.7 Results and Applications
                4.7.1 Geometric Deformation in Wavelet Domain
                4.7.2 Stylish Image Processing and Texture-illuminance Decoupling
                4.7.3 Hardware-Accelerated JPEG2000 Encoding
            4.8 Web Information
        Chapter 5 -- Conclusion
        Bibliography
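
    Chapter 4's convolution-versus-lifting discussion centres on the lifting scheme's lower operation count and exact invertibility. The sketch below implements one 1-D level of the reversible Le Gall 5/3 lifting DWT used in lossless JPEG2000 (which the thesis accelerates) on the CPU; it is an illustration of the scheme, not the thesis's fragment-shader version:

        # One level of the reversible Le Gall 5/3 lifting DWT (JPEG2000
        # lossless), 1-D, with replicated boundary samples. Illustrative.
        def dwt53_forward(x):
            """Split x (even length) into lowpass s and highpass d."""
            n = len(x)
            # Predict step: odd samples become detail coefficients.
            d = [x[2*i + 1] - (x[2*i] + x[min(2*i + 2, n - 2)]) // 2
                 for i in range(n // 2)]
            # Update step: even samples become the smoothed approximation.
            s = [x[2*i] + (d[max(i - 1, 0)] + d[i] + 2) // 4
                 for i in range(n // 2)]
            return s, d

        def dwt53_inverse(s, d):
            """Exactly undo the two lifting steps, in reverse order."""
            n = 2 * len(s)
            x = [0] * n
            for i in range(n // 2):      # undo update
                x[2*i] = s[i] - (d[max(i - 1, 0)] + d[i] + 2) // 4
            for i in range(n // 2):      # undo predict
                x[2*i + 1] = d[i] + (x[2*i] + x[min(2*i + 2, n - 2)]) // 2
            return x

        data = [5, 7, 6, 4, 3, 2, 4, 8]
        s, d = dwt53_forward(data)
        assert dwt53_inverse(s, d) == data  # lifting is exactly invertible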

    Mobile three-dimensional city maps

    Maps are visual representations of environments and the objects within them, depicting their spatial relations. They are mainly used in navigation, where they act as external information sources, supporting observation and decision-making processes. Map design, or the art-science of cartography, has led to simplification of the environment: the naturally three-dimensional environment has been abstracted into a two-dimensional representation, populated with simple geometrical shapes and symbols. However, an abstract representation requires map-reading ability. Modern technology has reached the level where maps can be expressed in digital form, with selectable, scalable, browsable and updatable content. Maps may no longer even be limited to two dimensions, nor to an abstract form. When a virtual environment based on the real world is created, a 3D map is born. Given a realistic representation, would the user no longer need to interpret the map, and instead be able to navigate in an inherently intuitive manner? To answer this question, one needs a mobile test platform. But can a 3D map, a resource-hungry real virtual environment, exist on such resource-limited devices? This dissertation approaches the technical challenges posed by mobile 3D maps in a constructive manner, identifying the problems, developing solutions, and providing answers by creating a functional system. The case focuses on urban environments. First, optimization methods for rendering large, static 3D city models are researched, and a solution suited for mobile 3D maps is provided by combining visibility culling, level-of-detail management and out-of-core rendering. Then, the potential of mobile networking is addressed by developing efficient and scalable methods for progressive content downloading and dynamic entity management. Finally, a 3D navigation interface is developed for mobile devices, and the research is validated with measurements and field experiments. It is found that near-realistic mobile 3D city maps can run on current mobile phones, and rendering rates are excellent on devices with 3D hardware. Such 3D maps can also be transferred and rendered on the fly, sufficiently fast for navigation use over cellular networks. Real-world entities such as pedestrians or public transportation can be tracked and presented in a scalable manner. Mobile 3D maps are useful for navigation, but their usability depends heavily on the interaction methods: the potentially intuitive representation does not imply, for example, faster navigation than with a professional 2D street map. In addition, the physical interface limits the usability.
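
    A minimal sketch of how two of the rendering optimizations named above could interact per frame, with illustrative names and thresholds (assumptions, not the dissertation's actual system): cull buildings outside a crude view cone, then pick a level of detail by distance band.

        # Hypothetical per-frame culling + LOD pass for a 3D city map;
        # names and thresholds are illustrative.
        import math

        LOD_DISTANCES = [50.0, 200.0, 800.0]  # metres: full, medium, coarse

        def visible(cam, look, obj_pos, radius, far=1000.0,
                    fov=math.radians(60)):
            """Crude cone test: is the bounding sphere inside the view
            cone (look must be a unit vector) and nearer than far?"""
            dx = [o - c for o, c in zip(obj_pos, cam)]
            dist = math.sqrt(sum(d * d for d in dx))
            if dist - radius > far:
                return False
            if dist < radius:            # camera inside the sphere
                return True
            cos_a = sum(d * l for d, l in zip(dx, look)) / dist
            angle = math.acos(max(-1.0, min(1.0, cos_a)))
            return angle < fov / 2 + radius / dist

        def lod_level(cam, obj_pos):
            """Pick the coarsest model still adequate at this range."""
            dist = math.dist(cam, obj_pos)
            for level, limit in enumerate(LOD_DISTANCES):
                if dist < limit:
                    return level
            return len(LOD_DISTANCES)    # beyond all bands: coarsest

        cam, look = (0.0, 1.7, 0.0), (0.0, 0.0, 1.0)
        building = (5.0, 10.0, 120.0)
        if visible(cam, look, building, radius=30.0):
            print("draw LOD", lod_level(cam, building))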

    Offset Surface Light Fields

    For producing realistic images, reflection is an important visual effect. Reflections of the environment matter not only for highly reflective objects, such as mirrors, but also for more common objects such as brushed metals and glossy plastics. Generating these reflections accurately at real-time rates for interactive applications, however, is a difficult problem. Previous work in this area has made assumptions that sacrifice accuracy in order to preserve interactivity. I will present an algorithm that handles reflection accurately in the general case for real-time rendering. The algorithm uses a database of prerendered environment maps to render both the original object itself and an additional bidirectional reflectance distribution function (BRDF). The algorithm performs image-based rendering in reflection space in order to achieve accurate results. It also uses graphics processing unit (GPU) features to accelerate rendering.
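
    The lookup at the heart of such reflection rendering mirrors the view ray about the surface normal and indexes a prerendered environment map. A minimal sketch follows, assuming a latitude-longitude map parameterization; this is an assumption for illustration, and the paper's offset-surface indexing and BRDF weighting are not reproduced here.

        # Reflection-space lookup sketch: reflect the view direction and
        # map it to lat-long environment-map coordinates. Illustrative.
        import math

        def reflect(d, n):
            """Reflect direction d about unit normal n: r = d - 2(d.n)n."""
            dot = sum(a * b for a, b in zip(d, n))
            return tuple(a - 2.0 * dot * b for a, b in zip(d, n))

        def latlong_uv(r):
            """Map a unit direction to (u, v) in a lat-long env map."""
            x, y, z = r
            u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)
            v = 0.5 - math.asin(max(-1.0, min(1.0, y))) / math.pi
            return u, v

        # View ray hits a floor facing +y; the mirror direction is what
        # a BRDF-weighted set of map taps would turn into a colour.
        view = (0.0, -1.0, 1.0)
        norm = math.sqrt(sum(c * c for c in view))
        view = tuple(c / norm for c in view)
        print(latlong_uv(reflect(view, (0.0, 1.0, 0.0))))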

    Proceedings of the Second PHANToM Users Group Workshop : October 19-22, 1997 : Endicott House, Dedham, MA, Massachusetts Institute of Technology, Cambridge, MA

    "December, 1997." Cover title.Includes bibliographical references.Sponsored by SensAble Technologies, Inc., Cambridge, MA."[edited by J. Kennedy Salisbury and Mandayam A. Srinivasan]