    Hierarchical N-Body problem on graphics processor unit

    Galactic simulation is an important cosmological computation and represents a classical N-body problem suitable for implementation on vector processors. The Barnes-Hut algorithm is a hierarchical N-body method used to simulate such galactic evolution systems. Stream processing architectures expose the data locality and concurrency available in multimedia applications, and numerous compute-intensive scientific and engineering applications, traditionally implemented on vector processors, can potentially benefit from the same computational and communication models. Stream-architecture-based graphics processor units (GPUs) present a novel computational alternative for efficiently implementing such high-performance applications: rendering on a stream architecture sustains high performance, while user-programmable modules allow complex algorithms to be implemented efficiently. GPUs have evolved over the years from fixed-function pipelines to user-programmable processors. In this thesis, we focus on implementing the Barnes-Hut algorithm on typical current-generation programmable GPUs. We analyze the computation and communication requirements of the Barnes-Hut algorithm to show its suitability for user-programmable GPUs. Our implementation is formulated as a fragment shader targeting the selected GPU. We discuss implementation details, design issues, results, and challenges encountered in programming the fragment shader.
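
    To make the hierarchical approximation concrete, the following minimal CPU-side sketch (in C++) shows the opening-angle test at the core of the Barnes-Hut algorithm; the names (Node, Body, theta) and the softening constant are illustrative assumptions rather than details from the thesis, which maps this traversal onto a fragment shader instead.

        #include <cmath>

        struct Body { float x, y, z, mass; };

        struct Node {
            float cx, cy, cz;    // center of mass of the bodies in this cell
            float mass;          // total mass of the cell
            float width;         // side length s of the cubic cell
            Node* child[8];      // subcells; all null for a leaf
            Body* body;          // non-null only for a leaf holding one body
        };

        // Accumulate the acceleration on body b from the subtree rooted at n.
        // If the cell is far enough away (s/d < theta), treat it as a single
        // point mass at its center of mass instead of recursing.
        void accumulateForce(const Node* n, const Body& b, float theta, float a[3]) {
            if (!n || n->mass == 0.0f || n->body == &b) return;
            float dx = n->cx - b.x, dy = n->cy - b.y, dz = n->cz - b.z;
            float d2 = dx*dx + dy*dy + dz*dz + 1e-6f;  // softened distance squared
            float d  = std::sqrt(d2);
            if (n->body || n->width / d < theta) {     // single body or far cell
                float f = n->mass / (d2 * d);          // G folded into the units
                a[0] += f * dx; a[1] += f * dy; a[2] += f * dz;
            } else {
                for (const Node* c : n->child) accumulateForce(c, b, theta, a);
            }
        }

    Smaller values of theta recurse deeper and are more accurate; theta around 0.5 is a common trade-off.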

    Implicit Decals: Interactive Editing of Repetitive Patterns on Surfaces

    Texture mapping is an essential component for creating 3D models and is widely used in both the game and movie industries. Creating texture maps has always been a complex task, and existing methods carefully balance flexibility with ease of use. One difficulty in texturing is the repeated placement of individual textures over larger areas. In this paper we propose a method which uses decals to place images onto a model. Our method allows the decals to compete for space and to deform as they are pushed by other decals. A spherical field function is used to determine the position and size of each decal and the deformation applied to fit the decals. The decals may span multiple objects with heterogeneous representations, and our method does not require an explicit parameterization of the model. As such, a variety of patterns, including repeated patterns such as rocks, tiles, and scales, can be mapped. We have implemented the method on the GPU, where the placement, size, and orientation of thousands of decals can be manipulated in real time.
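
    As a rough illustration of how a spherical field function can drive placement, the C++ sketch below gives one plausible compactly supported falloff and the "strongest field wins" rule by which decals compete for space; the falloff profile and all names are assumptions, not the paper's actual formulation, and the deformation step is omitted.

        #include <cmath>
        #include <vector>

        struct Vec3 { float x, y, z; };

        struct Decal {
            Vec3  center;    // field center on (or near) the surface
            float radius;    // support radius: the field is zero beyond it
            int   textureId; // image placed where this decal wins
        };

        // Compactly supported falloff: 1 at the center, 0 at the radius.
        float fieldValue(const Decal& d, const Vec3& p) {
            float dx = p.x - d.center.x, dy = p.y - d.center.y, dz = p.z - d.center.z;
            float t = std::sqrt(dx*dx + dy*dy + dz*dz) / d.radius;
            if (t >= 1.0f) return 0.0f;
            float s = 1.0f - t * t;
            return s * s;  // smooth Wyvill-style profile
        }

        // Decals compete for space: the strongest field wins at each point.
        int winningDecal(const std::vector<Decal>& decals, const Vec3& p) {
            int best = -1;
            float bestF = 0.0f;
            for (int i = 0; i < (int)decals.size(); ++i) {
                float f = fieldValue(decals[i], p);
                if (f > bestF) { bestF = f; best = i; }
            }
            return best;  // -1 means no decal covers p
        }

    Because the field is evaluated independently per surface point, a query of this kind fits naturally in a GPU shader, consistent with the real-time manipulation the paper reports.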

    Real-time High Resolution Fusion of Depth Maps on GPU

    A system for live high-quality surface reconstruction using a single moving depth camera on commodity hardware is presented. High accuracy and a real-time frame rate are achieved by utilizing graphics hardware computing capabilities via OpenCL and by using a sparse data structure for the volumetric surface representation. The depth sensor pose is estimated by combining a serial texture registration algorithm with an iterative closest point (ICP) algorithm that aligns the obtained depth map to the estimated scene model. The aligned surface is then fused into the scene, and a Kalman filter is used to improve fusion quality. A truncated signed distance function (TSDF) stored as a block-based sparse buffer is used to represent the surface. The use of a sparse data structure greatly increases the accuracy of scanned surfaces and the maximum scanning area. Traditional GPU implementations of the volumetric rendering and fusion algorithms were modified to exploit this sparsity and achieve the desired performance. Incorporating texture registration for sensor pose estimation and a Kalman filter for measurement integration improved the accuracy and robustness of the scanning process.
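
    A minimal sketch (in C++) of what per-voxel fusion with a Kalman filter could look like, assuming the filter is applied independently to each voxel's TSDF value: each voxel keeps an estimate and a variance, and a new depth-derived measurement is blended in with the standard scalar Kalman gain. The field names and noise model are assumptions, not details from the paper.

        #include <algorithm>

        struct Voxel {
            float tsdf;      // current truncated signed distance estimate
            float variance;  // uncertainty of that estimate
        };

        // Truncate a raw signed distance to [-1, 1] relative to the band mu.
        float truncateSdf(float sdf, float mu) {
            return std::max(-1.0f, std::min(1.0f, sdf / mu));
        }

        // Fuse one new TSDF measurement (with its noise variance) into a
        // voxel using the scalar Kalman update; the shrinking variance plays
        // the role of the integration weight in classic TSDF fusion.
        void fuseMeasurement(Voxel& v, float measuredTsdf, float measVariance) {
            float k = v.variance / (v.variance + measVariance);  // Kalman gain
            v.tsdf     += k * (measuredTsdf - v.tsdf);
            v.variance *= (1.0f - k);
        }

    In a block-based sparse buffer, only voxel blocks near observed surfaces are allocated, so updates like this touch a small fraction of the nominal volume.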

    Multiple dataset visualization (MDV) framework for scalar volume data

    Many applications require comparative analysis of multiple datasets representing different samples, conditions, time instants, or views in order to develop a better understanding of the scientific problem/system under consideration. One effective approach for such analysis is visualization of the data. In this PhD thesis, we propose an innovative multiple dataset visualization (MDV) approach in which two or more datasets of a given type are rendered concurrently in the same visualization. MDV is an important concept for cases where it is not possible to make an inference based on one dataset and comparisons between many datasets are required to reveal cross-correlations among them. The proposed MDV framework, which deals with some fundamental issues that arise when several datasets are visualized together, follows a multithreaded architecture consisting of three core components: data preparation/loading, visualization, and rendering. The visualization module, the major focus of this study, currently deals with isosurface extraction and texture-based rendering techniques. For isosurface extraction, our all-in-memory approach keeps the datasets under consideration and the corresponding geometric data in memory, whereas the alternative only-polygons-or-points-in-memory approach keeps only the geometric data in memory. To address issues related to storage and computation, we develop adaptive data coherency and multiresolution schemes. The inter-dataset coherency scheme exploits the similarities among datasets to approximate portions of the isosurfaces of datasets using the isosurface of one or more reference datasets, whereas the intra/inter-dataset multiresolution scheme processes selected portions of each data volume at varying levels of resolution. The graphics hardware-accelerated approaches adopted for MDV include volume clipping, isosurface extraction, and volume rendering, which use 3D textures and advanced per-fragment operations. With appropriate user-defined threshold criteria, we find that the various MDV techniques maintain a linear relationship between processing time and N (the number of datasets), improve geometry generation and rendering time, and increase the maximum N that can be handled. Finally, we demonstrate the effectiveness and usefulness of the proposed MDV by visualizing 3D scalar data (representing electron density distributions in magnesium oxide and magnesium silicate) from parallel quantum mechanical simulations.
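
    As a rough sketch of the concurrent, linear-in-N processing the framework aims for, the C++ skeleton below extracts an isosurface from each of N datasets on its own worker thread before the meshes are rendered together; the types and the stubbed extractor are placeholders, not the thesis's actual pipeline.

        #include <cstddef>
        #include <thread>
        #include <vector>

        struct Volume { /* scalar volume samples */ };
        struct Mesh   { /* isosurface geometry (polygons or points) */ };

        // Stub standing in for the real extractor (e.g., marching cubes).
        Mesh extractIsosurface(const Volume&, float /*isovalue*/) { return Mesh{}; }

        // One worker per dataset, so geometry generation scales with N.
        std::vector<Mesh> extractAll(const std::vector<Volume>& datasets,
                                     float isovalue) {
            std::vector<Mesh> meshes(datasets.size());
            std::vector<std::thread> workers;
            for (std::size_t i = 0; i < datasets.size(); ++i)
                workers.emplace_back([&meshes, &datasets, isovalue, i] {
                    meshes[i] = extractIsosurface(datasets[i], isovalue);
                });
            for (std::thread& t : workers) t.join();
            return meshes;  // all N meshes are then drawn in one scene
        }

    In this framing, the all-in-memory and only-geometry-in-memory strategies differ in whether the Volume objects stay resident after extraction.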

    Registration using Graphics Processor Unit

    Data point set registration is an important operation in coordinate metrology. Registration is the operation by which sampled point clouds are aligned with a CAD model by a 4x4 homogeneous transformation (e.g., rotation and translation). This alignment permits validation of the produced artifact's geometry. State-of-the-art metrology systems are now capable of generating thousands, if not millions, of data points during an inspection operation, requiring increased computational power to fully utilize these larger data sets. The registration process is an iterative nonlinear optimization whose execution time is directly related to the number of points processed and the complexity of the CAD model. The objective function to be minimized is the sum of the squared distances between each point in the point cloud and the closest surface in the CAD model. A brute-force approach to registration, which is often used, is to compute the minimum distance between each point and each surface in the CAD model; as point cloud sizes and CAD model complexity increase, this approach becomes intractable and inefficient. Highly efficient numerical and analytical gradient-based algorithms exist, and their goal is to converge to an optimal solution in minimum time. This thesis presents a new approach that performs the registration process efficiently by employing readily available computer hardware, the graphics processor unit (GPU). The data point set registration time on the GPU shows a significant improvement (around 15-20 times) over typical CPU performance. Efficient GPU programming decreases the complexity of the steps and improves the rate of convergence of the existing algorithms. The experiments reveal the exponentially increasing execution time on the CPU versus the linear performance of the GPU in various aspects of the algorithm. The importance of the CPU in GPU programming is also highlighted. Finally, possible extensions of the GPU approach to higher-order and more complex coordinate metrology algorithms are discussed.
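
    To make the objective concrete, here is a small C++ sketch of the sum-of-squared-distances function the optimizer minimizes, with the closest-point query abstracted behind a callback (a brute-force query over all surfaces in the simplest case). All names are illustrative; on the GPU the per-point queries would run in parallel and be reduced.

        #include <array>
        #include <functional>
        #include <vector>

        using Vec3 = std::array<float, 3>;
        using Mat4 = std::array<std::array<float, 4>, 4>;  // row-major homogeneous transform

        // Apply the rotation/translation part of a 4x4 transform to a point.
        Vec3 transformPoint(const Mat4& T, const Vec3& p) {
            Vec3 q{};
            for (int r = 0; r < 3; ++r)
                q[r] = T[r][0]*p[0] + T[r][1]*p[1] + T[r][2]*p[2] + T[r][3];
            return q;
        }

        // Objective minimized by the nonlinear optimizer: the sum of squared
        // distances from each transformed point to the nearest CAD surface.
        double objective(const std::vector<Vec3>& cloud, const Mat4& T,
                         const std::function<double(const Vec3&)>& closestDistance) {
            double sum = 0.0;
            for (const Vec3& p : cloud) {
                double d = closestDistance(transformPoint(T, p));
                sum += d * d;
            }
            return sum;
        }

    The per-point independence of this sum is what makes the GPU mapping attractive: every point's distance query can be evaluated in parallel before a single reduction.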

    Tessellated Voxelization for Global Illumination using Voxel Cone Tracing

    Modeling believable lighting is a crucial component of computer graphics applications, including games and modeling programs. Physically accurate lighting is complex and not currently feasible to compute in real-time situations; therefore, much research is focused on investigating efficient ways to approximate light behavior within these real-time constraints. In this thesis, we implement a general-purpose algorithm for real-time applications to approximate indirect lighting. Based on voxel cone tracing, we use a filtered representation of a scene to efficiently sample ambient light at each point in the scene. We present an approach to scene voxelization using hardware tessellation and compare it with an approach utilizing hardware rasterization. We also investigate possible methods of warped voxelization. Our contributions include a complete and open-source implementation of voxel cone tracing along with both voxelization algorithms. We find that the two voxelization approaches deliver similar performance and quality.
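
    For background on the technique itself, the C++ sketch below shows the basic cone-marching loop of voxel cone tracing: step along the cone axis, sample the prefiltered voxel scene at a mip level matching the cone's current diameter, and composite front-to-back. sampleVoxels() is a stub standing in for a mipmapped 3D texture fetch, and all names and constants are illustrative rather than taken from this thesis's implementation.

        #include <cmath>

        struct Vec3 { float x, y, z; };
        struct Vec4 { float r, g, b, a; };

        // Stub for sampling the filtered voxel scene at a given level of
        // detail; a real renderer fetches from a mipmapped 3D texture.
        Vec4 sampleVoxels(const Vec3&, float /*lod*/) {
            return {0.2f, 0.2f, 0.2f, 0.05f};  // constant "fog" for demonstration
        }

        Vec4 coneTrace(Vec3 origin, Vec3 dir, float halfAngleTan,
                       float voxelSize, float maxDist) {
            Vec4 acc{0.0f, 0.0f, 0.0f, 0.0f};
            float t = voxelSize;                    // start just off the surface
            while (t < maxDist && acc.a < 0.99f) {
                float diameter = 2.0f * halfAngleTan * t;
                float lod = std::log2(std::fmax(diameter / voxelSize, 1.0f));
                Vec3 p{origin.x + dir.x * t, origin.y + dir.y * t,
                       origin.z + dir.z * t};
                Vec4 s = sampleVoxels(p, lod);
                float w = (1.0f - acc.a) * s.a;     // front-to-back blending
                acc.r += w * s.r; acc.g += w * s.g; acc.b += w * s.b; acc.a += w;
                t += std::fmax(diameter, voxelSize) * 0.5f;  // widen steps with the cone
            }
            return acc;
        }

    Indirect lighting is then approximated by firing a few such cones over the hemisphere at each shaded point and summing their contributions.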