
    Rendering 3D Volumes Using Per-Pixel Displacement Mapping

    Rendering 3D Volumes Using Per-Pixel Displacement Mapping offers a simple and practical solution to the problem of seamlessly integrating many highly detailed 3D objects into a scene without rendering large sets of polygons or introducing the overhead of an obtrusive scene graph. This work takes advantage of modern programmable GPUs, as well as recent research on per-pixel displacement mapping, to achieve view-independent, fully 3D rendering with per-pixel level of detail. To achieve this, a box is used to bound texture-defined volumes. The box acts as a surface onto which the volume is drawn. By computing a viewing ray from the camera to a point on the box and using that point as the ray origin, the correct intersection with the texture volume can be found using various per-pixel displacement mapping techniques. Once the correct intersection is found, the final color value for the corresponding point on the box can be computed. The technique supports several effects drawn from established ray-casting and ray-tracing methods, such as reflection, refraction, and self-shadowing on models, as well as a simple animation scheme and an efficient method for finding distances through volumes. Copyright © 2007 by the Association for Computing Machinery, Inc.
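
    The abstract describes a per-pixel loop: build a viewing ray, find where it enters the bounding box, then search along the ray for the first intersection with the texture-defined volume and shade that point. The sketch below is a minimal CPU illustration of that idea under stated assumptions: a procedural signed-distance function stands in for the paper's 3D textures, a simple sphere-tracing search stands in for its GPU per-pixel displacement mapping techniques, and all names, sizes, and parameters are illustrative rather than taken from the paper.

```cpp
// Minimal CPU sketch of box-bounded ray marching into a volume.
// The "volume" here is a procedural signed distance field (a sphere);
// the paper samples texture-defined volumes on the GPU instead.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float length(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }
static Vec3 normalize(Vec3 a) { return scale(a, 1.0f / length(a)); }

// Stand-in for sampling the texture-defined volume:
// signed distance to a sphere of radius 0.5 centered in the box.
static float sampleVolume(Vec3 p) { return length(p) - 0.5f; }

// Intersect a ray with the bounding box [-1,1]^3 (slab method).
// On a hit, tNear/tFar bracket the segment of the ray inside the box.
static bool intersectBox(Vec3 o, Vec3 d, float& tNear, float& tFar) {
    float orig[3] = {o.x, o.y, o.z};
    float dir[3]  = {d.x, d.y, d.z};
    tNear = -1e30f; tFar = 1e30f;
    for (int i = 0; i < 3; ++i) {
        float t0 = (-1.0f - orig[i]) / dir[i];
        float t1 = ( 1.0f - orig[i]) / dir[i];
        if (t0 > t1) std::swap(t0, t1);
        tNear = std::max(tNear, t0);
        tFar  = std::min(tFar, t1);
    }
    return tNear <= tFar && tFar > 0.0f;
}

int main() {
    const int W = 40, H = 20;               // tiny ASCII "framebuffer"
    Vec3 camera = {0.0f, 0.0f, -3.0f};
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // Build a viewing ray from the camera through this pixel.
            Vec3 dir = normalize({(x / (float)W - 0.5f) * 2.0f,
                                  (0.5f - y / (float)H) * 2.0f,
                                  1.0f});
            float tNear, tFar;
            char shade = '.';
            if (intersectBox(camera, dir, tNear, tFar)) {
                // Start at the point where the ray enters the box and search
                // along the ray for the first hit inside the volume.
                float t = std::max(tNear, 0.0f);
                while (t < tFar) {
                    Vec3 p = add(camera, scale(dir, t));
                    float d = sampleVolume(p);
                    if (d <= 0.001f) { shade = '#'; break; }  // hit: shade this pixel
                    t += std::max(d, 0.01f);  // distance-based step through the volume
                }
            }
            std::putchar(shade);
        }
        std::putchar('\n');
    }
    return 0;
}
```

    The box-entry point plays the role the abstract assigns to the point on the box: it becomes the origin of the search, so only the portion of the ray inside the volume is ever sampled.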