
    In Proceedings of SIGGRAPH 2004, Sketches and Applications track

    Traditionally, displacement maps have been rendered with micropolygons [Cook et al. 1987]. In both ray tracers and real-time systems, the resulting high polygon counts lead to memory and bandwidth inefficiency and to high geometric transformation costs, which limit performance. More recently, displacement maps have also been rendered directly in ray tracing, using iterative root-finding methods [Heidrich and Seidel 1998]. Here, we propose a similar ray intersection approach for real-time rendering. However, a fully iterative solution is infeasible in graphics hardware because of limits on texture indirections: a texture indirection occurs when the result of one texture access affects the coordinates of a subsequent access, as in each step of an iterative solution. Indirections are currently limited to 4 on the Radeon 9700/9800, whereas a total of 32 texture accesses are allowed per fragment. This has led us to the hybrid sampling/iterative approach described here.
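
    A minimal CPU-side sketch of the hybrid sampling/iterative idea, written in NumPy rather than fragment-shader code: a fixed budget of samples along the ray brackets the first crossing of the displaced surface, and a few bisection steps refine the bracket. The function and parameter names (sample_height, n_samples, n_refine) are illustrative, not the paper's; the actual method runs per fragment within the hardware's indirection limits.

```python
import numpy as np

def sample_height(hmap, u, v):
    # Nearest-neighbour heightfield lookup; u, v in [0, 1].
    h, w = hmap.shape
    i = int(np.clip(v, 0.0, 1.0) * (h - 1))
    j = int(np.clip(u, 0.0, 1.0) * (w - 1))
    return hmap[i, j]

def intersect_heightfield(hmap, origin, direction, n_samples=8, n_refine=4):
    """Hybrid intersection: a fixed budget of samples along the ray brackets
    the first point where the ray dips below the heightfield, then a few
    bisection steps refine the bracket. The ray is p(t) = origin + t * direction,
    with (x, y) used as texture coordinates and z compared against the height."""
    ts = np.linspace(0.0, 1.0, n_samples)
    p = origin + ts[0] * direction
    prev_t = ts[0]
    prev_above = p[2] > sample_height(hmap, p[0], p[1])
    for t in ts[1:]:
        p = origin + t * direction
        above = p[2] > sample_height(hmap, p[0], p[1])
        if prev_above and not above:        # crossing bracketed in (prev_t, t)
            lo, hi = prev_t, t
            for _ in range(n_refine):       # iterative refinement by bisection
                mid = 0.5 * (lo + hi)
                q = origin + mid * direction
                if q[2] > sample_height(hmap, q[0], q[1]):
                    lo = mid
                else:
                    hi = mid
            return origin + 0.5 * (lo + hi) * direction
        prev_t, prev_above = t, above
    return None  # no crossing within the sampled range

# Usage with a toy heightfield: a flat surface at height 0.5.
hmap = 0.5 * np.ones((64, 64))
hit = intersect_heightfield(hmap, np.array([0.2, 0.2, 1.0]),
                            np.array([0.3, 0.3, -1.0]))
print(hit)  # crosses the surface near t = 0.5
```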

    Dynamic Textures for Image-based Rendering of Fine-Scale 3D Structure and Animation of Non-rigid Motion

    The problem of capturing real-world scenes and then accurately rendering them is particularly difficult for fine-scale 3D structure. Similarly, it is difficult to capture, model, and animate non-rigid motion. We present a method in which small image changes are captured as a time-varying (dynamic) texture. In particular, a coarse geometry is obtained from a sample set of images using structure from motion. This geometry is then used to subdivide the scene and to extract approximately stabilized texture patches. The residual statistical variability in the texture patches is captured using a PCA basis of spatial filters. The filter coefficients are parameterized by camera pose and object motion. To render new poses and motions, new texture patches are synthesized by modulating the texture basis. The texture is then warped back onto the coarse geometry. We demonstrate how the texture modulation and projective homography-based warps can be achieved in real time using hardware-accelerated OpenGL.
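
    A minimal sketch of the texture-basis step, assuming stabilized patches are already available: the patches are stacked, a PCA basis of spatial filters is extracted via SVD, and a new patch is synthesized by modulating the basis with coefficients. The mapping from camera pose and object motion to coefficients is omitted here, and all names are illustrative rather than the paper's.

```python
import numpy as np

def build_texture_basis(patches, n_components=8):
    """patches: (n_frames, h, w) array of approximately stabilized patches.
    Returns the mean patch and the top n_components PCA basis patches."""
    n, h, w = patches.shape
    X = patches.reshape(n, h * w)
    mean = X.mean(axis=0)
    # SVD of the centered data yields the principal spatial filters.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean.reshape(h, w), Vt[:n_components].reshape(-1, h, w)

def synthesize_patch(mean, basis, coeffs):
    # Modulate the basis by the coefficients to synthesize a new patch.
    return mean + np.tensordot(coeffs, basis, axes=1)

# Usage with stand-in data; real patches would come from the stabilized
# texture extraction described above.
patches = np.random.rand(20, 16, 16)
mean, basis = build_texture_basis(patches, n_components=4)
patch = synthesize_patch(mean, basis, np.array([0.5, -0.2, 0.1, 0.0]))
```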

    Editing real world scenes: Augmented reality with image-based rendering

    We present a method that, using only an uncalibrated camera, allows the capture of object geometry and appearance, and then, at a later stage, registration and AR overlay into a new scene. Using only image information, a coarse object geometry is first obtained via structure from motion; a dynamic, view-dependent texture is then estimated to account for the differences between the reprojected coarse model and the training images. For AR rendering, the object structure is interactively aligned in one frame by the user, object and scene structure are registered, and subsequent frames are rendered by a virtual scene camera whose parameters are estimated from real-time visual tracking. Using the same viewing geometry for object acquisition, registration, and rendering ensures consistency and minimizes errors.
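
    A minimal sketch of the overlay step, assuming the user's one-frame alignment has already produced a rigid transform T from object to scene coordinates and that the tracker supplies a 3x4 scene camera P for each frame; estimating T and P is the structure-from-motion and tracking part of the method and is not shown.

```python
import numpy as np

def to_homogeneous(pts):
    # Append a 1 to each point: (n, 3) -> (n, 4).
    return np.hstack([pts, np.ones((pts.shape[0], 1))])

def overlay_points(P, T, object_pts):
    """Project object points into the current frame: x ~ P * T * X,
    where T (4x4) registers object coordinates into the scene and
    P (3x4) is the tracked scene camera."""
    X = to_homogeneous(object_pts) @ T.T   # object -> scene coordinates
    x = X @ P.T                            # scene -> homogeneous image coords
    return x[:, :2] / x[:, 2:3]            # dehomogenize to pixels

# Usage with placeholder values: identity alignment, canonical camera.
T = np.eye(4)
P = np.hstack([np.eye(3), np.zeros((3, 1))])
pts = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.0]])
print(overlay_points(P, T, pts))
```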

    Image-based Rendering using Hardware Accelerated Dynamic Textures

    With recent improvements in consumer graphics hardware, image-based rendering in real time is possible by modulating (blending) a large basis of transparent textures. We make efficient use of this by developing a two-stage model, in which a high-quality rendering is achieved by combining an approximate geometric model with a time-varying dynamic texture blended from the basis. The dynamic texture compensates for inaccuracies in the approximate geometry by encoding the resulting texture intensity errors, much as in MPEG movie compression, but parameterizing the variability by pose rather than time, which allows the interpolation of arbitrary views. Additionally, we show how this model can be captured from uncalibrated images using an IEEE 1394 digital web cam and real-time tracking. We present experiments in capturing and rendering everyday objects such as flowers and houses.
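
    A minimal sketch of the two-stage combination, with an assumed inverse-distance weighting over the training poses standing in for the paper's pose parameterization: the rendered texture is the reprojected base texture plus a pose-weighted blend of residual (intensity-error) textures.

```python
import numpy as np

def blend_weights(pose, train_poses, eps=1e-6):
    # Inverse-distance weights over the captured poses, normalized to sum to 1.
    d = np.linalg.norm(train_poses - pose, axis=1)
    w = 1.0 / (d + eps)
    return w / w.sum()

def render_texture(base, residual_basis, pose, train_poses):
    """base: (h, w) texture obtained by reprojecting the approximate geometry.
    residual_basis: (n, h, w) residual (error) textures, one per training pose.
    Returns the corrected texture for the query pose."""
    w = blend_weights(pose, train_poses)
    return np.clip(base + np.tensordot(w, residual_basis, axes=1), 0.0, 1.0)

# Usage with stand-in data: 5 training poses in a 2-D pose space.
train_poses = np.random.rand(5, 2)
base = np.random.rand(16, 16)
residuals = 0.1 * np.random.randn(5, 16, 16)
tex = render_texture(base, residuals, np.array([0.5, 0.5]), train_poses)
```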

    Modulating View-dependent Textures

    We present a texturing approach for image-based modeling and rendering where, instead of using one (or a blend of a few) sample images, new view-dependent textures are synthesized by modulating a differential texture basis.
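
    The sketches above leave the view-to-coefficient mapping unspecified; one simple stand-in, assumed here for illustration, is a linear least-squares fit from viewing parameters to the training coefficients, which can then predict modulation coefficients for a new view.

```python
import numpy as np

def fit_view_to_coeffs(view_params, coeffs):
    """view_params: (n, d) viewing parameters of the training images.
    coeffs: (n, k) basis coefficients of the training images.
    Returns a (d+1, k) linear map that includes a bias term."""
    A = np.hstack([view_params, np.ones((view_params.shape[0], 1))])
    M, *_ = np.linalg.lstsq(A, coeffs, rcond=None)
    return M

def predict_coeffs(M, view):
    # Predicted coefficients then modulate the differential texture basis.
    return np.append(view, 1.0) @ M

# Usage with stand-in data: 20 training views, 3 view parameters, 4 coefficients.
views = np.random.rand(20, 3)
coeffs = np.random.randn(20, 4)
M = fit_view_to_coeffs(views, coeffs)
print(predict_coeffs(M, np.array([0.1, 0.2, 0.3])))
```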

    March 2003

    A long-standing goal in image-based modeling and rendering is to capture a scene from camera images and construct a sufficient model to allow photo-realistic rendering of new views. With the confluence of computer graphics and vision, the combination of research on recovering geometric structure from uncalibrated cameras with modeling and rendering has yielded numerous new methods. Yet many challenging issues remain to be addressed before a sufficiently general and robust system could be built to, e.g., allow an average user to model their home and garden from camcorder video.