30 research outputs found

    An Integral geometry based method for fast form-factor computation

    Monte Carlo techniques have been widely used in rendering algorithms for local integration, for example to compute the contribution of one patch to the luminance of another. In this paper we propose an algorithm based on Integral Geometry in which Monte Carlo is applied globally. We give results of an implementation to validate the proposition, and we study both the error and the complexity of the technique.
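    The local Monte Carlo integration that the abstract contrasts with can be sketched as follows. This is a toy estimator, not the paper's global method: it averages the point-to-point form-factor kernel between two parallel unit squares facing each other at distance d (all names and the geometry are illustrative assumptions):

```python
import math
import random

def form_factor_mc(d=10.0, n=20000, seed=1):
    """Monte Carlo estimate of the form factor between two parallel
    unit squares facing each other at distance d along z.
    Averages the kernel cos(t1)*cos(t2) / (pi * r^2) over point pairs;
    the receiver area A2 = 1, so no extra factor is needed."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        # one uniformly random point on each patch
        x1, y1 = rng.random(), rng.random()
        x2, y2 = rng.random(), rng.random()
        dx, dy, dz = x2 - x1, y2 - y1, d
        r2 = dx * dx + dy * dy + dz * dz
        cos1 = dz / math.sqrt(r2)   # both normals point along z
        cos2 = cos1
        acc += cos1 * cos2 / (math.pi * r2)
    return acc / n
```

At d = 10 the patches are small relative to their separation, so the estimate lands close to the far-field value A2 / (pi * d^2) ≈ 0.00318.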

    Point-based modeling from a single image

    The complexity of virtual environments has grown spectacularly in recent years, mainly thanks to cheap, high-performance graphics cards. As graphics cards improve and geometric complexity grows, many of the objects in a scene project to only a few pixels on the screen. Transforming and clipping the many polygons of such objects is wasted effort, since they could be substituted by a single point or a small set of points. Recently, efficient rendering algorithms for point models have been proposed; however, little attention has been paid to building a point-based modeler that exploits the advantages such a representation can provide. In this paper we present a modeler, built entirely on points, that can generate 3D geometry from an image. It takes an image as input and creates a point-based representation from it; a set of operators then lets the user modify the geometry to produce 3D geometry from the image. With our system it is possible to quickly generate complex geometries that would be difficult to model with a polygon-based modeler.
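    The first step the abstract describes (an image becomes a point set, ready for later geometry operators) could look like this minimal sketch; the data layout and the `depth_fn` hook are assumptions for illustration, not the paper's actual scheme:

```python
def image_to_points(pixels, width, height, depth_fn=lambda x, y: 0.0):
    """Turn an image (flat row-major list of RGB tuples) into a point set:
    one coloured point per pixel, initially placed on a plane.
    depth_fn stands in for the editing operators that later push
    points out of the image plane to form 3D geometry."""
    points = []
    for y in range(height):
        for x in range(width):
            r, g, b = pixels[y * width + x]
            points.append({"pos": (x, y, depth_fn(x, y)),
                           "color": (r, g, b)})
    return points
```

A 2x2 red image, for instance, yields four coplanar red points that an operator could then displace in z.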

    A Global algorithm for linear radiosity

    A radiosity algorithm that is linear in both time and storage is presented. The new algorithm is based on previous work by the authors and on the well-known algorithms for progressive radiosity and Monte Carlo particle transport.
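    The Monte Carlo particle-transport ingredient the abstract builds on can be caricatured in a few lines; the geometry here is faked (a random patch stands in for a ray-cast hit), so this only illustrates why cost stays linear in particles and patches:

```python
import random

def shoot_particles(patches, n_particles, seed=0):
    """Toy particle-transport radiosity step: each particle carries an
    equal share of the emitters' total power and deposits the reflected
    part of it on a receiving patch. The receiving patch is chosen at
    random here as a stand-in for actual ray casting. Time is O(particles)
    and storage O(patches), echoing the abstract's linearity claim."""
    rng = random.Random(seed)
    total_power = sum(p["emission"] for p in patches)
    share = total_power / n_particles
    radiosity = [0.0] * len(patches)
    for _ in range(n_particles):
        i = rng.randrange(len(patches))          # fake "hit" patch
        radiosity[i] += share * patches[i]["reflectance"]
    return radiosity
```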

    Bandwidth reduction techniques for remote navigation systems

    In this paper we explore a set of techniques to reduce the bandwidth required by remote navigation systems. These systems, such as exploration of virtual 3D worlds or remote surgery, usually require more bandwidth than the Internet connection commonly available at home. Our system consists of a client PC equipped with a graphics card and a remote high-end server that hosts the remote environment and serves information to several clients. Each time the client needs a frame, the new image is predicted by both the client and the server, and only the difference from the exact image is sent to the client. To reduce bandwidth we improve the prediction method by exploiting spatial coherence and by wiping correct pixels out of the difference image. In this way we achieve reduction ratios of up to 9:1 without loss of quality. These methods can be applied to head-mounted displays or to any remote navigation software.
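    The core idea (both sides predict the frame, and only wrong pixels travel) can be sketched as a pair of functions; the flat-list frame representation is a simplification for illustration:

```python
def diff_mask(predicted, actual):
    """Server side: the 'difference image' with correct pixels wiped
    out. Only (index, correct_value) pairs for mispredicted pixels
    are kept for transmission."""
    return [(i, a) for i, (p, a) in enumerate(zip(predicted, actual))
            if p != a]

def apply_diff(predicted, diff):
    """Client side: patch the locally predicted frame with the
    corrections received from the server."""
    frame = list(predicted)
    for i, v in diff:
        frame[i] = v
    return frame
```

When the prediction is mostly right, the transmitted list is a small fraction of the frame, which is where the reduction ratio comes from.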

    Bandwidth reduction for remote navigation systems through view prediction and progressive transmission

    Remote navigation systems, such as exploration of virtual 3D worlds or remote surgery, usually require more bandwidth than the Internet connection commonly available at home. In this paper we explore a set of techniques to reduce the bandwidth required by these applications. Our system consists of a client PC equipped with a graphics card and a remote high-end server. The server hosts the remote environment, performs the actual rendering of the scenes for several clients, and passes the new image to them. This scheme is suitable when the data is copyrighted or when its size exceeds the rendering capabilities of the client. The general scheme is the following: each time the position changes, the new view is predicted by both the client and the server, and the difference between the predicted view and the correct one is sent to the client. Bandwidth can be reduced by improving either the prediction method or the transmission system. We present two groups of techniques. First, a set of lossless methods that achieve reductions of up to a 9:1 ratio: a combination of two-level forward warping, which takes advantage of spatial coherence, and a masking method, which transmits only the information that actually needs to be updated. Second, a set of lossy methods, suitable for very low-bandwidth environments, that involve both progressive transmission and image reuse. These methods consider relevant parameters such as the number of pixels, the amount of information they provide, and their colour deviation in order to prioritize the transmission of information, yielding up to an additional 4:1 ratio. The quality of the generated images is very high, and they are often indistinguishable from the correct ones.
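    The lossy prioritization step can be sketched as a scoring pass over image blocks. The weighting below (pixel count times colour deviation) is a guessed stand-in; the abstract's actual strategy also weighs the amount of information each block provides:

```python
def prioritize_blocks(blocks):
    """Order image blocks for progressive transmission: blocks whose
    colour deviates most from the prediction, weighted by how many
    pixels they cover, are sent first. Scoring formula is illustrative."""
    def score(b):
        return b["pixels"] * b["color_deviation"]
    return sorted(blocks, key=score, reverse=True)
```

Under this scoring, a small block that is badly mispredicted can outrank a large block that is almost right.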

    On the fly best view detection using graphics hardware

    Selection of good camera positions has many applications in Computer Graphics: it can be used to compute a walkthrough of a scene that shows a high amount of information, or to select a minimal set of views for Image-Based Rendering. However, selecting a good view is costly, even when using OpenGL for fast rendering, because a large number of camera positions must be analyzed. In this paper we show how histograms, an OpenGL extension available in current graphics hardware, can be used together with an adaptive algorithm to obtain, on the fly, best views of objects of moderate complexity (several thousand polygons).
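    The histogram trick is: render each face with a unique colour id, then read the histogram to learn how many pixels each face covers. The sketch below uses a Python Counter as a stand-in for the hardware histogram, and scores a view simply by the number of visible faces; the papers use richer measures:

```python
from collections import Counter

def view_quality(id_buffer, n_faces):
    """Score one candidate view from an item buffer in which face i
    was rendered with colour id i. Counter plays the role of the
    OpenGL histogram extension; the score here is just the count of
    faces with nonzero pixel coverage."""
    hist = Counter(id_buffer)
    return sum(1 for f in range(n_faces) if hist[f] > 0)
```

An adaptive search would evaluate this score at a coarse set of camera positions and refine around the best ones.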

    Automatic indoor scene exploration

    Automatic computation of the best views of an object is very useful. For example, such views can serve as the starting point for a scene exploration, or can enrich galleries of objects available on the Internet by adding an image of each model that helps users decide whether it is worth downloading. Recently, a measure called viewpoint entropy, grounded in Information Theory, has been proposed to evaluate the quality of a view: the best view is the one that gives the most information about the object being inspected. For large models, a set of good views that covers all the faces can help the user understand an object or scene. For very complex environments, however, a set of images may not suffice; when examining buildings, for instance, it may be difficult to locate in space the positions where the viewpoints were placed. In these cases, an interactive exploration of the model can better help the user understand the structure of the scene. In this paper we present an automatic method for scene exploration that uses viewpoint entropy.
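    Viewpoint entropy, as the abstract describes it, is the Shannon entropy of the relative projected areas of the faces seen from a viewpoint; a minimal sketch, assuming projected areas are already available (e.g. from the histogram technique above):

```python
import math

def viewpoint_entropy(projected_areas):
    """Shannon entropy of the relative projected areas of the visible
    faces (the background can be included as one more 'face').
    Higher entropy means a more informative view: area spread evenly
    over many faces scores high, one dominant face scores low."""
    total = sum(projected_areas)
    h = 0.0
    for a in projected_areas:
        if a > 0:
            p = a / total
            h -= p * math.log2(p)
    return h
```

Four equally visible faces give entropy 2 bits; a view where a single face fills the screen gives 0.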
