    Perception and Motion: use of Computer Vision to solve Geometry Processing problems

    Computer vision and geometry processing are often seen as two distinct and, in a certain sense, distant fields: the former works on two-dimensional data, while the latter needs three-dimensional information. But are 2D and 3D data really disconnected? Consider human vision: each eye captures patterns of light, which the brain then uses to reconstruct a perception of the observed scene. Similarly, if the eye detects a variation in those patterns of light, we understand that the scene is not static; we perceive the motion of one or more objects in the scene. In this work, we show how the perception of 2D motion can be used to solve two significant problems, both dealing with three-dimensional data. In the first part, we show how the so-called optical flow, which represents the observed motion, can be used to estimate the alignment error of a set of digital cameras looking at the same object. In the second part, we show how the detected 2D motion of an object can be used to better understand its underlying geometric structure, by detecting its rigid parts and the way they are connected.
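    The abstract does not spell out the method, but the core idea — explaining observed 2D motion by a rigid transform and reading the leftover residual as error — can be sketched as a 2D Procrustes/Kabsch fit. All names below are illustrative, not from the thesis: given point correspondences before and after motion (e.g., tracked optical-flow endpoints), we fit the best rotation + translation and report how much motion it fails to explain.

    ```python
    import math

    def fit_rigid_2d(src, dst):
        """Least-squares 2D rotation + translation mapping src points onto dst
        (a 2D Kabsch/Procrustes fit). Returns (angle, tx, ty, rms_residual);
        a large residual means the motion is not a single rigid transform."""
        n = len(src)
        cx_s = sum(p[0] for p in src) / n; cy_s = sum(p[1] for p in src) / n
        cx_d = sum(p[0] for p in dst) / n; cy_d = sum(p[1] for p in dst) / n
        # Cross-covariance terms of the centred point sets.
        sxx = sxy = syx = syy = 0.0
        for (xs, ys), (xd, yd) in zip(src, dst):
            xs -= cx_s; ys -= cy_s; xd -= cx_d; yd -= cy_d
            sxx += xs * xd; sxy += xs * yd; syx += ys * xd; syy += ys * yd
        angle = math.atan2(sxy - syx, sxx + syy)
        c, s = math.cos(angle), math.sin(angle)
        tx = cx_d - (c * cx_s - s * cy_s)
        ty = cy_d - (s * cx_s + c * cy_s)
        # Residual: how far the best rigid motion fails to explain the flow.
        err = 0.0
        for (xs, ys), (xd, yd) in zip(src, dst):
            px = c * xs - s * ys + tx
            py = s * xs + c * ys + ty
            err += (px - xd) ** 2 + (py - yd) ** 2
        return angle, tx, ty, math.sqrt(err / n)
    ```

    Fitting one such transform per candidate point group, and splitting groups whose residual stays high, is one plausible route from 2D flow to the rigid parts the abstract mentions.
    
    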

    Large-scale point-cloud visualization through localized textured surface reconstruction

    In this paper, we introduce a novel scene representation for the visualization of large-scale point clouds accompanied by a set of high-resolution photographs. Many real-world applications deal with very densely sampled point-cloud data, which are augmented with photographs that often reveal lighting variations and inaccuracies in registration. Consequently, the high-quality representation of the captured data, i.e., both point clouds and photographs together, is a challenging and time-consuming task. We propose a two-phase approach, in which the first (preprocessing) phase generates multiple overlapping surface patches and handles the problem of seamless texture generation locally for each patch. The second phase stitches these patches at render-time to produce a high-quality visualization of the data. As a result of the proposed localization of the global texturing problem, our algorithm is more than an order of magnitude faster than equivalent mesh-based texturing techniques. Furthermore, since our preprocessing phase requires only a minor fraction of the whole data set at once, we provide maximum flexibility when dealing with growing data sets.
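    The paper's patch generation is not detailed in this abstract; as a hedged sketch of the general idea, the following splits points into grid-cell patches and duplicates points near cell borders into neighbouring patches, so each overlapping patch can be processed (e.g., textured) independently. Function name, 2D simplification, and parameters are all assumptions for illustration.

    ```python
    from collections import defaultdict

    def overlapping_patches(points, cell=1.0, overlap=0.25):
        """Partition 2D points into grid-cell patches; points that lie within
        `overlap` of a cell border are also copied into the adjacent patches,
        giving the overlap a later stitching step can rely on."""
        patches = defaultdict(list)
        for x, y in points:
            ix, iy = int(x // cell), int(y // cell)
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    jx, jy = ix + dx, iy + dy
                    # Distance from the point to cell (jx, jy); 0 if inside it.
                    ddx = max(jx * cell - x, x - (jx + 1) * cell, 0.0)
                    ddy = max(jy * cell - y, y - (jy + 1) * cell, 0.0)
                    if max(ddx, ddy) <= overlap:
                        patches[(jx, jy)].append((x, y))
        return dict(patches)
    ```

    Because each patch is self-contained, only a small fraction of the data set needs to be in memory at once — the property the abstract highlights for growing data sets.
    
    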

    Creating 3D models of cultural heritage sites with terrestrial laser scanning and 3D imaging

    The advent of terrestrial laser scanners made the digital preservation of cultural heritage sites affordable, producing accurate and detailed 3D computer-model representations of any kind of 3D object, such as buildings, infrastructure, and even entire landscapes. However, one of the key issues with this technique is the large number of recorded points; a problem further intensified by recent advances in laser-scanning technology, which increased data acquisition rates from 25 thousand to 1 million points per second. The following research presents a workflow for processing large-volume laser-scanning data, with a special focus on the needs of the Zamani initiative. The research project, based at the University of Cape Town, spatially documents African cultural heritage sites and landscapes and produces meshed 3D models of various historically important objects, such as fortresses, mosques, churches, castles, palaces, rock-art shelters, statues, stelae, and even landscapes.
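    The thesis's workflow is not given in this abstract, but a standard first step for scans of this density (a common technique, not necessarily the one used by the Zamani project) is voxel-grid downsampling: all points falling in the same cubic voxel are averaged into one representative point. A minimal sketch, with an assumed voxel size:

    ```python
    from collections import defaultdict

    def voxel_downsample(points, voxel=0.05):
        """Reduce a 3D point cloud by averaging all points that fall in the
        same cubic voxel of side `voxel` (in the cloud's units, e.g. metres)."""
        cells = defaultdict(list)
        for p in points:
            key = tuple(int(c // voxel) for c in p)  # integer voxel index
            cells[key].append(p)
        out = []
        for pts in cells.values():
            n = len(pts)
            out.append(tuple(sum(c[i] for c in pts) / n for i in range(3)))
        return out
    ```

    At a million points per second of acquisition, reductions like this are what keep meshing and visualization tractable downstream.
    
    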

    Controlled and adaptive mesh zippering

    Merging meshes is a recurrent need in geometry modeling and a critical step in the 3D acquisition pipeline, where it is used to build a single mesh from several range scans. A pioneering, simple, and effective solution to merging is the Zippering algorithm (Turk and Levoy, 1994), which stitches the meshes together along their borders. In this paper we propose an extended version of the zippering algorithm that enables the user to control the resulting mesh by introducing quality criteria into the selection of redundant data, and allows meshes of different granularity to be zipped together by means of an ad hoc refinement algorithm.
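    The paper's specific quality criteria are not stated in the abstract; one common criterion of the kind it alludes to is triangle shape quality, sketched below. Where two range scans overlap and triangles are redundant, the better-shaped one could be kept. Function names and the particular radius-ratio-style measure are illustrative assumptions, not the paper's definition.

    ```python
    import math

    def triangle_quality(a, b, c):
        """Shape quality in [0, 1]: 1 for an equilateral triangle, tending to
        0 for degenerate slivers. Normalised as 4*sqrt(3)*area / sum of
        squared edge lengths."""
        la, lb, lc = math.dist(b, c), math.dist(a, c), math.dist(a, b)
        s = 0.5 * (la + lb + lc)
        area2 = max(s * (s - la) * (s - lb) * (s - lc), 0.0)  # Heron's formula
        area = math.sqrt(area2)
        if area == 0.0:
            return 0.0
        return 4.0 * math.sqrt(3.0) * area / (la * la + lb * lb + lc * lc)

    def keep_better(tri_a, tri_b):
        """Of two redundant overlap triangles, keep the better-shaped one."""
        return tri_a if triangle_quality(*tri_a) >= triangle_quality(*tri_b) else tri_b
    ```

    Preferring well-shaped triangles in the overlap region avoids carrying the thin slivers that plain border stitching tends to produce into the merged mesh.
    
    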