
    Methods for Volumetric Reconstruction of Visual Scenes

    In this paper, we present methods for 3D volumetric reconstruction of visual scenes photographed by multiple calibrated cameras placed at arbitrary viewpoints. Our goal is to generate a 3D model that can be rendered to synthesize new photo-realistic views of the scene. We improve upon existing voxel coloring/space carving approaches by introducing new ways to compute visibility and photo-consistency, as well as to model scenes of unbounded extent. In particular, we describe a visibility approach that uses all possible color information from the photographs during reconstruction, photo-consistency measures that are more robust and/or require less manual intervention, and a volumetric warping method for applying these reconstruction methods to large-scale scenes.
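
The check at the heart of voxel coloring/space carving can be sketched as a variance test on the colors that the visible cameras observe for a voxel; the paper's own measures are more robust than this, and the function names and the threshold `tau` below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def project(P, X):
    """Project homogeneous 3D point X (4,) with a 3x4 camera matrix P to pixel coords."""
    x = P @ X
    return x[:2] / x[2]

def photo_consistent(colors, tau=10.0):
    """Declare a voxel photo-consistent if the RGB colors sampled from the
    cameras that see it have low variance (Lambertian assumption)."""
    colors = np.asarray(colors, dtype=float)   # shape (n_views, 3)
    sigma = np.sqrt(colors.var(axis=0).sum())  # pooled std over channels
    return sigma < tau
```

A voxel observed with nearly identical colors in all unoccluded views passes the test; a voxel whose projections disagree strongly is carved away.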

    Creating virtual models from uncalibrated camera views

    The reconstruction of photorealistic 3D models from camera views is becoming a ubiquitous element in many applications that simulate physical interaction with the real world. In this paper, we present a low-cost, interactive pipeline aimed at non-expert users that achieves 3D reconstruction from multiple views acquired with a standard digital camera. 3D models are amenable to access through diverse representation modalities that typically imply trade-offs between level of detail, interaction, and computational costs. Our approach allows users to selectively control the complexity of different surface regions while requiring only simple 2D image editing operations. An initial reconstruction at coarse resolution is followed by an iterative refinement of the surface areas corresponding to the selected regions.
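
The selective refinement idea can be sketched as 1-to-4 midpoint subdivision restricted to user-selected triangles, leaving the rest of the mesh coarse. This is an illustrative simplification (it ignores the T-junctions a real pipeline would resolve at selection boundaries), and all names are hypothetical:

```python
import numpy as np

def subdivide_selected(vertices, faces, selected):
    """Refine only the triangles whose indices are in `selected` by 1-to-4
    midpoint subdivision; untouched faces keep the coarse resolution."""
    vertices = list(map(np.asarray, vertices))
    midpoint_cache = {}   # edge (i, j) -> index of its midpoint vertex

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            vertices.append((vertices[i] + vertices[j]) / 2.0)
            midpoint_cache[key] = len(vertices) - 1
        return midpoint_cache[key]

    new_faces = []
    for fi, (a, b, c) in enumerate(faces):
        if fi in selected:
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        else:
            new_faces.append((a, b, c))
    return vertices, new_faces
```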

    Multi-view reconstruction using photo-consistency and exact silhouette constraints: a maximum-flow formulation


    Integral imaging techniques for flexible sensing through image-based reprojection

    In this work, a 3D reconstruction approach for flexible sensing inspired by integral imaging techniques is proposed. The method supports several integral imaging operations, such as generating a depth map or reconstructing images on a given 3D plane of the scene, from photographs taken with a set of cameras located at unknown and arbitrary positions and orientations. Using a photo-consistency measure proposed in this work, all-in-focus images can also be generated by projecting the points of the 3D plane into the sensor planes of the cameras and capturing the associated RGB values. The proposed method obtains consistent results on real scenes with different object surfaces as well as changes in texture and lighting.
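
The reprojection step described above can be sketched with a plain pinhole model: each point of the chosen 3D plane is projected into every camera and the sampled RGB values are averaged. This is a minimal sketch under assumed calibration (K, R, t per camera) with nearest-neighbor sampling; it omits the paper's photo-consistency weighting:

```python
import numpy as np

def project_point(K, R, t, X):
    """Pinhole projection of world point X into pixel coordinates."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def reconstruct_on_plane(images, cams, plane_z, xs, ys):
    """For each (x, y) on the plane z = plane_z, project into every camera,
    sample the RGB value there, and average the samples."""
    out = np.zeros((len(ys), len(xs), 3))
    for iy, y in enumerate(ys):
        for ix, x in enumerate(xs):
            X = np.array([x, y, plane_z])
            samples = []
            for img, (K, R, t) in zip(images, cams):
                u, v = project_point(K, R, t, X)
                u, v = int(round(u)), int(round(v))
                if 0 <= v < img.shape[0] and 0 <= u < img.shape[1]:
                    samples.append(img[v, u])
            if samples:
                out[iy, ix] = np.mean(samples, axis=0)
    return out
```

Points that lie on the chosen plane project to consistent colors across the cameras and come out sharp; points off the plane are averaged from mismatched pixels and blur, which is what makes plane-by-plane reconstruction act like refocusing.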

    Determination of volume characteristics of cells from dynamical microscopic image

    An algorithm is proposed for determining the 3D characteristics of a dynamic biological object from a reconstructed stereo pair. The images forming the stereo pair were obtained with a single camera before and after a displacement of the object. The algorithm is demonstrated on the example of living single-celled organisms.
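
When a single camera and a known relative displacement stand in for a calibrated stereo rig, depth follows from the standard rectified-stereo relation Z = f·B/d. The sketch below assumes a pinhole model with the displacement acting as the baseline; names and units are illustrative:

```python
def depth_from_displacement(f_px, baseline, x_before, x_after):
    """Depth of a matched feature from two images related by a sideways
    displacement `baseline`: Z = f * B / disparity (rectified pinhole model).
    f_px is the focal length in pixels; x_* are the feature's x-coordinates."""
    disparity = x_before - x_after
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at infinity or bad match")
    return f_px * baseline / disparity
```

For example, with a 1000 px focal length, a 10 mm displacement, and a 20 px disparity, the feature lies at 0.5 m.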

    Occlusion-Aware Multi-View Reconstruction of Articulated Objects for Manipulation

    The goal of this research is to develop algorithms that use multiple views to automatically recover complete 3D models of articulated objects in unstructured environments, thereby enabling a robotic system to manipulate those objects. First, an algorithm called Procrustes-Lo-RANSAC (PLR) is presented. Structure-from-motion techniques are used to capture 3D point cloud models of an articulated object in two different configurations. Procrustes analysis, combined with a locally optimized RANSAC sampling strategy, yields a straightforward geometric approach to recovering the joint axes and automatically classifying them as either revolute or prismatic. The algorithm requires no prior knowledge of the object, nor does it make any assumptions about the planarity of the object or scene. Second, with the resulting articulated model, a robotic system can manipulate the object along its joint axes at a specified grasp point in order to exercise its degrees of freedom, or move its end effector to a particular position even if that point is not visible in the current view. This is one of the main advantages of the occlusion-aware approach: because the models capture all sides of the object, the robot has knowledge of parts of the object that are not visible in the current view. Experiments with a PUMA 500 robotic arm demonstrate the effectiveness of the approach on a variety of real-world objects containing both revolute and prismatic joints. Third, we improve the proposed approach by using an RGB-D sensor (Microsoft Kinect), which yields a depth value for each pixel directly rather than requiring correspondences to establish depth. The KinectFusion algorithm is applied to produce a single high-quality, geometrically accurate 3D model from which the rigid links of the object are segmented and aligned, allowing the joint axes to be estimated with the same geometric approach. The improved algorithm requires no artificial markers attached to objects, yields much denser 3D models, and reduces computation time.
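
The Procrustes step above can be sketched with a standard SVD-based (Kabsch) alignment: given corresponding points of one rigid link in the two configurations, the recovered rotation's fixed axis is a candidate revolute joint axis. The absence of RANSAC outlier handling and the function names are simplifications, not the paper's actual implementation:

```python
import numpy as np

def rigid_align(P, Q):
    """Kabsch/Procrustes: rotation R and translation t with Q[i] ≈ R @ P[i] + t.
    P and Q are (n, 3) arrays of corresponding points."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

def revolute_axis(R):
    """Fixed axis of rotation R: the eigenvector with eigenvalue 1."""
    w, V = np.linalg.eig(R)
    axis = np.real(V[:, np.argmin(np.abs(w - 1))])
    return axis / np.linalg.norm(axis)
```

If the recovered rotation is close to identity but the translation is large, the joint is better explained as prismatic, with the translation direction as its axis.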

    A hole filling method for polygonal meshes via laser pattern projection

    The proposed work belongs to the research field of Computer Graphics (in particular, 3D scanning and geometry reconstruction). It presents a methodology for completing partial 3D models using a laser pattern projected onto the real object. One of the issues in 3D scanning is the presence of holes in the final 3D models. These are usually due either to parts of the surface being hard to reach with the scanner, or to an inaccurate scanning campaign. Automatic hole-filling methods can fail on larger holes: the main risk is the creation of non-existing geometry. The proposed system uses a pre-defined laser pattern projected onto the real object. Aligning a few images of this projected pattern with the initial 3D model makes it possible to reconstruct part of the original geometry. Hence, the "big hole" problem is subdivided into a series of "small hole" problems, where the risk of creating artifacts is much smaller. Geometry reconstruction is obtained by analyzing the distortion that the pattern traces on the object's surface. The approach has been implemented as a semiautomatic tool that completes the geometry reconstruction in a few minutes.
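
The geometric principle behind recovering surface points from the distorted pattern is structured-light triangulation: a pixel observed on the imaged laser stripe back-projects to a viewing ray, and intersecting that ray with the known laser plane fixes the 3D point. The sketch below assumes a calibrated camera (intrinsics K) and a laser plane expressed in camera coordinates; it is a simplified single-stripe version, not the paper's full pattern-analysis pipeline:

```python
import numpy as np

def triangulate_on_laser_plane(K, plane_n, plane_d, pixel):
    """Back-project a pixel lying on the laser stripe and intersect its
    viewing ray with the laser plane n·X = d (all in camera coordinates)."""
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # direction of the viewing ray
    lam = plane_d / (plane_n @ ray)                  # solve n·(lam * ray) = d
    return lam * ray                                  # 3D point on the surface
```

Sweeping the pattern over the hole region yields a cloud of such points, which partitions one large hole into several small ones that standard hole-filling can close safely.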