Image-based photo hulls for fast and photo-realistic new view synthesis
We present an efficient image-based rendering algorithm that generates views of a scene's photo hull. The photo hull is the largest 3D shape that is photo-consistent with photographs taken of the scene from multiple viewpoints. Our algorithm, image-based photo hulls (IBPH), like the image-based visual hulls (IBVH) algorithm of Matusik et al. on which it is based, takes advantage of epipolar geometry to efficiently reconstruct the geometry and visibility of a scene. Our IBPH algorithm differs from IBVH in that it uses the color information of the images to identify scene geometry. These additional color constraints yield more accurately reconstructed geometry, which in turn often produces better synthesized virtual views of the scene. We demonstrate our algorithm running in a real-time 3D telepresence application using video data acquired from multiple viewpoints.
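As a concrete illustration of the color constraint, the sketch below shows one simple photo-consistency test of the kind such methods apply to candidate surface points; the variance statistic and the `threshold` value are assumptions for illustration, not the paper's exact measure.

```python
import numpy as np

def photo_consistent(samples, threshold=30.0):
    """Crude color-consistency test for one candidate surface point.

    samples: (N, 3) array of RGB colors back-projected from the N views
             in which the point is visible.
    Returns True if the colors agree closely enough to keep the point.
    """
    samples = np.asarray(samples, dtype=float)
    # Per-channel standard deviation across views; a photo-consistent
    # point should project to nearly the same color in every image.
    return float(np.std(samples, axis=0).mean()) <= threshold
```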
Image-Based 3D Photography Using Opacity Hulls
We have built a system for acquiring and displaying high-quality graphical models of objects that are impossible to scan with traditional scanners. Our system can acquire highly specular and fuzzy materials, such as fur and feathers. The hardware setup consists of a turntable, two plasma displays, an array of cameras, and a rotating array of directional lights. We use multi-background matting techniques to acquire alpha mattes of the object from multiple viewpoints. The alpha mattes are used to construct an opacity hull. The opacity hull is a new shape representation, defined as the visual hull of the object with view-dependent opacity. It enables visualization of complex object silhouettes and seamless blending of objects into new environments. Our system also supports relighting of objects with arbitrary appearance using surface reflectance fields, a purely image-based appearance representation. Our system is the first to acquire and render surface reflectance fields under varying illumination from arbitrary viewpoints. We have built three generations of digitizers with increasing sophistication. In this paper, we present our results from digitizing hundreds of models.
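Multi-background matting builds on the classic two-background (triangulation) matting identity, C_i = F + (1 - alpha) * B_i: photographing the same pixel against two known backgrounds makes alpha solvable. A minimal per-pixel sketch under that model; the least-squares form over color channels is an assumption for robustness, not this paper's exact procedure:

```python
import numpy as np

def two_background_alpha(c1, c2, b1, b2, eps=1e-6):
    """Estimate per-pixel alpha from shots against two known backgrounds.

    Model: C_i = F + (1 - alpha) * B_i, so
           alpha = 1 - (C1 - C2) / (B1 - B2),
    solved in least-squares form across the color channels.
    All inputs are (H, W, 3) float arrays; returns an (H, W) alpha matte.
    """
    num = ((c1 - c2) * (b1 - b2)).sum(axis=-1)
    den = ((b1 - b2) ** 2).sum(axis=-1)
    alpha = 1.0 - num / np.maximum(den, eps)
    return np.clip(alpha, 0.0, 1.0)
```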
Contour Generator Points for Threshold Selection and a Novel Photo-Consistency Measure for Space Carving
Space carving has emerged as a powerful method for multiview scene reconstruction. Although a wide variety of methods have been proposed, the quality of the reconstruction remains highly dependent on the photometric consistency measure and the threshold used to carve away voxels. In this paper, we present a novel photo-consistency measure that is motivated by a multiset variant of the chamfer distance. The new measure is robust to high within-view color variance and also takes into account the projection angles of back-projected pixels.
Another critical issue in space carving is the selection of the photo-consistency threshold used to determine which surface voxels are kept and which are carved away. In this paper, a reliable threshold selection technique is proposed that examines the photo-consistency values at contour generator points. Contour generators are points that lie on both the surface of the object and the visual hull. To determine the threshold, a percentile ranking of the photo-consistency values at these contour generator points is used. This technique is applicable to a wide variety of photo-consistency measures, including the new measure presented in this paper. Also presented is a method for choosing between photo-consistency measures and voxel array resolutions prior to carving, using receiver operating characteristic (ROC) curves.
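A minimal sketch of the percentile-ranking idea described above; the percentile value and the convention that lower values mean more photo-consistent are assumptions for illustration:

```python
import numpy as np

def select_threshold(consistency, contour_idx, percentile=90.0):
    """Pick a carving threshold from photo-consistency values at
    contour generator points (voxels lying on both the object surface
    and the visual hull).

    consistency: 1-D array of photo-consistency values, one per voxel
                 (lower = more consistent, by assumption here).
    contour_idx: indices of the contour generator voxels.
    """
    return float(np.percentile(consistency[contour_idx], percentile))

# A voxel survives carving when its value falls below the threshold:
# keep = consistency <= select_threshold(consistency, contour_idx)
```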
Image based visual servoing using bitangent points applied to planar shape alignment
We present visual servoing strategies based on bitangents for aligning planar shapes. To acquire bitangents, we use the convex hull of the curve. Bitangent points are used to construct a feature vector for visual control. Experimental results obtained on a 7-DOF Mitsubishi PA10 robot verify the proposed method.
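One way to recover bitangent point pairs from the convex hull, as the abstract suggests, is to look for hull edges that bridge concavities of the contour. A sketch under that assumption; the ordered-contour input and the `gap` tolerance are hypothetical choices, not the paper's exact procedure:

```python
import numpy as np
from scipy.spatial import ConvexHull

def bitangent_pairs(curve, gap=1):
    """Find candidate bitangent point pairs of a closed planar curve.

    curve: (N, 2) array of points sampled in order along the contour.
    A convex-hull edge whose endpoints are not neighbors on the curve
    bridges a concavity, so its two endpoints share a common tangent
    line, i.e. they form a bitangent pair.
    """
    hull = ConvexHull(curve)
    n = len(curve)
    pairs = []
    for i, j in zip(hull.vertices, np.roll(hull.vertices, -1)):
        # Distance between the two hull vertices along the contour.
        d = min((j - i) % n, (i - j) % n)
        if d > gap:
            pairs.append((int(i), int(j)))
    return pairs
```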
General Dynamic Scene Reconstruction from Multiple View Video
This paper introduces a general approach to dynamic scene reconstruction from multiple moving cameras without prior knowledge or limiting constraints on the scene structure, appearance, or illumination. Existing techniques for dynamic scene reconstruction from multiple wide-baseline camera views primarily focus on accurate reconstruction in controlled environments, where the cameras are fixed and calibrated and the background is known. These approaches are not robust for general dynamic scenes captured with sparse moving cameras. Previous approaches for outdoor dynamic scene reconstruction assume prior knowledge of the static background appearance and structure. The primary contributions of this paper are twofold: an automatic method for initial coarse dynamic scene segmentation and reconstruction without prior knowledge of background appearance or structure; and a general, robust approach for joint segmentation refinement and dense reconstruction of dynamic scenes from multiple wide-baseline static or moving cameras. Evaluation is performed on a variety of indoor and outdoor scenes with cluttered backgrounds and multiple dynamic non-rigid objects such as people. Comparison with state-of-the-art approaches demonstrates improved accuracy in both multiple-view segmentation and dense reconstruction. The proposed approach also eliminates the requirement for prior knowledge of scene structure and appearance.
Multi-camera complexity assessment system for assembly line work stations
In recent years, the market has demanded an increasing number of product variants, leading to an inevitable rise in the complexity of manufacturing systems. A model to quantify the complexity of a workstation has been developed, but part of the analysis is done manually. To that end, this paper presents the results of an industrial proof of concept that tested the possibility of automating the complexity analysis using multi-camera video images.
Topomap: Topological Mapping and Navigation Based on Visual SLAM Maps
Visual robot navigation within large-scale, semi-structured environments faces challenges such as computationally intensive path-planning algorithms and insufficient knowledge about traversable spaces. Moreover, many state-of-the-art navigation approaches operate only locally instead of gaining a more conceptual understanding of the planning objective. This limits the complexity of the tasks a robot can accomplish and makes it harder to deal with the uncertainties present in real-time robotics applications. In this work, we present Topomap, a framework that simplifies the navigation task by providing the robot with a map tailored for path planning. This novel approach transforms a sparse feature-based map from a visual Simultaneous Localization And Mapping (SLAM) system into a three-dimensional topological map. This is done in two steps. First, we extract occupancy information directly from the noisy sparse point cloud. Then, we create a set of convex free-space clusters, which are the vertices of the topological map. We show that this representation improves the efficiency of global planning, and we provide a complete derivation of our algorithm. Planning experiments on real-world datasets demonstrate that we achieve performance similar to RRT* with significantly lower computation times and storage requirements. Finally, we test our algorithm on a mobile robotic platform to demonstrate its advantages.
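A minimal sketch of the first step, extracting coarse occupancy from the noisy sparse SLAM landmarks; the voxel size and the `min_hits` noise filter are assumed parameters, not the paper's exact procedure:

```python
import numpy as np

def occupancy_from_landmarks(points, voxel=0.2, min_hits=3):
    """Build a coarse 3-D occupancy grid from sparse SLAM map points.

    points:   (N, 3) array of landmark positions from the visual SLAM map.
    voxel:    edge length of one grid cell in meters.
    min_hits: landmarks required before a cell counts as occupied,
              which suppresses isolated noisy points.
    Returns the set of occupied integer cell coordinates.
    """
    cells = np.floor(np.asarray(points) / voxel).astype(int)
    uniq, counts = np.unique(cells, axis=0, return_counts=True)
    return {tuple(c) for c in uniq[counts >= min_hits]}
```

The remaining free cells could then be grown into convex clusters that become the vertices of the topological map, with edges between clusters that share traversable boundaries.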
