Plane-Based Optimization of Geometry and Texture for RGB-D Reconstruction of Indoor Scenes
We present a novel approach to reconstructing RGB-D indoor scenes with plane
primitives. Our approach takes as input an RGB-D sequence and a dense coarse
mesh reconstructed from that sequence by some 3D reconstruction method, and
generates a lightweight, low-polygon mesh with clear face textures and sharp
features, without losing geometric detail from the original scene. To achieve
this, we first partition the input mesh with plane primitives, then simplify it
into a lightweight mesh, optimize plane parameters, camera poses, and
texture colors to maximize photometric consistency across frames, and
finally optimize the mesh geometry to maximize consistency between the geometry
and the planes. Compared to existing planar reconstruction methods, which only
cover large planar regions in the scene, our method builds the entire scene from
adaptive planes without losing geometric detail and preserves sharp features in
the final mesh. We demonstrate the effectiveness of our approach by applying it
to several RGB-D scans and comparing it to other state-of-the-art
reconstruction methods.
Comment: in International Conference on 3D Vision 2018; Models and Code: see
https://github.com/chaowang15/plane-opt-rgbd. arXiv admin note: text overlap
with arXiv:1905.0885
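The plane-partition step described in the abstract can be illustrated at toy scale: derive a plane (n, d) from three mesh vertices, then group vertices by point-to-plane distance. This is a minimal sketch under our own assumptions; the threshold, helper names, and the three-point construction are illustrative, not the paper's actual partitioning algorithm.

```python
# Sketch: a plane primitive from three vertices, then a distance test used
# to assign nearby mesh vertices to that primitive. Pure-Python, toy scale.

def plane_from_points(p0, p1, p2):
    """Return unit normal n and offset d so the plane satisfies n.x + d = 0."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],     # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    n = [c / norm for c in n]
    d = -sum(n[i] * p0[i] for i in range(3))
    return n, d

def point_plane_distance(p, n, d):
    return abs(sum(n[i] * p[i] for i in range(3)) + d)

# Vertices within `tol` of the plane get assigned to this plane primitive.
n, d = plane_from_points((0, 0, 0), (1, 0, 0), (0, 1, 0))  # the z = 0 plane
inliers = [p for p in [(0.5, 0.5, 0.001), (0.2, 0.9, 0.4)]
           if point_plane_distance(p, n, d) < 0.01]
```

The real method additionally optimizes these plane parameters jointly with camera poses and texture colors; the sketch only shows the geometric primitive itself.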
Scalable Dense Monocular Surface Reconstruction
This paper reports on a novel template-free monocular non-rigid surface
reconstruction approach. Existing techniques using motion and deformation cues
rely on multiple prior assumptions, are often computationally expensive, and do
not perform equally well across a variety of data sets. In contrast, the
proposed Scalable Monocular Surface Reconstruction (SMSR) combines the
strengths of several algorithms: it is scalable in the number of points and can
handle sparse and dense settings as well as different types of motions and
deformations. We estimate camera pose by singular value thresholding and
proximal gradient. Our formulation adopts the alternating direction method of
multipliers, which converges in linear time for large point track matrices. In
the proposed SMSR, trajectory space constraints are integrated by smoothing
the measurement matrix. In extensive experiments, SMSR consistently achieves
state-of-the-art accuracy on a wide variety of data sets.
Comment: International Conference on 3D Vision (3DV), Qingdao, China, October
201
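The singular value thresholding mentioned above is built on the scalar shrinkage (soft-thresholding) operator, the proximal map of tau*|x|: SVT applies it to each singular value of a matrix. A minimal sketch of just that scalar core, under our own naming; the full operator also requires an SVD, which is omitted here.

```python
# Sketch: the shrinkage operator at the heart of singular value
# thresholding. SVT replaces each singular value s with shrink(s, tau),
# zeroing small values and thereby promoting low rank.

def shrink(x, tau):
    """Soft-threshold: move x toward zero by tau, clamping at zero."""
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0

# Shrinking a list of singular values with tau = 1.0 keeps only the
# components larger than tau.
singular_values = [3.0, 1.2, 0.4]
shrunk = [shrink(s, 1.0) for s in singular_values]  # approx. [2.0, 0.2, 0.0]
```

The same operator, applied entrywise rather than to singular values, is the proximal step for the l1 norm used in proximal gradient methods.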
Principal Boundary on Riemannian Manifolds
We consider the classification problem and focus on nonlinear methods for
classification on manifolds. For multivariate data sets lying on a nonlinear
Riemannian manifold embedded in a higher-dimensional ambient space, we aim to
obtain a classification boundary between the labeled classes using the
intrinsic metric on the manifold. Motivated by finding an optimal boundary
between the two classes, we propose a novel approach: the principal boundary.
From the perspective of classification, the principal boundary is defined as an
optimal curve that moves between the principal flows traced out from the two
classes of data and, at any point on the boundary, maximizes the margin
between the two classes. We estimate the boundary together with its
direction, supervised by the two principal flows. We show that the principal
boundary yields the usual decision boundary found by the support vector
machine, in the sense that locally the two boundaries coincide. Some optimality
and convergence properties of the random principal boundary and its population
counterpart are also shown. We illustrate how to find, use, and interpret the
principal boundary with an application to real data.
Comment: 31 pages, 10 figures
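The geometric intuition above can be caricatured in the plane: given two flows sampled at matched parameter values, the pointwise midpoints form a curve lying between them, and half the pairwise distance is the local margin that curve enjoys. This is a deliberately crude Euclidean sketch of the idea, not the paper's Riemannian estimator; all names here are ours.

```python
# Sketch: a midpoint curve between two discretized flows, with its local
# margins. A Euclidean toy version of a boundary "moving between" two
# principal flows while keeping both classes at distance.

def midpoint_curve(flow_a, flow_b):
    """Pointwise midpoints and local margins for two matched 2D polylines."""
    boundary, margins = [], []
    for (ax, ay), (bx, by) in zip(flow_a, flow_b):
        boundary.append(((ax + bx) / 2, (ay + by) / 2))
        margins.append(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 / 2)
    return boundary, margins

flow_a = [(0.0, 1.0), (1.0, 1.2), (2.0, 1.1)]   # flow from class A
flow_b = [(0.0, -1.0), (1.0, -0.8), (2.0, -1.3)]  # flow from class B
boundary, margins = midpoint_curve(flow_a, flow_b)
```

On a curved manifold the midpoints would be taken along geodesics under the intrinsic metric, which is where the actual construction departs from this sketch.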
GASP : Geometric Association with Surface Patches
A fundamental challenge for sensory processing tasks in perception and
robotics is the problem of obtaining data associations across views. We present
a robust solution for ascertaining potentially dense surface patch (superpixel)
associations, requiring only range information. Our approach decomposes a view
into regularized surface patches. We represent them as sequences that express
geometry invariantly over their superpixel neighborhoods, as uniquely
consistent partial orderings. We match these representations through an optimal
sequence comparison metric based on the Damerau-Levenshtein distance, enabling
robust association with quadratic complexity (in contrast to the joint matching
formulations employed hitherto, which are NP-complete). The approach performs
well under wide baselines, heavy rotations, partial overlaps, significant
occlusions, and sensor noise.
The technique does not require any priors -- motion or otherwise -- and does
not make restrictive assumptions about scene structure or sensor movement. It
does not require appearance, so it is more widely applicable than
appearance-reliant methods and is invulnerable to related ambiguities such as
textureless or aliased content. We present promising qualitative and
quantitative results under diverse settings, along with comparisons to popular
approaches based on range as well as RGB-D data.
Comment: International Conference on 3D Vision, 201
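The sequence metric the abstract names can be spelled out: the common "optimal string alignment" form of the Damerau-Levenshtein distance allows insertion, deletion, substitution, and transposition of adjacent symbols, computed by dynamic programming in O(len(a) * len(b)) time, which is the quadratic cost referred to above. A standard textbook implementation, not the paper's code:

```python
# Damerau-Levenshtein distance (optimal string alignment variant) by
# dynamic programming; d[i][j] = edit distance between a[:i] and b[:j].

def damerau_levenshtein(a, b):
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]
```

In the method above the symbols are not characters but entries of the partial orderings built over superpixel neighborhoods; the metric itself is unchanged.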
Deformable and articulated 3D reconstruction from monocular video sequences
This thesis addresses the problem of deformable and articulated structure from motion from
monocular uncalibrated video sequences. Structure from motion is defined as the problem of
recovering information about the 3D structure of scenes imaged by a camera in a video sequence.
Our study aims at the challenging problem of non-rigid shapes (e.g. a beating heart or a smiling
face). Non-rigid structures appear constantly in our everyday life: think of a bicep curling, a
torso twisting, or a smiling face. Our research seeks a general method to perform 3D shape
recovery purely from data, without having to rely on a pre-computed model or training data.
Open problems in the field are the difficulty of the non-linear estimation, the lack of a real-time
system, large amounts of missing data in real-world video sequences, measurement noise, and
strong deformations. Solving these problems would take us far beyond the current state of the
art in non-rigid structure from motion. This dissertation presents our contributions to the field
of non-rigid structure from motion, detailing a novel algorithm that enforces the exact metric
structure of the problem at each step of the minimisation by projecting the motion matrices
onto the correct deformable or articulated metric motion manifolds, respectively. An important
advantage of this new algorithm is its ability to handle missing data, which becomes crucial
when dealing with real video sequences. We present a generic bilinear estimation framework,
which improves convergence and makes use of the manifold constraints. Finally, we demonstrate
a sequential, frame-by-frame estimation algorithm, which provides a 3D model and camera
parameters for each video frame while simultaneously building a model of object deformation.
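The bilinear estimation idea in the abstract can be shown at toy scale: factor the observed entries of a measurement matrix W as a * b^T (rank 1 here for brevity) by alternating closed-form least-squares updates of each factor, using only the entries marked visible. The rank-1 restriction, variable names, and iteration count are our simplifications, not the thesis's method, which also enforces manifold constraints on the motion factor.

```python
# Sketch: alternating least squares for a rank-1 bilinear factorization
# W approx a * b^T with a visibility mask, i.e. handling missing data by
# fitting only observed entries. Each update is a 1-D least-squares solve.

def alternate_rank1(W, mask, iters=50):
    m, n = len(W), len(W[0])
    a = [1.0] * m
    b = [1.0] * n
    for _ in range(iters):
        for j in range(n):            # update b with a held fixed
            num = sum(a[i] * W[i][j] for i in range(m) if mask[i][j])
            den = sum(a[i] * a[i] for i in range(m) if mask[i][j])
            if den:
                b[j] = num / den
        for i in range(m):            # update a with b held fixed
            num = sum(b[j] * W[i][j] for j in range(n) if mask[i][j])
            den = sum(b[j] * b[j] for j in range(n) if mask[i][j])
            if den:
                a[i] = num / den
    return a, b

# Rank-1 data with one unobserved entry; the factorization recovers it.
W = [[2.0, 4.0, 6.0], [3.0, 6.0, 0.0]]   # W[1][2] is unobserved
mask = [[1, 1, 1], [1, 1, 0]]
a, b = alternate_rank1(W, mask)
predicted = a[1] * b[2]                  # converges to 9.0
```

In non-rigid structure from motion the left factor holds camera/motion parameters and the right factor the shape basis, and each alternation step is followed by a projection onto the appropriate metric motion manifold.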
A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection
A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and posterior parietal cortex can be used to guide steering. The model quantitatively simulates human psychophysical data on visually guided steering, obstacle avoidance, and route selection.
Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
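The attractor/repeller dynamics described above can be sketched in the spirit of behavioral steering models: heading is attracted toward the goal direction and repelled from the obstacle direction, with the repulsion fading as angular distance to the obstacle grows. The gains, the exponential falloff, and the Euler integration are illustrative choices of ours, not the model's fitted equations.

```python
import math

# Sketch: one scalar heading variable steered by a goal attractor and an
# obstacle repeller whose influence decays with angular distance.

def steer(heading, goal_dir, obstacle_dir, k_goal=0.5, k_obs=0.8, c=2.0):
    """One Euler step of heading (radians): goal attracts, obstacle repels."""
    attract = -k_goal * (heading - goal_dir)
    repel = (k_obs * (heading - obstacle_dir)
             * math.exp(-c * abs(heading - obstacle_dir)))
    return heading + attract + repel

# With an obstacle slightly to the left of a goal on the right, heading is
# deflected further right than it would be on an obstacle-free run.
heading = 0.0
for _ in range(100):
    heading = steer(heading, goal_dir=0.4, obstacle_dir=-0.1)

clear = 0.0
for _ in range(100):
    clear = steer(clear, goal_dir=0.4, obstacle_dir=-0.1, k_obs=0.0)
# clear settles at the goal direction; heading overshoots it away from
# the obstacle, illustrating route deflection around a repeller.
```

The full neural model operates on cortical maps of heading computed from optic flow rather than a single scalar, but the attractor/repeller competition has this qualitative shape.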