The Study Variety of Conformal Kinematics
We introduce the Study variety of conformal kinematics and investigate some
of its properties. The Study variety is a projective variety of dimension ten
and degree twelve in real projective space of dimension 15, and it generalizes
the well-known Study quadric model of rigid body kinematics. Despite its high
dimension, co-dimension, and degree, it is amenable to concrete calculations via
conformal geometric algebra (CGA) associated to three-dimensional Euclidean
space. Calculations are facilitated by a four quaternion representation which
extends the dual quaternion description of rigid body kinematics. In
particular, we study straight lines on the Study variety. It turns out that
they are related to a class of one-parametric conformal motions introduced by
L. Dorst in 2016. As in rigid body kinematics, straight lines (that is,
Dorst's motions) are important for the decomposition of rational conformal
motions into lower-degree motions via the factorization of certain polynomials
with coefficients in CGA.
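The dual quaternion description that the four-quaternion representation extends packages a rotation quaternion together with a translation. As a minimal sketch of the underlying quaternion machinery only (not the paper's CGA representation), here is the standard sandwich-product rotation of a point, assuming Hamilton's convention:

```python
import numpy as np

def qmul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(q):
    # Quaternion conjugate: negate the vector part
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def rotate(q, p):
    # Sandwich product q p q* applied to the pure quaternion (0, p)
    pq = np.array([0.0, *p])
    return qmul(qmul(q, pq), qconj(q))[1:]

# 90-degree rotation about the z-axis
theta = np.pi / 2
q = np.array([np.cos(theta/2), 0.0, 0.0, np.sin(theta/2)])
print(rotate(q, [1.0, 0.0, 0.0]))  # ≈ [0, 1, 0]
```

In the dual quaternion model, a translation is carried by a second ("dual") quaternion component, and rigid motions compose by multiplying the combined objects; the Study quadric is the variety of such elements, which the paper's Study variety generalizes to conformal kinematics.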
A factorization approach to inertial affine structure from motion
We consider the problem of reconstructing a 3-D scene from a moving camera at a high frame rate using the affine projection model. This problem is traditionally known as Affine Structure from Motion (Affine SfM), and can be solved using an elegant low-rank factorization formulation. In this paper, we assume that an accelerometer and a gyroscope are rigidly mounted on the camera, so that synchronized linear acceleration and angular velocity measurements are available together with the image measurements. We extend the standard Affine SfM algorithm to integrate these measurements through the use of image derivatives.
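The low-rank factorization formulation referred to here is, in its classical form, the Tomasi-Kanade scheme: centered affine image measurements stack into a 2F x P matrix of rank at most 3, which a truncated SVD splits into motion and structure. A minimal vision-only sketch on synthetic data (the inertial extension is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
F, P = 20, 50                        # frames, points
X = rng.standard_normal((3, P))      # 3-D structure
W = np.zeros((2 * F, P))             # measurement matrix
for f in range(F):
    A = rng.standard_normal((2, 3))  # affine camera for frame f
    t = rng.standard_normal((2, 1))  # image-plane translation
    W[2*f:2*f+2] = A @ X + t

# Registering (subtracting each row's centroid) removes the translations,
# leaving a matrix of rank at most 3 under the affine model.
Wc = W - W.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
M_hat = U[:, :3] * s[:3]             # motion, up to an affine ambiguity
S_hat = Vt[:3]                       # structure, same ambiguity

print(np.linalg.matrix_rank(Wc))     # 3
```

The recovered factors reproduce the centered measurements exactly on noise-free data; resolving the remaining affine ambiguity (and, in the paper, fusing the inertial measurements) requires additional constraints.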
Scalable Dense Non-rigid Structure-from-Motion: A Grassmannian Perspective
This paper addresses the task of dense non-rigid structure-from-motion
(NRSfM) using multiple images. State-of-the-art methods for this problem are
often hindered by poor scalability, expensive computations, and noisy
measurements. Further, recent NRSfM methods usually either assume a small
number of sparse feature points or ignore local non-linearities of shape
deformations, and thus cannot reliably model complex non-rigid deformations.
To address these issues,
in this paper, we propose a new approach for dense NRSfM by modeling the
problem on a Grassmann manifold. Specifically, we assume the complex non-rigid
deformations lie on a union of local linear subspaces both spatially and
temporally. This naturally allows for a compact representation of the complex
non-rigid deformation over frames. We provide experimental results on several
synthetic and real benchmark datasets. The results clearly demonstrate
that our method, apart from being scalable and more accurate than
state-of-the-art methods, is also more robust to noise and generalizes to
highly non-linear deformations.
Comment: 10 pages, 7 figures, 4 tables. Accepted for publication in Conference
on Computer Vision and Pattern Recognition (CVPR), 2018; typos fixed and
acknowledgement added.
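A point on a Grassmann manifold is a linear subspace, and subspaces are compared via principal angles. As an illustrative sketch of this machinery (not the paper's algorithm), the geodesic distance between two shape subspaces can be computed from the SVD of the product of their orthonormal bases:

```python
import numpy as np

def grassmann_dist(A, B):
    """Geodesic distance between span(A) and span(B) on the Grassmannian.

    A, B: n x k matrices whose columns span the two subspaces.
    """
    Qa, _ = np.linalg.qr(A)              # orthonormal bases
    Qb, _ = np.linalg.qr(B)
    sv = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    theta = np.arccos(np.clip(sv, -1.0, 1.0))  # principal angles
    return np.linalg.norm(theta)

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 3))
B = A @ rng.standard_normal((3, 3))      # same column span as A
print(grassmann_dist(A, B))              # ≈ 0: same subspace
```

Representing local groups of deforming shapes by such subspaces is what allows the union-of-subspaces model to stay compact over many frames.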
Three dimensional transparent structure segmentation and multiple 3D motion estimation from monocular perspective image sequences
A three dimensional scene can be segmented using different cues, such as boundaries, texture, motion, discontinuities of the optical flow, stereo, models for structure, etc. We investigate segmentation based upon one of these cues, namely three dimensional motion. If the scene contains transparent objects, the two dimensional (local) cues are inconsistent, since neighboring points with similar optical flow can correspond to different objects. We present a method for performing three dimensional motion-based segmentation of (possibly) transparent scenes together with recursive estimation of the motion of each independent rigid object from monocular perspective images. Our algorithm is based on a recently proposed method for rigid motion reconstruction and a validation test which allows us to initialize the scheme and detect outliers during the motion estimation procedure. The scheme is tested on challenging real and synthetic image sequences. Segmentation is performed for Ullman's experiment of two transparent cylinders rotating about the same axis in opposite directions.
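The key point, that motion can separate spatially interleaved (transparent) objects where local cues cannot, can be illustrated with a generic sketch: fit one motion model per object and assign each track to the model with the smaller residual. This toy version uses 2-D affine motions and known seed tracks standing in for the paper's initialization/validation step, not its recursive perspective scheme:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_affine(p, q):
    """Least-squares 2-D affine map sending points p to points q (N x 2 each)."""
    H = np.hstack([p, np.ones((len(p), 1))])       # homogeneous coordinates
    T, *_ = np.linalg.lstsq(H, q, rcond=None)
    return T                                        # 3 x 2 affine parameters

def residual(T, p, q):
    H = np.hstack([p, np.ones((len(p), 1))])
    return np.linalg.norm(H @ T - q, axis=1)

def rot(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

# Two spatially interleaved motions: opposite rotations about the same axis,
# mimicking the counter-rotating transparent cylinders.
p = rng.standard_normal((200, 2))
labels_true = rng.integers(0, 2, 200)
q = np.where(labels_true[:, None] == 0, p @ rot(0.3).T, p @ rot(-0.3).T)

# Fit one model per object from a few seed tracks, then assign by residual.
models = [fit_affine(p[labels_true == k][:5], q[labels_true == k][:5])
          for k in (0, 1)]
r = np.stack([residual(T, p, q) for T in models], axis=1)
labels = r.argmin(axis=1)
print((labels == labels_true).mean())               # 1.0 on this noise-free data
```

Because the assignment depends only on motion consistency, spatial neighbors with different motions are separated correctly, which is exactly what 2-D local cues cannot do in the transparent case.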
Multilinear Factorizations for Multi-Camera Rigid Structure from Motion Problems
Camera networks have gained increased importance in recent years. Existing approaches mostly use point correspondences between different camera views to calibrate such systems. However, it is often difficult or even impossible to establish such correspondences. But even without feature point correspondences between different camera views, if the cameras are temporally synchronized then the data from the cameras are strongly linked together by the motion correspondence: all the cameras observe the same motion. The present article therefore develops the necessary theory to use this motion correspondence for general rigid as well as planar rigid motions. Given multiple static affine cameras which observe a rigidly moving object and track feature points located on this object, what can be said about the resulting point trajectories? Are there any useful algebraic constraints hidden in the data? Is a 3D reconstruction of the scene possible even if there are no point correspondences between the different cameras? And if so, how many points are sufficient? Is there an algorithm which guarantees finding the correct solution to this highly non-convex problem? This article addresses these questions and thereby introduces the concept of low-dimensional motion subspaces. The constraints provided by these motion subspaces enable an algorithm which ensures finding the correct solution to this non-convex reconstruction problem. The algorithm is based on multilinear analysis, matrix and tensor factorizations. Our new approach can handle extreme configurations, e.g. a camera in a camera network tracking only a single point. Results on synthetic as well as on real data sequences act as a proof of concept for the presented insights.
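The low-dimensional motion subspace can be checked numerically. Under an affine camera, each image coordinate of a tracked point is a fixed linear function of the 13-vector (vec R_f, t_f, 1) describing the rigid motion at frame f, so all coordinate sequences, from all cameras and all points with no correspondences between cameras, lie in one subspace of dimension at most 13. A hedged sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)
F = 40                                    # frames of one shared rigid motion

def random_rotation():
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return Q if np.linalg.det(Q) > 0 else -Q

Rs = [random_rotation() for _ in range(F)]
ts = rng.standard_normal((F, 3))

cols = []
for _ in range(4):                        # four affine cameras, no shared points
    A = rng.standard_normal((2, 3))       # affine camera matrix
    b = rng.standard_normal(2)
    for _ in range(20):                   # points tracked only by this camera
        X = rng.standard_normal(3)
        traj = np.array([A @ (R @ X + t) + b for R, t in zip(Rs, ts)])  # F x 2
        cols.append(traj[:, 0])           # x-coordinate sequence
        cols.append(traj[:, 1])           # y-coordinate sequence

W = np.stack(cols, axis=1)                # F x 160 coordinate sequences
print(np.linalg.matrix_rank(W))           # 13: the motion-subspace dimension
```

The rank stays at 13 however the 160 sequences are partitioned among cameras, which is precisely what makes reconstruction without inter-camera point correspondences possible.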
Deformable 3-D Modelling from Uncalibrated Video Sequences
Submitted for the degree of Doctor of Philosophy, Queen Mary, University of London.
Generalizations of the projective reconstruction theorem
We present generalizations of the classic theorem of projective reconstruction as a tool for the design and analysis of projective reconstruction algorithms. Our main focus is algorithms such as bundle adjustment and factorization-based techniques, which try to solve the projective equations directly for the structure points and projection matrices, rather than the so-called tensor-based approaches. First, we consider the classic case of 3D to 2D projections. Our new theorem shows that projective reconstruction is possible under a much weaker restriction than requiring, a priori, that all estimated projective depths are nonzero. By completely specifying possible forms of wrong configurations when some of the projective depths are allowed to be zero, the theory enables us to present a class of depth constraints under which any reconstruction of cameras and points projecting into given image points is projectively equivalent to the true camera-point configuration. This is very useful for the design and analysis of different factorization-based algorithms. Here, we analyse several constraints used in the literature using our theory, and also demonstrate how our theory can be used for the design of new constraints with desirable properties. The next part of the thesis is devoted to projective reconstruction in arbitrary dimensions, which is important due to its applications in the analysis of dynamical scenes. The current theory, due to Hartley and Schaffalitzky, is based on the Grassmann tensor, generalizing the notions of fundamental matrix, trifocal tensor and quadrifocal tensor used for 3D to 2D projections. We extend their work by giving a theory whose point of departure is the projective equations rather than the Grassmann tensor. First, we prove the uniqueness of the Grassmann tensor corresponding to each set of image points, a question that remained open in the work of Hartley and Schaffalitzky.
Then, we show that projective equivalence follows from the set of projective equations, provided that the depths are all nonzero. Finally, we classify possible wrong solutions to the projective factorization problem, where not all the projective depths are restricted to be nonzero. We test our theory experimentally by running the factorization-based algorithms for rigid structure and motion in the case of 3D to 2D projections. We further run simulations for projections from higher dimensions. In each case, we present examples demonstrating how the algorithm can converge to the degenerate solutions introduced in the earlier chapters. We also show how the use of proper constraints can result in a better performance in terms of finding a correct solution.
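The projective equations underlying this line of work read lambda_ij x_ij = P_i X_j: with the true (nonzero) depths, the depth-scaled measurement matrix factors as a 3m x 4 camera stack times a 4 x n point matrix, so it has rank 4. A minimal sketch verifying this on synthetic data (zero depths would be exactly the degenerate cases the theory classifies):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 6, 30                               # cameras, points
P = rng.standard_normal((m, 3, 4))         # projective cameras
X = np.vstack([rng.standard_normal((3, n)),
               np.ones((1, n))])           # homogeneous scene points

W = np.zeros((3 * m, n))                   # depth-scaled measurement matrix
lam = np.zeros((m, n))                     # projective depths
for i in range(m):
    PX = P[i] @ X                          # lambda_ij * x_ij in homogeneous form
    lam[i] = PX[2]                         # depth = third homogeneous coordinate
    W[3*i:3*i+3] = PX

print(np.linalg.matrix_rank(W))            # 4: W = [P_1; ...; P_m] @ X
```

Factorization-based algorithms run this in reverse: they estimate the depths so that the scaled matrix becomes rank 4, and the depth constraints discussed above are what rule out convergence to wrong (e.g. partially zero-depth) configurations.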
Structure from Recurrent Motion: From Rigidity to Recurrency
This paper proposes a new method for Non-Rigid Structure-from-Motion (NRSfM)
from a long monocular video sequence observing a non-rigid object performing
recurrent and possibly repetitive dynamic action. Departing from the
traditional idea of using a linear low-order or low-rank shape model for the task
of NRSfM, our method exploits the property of shape recurrency (i.e., many
deforming shapes tend to repeat themselves in time). We show that recurrency is
in fact a generalized rigidity. Based on this, we reduce NRSfM problems to
rigid ones, provided that a certain recurrency condition is satisfied. Given such
a reduction, standard rigid-SfM techniques are directly applicable (without any
change) to the reconstruction of non-rigid dynamic shapes. To implement this
idea as a practical approach, this paper develops efficient algorithms for
automatic recurrency detection, as well as camera view clustering via a
rigidity-check. Experiments on both simulated sequences and real data
demonstrate the effectiveness of the method. Since this paper offers a novel
perspective on rethinking structure-from-motion, we hope it will inspire other
new problems in the field.
Comment: To appear in CVPR 2018.
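The rigidity check used for view clustering can be phrased, under an affine camera model, as a rank test: frames that observe the same (recurrent) shape stack into a measurement matrix of rank at most 3 after centering, exactly as in rigid SfM, while frames mixing different deformed states do not. A hedged sketch of this idea (an illustrative affine version, not necessarily the paper's exact test):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((3, 40))            # one shape state, re-observed later
Y = X + 0.5 * rng.standard_normal((3, 40))  # a different deformed state

def view(S):
    A = rng.standard_normal((2, 3))         # affine camera for this frame
    return A @ S

def rigidity_check(frames, tol=1e-8):
    """True if the frames are affine views of a single rigid shape."""
    W = np.vstack(frames)
    Wc = W - W.mean(axis=1, keepdims=True)  # remove per-frame translations
    s = np.linalg.svd(Wc, compute_uv=False)
    return s[3] / s[0] < tol                # rank <= 3 after centering

same = [view(X) for _ in range(4)]          # recurrent: same shape re-observed
mixed = [view(X), view(X), view(Y), view(Y)]
print(rigidity_check(same), rigidity_check(mixed))  # True False
```

Frames that pass the check together form a cluster that behaves like views of one rigid object, which is what lets standard rigid-SfM machinery be applied without change.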