24 research outputs found

    Dynamic Rigid Motion Estimation From Weak Perspective

    “Weak perspective” is a simplified projection model that approximates the imaging process when the scene is viewed under a small viewing angle and its depth relief is small relative to its distance from the viewer. We study how to generate dynamic models for estimating rigid 3D motion from weak perspective. A crucial feature in dynamic visual motion estimation is to decouple structure from motion in the estimation model. The reasons are both geometric (to achieve global observability of the model) and practical, since a structure-independent motion estimator allows us to deal with occlusions and the appearance of new features in a principled way. It is also possible to push the decoupling further and isolate the motion parameters that are affected by the so-called “bas-relief ambiguity” from those that are not. We present a novel method for reducing the order of the estimator by decoupling portions of the state space from the time evolution of the measurement constraint. We use this method to construct an estimator of full rigid motion (modulo a scaling factor) on a six-dimensional state space, an approximate estimator for a four-dimensional subset of the motion space, and a reduced filter with only two states. The latter two are immune to the bas-relief ambiguity. We compare the strengths and weaknesses of each scheme on real and synthetic image sequences.
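
    As an aside on the projection model the abstract builds on (not on the authors' estimator itself), a weak-perspective camera replaces each point's individual depth with a single common scale after the rigid transformation. A minimal sketch, where the function name and the choice of the scale as the inverse mean depth are illustrative assumptions:

```python
import numpy as np

def weak_perspective_project(X, R, t, s):
    """Weak-perspective projection: apply the rigid motion, then drop depth and
    apply one common scale s instead of dividing each point by its own depth."""
    Xc = X @ R.T + t          # points in camera coordinates, shape (N, 3)
    return s * Xc[:, :2]      # orthographic projection followed by uniform scaling

# Toy usage: a shallow point cloud far from the camera, where the model is accurate.
X = 0.1 * np.random.randn(20, 3) + np.array([0.0, 0.0, 10.0])
R, t = np.eye(3), np.zeros(3)
s = 1.0 / (X @ R.T + t)[:, 2].mean()   # scale taken as the inverse average depth
x = weak_perspective_project(X, R, t, s)
```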

    Windowed Factorization and Merging

    In this work, an online 3D reconstruction algorithm is proposed that attempts to solve the structure-from-motion problem for occluded and degenerate data. To deal with occlusion, the temporal consistency of data within a limited window is used to compute local reconstructions. These local reconstructions are transformed and merged to obtain an estimate of the 3D object shape. The algorithm is shown to accurately reconstruct a rotating and translating artificial sphere and a rotating toy dinosaur from video. The proposed algorithm (WIFAME) provides a versatile framework for dealing with missing data in the structure-from-motion problem.
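
    The window-then-merge idea can be sketched generically: run a rank-3 (Tomasi-Kanade style) factorization on the trajectories visible inside each temporal window, then register each local reconstruction to a common frame using the points the windows share. This is only an illustration of the principle, not the authors' WIFAME implementation; all names here are invented:

```python
import numpy as np

def affine_factorize(W):
    """Rank-3 affine factorization of a 2F x N block of image trajectories:
    W (centered) is approximated by motion (2F x 3) times shape (3 x N)."""
    Wc = W - W.mean(axis=1, keepdims=True)
    U, d, Vt = np.linalg.svd(Wc, full_matrices=False)
    return U[:, :3] * np.sqrt(d[:3]), np.sqrt(d[:3])[:, None] * Vt[:3]

def merge_local_shape(S_ref, S_new, shared):
    """Map a new local 3D shape into the reference frame with a 3D affine
    transform fitted by least squares on the points visible in both windows
    (needs at least four shared points in general position)."""
    A = np.vstack([S_new[:, shared], np.ones(len(shared))])   # 4 x k, homogeneous
    B = S_ref[:, shared]                                      # 3 x k
    T, *_ = np.linalg.lstsq(A.T, B.T, rcond=None)             # (k x 4) -> (k x 3) fit
    return T.T @ np.vstack([S_new, np.ones(S_new.shape[1])])  # aligned 3 x N shape
```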

    Reducing "Structure From Motion": a General Framework for Dynamic Vision - Part 1: Modeling

    The literature on recursive estimation of structure and motion from monocular image sequences comprises a large number of different models and estimation techniques. We propose a framework that allows us to derive and compare all models by following the idea of dynamical system reduction. The "natural" dynamic model, derived from the rigidity constraint and the perspective projection, is first reduced by explicitly decoupling structure (depth) from motion. Then, implicit decoupling techniques are explored, which consist of imposing that some function of the unknown parameters remain constant. By appropriately choosing such a function, not only can we account for all models seen so far in the literature, but we can also derive novel ones.
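
    For concreteness, the "natural" model referred to above couples rigid-body dynamics of the scene points, expressed in camera coordinates, with a perspective measurement equation; reduction then removes the depths from the state. A minimal discrete-time sketch, where the Euler step and the sign conventions for the velocities are assumptions made here rather than the paper's exact equations:

```python
import numpy as np

def skew(w):
    """Cross-product matrix: skew(w) @ x equals np.cross(w, x)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def natural_dynamics(X, v, w, dt):
    """One Euler step of the point dynamics dX_i/dt = w x X_i + v, with X the
    (N, 3) array of point coordinates in the camera frame and (v, w) the
    translational and rotational velocity parameters."""
    return X + dt * (X @ skew(w).T + v)

def measurement(X):
    """Perspective projection of each point: the measurement equation y_i = pi(X_i)."""
    return X[:, :2] / X[:, 2:3]
```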

    Multiple Camera Calibration using Robust Perspective Factorization

    In this paper we address the problem of recovering structure and motion from a large number of intrinsically calibrated perspective cameras. We describe a method that combines (1) weak-perspective reconstruction in the presence of noisy and missing data and (2) an algorithm that upgrades the weak-perspective reconstruction to a perspective reconstruction by incrementally estimating the projective depths. The method also resolves the reversal ambiguity associated with affine factorization techniques. It has been successfully applied to the problem of calibrating the external parameters (position and orientation) of several multiple-camera setups. Results obtained with synthetic and experimental data compare favourably with results obtained with nonlinear minimization such as bundle adjustment.
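
    Point (1) above requires a factorization that tolerates missing entries in the measurement matrix. A simple stand-in for that step (not the paper's robust method) is SVD imputation, which alternates between a truncated SVD and re-imposing the observed entries; the function name and the crude initialization are assumptions made here:

```python
import numpy as np

def lowrank_complete(W, mask, rank=3, n_iter=100):
    """Fill the missing entries of W (mask == False) by iterating: take the best
    rank-r approximation of the current estimate, then copy the observed entries
    back in. Returns the completed low-rank matrix."""
    fill = np.nanmean(np.where(mask, W, np.nan))        # mean of the observed entries
    X = np.where(mask, W, fill)
    for _ in range(n_iter):
        U, d, Vt = np.linalg.svd(X, full_matrices=False)
        L = (U[:, :rank] * d[:rank]) @ Vt[:rank]        # best rank-r approximation
        X = np.where(mask, W, L)                        # keep observed data fixed
    return L
```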

    Recovering articulated non-rigid shapes, motions and kinematic chains from video

    Recovering articulated shape and motion, especially human body motion, from video is a challenging problem with a wide range of applications in medical studies, sports analysis, animation, and more. Previous work on articulated motion recovery generally requires prior knowledge of the kinematic chain and usually does not address recovery of the articulated shape; the non-rigidity of some articulated parts, e.g. human body motion with non-rigid facial motion, is completely ignored. We propose a factorization-based approach to recover the shape, motion, and kinematic chain of an articulated object with non-rigid parts directly from video sequences under a unified framework. The proposed approach is based on our modeling of articulated non-rigid motion as a set of intersecting motion subspaces. A motion subspace is the linear subspace of the trajectories of an object; it can model a rigid or non-rigid motion. The intersection of the motion subspaces of two linked parts models the motion of an articulated joint or axis. Our approach consists of algorithms for motion segmentation, kinematic chain building, and shape recovery. It is robust to outliers and can be automated. We test our approach through synthetic and real experiments and demonstrate how to recover articulated structure with non-rigid parts via a single-view camera without prior knowledge of its kinematic chain.
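
    The "intersecting motion subspaces" idea admits a small sketch: the trajectories of each part span a low-dimensional subspace of R^(2F), and the dimension of the intersection of two such subspaces, read off from the principal angles between them, indicates whether the parts share a joint or an axis. This illustrates only the modeling assumption, not the authors' full segmentation and kinematic-chain algorithms:

```python
import numpy as np

def trajectory_subspace(W, dim=4):
    """Orthonormal basis of the motion subspace spanned by a part's point
    trajectories (columns of the 2F x N trajectory matrix W); a rigid part
    spans at most 4 dimensions, a non-rigid part more."""
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    return U[:, :dim]

def intersection_dimension(U1, U2, tol=1e-6):
    """Dimension of the intersection of two motion subspaces, counted as the
    number of principal angles that are numerically zero (cosine close to 1).
    For two linked parts this intersection models the joint or axis."""
    cosines = np.linalg.svd(U1.T @ U2, compute_uv=False)
    return int(np.sum(cosines > 1.0 - tol))
```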

    A Factorization Based Algorithm for multi-Image Projective Structure and Motion

    We propose a method for the recovery of projective shape and motion from multiple images of a scene by factorization of a matrix containing the images of all points in all views. This factorization is only possible when the image points are correctly scaled. The major technical contribution of this paper is a practical method for the recovery of these scalings, using only fundamental matrices and epipoles estimated from the image data. The resulting projective reconstruction algorithm runs quickly and provides accurate reconstructions. Results are presented for simulated and real images.
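
    Once the scalings (the projective depths) are available, the reconstruction itself is a rank-4 truncated SVD of the rescaled measurement matrix. A minimal sketch that takes the depths as given; their recovery from fundamental matrices and epipoles is the paper's contribution and is not reproduced here:

```python
import numpy as np

def projective_factorization(W, lam):
    """Rank-4 factorization of the rescaled measurement matrix.
    W: 3F x N homogeneous image points stacked view by view; lam: F x N matrix
    of projective depths (assumed already recovered)."""
    Ws = np.repeat(lam, 3, axis=0) * W                 # scale each homogeneous point
    U, d, Vt = np.linalg.svd(Ws, full_matrices=False)
    P = (U[:, :4] * d[:4]).reshape(-1, 3, 4)           # one 3x4 projective camera per view
    X = Vt[:4]                                         # 4 x N homogeneous scene points
    return P, X
```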

    3D Non-Rigid Reconstruction with Prior Shape Constraints

    3D non-rigid shape recovery from a single uncalibrated camera is a challenging, under-constrained problem in computer vision. Although tremendous progress has been made towards solving the problem, two main limitations remain in most previous solutions. First, current methods focus on non-incremental solutions; that is, the algorithms require all the measurement data to be collected before reconstruction takes place. This methodology is inherently unsuitable for applications requiring real-time solutions. Second, most existing approaches assume that 3D shapes can be accurately modelled in a linear subspace. These methods are simple and have proven effective for reconstructing objects with relatively small deformations, but they have considerable limitations when the deformations are large or complex. Such non-linear deformations are often observed in highly flexible objects, for which the linear model is impractical. Note that specific types of shape variation may be governed by only a small number of parameters and can therefore be well represented in a low-dimensional manifold. The methods proposed in this thesis aim to estimate the non-rigid shapes and the corresponding camera trajectories based on both the observations and a prior learned manifold. First, an incremental approach is proposed for estimating deformable objects. An important advantage of this method is the ability to reconstruct the 3D shape from a newly observed image and update the parameters in the 3D shape space. However, this recursive method assumes the deformable shapes vary only slightly from a mean shape and is therefore still not feasible for objects subject to large-scale deformations. To address this problem, a series of approaches is proposed, all based on non-linear manifold learning techniques. Such a manifold is used as a shape prior, with the reconstructed shapes constrained to lie within it. These non-linear manifold-based approaches significantly improve the quality of the reconstructed results and are well adapted to different types of shapes undergoing significant and complex deformations. Throughout the thesis, the methods are validated quantitatively on 2D point sequences projected from 3D motion-capture data for ground-truth comparison, and are demonstrated qualitatively on real 2D video sequences. The proposed methods are compared against several state-of-the-art techniques, with results shown for a variety of challenging deformable objects. Extensive experiments also demonstrate the robustness of the proposed algorithms with respect to measurement noise and missing data.
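
    The linear-subspace model discussed above, which the thesis identifies as a limitation for large deformations, writes each frame's 3D shape as a mean shape plus a weighted sum of basis shapes viewed under weak perspective. A minimal sketch of that baseline model; the names and toy dimensions are chosen here for illustration, and the thesis's manifold-based extensions are not sketched:

```python
import numpy as np

def project_shape(c, B, S0, R, t):
    """Linear shape-basis model: the 3D shape is the mean shape S0 plus the
    basis shapes B weighted by the per-frame coefficients c, imaged under
    weak perspective (first two rows of the rotation R, plus a 2D translation)."""
    S = S0 + np.tensordot(c, B, axes=1)   # (N, 3): S0 + sum_k c[k] * B[k]
    return S @ R[:2].T + t                # (N, 2) image points

# Toy usage: 10 points, 3 basis shapes.
N, K = 10, 3
S0 = np.random.randn(N, 3)                # mean shape
B = np.random.randn(K, N, 3)              # basis (deformation) shapes
c = 0.1 * np.random.randn(K)              # per-frame deformation coefficients
x = project_shape(c, B, S0, np.eye(3), np.zeros(2))
```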

    DATA-DRIVEN FACIAL IMAGE SYNTHESIS FROM POOR QUALITY LOW RESOLUTION IMAGE

    Ph.D. (Doctor of Philosophy) thesis.

    Rank classification of linear line structure in determining trifocal tensor.

    Zhao, Ming. Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. Includes bibliographical references (p. 111-117) and index. Abstracts in English and Chinese. Contents:
    1 Introduction: Motivation; Objective of the study; Challenges and our approach; Original contributions; Organization of this dissertation
    2 Related Work: Critical configuration for motion estimation and projective reconstruction (Point feature; Line feature); Camera motion estimation (Line tracking; Determining camera motion)
    3 Preliminaries on Three-View Geometry and Trifocal Tensor: Projective spaces P3 and transformations; The trifocal tensor; Computation of the trifocal tensor (normalized linear algorithm)
    4 Linear Line Structures: Models of line space; Line structures (Linear line space; Ruled surface; Line congruence; Line complex)
    5 Critical Configurations of Three Views Revealed by Line Correspondences: Two-view degeneracy; Three-view degeneracy (Introduction; Linear line space; Linear ruled surface; Linear line congruence; Linear line complex); Retrieving tensor in critical configurations; Rank classification of non-linear line structures
    6 Camera Motion Estimation Framework: Line extraction; Line tracking (Preliminary geometric tracking; Experimental results); Camera motion estimation framework using EKF
    7 Experimental Results: Simulated data experiments; Real data experiments (Linear line space; Linear ruled surface; Linear line congruence; Linear line complex); Empirical observation: ruled plane for line transfer; Simulation for non-linear line structures
    8 Conclusions and Future Work: Summary; Future work
    Appendices: A Notations; B Tensor; C Matrix Decomposition and Estimation Techniques; D MATLAB Files (Estimation matrix; Line transfer; Simulation)
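
    As a pointer to the line-transfer machinery that runs through the thesis, the standard incidence relation of the trifocal tensor transfers a pair of corresponding lines in the second and third views to a line in the first view. A minimal sketch, using the usual slicing of the tensor into three 3x3 matrices (a notational assumption, not code from the thesis):

```python
import numpy as np

def transfer_line(T, l2, l3):
    """Line transfer with the trifocal tensor: given corresponding image lines
    l2 (second view) and l3 (third view) in homogeneous coordinates, the
    matching line in the first view has coordinates l1[i] = l2^T T[i] l3."""
    return np.array([l2 @ T[i] @ l3 for i in range(3)])

# T is a (3, 3, 3) array, e.g. estimated with the normalized linear algorithm
# from line and/or point correspondences across the three views.
```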