6 research outputs found

    Satellite Articulation Sensing using Computer Vision

    Get PDF
    Autonomous on-orbit satellite servicing benefits from an inspector satellite that can gather as much information as possible about the primary satellite, including the performance of articulated components such as solar arrays, antennas, and sensors. A method is developed for building an articulated model from monocular imagery using tracked feature points and the known relative inspection route. Two methods are also developed for tracking the articulation of a satellite in real time, given an articulated model, using tracked feature points and image silhouettes. Performance is evaluated for multiple inspection routes, and the effect of inspection-route noise is assessed. Additionally, a satellite model is built and used to collect stop-motion images simulating articulated motion over an inspection route under simulated space illumination. These images are used in the silhouette articulation tracking method, and successful tracking is demonstrated qualitatively. Finally, a human pose tracking algorithm is modified to track satellite articulation, demonstrating the applicability of human pose tracking methods to satellites when an articulated model is available.
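
    The feature-point articulation-tracking idea can be illustrated with a toy planar example (a hedged sketch, not the thesis's actual algorithms): given an articulated model with a known joint location, the rotation of a part follows in closed form from its tracked feature points via an orthogonal Procrustes fit. The function name, the 2-D setup, and the synthetic points are all illustrative assumptions.

```python
import numpy as np

def estimate_joint_angle(model_pts, tracked_pts, joint):
    """Estimate a single 2-D joint rotation that maps model feature
    points (rest pose) onto their tracked positions. Both point sets
    are expressed relative to the joint, reducing the fit to a 2-D
    orthogonal Procrustes (Kabsch) problem."""
    A = model_pts - joint            # rest-pose points, joint-centred
    B = tracked_pts - joint          # observed points, joint-centred
    # Closed-form least-squares rotation from the SVD of A^T B.
    U, _, Vt = np.linalg.svd(B.T @ A)
    R = U @ Vt
    if np.linalg.det(R) < 0:         # enforce a proper rotation
        U[:, -1] *= -1
        R = U @ Vt
    return np.arctan2(R[1, 0], R[0, 0])

# Synthetic check: rotate rest-pose points by 30 degrees about the joint.
joint = np.array([1.0, 2.0])
rest = np.array([[2.0, 2.0], [3.0, 2.5], [2.5, 3.0]])
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
obs = (rest - joint) @ R.T + joint
print(np.rad2deg(estimate_joint_angle(rest, obs, joint)))  # ~30.0
```

    Applied per frame to the tracked points of each articulated part, this recovers a joint-angle trajectory; the thesis's silhouette-based method replaces the point correspondences with a silhouette-matching objective.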

    Recovering articulated non-rigid shapes, motions and kinematic chains from video

    Get PDF
    Recovering articulated shape and motion, especially human body motion, from video is a challenging problem with a wide range of applications in medical studies, sports analysis, animation, and beyond. Previous work on articulated motion recovery generally requires prior knowledge of the kinematic chain and usually does not address recovery of the articulated shape; the non-rigidity of some articulated parts, e.g. human body motion with non-rigid facial motion, is ignored entirely. We propose a factorization-based approach that recovers the shape, motion, and kinematic chain of an articulated object with non-rigid parts directly from video sequences under a unified framework. The approach is based on modeling articulated non-rigid motion as a set of intersecting motion subspaces. A motion subspace is the linear subspace spanned by the trajectories of an object and can model either rigid or non-rigid motion; the intersection of the motion subspaces of two linked parts models the motion of an articulated joint or axis. Our approach consists of algorithms for motion segmentation, kinematic-chain building, and shape recovery. It is robust to outliers and can be automated. We test the approach in synthetic and real experiments and demonstrate how to recover articulated structure with non-rigid parts from a single-view camera without prior knowledge of the kinematic chain.
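
    The intersecting-subspace idea can be sketched numerically (a minimal planar toy, assuming rigid parts and noise-free tracks, not the paper's full 3-D non-rigid framework): the trajectories of each rigid part span a low-dimensional subspace, and a joint forces the two subspaces to share exactly one dimension, the joint's own trajectory, which can be recovered from the principal vectors of the two subspaces. All data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
F, P = 30, 8                            # frames, points per part

def rot(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

J = np.array([1.0, 0.5])                # joint, in part-1 body frame
X1 = rng.normal(size=(2, P))            # part-1 points (body frame)
X2 = rng.normal(size=(2, P))            # part-2 points (joint frame)

frames1, frames2, joint_traj = [], [], []
for f in range(F):
    R1, t = rot(0.1 * f), np.array([0.05 * f, 0.02 * f])
    joint_f = R1 @ J + t                # joint carried along by part 1
    frames1.append(R1 @ X1 + t[:, None])
    # Part 2 rotates by an extra angle about the moving joint.
    frames2.append(R1 @ rot(0.2 * f) @ X2 + joint_f[:, None])
    joint_traj.append(joint_f)

W1 = np.vstack(frames1)                 # 2F x P trajectory matrices
W2 = np.vstack(frames2)
W = np.hstack([W1, W2])
print(np.linalg.matrix_rank(W1),        # 3: planar rigid subspace
      np.linalg.matrix_rank(W2),        # 3
      np.linalg.matrix_rank(W))         # 5 = 3 + 3 - 1 (shared joint dim)

# Recover the shared dimension via principal vectors of the subspaces.
U1 = np.linalg.svd(W1, full_matrices=False)[0][:, :3]
U2 = np.linalg.svd(W2, full_matrices=False)[0][:, :3]
u = np.linalg.svd(U1.T @ U2)[0][:, 0]   # principal angle ~0 direction
est = U1 @ u
true = np.concatenate(joint_traj)
print(abs(est @ true) / np.linalg.norm(true))  # ~1: joint trajectory found
```

    The rank deficit of the concatenated matrix (5 rather than 3 + 3) is what signals a linked pair; the paper builds its motion segmentation and kinematic-chain construction on this kind of subspace reasoning.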

    Robust and Accurate Structure from Motion of Rigid and Nonrigid Objects

    Get PDF
    As a central theme in computer vision, the problem of recovering 3D structure and motion from image sequences has been widely studied during the past three decades, and considerable progress has been made in theory as well as in practice. However, several challenges remain, including algorithm robustness and accuracy, especially for nonrigid modeling. The thesis focuses on these challenges, and several new robust and accurate algorithms are proposed. The first part of the thesis reviews the state-of-the-art techniques of structure and motion factorization: an introduction to structure from motion and the mathematical background of the technique, followed by the general idea and the different formulations of structure from motion for rigid and nonrigid objects. The second part covers the proposed quasi-perspective projection model and its application to structure and motion factorization. Previous algorithms are based on either a simplified affine assumption or a complicated full perspective projection model: the affine model is widely adopted for its simplicity, whereas the extension to full perspective suffers from the need to recover projective depths. The quasi-perspective model is proposed to fill the gap between the two. It is more accurate than the affine model in both theoretical analysis and experimental studies, and more of its geometric properties are investigated in the context of one- and two-view geometry. Finally, the model is applied to structure from motion, and a framework for rigid and nonrigid factorization under the quasi-perspective assumption is established. The last part of the thesis focuses on robustness, and three new algorithms are proposed. First, a spatial-and-temporal-weighted factorization algorithm is proposed to handle significant image noise, where the uncertainty of image measurements is estimated from a new perspective by virtue of reprojection residuals. Second, a rank-4 affine factorization algorithm is proposed to avoid the difficulty of image alignment with erroneous data, followed by a robust factorization scheme that can work with missing and outlying data. Third, the robust algorithm is extended to nonrigid scenarios, and a new augmented nonrigid factorization algorithm is proposed to handle imperfect tracking data. The main contributions of the thesis are as follows. The proposed quasi-perspective projection model fills the gap between the simplicity of the affine model and the accuracy of the perspective model, and its application to structure and motion factorization greatly increases the efficiency and accuracy of the algorithm. The proposed robust algorithms require no prior information about image measurements and greatly improve the overall accuracy and robustness of previous approaches. Moreover, the algorithms can also be applied directly to structure from motion of nonrigid objects.
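
    The rank-4 observation behind the second algorithm can be sketched in a few lines (a hedged synthetic illustration, not the thesis's robust scheme): an unregistered affine measurement matrix W = MS + t1^T factorizes with rank 4, with the shape in homogeneous coordinates, so no image-centroid alignment step is required before factorization. All matrices here are randomly generated.

```python
import numpy as np

rng = np.random.default_rng(1)
F, P = 20, 50                           # frames, tracked points
S = rng.normal(size=(3, P))             # 3-D shape
rows = []
for f in range(F):
    M = rng.normal(size=(2, 3))         # affine camera for frame f
    t = rng.normal(size=(2, 1))         # unknown image translation
    rows.append(M @ S + t)              # no centroid registration
W = np.vstack(rows)                     # 2F x P measurement matrix

U, s, Vt = np.linalg.svd(W, full_matrices=False)
print(np.sum(s > 1e-8 * s[0]))          # 4: rank 4 despite translations
W4 = U[:, :4] * s[:4] @ Vt[:4]          # rank-4 reconstruction
print(np.allclose(W4, W))               # True, up to a 4x4 gauge ambiguity
```

    Classical affine factorization instead subtracts each frame's centroid to reach rank 3, which is exactly the alignment step that becomes unreliable with outlying or missing data.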

    A Factorization-Based Approach to Articulated Motion Recovery

    No full text
