6,333 research outputs found

    Shape basis interpretation for monocular deformable 3D reconstruction

    In this paper, we propose a novel interpretable shape model to encode object non-rigidity. We first use the initial frames of a monocular video to recover a rest shape, which is later used to compute a dissimilarity measure based on a distance matrix. Spectral analysis is then applied to this matrix to obtain a reduced shape basis that, in contrast to existing approaches, can be physically interpreted. In turn, these pre-computed shape bases are used to linearly span the deformation of a wide variety of objects. We introduce the low-rank basis into a sequential approach that recovers both camera motion and non-rigid shape from the monocular video by simply optimizing the weights of the linear combination using bundle adjustment. Since the number of parameters to optimize per frame is relatively small, especially when physical priors are considered, our approach is fast and can potentially run in real time. Validation is carried out on a wide variety of real-world objects undergoing both inextensible and extensible deformations. Our approach achieves remarkable robustness to artifacts such as noisy and missing measurements, and shows improved performance compared to competing methods.
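    The abstract outlines the basis construction only at a high level, so the following is a minimal sketch of the general idea: build a dissimilarity (pairwise-distance) matrix from the rest shape, take its leading eigenvectors as a reduced basis, and express each frame's shape as a weighted combination whose weights would be refined by bundle adjustment. The helper names, the choice of Euclidean distances and the plain eigendecomposition are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def spectral_shape_basis(rest_shape, num_modes=5):
    """Low-rank shape basis from the rest shape (illustrative sketch).

    rest_shape: (N, 3) array of 3D points recovered from the initial frames.
    Returns a (num_modes, N, 3) basis built from the leading eigenvectors of
    the pairwise-distance (dissimilarity) matrix, replicated per coordinate.
    """
    # Pairwise Euclidean distances as the dissimilarity measure.
    diff = rest_shape[:, None, :] - rest_shape[None, :, :]
    D = np.linalg.norm(diff, axis=-1)                      # (N, N), symmetric

    # Spectral analysis: keep eigenvectors with largest eigenvalue magnitude.
    eigvals, eigvecs = np.linalg.eigh(D)
    order = np.argsort(-np.abs(eigvals))[:num_modes]
    modes = eigvecs[:, order]                              # (N, num_modes)

    # One basis shape per mode, applied identically to the x, y, z coordinates.
    return np.stack([np.outer(modes[:, k], np.ones(3)) for k in range(num_modes)])

def shape_from_weights(rest_shape, basis, weights):
    """Per-frame shape as the rest shape plus a weighted sum of basis shapes."""
    return rest_shape + np.tensordot(weights, basis, axes=1)
```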

    Intrinsic Dynamic Shape Prior for Fast, Sequential and Dense Non-Rigid Structure from Motion with Detection of Temporally-Disjoint Rigidity

    While dense non-rigid structure from motion (NRSfM) has been extensively studied from the perspective of the reconstructability problem in recent years, almost no attempts have been made to bring it into the practical realm. The reasons for the slow dissemination are the severe ill-posedness, the high sensitivity to motion and deformation cues, and the difficulty of obtaining reliable point tracks in the vast majority of practical scenarios. To fill this gap, we propose a hybrid approach that extracts prior shape knowledge from an input sequence with NRSfM and uses it as a dynamic shape prior for sequential surface recovery in scenarios with recurrence. Our Dynamic Shape Prior Reconstruction (DSPR) method can be combined with existing dense NRSfM techniques, while its energy functional is optimised with stochastic gradient descent at real-time rates for new incoming point tracks. The proposed versatile framework, with a new core NRSfM approach, outperforms several other methods in its ability to handle inaccurate and noisy point tracks, provided we have access to a representative (in terms of deformation variety) image sequence. Comprehensive experiments highlight the convergence properties and accuracy of DSPR under different disturbing effects. We also perform a joint study of tracking and reconstruction and show applications to shape compression and heart reconstruction under occlusions. We achieve state-of-the-art metrics (accuracy and compression ratios) in different scenarios.
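    As a rough illustration of the sequential stage described above, the sketch below fits per-frame shape-prior coefficients to new incoming point tracks by gradient descent on a reprojection energy. The orthographic camera, the plain (non-stochastic) gradient loop and the function name are assumptions made for clarity; the actual DSPR energy also updates camera motion and uses further regularisation.

```python
import numpy as np

def fit_frame_to_prior(tracks_2d, rest_shape, prior_basis, lr=1e-2, iters=200):
    """Fit per-frame basis coefficients to new point tracks by gradient descent.

    tracks_2d:   (N, 2) observed image points for the incoming frame.
    rest_shape:  (N, 3) mean/rest shape of the shape prior.
    prior_basis: (K, N, 3) dynamic shape prior extracted offline with NRSfM.
    Assumes a fixed orthographic camera for clarity.
    """
    K = prior_basis.shape[0]
    coeffs = np.zeros(K)
    P = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])          # orthographic projection

    for _ in range(iters):
        shape = rest_shape + np.tensordot(coeffs, prior_basis, axes=1)   # (N, 3)
        residual = shape @ P.T - tracks_2d                               # (N, 2)
        # d(energy)/d(coeffs[k]) = 2 * sum(residual * (B_k projected))
        grad = np.array([2.0 * np.sum(residual * (prior_basis[k] @ P.T))
                         for k in range(K)])
        coeffs -= lr * grad
    return coeffs
```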

    Real-time 3D reconstruction of non-rigid shapes with a single moving camera

    This paper describes a real-time sequential method to simultaneously recover the camera motion and the 3D shape of deformable objects from a calibrated monocular video. For this purpose, we consider the Navier-Cauchy equations used in 3D linear elasticity, solved by finite elements, to model the time-varying shape per frame. These equations are embedded in an extended Kalman filter, resulting in a sequential Bayesian estimation approach. We represent the shape, with unknown material properties, as a combination of elastic elements whose nodal points correspond to salient points in the image. The global rigidity of the shape is encoded by a stiffness matrix, computed after assembling each of these elements. With this piecewise model, we can linearly relate the 3D displacements to the 3D acting forces that cause the object deformation, assumed to be normally distributed. While standard finite-element-method techniques require imposing boundary conditions to solve the resulting linear system, in this work we eliminate this requirement by modeling the compliance matrix with a generalized pseudoinverse that enforces a pre-fixed rank. Our framework also ensures surface continuity without the need for a post-processing step to stitch all the piecewise reconstructions into a global smooth shape. We present experimental results using both synthetic and real videos for different scenarios ranging from isometric to elastic deformations. We also show the consistency of the estimation with respect to 3D ground-truth data, include several experiments assessing robustness against artifacts and, finally, provide an experimental validation of our real-time performance at frame rate for small maps.
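    One concrete ingredient mentioned above is the compliance matrix built as a generalized pseudoinverse of the assembled stiffness matrix with a pre-fixed rank, which linearly relates nodal forces to displacements without imposing boundary conditions. The snippet below is a hedged sketch of that idea using a truncated SVD; the rank choice and helper name are assumptions, not taken from the paper.

```python
import numpy as np

def rank_limited_compliance(K_global, rank):
    """Compliance matrix as a rank-limited pseudoinverse of the stiffness matrix.

    Instead of imposing boundary conditions, keep only the `rank` largest
    singular values of K when inverting, which suppresses the (rigid-body)
    null space. A sketch of the idea described in the abstract.
    """
    U, s, Vt = np.linalg.svd(K_global)
    s_inv = np.zeros_like(s)
    s_inv[:rank] = 1.0 / s[:rank]        # SVD returns s in descending order
    return (Vt.T * s_inv) @ U.T

# Displacements caused by normally distributed nodal forces f: u = C @ f,
# with C = rank_limited_compliance(K, rank=K.shape[0] - 6) as one possible choice.
```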

    Deformable and articulated 3D reconstruction from monocular video sequences

    This thesis addresses the problem of deformable and articulated structure from motion from monocular uncalibrated video sequences. Structure from motion is the problem of recovering information about the 3D structure of scenes imaged by a camera in a video sequence. Our study aims at the challenging problem of non-rigid shapes (e.g. a beating heart or a smiling face). Non-rigid structures appear constantly in our everyday life; think of a bicep curling, a torso twisting or a smiling face. Our research seeks a general method to perform 3D shape recovery purely from data, without having to rely on a pre-computed model or training data. Open problems in the field are the difficulty of the non-linear estimation, the lack of a real-time system, large amounts of missing data in real-world video sequences, measurement noise and strong deformations. Solving these problems would take us far beyond the current state of the art in non-rigid structure from motion. This dissertation presents our contributions to the field of non-rigid structure from motion, detailing a novel algorithm that enforces the exact metric structure of the problem at each step of the minimisation by projecting the motion matrices onto the correct deformable or articulated metric motion manifolds, respectively. An important advantage of this new algorithm is its ability to handle missing data, which becomes crucial when dealing with real video sequences. We present a generic bilinear estimation framework, which improves convergence and makes use of the manifold constraints. Finally, we demonstrate a sequential, frame-by-frame estimation algorithm, which provides a 3D model and camera parameters for each video frame, while simultaneously building a model of object deformation.
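    The core algorithmic step named in the abstract is the projection of the estimated motion matrices onto a metric motion manifold at each iteration of the minimisation. The sketch below shows only the simplest per-block case, projecting a 2x3 camera block onto the set of scaled, row-orthonormal matrices via an SVD-based Procrustes step; the deformable and articulated manifolds treated in the thesis are more involved, and the function name is hypothetical.

```python
import numpy as np

def project_to_metric_motion(M_block):
    """Project a 2x3 motion-matrix block onto the metric manifold c * R.

    On this manifold the two rows are orthogonal and of equal norm (the first
    two rows of a scaled rotation). Uses the SVD-based orthogonal Procrustes
    projection, i.e. the nearest such matrix in the Frobenius norm.
    """
    U, s, Vt = np.linalg.svd(M_block, full_matrices=False)   # M = U diag(s) Vt
    scale = s.mean()                                          # common scale c
    return scale * (U @ Vt)                                   # scaled row-orthonormal block
```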

    Combining local-physical and global-statistical models for sequential deformable shape from motion

    In this paper, we simultaneously estimate camera pose and non-rigid 3D shape from a monocular video, using a sequential solution that combines local and global representations. We model the object as an ensemble of particles, each ruled by the linear equation of Newton's second law of motion. This dynamic model is incorporated into a bundle adjustment framework, in combination with simple regularization components that ensure temporal and spatial consistency. The resulting approach allows us to sequentially estimate shape and camera poses, while progressively learning a global low-rank model of the shape that is fed back into the optimization scheme, thus introducing global constraints. The overall combination of local (physical) and global (statistical) constraints yields a solution that is both efficient and robust to several artifacts, such as noisy and missing data or sudden camera motions, without requiring any training data at all. Validation is carried out on a variety of real application domains, including articulated and non-rigid motion, for both continuous and discontinuous shapes. Our on-line methodology yields significantly more accurate reconstructions than competing sequential approaches, and is even comparable to the more computationally demanding batch methods.
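    The local physical model described above rules each particle by the linear equation of Newton's second law. A minimal sketch of such a per-particle dynamic prior is shown below, using explicit Euler integration to predict particle positions before they are refined by bundle adjustment; the integrator, the unit mass and the function name are assumptions rather than the paper's exact discretisation.

```python
import numpy as np

def predict_particles(positions, velocities, forces, mass=1.0, dt=1.0):
    """One step of a linear per-particle dynamic model (Newton's second law).

    positions, velocities, forces: (N, 3) arrays for N particles.
    Explicit Euler integration of m * a = f, used here as the local physical
    prior that predicts each particle before the sequential refinement.
    """
    accelerations = forces / mass
    new_velocities = velocities + dt * accelerations
    new_positions = positions + dt * new_velocities
    return new_positions, new_velocities
```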

    Modal space: a physics-based model for sequential estimation of time-varying shape from monocular video

    This paper describes two sequential methods for recovering the camera pose together with the 3D shape of highly deformable surfaces from a monocular video. The non-rigid 3D shape is modeled as a linear combination of mode shapes with time-varying weights that define the shape at each frame and are estimated on the fly. The low-rank constraint is combined with standard smoothness priors to optimize the model parameters over a sliding window of image frames. We propose to obtain a physics-based shape basis from the initial frames of the video to encode the time-varying shape along the sequence, reducing the problem from trilinear to bilinear. To this end, the 3D shape is discretized by means of a soup of elastic triangular finite elements to which we apply a force balance equation. This equation is solved using modal analysis via a simple eigenvalue problem to obtain a shape basis that encodes the modes of deformation. Even though this strategy can be applied in a wide variety of scenarios, when the observations become denser the solution can become prohibitive in terms of computational load. We avoid this limitation by proposing two efficient coarse-to-fine approaches that allow us to easily deal with dense 3D surfaces. This results in a scalable solution that estimates a small number of parameters per frame and could potentially run in real time. We show results on both synthetic and real videos with ground-truth 3D data, while robustly dealing with artifacts such as noise and missing data.
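    The physics-based basis described above comes from solving a force-balance equation by modal analysis, i.e. an eigenvalue problem on the assembled finite-element matrices. The sketch below illustrates that step with a generalized symmetric eigensolver; the mass matrix, the number of retained modes and the function name are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import eigh

def modal_shape_basis(K, M, num_modes=10):
    """Mode shapes from the free-vibration (force-balance) eigenvalue problem.

    K: (3N, 3N) stiffness matrix assembled from the triangular finite elements.
    M: (3N, 3N) mass matrix (a lumped, diagonal M is a common simplification).
    Solves K * phi = lambda * M * phi and keeps the lowest-frequency modes,
    which span the principal deformations used as the shape basis.  In
    practice the first few near-zero (rigid-body) modes are often discarded.
    """
    eigvals, eigvecs = eigh(K, M)              # generalized symmetric eigenproblem
    return eigvecs[:, :num_modes]              # (3N, num_modes) modal basis
```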

    Scalable Dense Monocular Surface Reconstruction

    This paper reports on a novel template-free monocular non-rigid surface reconstruction approach. Existing techniques using motion and deformation cues rely on multiple prior assumptions, are often computationally expensive and do not perform equally well across the variety of data sets. In contrast, the proposed Scalable Monocular Surface Reconstruction (SMSR) combines the strengths of several algorithms, i.e., it is scalable with the number of points and can handle sparse and dense settings as well as different types of motions and deformations. We estimate camera pose by singular value thresholding and proximal gradient. Our formulation adopts the alternating direction method of multipliers, which converges in linear time for large point-track matrices. In the proposed SMSR, trajectory space constraints are integrated by smoothing of the measurement matrix. In extensive experiments, SMSR is demonstrated to consistently achieve state-of-the-art accuracy on a wide variety of data sets.
    Comment: International Conference on 3D Vision (3DV), Qingdao, China, October 201
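    Two of the building blocks named above, singular value thresholding and proximal gradient, are generic enough to sketch. The snippet below shows the standard soft-thresholding of singular values (the proximal operator of the nuclear norm) inside a proximal-gradient loop that low-rank-filters a measurement matrix; it is a textbook illustration of those operators, not SMSR's actual pose or shape solver, and all names and parameters are assumptions.

```python
import numpy as np

def singular_value_threshold(X, tau):
    """Soft-threshold the singular values of X (prox operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

def proximal_gradient_low_rank(W, step=0.5, lam=1.0, iters=100):
    """Minimise 0.5 * ||S - W||_F^2 + lam * ||S||_* by proximal gradient descent.

    W: measurement (point-track) matrix; returns a low-rank estimate S.
    """
    S = np.zeros_like(W)
    for _ in range(iters):
        grad = S - W                                         # gradient of the data term
        S = singular_value_threshold(S - step * grad, step * lam)
    return S
```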

    Sequential non-rigid structure from motion using physical priors

    We propose a new approach to simultaneously recover camera pose and the 3D shape of non-rigid and potentially extensible surfaces from a monocular image sequence. For this purpose, we make use of the Extended Kalman Filter based Simultaneous Localization And Mapping (EKF-SLAM) formulation, a Bayesian optimization framework traditionally used in mobile robotics for estimating camera pose and reconstructing rigid scenarios. In order to extend the problem to a deformable domain, we represent the object's surface mechanics by means of Navier's equations, which are solved using a Finite Element Method (FEM). With these main ingredients, we can further model the material's stretching, allowing us to go a step further than most current techniques, which are typically constrained to surfaces undergoing isometric deformations. We extensively validate our approach in both real and synthetic experiments, and demonstrate its advantages with respect to competing methods. More specifically, we show that besides simultaneously retrieving camera pose and non-rigid shape, our approach handles both isometric and extensible surfaces, requires neither batch processing of all the frames nor tracking points over the whole sequence, and runs at several frames per second.
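    Since the method above builds on the EKF-SLAM formulation, a generic extended Kalman filter predict/update skeleton may help fix ideas: the state would stack camera pose with the FEM nodal positions, the process model would encode the elastic prior from Navier's equations, and the measurement model would be the camera projection. Everything in the sketch (function signature, noise models) is a standard EKF template, not the paper's specific implementation.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict/update cycle of an extended Kalman filter.

    x: state vector (e.g. camera pose parameters stacked with nodal positions).
    P: state covariance; z: 2D feature measurements for the current frame.
    f, F: process model and its Jacobian (where an elastic/FEM prior could enter).
    h, H: measurement model (camera projection) and its Jacobian.
    Q, R: process and measurement noise covariances.
    """
    # Predict
    x_pred = f(x)
    F_k = F(x)
    P_pred = F_k @ P @ F_k.T + Q

    # Update with the reprojection residual
    H_k = H(x_pred)
    y = z - h(x_pred)
    S = H_k @ P_pred @ H_k.T + R
    K = P_pred @ H_k.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```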