101 research outputs found

    Physics-Based Modeling of Nonrigid Objects for Vision and Graphics (Dissertation)

    This thesis develops a physics-based framework for 3D shape and nonrigid motion modeling for computer vision and computer graphics. In computer vision it addresses the problems of complex 3D shape representation, shape reconstruction, quantitative model extraction from biomedical data for analysis and visualization, shape estimation, and motion tracking. In computer graphics it demonstrates the generative power of our framework to synthesize constrained shapes, nonrigid object motions and object interactions for the purposes of computer animation. Our framework is based on the use of a new class of dynamically deformable primitives which allow the combination of global and local deformations. It incorporates physical constraints to compose articulated models from deformable primitives and provides force-based techniques for fitting such models to sparse, noise-corrupted 2D and 3D visual data. The framework leads to shape and nonrigid motion estimators that exploit dynamically deformable models to track moving 3D objects from time-varying observations. We develop models with global deformation parameters which represent the salient shape features of natural parts, and local deformation parameters which capture shape details. In the context of computer graphics, these models represent the physics-based marriage of the parameterized and free-form modeling paradigms. An important benefit of their global/local descriptive power in the context of computer vision is that it can potentially satisfy the often conflicting requirements of shape reconstruction and shape recognition. The Lagrange equations of motion that govern our models, augmented by constraints, make them responsive to externally applied forces derived from input data or applied by the user. This system of differential equations is discretized using finite element methods and simulated through time using standard numerical techniques. We employ these equations to formulate a shape and nonrigid motion estimator. The estimator is a continuous extended Kalman filter that recursively transforms the discrepancy between the sensory data and the estimated model state into generalized forces. These adjust the translational, rotational, and deformational degrees of freedom so that the model evolves in a fashion consistent with the noisy data. We demonstrate the interactive-time performance of our techniques in a series of experiments in computer vision, graphics, and visualization.
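
    The fitting machinery described above couples Lagrangian dynamics with forces derived from data. As a minimal sketch of that idea (not the thesis's actual formulation), the snippet below assumes an already-discretized model with mass, damping and stiffness matrices M, D, K, generalized coordinates q, and a given Jacobian J of the model points with respect to q; point residuals are converted into generalized forces and the equations of motion are advanced with a semi-implicit Euler step.

        import numpy as np

        def data_forces(q, model_points, observations, J, k=1.0):
            """Turn point residuals into generalized forces via the model Jacobian J.

            model_points(q): (n, 3) predicted 3D points; observations: (n, 3) noisy data;
            J: (3n, dof) Jacobian of the stacked model points w.r.t. q (assumed given).
            """
            residual = (observations - model_points(q)).reshape(-1)  # (3n,)
            return k * J.T @ residual                                # generalized force

        def step(q, qdot, M, D, K, f, dt=1e-2):
            """Semi-implicit Euler step of the damped dynamics M q'' + D q' + K q = f."""
            qddot = np.linalg.solve(M, f - D @ qdot - K @ q)
            qdot = qdot + dt * qddot
            q = q + dt * qdot
            return q, qdot

    The thesis's estimator goes further, wrapping this force-driven evolution in a continuous extended Kalman filter; the sketch only illustrates the force-based fitting loop.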

    Physics-Based Modeling, Analysis and Animation

    The idea of using physics-based models has received considerable interest in computer graphics and computer vision research over the last ten years. The interest arises from the fact that simple geometric primitives cannot accurately represent natural objects. In computer graphics, physics-based models are used to generate and visualize constrained shapes, motions of rigid and nonrigid objects, and object interactions with the environment for the purposes of animation. In computer vision, on the other hand, the method applies to complex 3-D shape representation, shape reconstruction and motion estimation. In this paper we review two models that have been used in computer graphics and two models that apply to both areas. In the area of computer graphics, Miller [48] uses a mass-spring model to animate three forms of locomotion of snakes and worms. To overcome the problem of the multitude of degrees of freedom associated with mass-spring lattices, Witkin and Welch [87] present a geometric method to model global deformations. To achieve the same result, Pentland and Horowitz [54] decompose the object motion into rigid and nonrigid deformation modes. To overcome problems with these last two approaches, Metaxas and Terzopoulos [45] successfully combine local deformations with global ones. Modeling based on physical principles is a potent technique for computer graphics and computer vision, and it is a rich and fruitful area for research in terms of both theory and applications. It is important, though, to develop concepts, methodologies, and techniques that will be widely applicable to many types of applications.
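
    Since the mass-spring lattice is the simplest of the reviewed models, a generic integration step is sketched below. This is an illustrative toy, not Miller's locomotion system: springs are assumed linear (Hooke's law), damping is a single viscous coefficient, and integration is explicit Euler.

        import numpy as np

        def mass_spring_step(x, v, springs, masses, dt=1e-3, damping=0.1):
            """x: (n, 3) positions, v: (n, 3) velocities, springs: list of (i, j, rest, ks)."""
            f = -damping * v                               # viscous damping on every node
            for i, j, rest, ks in springs:
                d = x[j] - x[i]
                length = np.linalg.norm(d) + 1e-12
                fs = ks * (length - rest) * (d / length)   # Hooke's law along the spring
                f[i] += fs
                f[j] -= fs
            v = v + dt * f / masses[:, None]
            x = x + dt * v
            return x, v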

    Geometry-Aware Network for Non-Rigid Shape Prediction from a Single View

    We propose a method for predicting the 3D shape of a deformable surface from a single view. By contrast with previous approaches, we do not need a pre-registered template of the surface, and our method is robust to the lack of texture and partial occlusions. At the core of our approach is a geometry-aware deep architecture that tackles the problem as usually done in analytic solutions: first perform 2D detection of the mesh and then estimate a 3D shape that is geometrically consistent with the image. We train this architecture in an end-to-end manner using a large dataset of synthetic renderings of shapes under different levels of deformation, material properties, textures and lighting conditions. We evaluate our approach on a test split of this dataset and available real benchmarks, consistently improving state-of-the-art solutions with a significantly lower computational time. Accepted at CVPR 2018.
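
    The abstract only fixes the overall structure of the architecture: detect the mesh in 2D, then lift it to a geometrically consistent 3D shape, trained end to end. The sketch below illustrates that two-stage layout; the backbone, layer sizes and vertex count are assumptions, not the paper's actual network.

        import torch
        import torch.nn as nn

        class TwoStageShapeNet(nn.Module):
            """Illustrative 'detect 2D mesh, then lift to 3D' network (not the paper's model)."""

            def __init__(self, n_vertices=81):
                super().__init__()
                self.n_vertices = n_vertices
                self.backbone = nn.Sequential(                          # image features
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(4), nn.Flatten())
                self.detect2d = nn.Linear(64 * 16, n_vertices * 2)      # stage 1: 2D mesh
                self.lift3d = nn.Sequential(                            # stage 2: 3D lifting
                    nn.Linear(n_vertices * 2, 256), nn.ReLU(),
                    nn.Linear(256, n_vertices * 3))

            def forward(self, image):
                feat = self.backbone(image)
                verts2d = self.detect2d(feat).view(-1, self.n_vertices, 2)
                verts3d = self.lift3d(verts2d.flatten(1)).view(-1, self.n_vertices, 3)
                return verts2d, verts3d

    In such a design, geometric consistency between the two outputs would typically be encouraged with a reprojection loss during end-to-end training.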

    Análise de Movimento Não Rígido em Visão por Computador (Non-Rigid Motion Analysis in Computer Vision)

    This article surveys the methodologies currently available in computer vision for the analysis of non-rigid motion and indicates several example applications. Non-rigid motion is classified and, for each resulting class, the inherent restrictions and conditions are presented and some of the work carried out within its scope is reviewed. Since the questions of motion analysis and shape modeling become inseparable when motion of the non-rigid type is considered, the modeling itself suggests a possible classification of non-rigid shape and motion. Shape models for deformable objects are therefore also presented, together with several example applications. With this fairly detailed study of the methodologies, and their applications, in the domain of non-rigid motion analysis, the authors hope to contribute to its development, given the current lack of good state-of-the-art reviews in this domain.

    A Bayesian approach to simultaneously recover camera pose and non-rigid shape from monocular images

    In this paper we bring the tools of the Simultaneous Localization and Map Building (SLAM) problem from a rigid to a deformable domain and use them to simultaneously recover the 3D shape of non-rigid surfaces and the sequence of poses of a moving camera. Under the assumption that the surface shape may be represented as a weighted sum of deformation modes, we show that the problem of estimating the modal weights along with the camera poses can be probabilistically formulated as a maximum a posteriori estimate and solved using an iterative least squares optimization. In addition, the probabilistic formulation we propose is very general and allows introducing different constraints without requiring any extra complexity. As a proof of concept, we show that local inextensibility constraints that prevent the surface from stretching can be easily integrated. An extensive evaluation on synthetic and real data demonstrates that our method has several advantages over current non-rigid shape from motion approaches. In particular, we show that our solution is robust to large amounts of noise and outliers and that it does not need to track points over the whole sequence, nor to use an initialization close to the ground truth.
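
    To make the modal formulation concrete, the sketch below writes the shape as a mean plus a weighted sum of deformation modes and performs one iterative least-squares (Gauss-Newton) update of the weights for a fixed camera pose, with a Gaussian prior standing in for the MAP term. The projection model, the joint pose update and the inextensibility constraints are simplified away; this is not the authors' code.

        import numpy as np

        def shape_from_modes(S0, modes, w):
            """S0: (n, 3) mean shape, modes: (k, n, 3) deformation modes, w: (k,) weights."""
            return S0 + np.tensordot(w, modes, axes=1)

        def gauss_newton_weights(w, S0, modes, R, t, K, uv, prior_sigma=1.0):
            """One Gauss-Newton step on reprojection error plus a Gaussian prior on w."""
            def project(S):
                cam = (R @ S.T).T + t                  # surface points in the camera frame
                p = (K @ cam.T).T
                return p[:, :2] / p[:, 2:3]            # pinhole projection to pixels

            # Numerical Jacobian of the projected points w.r.t. the modal weights.
            r0 = (uv - project(shape_from_modes(S0, modes, w))).reshape(-1)
            J = np.zeros((r0.size, w.size))
            eps = 1e-6
            for i in range(w.size):
                wi = w.copy()
                wi[i] += eps
                ri = (uv - project(shape_from_modes(S0, modes, wi))).reshape(-1)
                J[:, i] = (r0 - ri) / eps              # equals d(projection)/d(w_i)
            H = J.T @ J + np.eye(w.size) / prior_sigma**2
            return w + np.linalg.solve(H, J.T @ r0 - w / prior_sigma**2)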

    Implicit meshes: unifying implicit and explicit surface representations for 3D reconstruction and tracking

    This thesis proposes novel ways both to represent static surfaces and to parameterize their deformations. These can be used both by automated algorithms for efficient 3-D shape reconstruction and by graphics designers for editing and animation. Deformable 3-D models can be represented either as traditional explicit surfaces, such as triangulated meshes, or as implicit surfaces. Explicit surfaces are widely accepted because they are simple to deform and render; however, fitting them involves minimizing a non-differentiable distance function. By contrast, implicit surfaces allow fitting by minimizing a differentiable algebraic distance, but they are harder to meaningfully deform and render. Here we propose a method that combines the strengths of both representations to avoid their drawbacks and in this way build a robust surface representation, called an implicit mesh, suitable for automated shape recovery from video sequences. This surface representation lets us automatically detect and exploit silhouette constraints in uncontrolled environments that may involve occlusions and changing or cluttered backgrounds, which limit the applicability of most silhouette-based methods. We advocate the use of Dirichlet Free Form Deformation (DFFD) as a generic surface deformation technique that can be used to parameterize objects of arbitrary geometry defined as explicit meshes. It is based on a small set of control points and a generalized interpolant. The control points become model parameters, and changing them modifies the model's shape. Using such a parameterization, the problem dimensionality can be dramatically reduced, which is a desirable property for most optimization algorithms and thus makes DFFD a good tool for automated fitting. Combining DFFD as a generic parameterization method for explicit surfaces with implicit meshes as a generic surface representation, we obtain a powerful tool for automated shape recovery from images; however, we also argue that any other available surface parameterization could be used. We demonstrate the applicability of our technique to 3-D reconstruction of the human upper body, including the face, neck and shoulders, and the human ear, from noisy stereo and silhouette data. We also reconstruct the shape of high-resolution human faces, parameterized in terms of a Principal Component Analysis model, from interest points and automatically detected silhouettes. Tracking of deformable objects using implicit meshes from silhouettes and interest points in monocular sequences is shown in the following two examples: modeling the deformations of a piece of paper represented by an ordinary triangulated mesh, and tracking a person's shoulders whose deformations are expressed in terms of Dirichlet Free Form Deformations.
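
    The dimensionality reduction offered by a control-point parameterization can be illustrated with the toy sketch below: mesh vertices are a fixed linear blend of a small set of control points, so fitting only solves for the control points. The blending weights here are generic placeholders; DFFD specifically uses Sibson (natural-neighbour) coordinates, which are not reproduced in this illustration.

        import numpy as np

        def deform(B, control_points):
            """B: (n_vertices, n_control) blending weights, control_points: (n_control, 3)."""
            return B @ control_points                  # (n_vertices, 3) deformed mesh

        def fit_control_points(B, targets, reg=1e-3):
            """Least-squares choice of control points so the deformed mesh matches 3D targets."""
            n_c = B.shape[1]
            A = B.T @ B + reg * np.eye(n_c)            # regularized normal equations
            return np.linalg.solve(A, B.T @ targets)   # (n_control, 3)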

    Real-time 3D reconstruction of non-rigid shapes with a single moving camera

    This paper describes a real-time sequential method to simultaneously recover the camera motion and the 3D shape of deformable objects from a calibrated monocular video. For this purpose, we consider the Navier-Cauchy equations used in 3D linear elasticity, solved by finite elements, to model the time-varying shape per frame. These equations are embedded in an extended Kalman filter, resulting in a sequential Bayesian estimation approach. We represent the shape, with unknown material properties, as a combination of elastic elements whose nodal points correspond to salient points in the image. The global rigidity of the shape is encoded by a stiffness matrix, computed after assembling each of these elements. With this piecewise model, we can linearly relate the 3D displacements to the 3D acting forces that cause the object deformation, assumed to be normally distributed. While standard finite-element-method techniques require imposing boundary conditions to solve the resulting linear system, in this work we eliminate this requirement by modeling the compliance matrix with a generalized pseudoinverse that enforces a pre-fixed rank. Our framework also ensures surface continuity without the need for a post-processing step to stitch all the piecewise reconstructions into a global smooth shape. We present experimental results using both synthetic and real videos for scenarios ranging from isometric to elastic deformations. We also show the consistency of the estimation with respect to 3D ground-truth data, include several experiments assessing robustness against artifacts, and finally provide an experimental validation of real-time performance at frame rate for small maps.
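
    The step that replaces explicit boundary conditions is the rank-limited pseudoinverse of the assembled stiffness matrix, which maps nodal forces to nodal displacements. A minimal sketch is given below, assuming the global stiffness matrix K has already been assembled; the rank is an illustrative parameter rather than the value used in the paper, and the element assembly and Kalman filtering are omitted.

        import numpy as np

        def compliance(K, rank):
            """Truncated-SVD pseudoinverse of the (singular) global stiffness matrix."""
            U, s, Vt = np.linalg.svd(K)
            keep = np.arange(len(s)) < rank            # keep only the leading singular values
            s_inv = np.where(keep, 1.0 / np.maximum(s, 1e-12), 0.0)
            return (Vt.T * s_inv) @ U.T                # V diag(s_inv) U^T

        def displacements(K, forces, rank):
            """Nodal displacements u = C f for stacked nodal forces f of shape (3n,)."""
            return compliance(K, rank) @ forces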

    Probabilistic simultaneous pose and non-rigid shape recovery

    We present an algorithm to simultaneously recover non-rigid shape and camera poses from point correspondences between a reference shape and a sequence of input images. The key novel contribution of our approach is in bringing the tools of the probabilistic SLAM methodology from a rigid to a deformable domain. Under the assumption that the shape may be represented as a weighted sum of deformation modes, we show that the problem of estimating the modal weights along with the camera poses may be probabilistically formulated as a maximum a posteriori estimate and solved using an iterative least squares optimization. An extensive evaluation on synthetic and real data shows that our approach has several significant advantages over current approaches, such as performing robustly under large amounts of noise and outliers, and neither requiring points to be tracked over the whole sequence nor initializations close to the ground truth solution.

    Parameterizing Deformable Surfaces for Monocular 3-D Tracking

    We propose a deformable surface parameterization that is generic and lets us automatically build registered shape databases. This allows us to directly derive low-dimensional shape models using a simple dimensionality reduction technique, and it addresses one of the biggest difficulties in example-based shape modeling: building the required database, which is often a difficult and painstaking process. We incorporate the resulting models into a monocular tracking system that we use to capture the complex deformations of objects such as sheets of paper or expanding balloons from single video sequences.
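
    The "simple dimensionality reduction technique" applied to a registered database is typically PCA; a minimal sketch under that assumption is given below. The database array, the number of modes and the function names are placeholders, not the paper's implementation.

        import numpy as np

        def build_shape_model(meshes, n_modes=10):
            """meshes: (n_examples, n_vertices, 3) registered meshes sharing one topology."""
            X = meshes.reshape(len(meshes), -1)        # flatten each mesh into a row vector
            mean = X.mean(axis=0)
            U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
            return mean, Vt[:n_modes]                  # mean shape and principal modes

        def reconstruct(mean, modes, weights, n_vertices):
            """Low-dimensional shape: mean plus a weighted sum of the principal modes."""
            return (mean + weights @ modes).reshape(n_vertices, 3)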
