
    Surface-bounded growth modeling applied to human mandibles

    From a set of longitudinal three-dimensional scans of the same anatomical structure, we have accurately modeled the temporal shape and size changes using a linear shape model. On a total of 31 computed tomography scans of the mandible from six patients, 14,851 semilandmarks are found automatically using shape features and a new algorithm called geometry-constrained diffusion. The semilandmarks are mapped into Procrustes space. Principal component analysis extracts a one-dimensional subspace, which is used to construct a linear growth model. The worst-case mean modeling error in a cross-validation study is 3.7 mm.
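
A rough sketch of the pipeline described above (Procrustes alignment of the semilandmarks, PCA to extract a one-dimensional subspace, and a linear fit against age) might look as follows. This is an illustrative reconstruction, not the authors' code; it assumes the semilandmarks are already in correspondence across scans and that acquisition ages are available as the growth parameter.

```python
import numpy as np

def procrustes_align(X, ref):
    """Align landmark matrix X (k x 3) to ref with translation, scale, and rotation."""
    Xc = X - X.mean(axis=0)
    Rc = ref - ref.mean(axis=0)
    Xc /= np.linalg.norm(Xc)
    Rc /= np.linalg.norm(Rc)
    U, _, Vt = np.linalg.svd(Xc.T @ Rc)
    return Xc @ (U @ Vt)                       # rotate X onto the reference

def linear_growth_model(scans, ages):
    """scans: (n, k, 3) semilandmark sets; ages: (n,) acquisition ages."""
    ref = scans[0] - scans[0].mean(axis=0)
    ref /= np.linalg.norm(ref)
    aligned = np.stack([procrustes_align(s, ref).ravel() for s in scans])
    mean = aligned.mean(axis=0)
    _, _, Vt = np.linalg.svd(aligned - mean, full_matrices=False)
    pc1 = Vt[0]                                # one-dimensional growth subspace
    scores = (aligned - mean) @ pc1
    slope, intercept = np.polyfit(ages, scores, 1)
    def predict(age):
        return (mean + (slope * age + intercept) * pc1).reshape(-1, 3)
    return predict
```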

    Multiframe Temporal Estimation of Cardiac Nonrigid Motion

    A robust, flexible system for tracking the point-to-point nonrigid motion of the left ventricular (LV) endocardial wall in image sequences has been developed. This system is unique in its ability to model motion trajectories across multiple frames. The foundation of this system is an adaptive transversal filter based on the recursive least-squares algorithm. This filter facilitates the integration of models for periodicity and proximal smoothness as appropriate, using a contour-based description of the object’s boundaries. A set of correspondences between contours and an associated set of correspondence quality measures comprise the input to the system. Frame-to-frame relationships from two different frames of reference are derived and analyzed using synthetic and actual images. Two multiframe temporal models, both based on a sum of sinusoids, are derived. Illustrative examples of the system’s output are presented for quantitative analysis. Validation of the system is performed by comparing computed trajectory estimates with the trajectories of physical markers implanted in the LV wall. Sample case studies of marker trajectory comparisons are presented. Ensemble statistics from comparisons with 15 marker trajectories are acquired and analyzed. A multiframe temporal model without spatial periodicity constraints was determined to provide excellent performance with the least computational cost. A multiframe spatiotemporal model provided the best performance based on statistical standard deviation, although at significant computational expense.
    Funding: National Heart, Lung, and Blood Institute; Air Force Office of Scientific Research; National Science Foundation; Office of Naval Research. Grant numbers: R01HL44803, F49620-99-1-0481, F49620-99-1-0067, MIP-9615590, N00014-98-1-054.
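
As a hedged illustration of the two ingredients named above, the sketch below fits a sum-of-sinusoids temporal model to a single point trajectory with an exponentially weighted recursive least-squares (RLS) update. The basis, forgetting factor, and harmonic count are assumptions for illustration, not the paper's exact adaptive transversal filter.

```python
import numpy as np

def rls_sum_of_sinusoids(samples, times, period, n_harmonics=2, lam=0.98, delta=1e3):
    """Track y(t) ~ a0 + sum_k [a_k cos(2*pi*k*t/T) + b_k sin(2*pi*k*t/T)] with
    recursive least squares; lam is the forgetting factor, delta initializes P."""
    def basis(t):
        w = 2.0 * np.pi * t / period
        feats = [1.0]
        for k in range(1, n_harmonics + 1):
            feats += [np.cos(k * w), np.sin(k * w)]
        return np.array(feats)

    n = 1 + 2 * n_harmonics
    theta = np.zeros(n)              # sinusoid coefficients
    P = delta * np.eye(n)            # inverse correlation matrix estimate
    for y, t in zip(samples, times):
        phi = basis(t)
        gain = P @ phi / (lam + phi @ P @ phi)
        theta = theta + gain * (y - phi @ theta)     # correct with a-priori error
        P = (P - np.outer(gain, phi @ P)) / lam
    return theta, basis
```

A trajectory estimate at time t is then `basis(t) @ theta`; running one such filter per contour point would give the kind of point-to-point trajectories the abstract describes validating against implanted markers.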

    SurfNet: Generating 3D shape surfaces using deep residual networks

    3D shape models are naturally parameterized using vertices and faces, i.e., composed of polygons forming a surface. However, current 3D learning paradigms for predictive and generative tasks using convolutional neural networks focus on a voxelized representation of the object. Lifting convolution operators from the traditional 2D to 3D results in high computational overhead with little additional benefit, as most of the geometry information is contained on the surface boundary. Here we study the problem of directly generating the 3D shape surface of rigid and non-rigid shapes using deep convolutional neural networks. We develop a procedure to create consistent "geometry images" representing the shape surface of a category of 3D objects. We then use this consistent representation for category-specific shape surface generation from a parametric representation or an image by developing novel extensions of deep residual networks for the task of geometry image generation. Our experiments indicate that our network learns a meaningful representation of shape surfaces, allowing it to interpolate between shape orientations and poses, invent new shape surfaces, and reconstruct 3D shape surfaces from previously unseen images.
    Comment: CVPR 2017 paper.
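
A minimal sketch of the idea of regressing geometry images with residual blocks is given below in PyTorch. The module names, layer counts, and image size are illustrative assumptions and do not reproduce the SurfNet architecture; the point is only that a geometry image is an H x W grid whose three channels store surface (x, y, z) coordinates, so standard 2D residual networks apply.

```python
import torch
import torch.nn as nn

class GeomResBlock(nn.Module):
    """Pre-activation residual block operating on geometry-image feature maps."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)          # residual connection

class GeometryImageDecoder(nn.Module):
    """Maps a latent code (e.g. pose/view parameters) to a 3-channel geometry image."""
    def __init__(self, latent_dim=128, size=64, channels=64, n_blocks=4):
        super().__init__()
        self.channels, self.size = channels, size
        self.fc = nn.Linear(latent_dim, channels * (size // 4) ** 2)
        self.blocks = nn.Sequential(*[GeomResBlock(channels) for _ in range(n_blocks)])
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(channels, 3, kernel_size=3, padding=1),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, self.channels, self.size // 4, self.size // 4)
        return self.up(self.blocks(h))   # (batch, 3, size, size) surface coordinates
```

A mesh can then be recovered from the output grid by connecting neighboring grid cells.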

    High Quality Human 3D Body Modeling, Tracking and Application

    Geometric reconstruction of dynamic objects is a fundamental task of computer vision and graphics, and modeling the human body with high fidelity is considered a core part of this problem. Traditional human shape and motion capture techniques require an array of surrounding cameras or require subjects to wear reflective markers, which limits working space and portability. In this dissertation, a complete pipeline is designed, from geometric modeling of a detailed 3D human full body and capture of shape dynamics over time using a flexible setup, to guiding clothes/person re-targeting with such data-driven models. Since the mechanical movement of the human body can be treated as an articulated motion, which readily drives skin animation but is difficult to invert, that is, to recover parameters from images without manual intervention, we present a novel parametric model, GMM-BlendSCAPE, which jointly takes both the linear skinning model and the prior art of BlendSCAPE (Blend Shape Completion and Animation for PEople) into consideration, and we develop a Gaussian Mixture Model (GMM) to infer both body shape and pose from incomplete observations. We show the increased accuracy of joint and skin surface estimation using our model compared to skeleton-based motion tracking. To model the detailed body, we start by capturing high-quality partial 3D scans with a single-view commercial depth camera. Based on GMM-BlendSCAPE, we can then reconstruct multiple complete static models with large pose differences via our novel non-rigid registration algorithm. With vertex correspondences established, these models can be further converted into a personalized drivable template and used for robust pose tracking in a similar GMM framework. Moreover, we design a general-purpose real-time non-rigid deformation algorithm to accelerate this registration. Last but not least, we demonstrate a novel virtual clothes try-on application based on our personalized model, utilizing both image and depth cues to synthesize and re-target clothes for single-view videos of different people.
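
The GMM component can be pictured with the short sketch below, which fits a Gaussian mixture over stacked pose/shape parameter vectors and exposes its negative log-likelihood as a prior term for fitting to incomplete observations. The synthetic training data, dimensions, component count, and use of scikit-learn are placeholders for illustration, not the dissertation's actual setup.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Placeholder for pose/shape parameter vectors recovered from registered scans.
train_params = rng.normal(size=(500, 40))

gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(train_params)

def prior_energy(params):
    """Negative log-likelihood under the learned mixture; can be added to a
    data term when fitting a body template to an incomplete depth observation."""
    return -gmm.score_samples(np.asarray(params).reshape(1, -1))[0]
```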

    Deformable Shape Completion with Graph Convolutional Autoencoders

    The availability of affordable and portable depth sensors has made scanning objects and people simpler than ever. However, dealing with occlusions and missing parts is still a significant challenge. The problem of reconstructing a (possibly non-rigidly moving) 3D object from a single or multiple partial scans has received increasing attention in recent years. In this work, we propose a novel learning-based method for the completion of partial shapes. Unlike the majority of existing approaches, our method focuses on objects that can undergo non-rigid deformations. The core of our method is a variational autoencoder with graph convolutional operations that learns a latent space for complete realistic shapes. At inference, we optimize to find the representation in this latent space that best fits the generated shape to the known partial input. The completed shape exhibits a realistic appearance on the unknown part. We show promising results towards the completion of synthetic and real scans of human body and face meshes exhibiting different styles of articulation and partiality.
    Comment: CVPR 2018.
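
The inference step described above, searching the latent space for the code that best explains the observed vertices, can be sketched as follows in PyTorch. The decoder interface, the assumption of known correspondences for the observed vertices, and all hyperparameters are illustrative, not the paper's exact procedure.

```python
import torch

def complete_shape(decoder, partial_pts, known_idx, latent_dim=64, steps=500, lr=1e-2):
    """Optimize a latent code so the decoded full mesh matches a partial scan.
    decoder: (1, latent_dim) -> (1, n_vertices, 3);
    partial_pts: (m, 3) observed vertex positions; known_idx: their vertex indices."""
    z = torch.zeros(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        verts = decoder(z).squeeze(0)
        loss = ((verts[known_idx] - partial_pts) ** 2).sum(dim=1).mean()
        loss.backward()
        optimizer.step()
    return decoder(z).detach().squeeze(0)      # completed shape, (n_vertices, 3)
```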

    Finite element surface registration incorporating curvature, volume preservation, and statistical model information

    We present a novel method for nonrigid registration of 3D surfaces and images. The method can be used to register surfaces by means of their distance images, or to register medical images directly. It is formulated as a minimization problem over a sum of several terms representing the desired properties of a registration result: smoothness, volume preservation, matching of the surface, its curvature, and possibly other feature images, as well as consistency with previous registration results of similar objects, represented by a statistical deformation model. While most of these concepts are already known, we present a coherent continuous formulation of these constraints, including the statistical deformation model. This continuous formulation renders the registration method independent of its discretization. The finite element discretization we present is, while independent of the registration functional, the second main contribution of this paper. The local discontinuous Galerkin method has not previously been used in image registration, and it provides an efficient and general framework to discretize each of the terms of our functional. Computational efficiency and modest memory consumption are achieved thanks to parallelization and locally adaptive mesh refinement. This allows, for the first time, the use of otherwise prohibitively large 3D statistical deformation models.
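
To make the idea of such a composite functional concrete, here is a toy voxel-grid evaluation of three of the terms: distance-image matching, displacement smoothness, and a linearized volume-preservation penalty via the divergence of the displacement. It is a finite-difference stand-in under those simplifying assumptions, not the paper's finite element / discontinuous Galerkin discretization, and it omits the curvature and statistical-model terms.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def registration_energy(u, dist_moving, dist_target,
                        w_match=1.0, w_smooth=0.1, w_vol=0.1):
    """u: displacement field (3, X, Y, Z); dist_*: signed distance images of the
    moving and target surfaces sampled on the same grid."""
    coords = np.stack(np.meshgrid(*[np.arange(s) for s in dist_target.shape],
                                  indexing="ij")).astype(float)    # (3, X, Y, Z)
    # Matching term: the warped moving distance image should agree with the target's.
    warped = map_coordinates(dist_moving, coords + u, order=1)
    match = np.mean((warped - dist_target) ** 2)

    # grad[c, d] = d u_c / d x_d, by central differences.
    grad = np.stack([np.stack(np.gradient(u[c])) for c in range(3)])
    smooth = np.mean(grad ** 2)
    vol = np.mean((grad[0, 0] + grad[1, 1] + grad[2, 2]) ** 2)     # div(u)^2 penalty

    return w_match * match + w_smooth * smooth + w_vol * vol
```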

    Frequency-based Non-rigid Motion Analysis: Application to Four Dimensional Medical Images

    We present a method for nonrigid motion analysis in time sequences of volume images (4D data). In this method, nonrigid motion of the deforming object contour is dynamically approximated by a physically based deformable surface. In order to reduce the number of parameters describing the deformation, we make use of a modal analysis which provides a spatial smoothing of the surface. The deformation spectrum, which outlines the main excited modes, can be efficiently used for deformation comparison. Fourier analysis on time signals of the main deformation spectrum components provides a temporal smoothing of the data. Thus a complex nonrigid deformation is described by only a few parameters: the main excited modes and the main Fourier harmonics. Therefore, 4D data can be analyzed in a very concise manner. The power and robustness of the approach are illustrated by various results on medical data. We believe that our method has important applications in automatic diagnosis of heart diseases and in motion compression.
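
The two smoothing stages described above, modal projection in space followed by Fourier truncation in time, can be sketched as below. Using graph-Laplacian eigenvectors as surrogate vibration modes and scalar displacements along vertex normals is an assumption made for brevity, not the paper's exact modal basis.

```python
import numpy as np

def modal_fourier_descriptor(displacements, laplacian, n_modes=10, n_harmonics=3):
    """displacements: (n_frames, n_vertices) displacement along vertex normals;
    laplacian: (n_vertices, n_vertices) mesh Laplacian used as a surrogate stiffness.
    Returns the retained Fourier coefficients of the leading modal amplitudes."""
    # Spatial smoothing: low-frequency eigenvectors play the role of vibration modes.
    _, eigenvectors = np.linalg.eigh(laplacian)
    modes = eigenvectors[:, :n_modes]               # (n_vertices, n_modes)
    amplitudes = displacements @ modes              # (n_frames, n_modes) deformation spectrum

    # Temporal smoothing: keep only the leading Fourier harmonics of each amplitude signal.
    spectrum = np.fft.rfft(amplitudes, axis=0)      # (n_freqs, n_modes)
    return spectrum[:n_harmonics + 1]               # DC + first harmonics, per mode
```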

    Doctor of Philosophy

    Shape analysis is a well-established tool for processing surfaces. It is often a first step in performing tasks such as segmentation, symmetry detection, and finding correspondences between shapes. Shape analysis is traditionally employed on well-sampled surfaces where the geometry and topology are precisely known. When the surface takes the form of a point cloud containing nonuniform sampling, noise, and incomplete measurements, traditional shape analysis methods perform poorly. Although one may first perform reconstruction on such a point cloud prior to shape analysis, if the reconstructed geometry and topology are far from the true surface, then this can have an adverse impact on the subsequent analysis. Furthermore, for triangulated surfaces containing noise, thin sheets, and poorly shaped triangles, existing shape analysis methods can be highly unstable. This thesis explores methods of shape analysis applied directly to such defect-laden shapes. We first study the problem of surface reconstruction, in order to obtain a better understanding of the types of point clouds for which reconstruction methods have difficulties. To this end, we have devised a benchmark for surface reconstruction, establishing a standard for measuring error in reconstruction. We then develop a new method for consistently orienting the normals of such challenging point clouds by using a collection of harmonic functions intrinsically defined on the point cloud. Next, we develop a new shape analysis tool that is tolerant to imperfections, by constructing distances directly on the point cloud, defined as the likelihood of two points belonging to a mutually common medial ball, and apply this to segmentation and reconstruction. We extend this distance measure to define a diffusion process on the point cloud, tolerant to missing data, which is used for matching incomplete shapes undergoing a nonrigid deformation. Lastly, we have developed an intrinsic method for multiresolution remeshing of a poor-quality triangulated surface via spectral bisection.
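
As one concrete example of the spectral machinery mentioned at the end, the sketch below bisects a mesh or point-cloud graph by the sign of the Fiedler vector of its graph Laplacian. This is a generic illustration of spectral bisection under a dense-matrix assumption, not the thesis's multiresolution remeshing pipeline.

```python
import numpy as np

def spectral_bisection(adjacency):
    """Partition a graph into two sets using the Fiedler vector: the eigenvector
    of the combinatorial graph Laplacian with the second-smallest eigenvalue."""
    adjacency = np.asarray(adjacency, dtype=float)
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    _, eigenvectors = np.linalg.eigh(laplacian)
    fiedler = eigenvectors[:, 1]
    return fiedler >= 0                 # boolean side assignment per vertex
```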

    Multi-Level Shape Representation Using Global Deformations and Locally Adaptive Finite Elements

    We present a model-based method for the multi-level shape and pose estimation and abstraction of an object's surface from range data. The surface shape is estimated based on the parameters of a superquadric that is subjected to global deformations (tapering and bending) and a varying number of levels of local deformations. Local deformations are implemented using locally adaptive finite elements whose shape functions are piecewise cubic functions with C1 continuity. The surface pose is estimated based on the model's translational and rotational degrees of freedom. The algorithm first does a coarse fit, solving for a first approximation to the translation, rotation, and global deformation parameters, and then does several passes of mesh refinement by locally subdividing triangles based on the distance between the given data points and the model. The adaptive finite element algorithm ensures that during subdivision the desirable finite element mesh generation properties of conformity, non-degeneracy, and smoothness are maintained. Each pass of the algorithm uses physics-based modeling techniques to iteratively adjust the global and local parameters of the model in response to forces computed from approximation errors between the model and the data. We present results demonstrating the multi-level shape representation for both sparse and dense range data.
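
A small sketch of the global part of such a model, sampling a superquadric and applying a linear tapering deformation along its z-axis, is given below. The parameterization follows the standard superquadric formulas; the local finite element layer and the bending deformation are omitted, and the helper names are illustrative.

```python
import numpy as np

def spow(x, e):
    """Signed power used in superquadric parameterizations."""
    return np.sign(x) * np.abs(x) ** e

def superquadric(a, eps1, eps2, n=50):
    """Sample a superquadric with semi-axes a = (a1, a2, a3) on an (eta, omega) grid."""
    eta, omega = np.meshgrid(np.linspace(-np.pi / 2, np.pi / 2, n),
                             np.linspace(-np.pi, np.pi, n), indexing="ij")
    x = a[0] * spow(np.cos(eta), eps1) * spow(np.cos(omega), eps2)
    y = a[1] * spow(np.cos(eta), eps1) * spow(np.sin(omega), eps2)
    z = a[2] * spow(np.sin(eta), eps1)
    return np.stack([x, y, z], axis=-1)        # (n, n, 3) surface samples

def taper_z(points, tx, ty, a3):
    """Global tapering: scale x and y linearly with z (tx, ty in [-1, 1])."""
    p = points.copy()
    p[..., 0] *= tx * p[..., 2] / a3 + 1.0
    p[..., 1] *= ty * p[..., 2] / a3 + 1.0
    return p
```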

    Variational segmentation problems using prior knowledge in imaging and vision
