
    Real-time 3D reconstruction of non-rigid shapes with a single moving camera

    This paper describes a real-time sequential method to simultaneously recover the camera motion and the 3D shape of deformable objects from a calibrated monocular video. For this purpose, we consider the Navier-Cauchy equations of 3D linear elasticity, solved by finite elements, to model the time-varying shape in every frame. These equations are embedded in an extended Kalman filter, resulting in a sequential Bayesian estimation approach. We represent the shape, with unknown material properties, as a combination of elastic elements whose nodal points correspond to salient points in the image. The global rigidity of the shape is encoded by a stiffness matrix, computed by assembling these elements. With this piecewise model, we can linearly relate the 3D displacements to the 3D acting forces that cause the object deformation, which are assumed to be normally distributed. While standard finite-element-method techniques require imposing boundary conditions to solve the resulting linear system, in this work we eliminate this requirement by modeling the compliance matrix with a generalized pseudoinverse that enforces a pre-fixed rank. Our framework also ensures surface continuity without the need for a post-processing step to stitch the piecewise reconstructions into a global smooth shape. We present experimental results using both synthetic and real videos for scenarios ranging from isometric to elastic deformations. We also show the consistency of the estimation with respect to 3D ground-truth data, include several experiments assessing robustness against artifacts, and finally provide an experimental validation of real-time performance at frame rate for small maps.
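
    The rank-constrained compliance described above can be made concrete with a short numerical sketch. The snippet below is an illustrative assumption of how such a generalized pseudoinverse might be computed; the stiffness matrix K, the chosen rank, and the function name are placeholders, not the paper's implementation.

```python
import numpy as np

def compliance_pseudoinverse(K, rank):
    """Rank-constrained generalized pseudoinverse of a stiffness matrix K.

    K: assembled (symmetric, positive semi-definite) stiffness matrix.
    rank: pre-fixed rank; truncating the spectrum discards the zero-stiffness
    (rigid-body) modes that would otherwise require boundary conditions.
    """
    w, V = np.linalg.eigh(K)                  # eigenvalues in ascending order
    keep = np.argsort(w)[::-1][:rank]         # indices of the `rank` largest eigenvalues
    w_k, V_k = w[keep], V[:, keep]
    return (V_k / w_k) @ V_k.T                # C = V_k diag(1/w_k) V_k^T

# With compliance C, 3D displacements follow linearly from the acting forces f
# (assumed normally distributed):  u = C @ f
```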

    Shape basis interpretation for monocular deformable 3D reconstruction

    In this paper, we propose a novel interpretable shape model to encode object non-rigidity. We first use the initial frames of a monocular video to recover a rest shape, which is later used to compute a dissimilarity measure based on a distance matrix. Spectral analysis is then applied to this matrix to obtain a reduced shape basis that, in contrast to existing approaches, can be physically interpreted. In turn, these pre-computed shape bases are used to linearly span the deformation of a wide variety of objects. We introduce the low-rank basis into a sequential approach that recovers both camera motion and non-rigid shape from the monocular video by simply optimizing the weights of the linear combination using bundle adjustment. Since the number of parameters to optimize per frame is relatively small, especially when physical priors are considered, our approach is fast and can potentially run in real time. Validation is done on a wide variety of real-world objects undergoing both inextensible and extensible deformations. Our approach achieves remarkable robustness to artifacts such as noisy and missing measurements and shows improved performance over competing methods.
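
    As a rough illustration of the spectral step, the following sketch shows how a reduced basis could be extracted from a rest-shape distance matrix and used to span per-frame deformations. The Gaussian affinity, the function name, and the per-frame parameterization are assumptions for illustration; the paper's exact dissimilarity and basis construction may differ.

```python
import numpy as np

def spectral_shape_basis(rest_shape, n_modes, sigma=1.0):
    """Build a reduced shape basis from the rest shape.

    rest_shape: (N, 3) rest positions recovered from the initial frames.
    Returns an (N, n_modes) matrix whose columns are the leading spectral modes.
    """
    # Pairwise distance (dissimilarity) matrix of the rest shape.
    D = np.linalg.norm(rest_shape[:, None, :] - rest_shape[None, :, :], axis=-1)
    # Convert dissimilarities into an affinity matrix (assumed Gaussian kernel).
    A = np.exp(-(D ** 2) / (2.0 * sigma ** 2))
    # Spectral analysis: the leading eigenvectors give smooth, low-rank modes.
    w, V = np.linalg.eigh(A)
    return V[:, np.argsort(w)[::-1][:n_modes]]

# Per frame t, the non-rigid shape is spanned linearly by the fixed basis B:
#   shape_t = rest_shape + B @ W_t,   with W_t of size (n_modes, 3),
# and only W_t (plus the camera pose) is refined by bundle adjustment.
```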

    Deformable Prototypes for Encoding Shape Categories in Image Databases

    We describe a method for shape-based image database search that uses deformable prototypes to represent categories. Rather than directly comparing a candidate shape with all shape entries in the database, shapes are compared in terms of the types of nonrigid deformations (differences) that relate them to a small subset of representative prototypes. To solve the shape correspondence and alignment problem, we employ the technique of modal matching, an information-preserving shape decomposition for matching, describing, and comparing shapes despite sensor variations and nonrigid deformations. In modal matching, shape is decomposed into an ordered basis of orthogonal principal components. We demonstrate the utility of this approach for shape comparison in 2-D image databases.
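
    To make the modal idea concrete, here is a simplified, hedged sketch in which an orthogonal basis built from a Gaussian proximity matrix stands in for the paper's finite-element modes; shapes are compared by the coefficients of the deformation relating a candidate to a prototype. All names, the basis choice, and the assumption of known point correspondences are illustrative, not the modal-matching implementation itself.

```python
import numpy as np

def modal_basis(prototype_pts, n_modes):
    """Ordered orthogonal basis for 2-D point-set deformations of a prototype."""
    # Gaussian proximity matrix of the prototype points (one block per coordinate).
    D = np.linalg.norm(prototype_pts[:, None] - prototype_pts[None, :], axis=-1)
    G = np.exp(-D ** 2)
    w, V = np.linalg.eigh(np.kron(np.eye(2), G))
    order = np.argsort(w)[::-1]               # smooth (low-order) modes first
    return V[:, order[:n_modes]]              # shape (2N, n_modes)

def deformation_coefficients(prototype_pts, candidate_pts, basis):
    """Project the candidate-to-prototype displacement onto the modal basis.

    Assumes candidate_pts are already in correspondence with prototype_pts
    (in the paper, modal matching itself establishes this correspondence).
    """
    u = (candidate_pts - prototype_pts).T.ravel()   # stacked x then y displacements
    return basis.T @ u                              # per-mode deformation amounts

# Database search then compares these coefficient vectors against those of a
# small set of representative prototypes rather than against every entry.
```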

    Planning Framework for Robotic Pizza Dough Stretching with a Rolling Pin

    Stretching pizza dough with a rolling pin is a nonprehensile manipulation: because the dough is deformable, force closure cannot be established, so the object cannot be grasped directly. The framework for this dough-stretching application explained in this chapter consists of four sub-procedures: (i) recognition of the pizza dough on a plate, (ii) planning the steps needed to shape the dough into the desired form, (iii) path generation for a rolling pin to execute the output of the dough planner, and (iv) inverse kinematics for the bi-manual robot to grasp and control the rolling pin properly. Using the deformable object model described in Chap. 3, each sub-procedure of the proposed framework is explained sequentially.
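
    A minimal control-flow skeleton of this four-stage framework is sketched below; every function is a hypothetical stub used only to make the sequence of sub-procedures explicit, not the chapter's actual interface.

```python
def recognize_dough(plate_image):
    """(i) Hypothetical perception stub: locate the dough on the plate."""
    return {"contour": None}

def plan_stretching(dough_state, desired_shape):
    """(ii) Hypothetical planner stub: rolling actions needed to reach the goal shape."""
    return [{"start": (0.0, 0.0), "end": (0.1, 0.0)}]

def rolling_pin_path(action):
    """(iii) Hypothetical path-generation stub: Cartesian waypoints for the pin."""
    return [action["start"], action["end"]]

def bimanual_inverse_kinematics(path):
    """(iv) Hypothetical IK stub: joint targets for both arms holding the pin."""
    return [{"left_arm": None, "right_arm": None} for _ in path]

def stretch_dough(plate_image, desired_shape):
    """Run the four sub-procedures in sequence (robot execution omitted)."""
    state = recognize_dough(plate_image)
    for action in plan_stretching(state, desired_shape):
        trajectory = bimanual_inverse_kinematics(rolling_pin_path(action))
        # here the joint trajectory would be sent to the bi-manual robot controller
    return trajectory
```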

    Taking aim at moving targets in computational cell migration

    Cell migration is central to the development and maintenance of multicellular organisms. A fundamental understanding of cell migration can, for example, direct novel therapeutic strategies to control invasive tumor cells. However, the study of cell migration yields an overabundance of experimental data that require demanding processing and analysis to extract results. Computational methods and tools have therefore become essential in the quantification and modeling of cell migration data. We review computational approaches for the key tasks in the quantification of in vitro cell migration: image pre-processing, motion estimation, and feature extraction. Moreover, we summarize the current state of the art for in silico modeling of cell migration. Finally, we provide a list of available software tools for cell migration to assist researchers in choosing the most appropriate solution for their needs.

    Bioimage informatics in the context of Drosophila research

    Modern biological research relies heavily on microscopic imaging. The advanced genetic toolkit of Drosophila makes it possible to label molecular and cellular components with an unprecedented level of specificity, necessitating the application of the most sophisticated imaging technologies. Imaging in Drosophila spans all scales, from single molecules to entire populations of adult organisms, and from electron microscopy to live imaging of developmental processes. As imaging approaches become more complex and ambitious, there is an increasing need for quantitative, computer-mediated image processing and analysis to make sense of the imagery. Bioimage informatics is an emerging research field that covers all aspects of biological image analysis, from data handling, through processing, to quantitative measurement, analysis, and data presentation. Some of the most advanced, large-scale projects, combining cutting-edge imaging with complex bioimage informatics pipelines, are realized in the Drosophila research community. In this review, we discuss current research in biological image analysis specifically relevant to the type of systems-level image datasets that are uniquely available for the Drosophila model system. We focus on how state-of-the-art computer vision algorithms are impacting the ability of Drosophila researchers to analyze biological systems in space and time. We pay particular attention to how these algorithmic advances from computer science are made usable to practicing biologists through open-source platforms, and how biologists can themselves participate in their further development.

    Projection-based visualization of tangential deformation of nonrigid surface by deformation estimation using infrared texture

    In this paper, we propose a projection-based mixed reality system that visualizes the tangential deformation of a nonrigid surface by superimposing graphics directly onto the surface with projected imagery. The superimposed graphics are deformed according to the surface deformation. To achieve this goal, we develop a computer vision technique that estimates the tangential deformation by measuring the frame-by-frame movement of an infrared (IR) texture on the surface. IR ink, which can be captured by an IR camera under IR light but is invisible to the human eye, is used to provide the surface texture. Consequently, the texture does not degrade the image quality of the augmented graphics. The proposed technique individually measures the surface motion between two successive frames; therefore, it does not suffer from occlusions caused by interactions and allows touching, pushing, pulling, pinching, and similar manipulations. The moving least squares technique interpolates the measured result to estimate a denser surface deformation. The proposed method relies only on apparent motion measurement; thus, it is not limited to a specific deformation characteristic but is flexible across multiple deformable materials, such as viscoelastic and elastic materials. Experiments confirm that, with the proposed method, we can visualize the surface deformation of various materials by projected illumination, even when the user’s hand occludes the surface from the camera.
    Punpongsanon, P., Iwai, D., & Sato, K. Projection-based visualization of tangential deformation of nonrigid surface by deformation estimation using infrared texture. Virtual Reality 19, 45–56 (2015). https://doi.org/10.1007/s10055-014-0256-y
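
    As a rough sketch of the interpolation step, the snippet below applies a standard affine moving-least-squares fit to spread sparse IR-texture point motions to arbitrary surface points. This is a generic formulation assumed for illustration, not the authors' implementation; all names and parameters are placeholders.

```python
import numpy as np

def mls_affine_deform(p, q, queries, alpha=1.0, eps=1e-8):
    """Affine moving-least-squares interpolation of sparse point motions.

    p, q: (N, 2) tracked IR-texture points before and after a frame.
    queries: (M, 2) points at which a denser deformation estimate is wanted.
    Returns the (M, 2) interpolated positions of the query points.
    """
    out = np.empty_like(queries, dtype=float)
    for k, v in enumerate(queries):
        w = 1.0 / (np.sum((p - v) ** 2, axis=1) ** alpha + eps)   # distance weights
        p_star = w @ p / w.sum()                                  # weighted centroids
        q_star = w @ q / w.sum()
        p_hat, q_hat = p - p_star, q - q_star
        # Weighted affine transform that best maps p_hat onto q_hat.
        A = np.linalg.solve(p_hat.T @ (w[:, None] * p_hat),
                            p_hat.T @ (w[:, None] * q_hat))
        out[k] = (v - p_star) @ A + q_star
    return out

# The dense displacement field used to warp the projected graphics would then be
#   mls_affine_deform(p, q, pixel_grid) - pixel_grid
```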