
    A bayesian approach to simultaneously recover camera pose and non-rigid shape from monocular images

    In this paper we bring the tools of the Simultaneous Localization and Map Building (SLAM) problem from a rigid to a deformable domain and use them to simultaneously recover the 3D shape of non-rigid surfaces and the sequence of poses of a moving camera. Under the assumption that the surface shape may be represented as a weighted sum of deformation modes, we show that the problem of estimating the modal weights along with the camera poses can be probabilistically formulated as a maximum a posteriori estimate and solved using an iterative least squares optimization. In addition, the probabilistic formulation we propose is very general and allows introducing different constraints without requiring any extra complexity. As a proof of concept, we show that local inextensibility constraints that prevent the surface from stretching can be easily integrated. An extensive evaluation on synthetic and real data demonstrates that our method has several advantages over current non-rigid shape from motion approaches. In particular, we show that our solution is robust to large amounts of noise and outliers, and that it needs neither to track points over the whole sequence nor an initialization close to the ground truth.
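    To make the modal-weight idea concrete, here is a minimal sketch (not the authors' code) of fitting one frame by iterative least squares: the shape is a mean plus a weighted sum of deformation modes, a quadratic prior on the weights plays the role of the MAP term, and a robust loss stands in for the paper's outlier handling. All names (mean_shape, modes, the orthographic model, the Huber loss) are our own assumptions.

```python
# Sketch: MAP-style estimation of camera rotation and modal weights for one
# frame, assuming an orthographic camera. Illustrative only.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, mean_shape, modes, obs_2d, prior_weight=1e-2):
    """Reprojection residuals plus a Gaussian prior on the modal weights."""
    rvec, w = params[:3], params[3:]
    shape = mean_shape + np.tensordot(w, modes, axes=1)  # (N, 3)
    R2 = Rotation.from_rotvec(rvec).as_matrix()[:2]      # orthographic rows
    proj = shape @ R2.T                                  # (N, 2)
    return np.concatenate([(proj - obs_2d).ravel(), prior_weight * w])

def fit_frame(mean_shape, modes, obs_2d):
    """mean_shape: (N, 3); modes: (K, N, 3); obs_2d: (N, 2)."""
    x0 = np.zeros(3 + len(modes))
    sol = least_squares(residuals, x0, args=(mean_shape, modes, obs_2d),
                        loss="huber")  # robust loss downweights outliers
    return sol.x[:3], sol.x[3:]       # rotation vector, modal weights
```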

    Recognition of nonmanual markers in American Sign Language (ASL) using non-parametric adaptive 2D-3D face tracking

    This paper addresses the problem of automatically recognizing linguistically significant nonmanual expressions in American Sign Language from video. We develop a fully automatic system that is able to track facial expressions and head movements, and to detect and recognize facial events continuously from video. The main contributions of the proposed framework are the following: (1) we have built a stochastic and adaptive ensemble of face trackers to address factors resulting in lost face tracks; (2) we combine 2D and 3D deformable face models to warp input frames, thus correcting for any variation in facial appearance resulting from changes in 3D head pose; (3) we use a combination of geometric features and texture features extracted from a canonical frontal representation. The proposed framework makes it possible to detect grammatically significant nonmanual expressions from continuous signing and to differentiate successfully among linguistically significant expressions that involve subtle differences in appearance. We present results based on a dataset containing 330 sentences from videos that were collected and linguistically annotated at Boston University.
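    As a toy illustration of contribution (3), the sketch below concatenates geometric and texture features from a frontalized face and feeds them to an off-the-shelf classifier. The feature choices and the SVM are our own stand-ins, not the paper's pipeline.

```python
# Sketch: combining geometric and texture features for per-frame recognition
# of nonmanual events. Illustrative stand-in, not the paper's system.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def frame_features(landmarks_2d, texture_patch):
    """landmarks_2d: (L, 2) frontalized landmarks; texture_patch: (h, w) crop."""
    geometric = landmarks_2d.ravel()   # e.g. eyebrow/eyelid configurations
    texture = texture_patch.ravel()    # e.g. raw or HOG-encoded appearance
    return np.concatenate([geometric, texture])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# Usage (X: stacked frame_features rows, y: per-frame event labels):
# clf.fit(X, y); predictions = clf.predict(X_new)
```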

    Real-Time Sequential Non-Rigid Structure from Motion Using a Single Camera

    Applications that depend on accurate localization and 3D reconstruction within a real environment have attracted great interest in recent years, both from the research community and from industry. These applications range from augmented reality to robotics, simulation, video games, and beyond. Depending on the application and on the level of detail of the reconstruction, different devices are used, some of them specialized, more complex, and expensive, such as stereo cameras, color-and-depth (RGBD) cameras based on structured light or Time of Flight (ToF), as well as laser scanners and other more advanced sensors. For simple applications, everyday devices such as smartphones suffice: by applying computer vision techniques, 3D models of the environment can be obtained and, in the case of augmented reality, augmented information can be displayed at the selected location.

    In robotics, simultaneously localizing and building a 3D map of the environment is a fundamental task for achieving autonomous navigation. This problem is known in the state of the art as Simultaneous Localization And Mapping (SLAM) or Structure from Motion (SfM). For these techniques to apply, the object must not change its shape over time; the reconstruction is then unique up to a scale factor for monocular capture without a reference. If the rigidity condition does not hold, the shape of the object changes over time, and the problem becomes equivalent to performing one reconstruction per frame. This cannot be done directly, since different shapes combined with different camera poses can produce similar projections. For this reason, the reconstruction of deformable objects is still a developing area. SfM methods have been adapted by applying physical models and temporal, spatial, geometric, or other constraints to reduce the ambiguity of the solutions, giving rise to the techniques known as Non-Rigid SfM (NRSfM).

    This thesis proposes to start from PTAM (Parallel Tracking and Mapping), a rigid reconstruction technique well known in the state of the art, and to adapt it to include NRSfM techniques based on a linear basis model, in order to estimate the deformations of the modeled object dynamically and to apply temporal and spatial constraints that improve the reconstructions, while also adapting to changes in deformation that appear over the sequence. This requires modifying each of its execution threads so that they process non-rigid data.

    The tracking thread already performed tracking based on a 3D point map provided a priori. The most important modification here is the integration of a linear deformation model so that the deformation of the object is computed in real time, with the basis shapes of the deformation assumed fixed. The camera pose computation is based on the rigid estimation system, so pose and deformation coefficients are estimated alternately using the Expectation-Maximization (E-M) algorithm. Temporal and shape constraints are also imposed to restrict the ambiguities inherent in the solutions and to improve the quality of the 3D estimation.

    The thread that manages the map is updated over time so that it can improve the deformation bases when they can no longer explain the shapes observed in the current images. To this end, the rigid-model optimization included in this thread is replaced by an exhaustive NRSfM processing method that refines the bases according to the images flagged with large reconstruction error by the tracking thread. In this way, the model adapts to new deformations, allowing the system to evolve and remain stable in the long term.

    Unlike a large part of the methods in the literature, the proposed system handles perspective projection natively, minimizing the ambiguity and object-distance problems that arise under orthographic projection. The proposed system handles hundreds of points and is designed to meet real-time constraints on systems with limited hardware resources.
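    The alternation described above can be sketched as follows (a schematic, not the thesis implementation): with fixed deformation bases, camera pose and modal coefficients are refined in turns until the reprojection error stops improving. The perspective model matches the thesis' stated choice; all variable names and the plain least-squares solver are our own assumptions.

```python
# Sketch: E-M-style alternation between camera pose and deformation
# coefficients under a perspective camera. Illustrative only.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reproject(rvec, t, shape_3d, K):
    """Perspective projection of (N, 3) points with intrinsics K."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    cam = shape_3d @ R.T + t          # points in the camera frame
    px = cam @ K.T
    return px[:, :2] / px[:, 2:3]     # homogeneous divide

def alternate(mean_shape, bases, obs_2d, K, iters=10):
    rvec, t, w = np.zeros(3), np.array([0.0, 0.0, 5.0]), np.zeros(len(bases))
    for _ in range(iters):
        shape = mean_shape + np.tensordot(w, bases, axes=1)
        # Pose step: coefficients fixed, refine rotation and translation.
        pose = least_squares(
            lambda p: (reproject(p[:3], p[3:], shape, K) - obs_2d).ravel(),
            np.concatenate([rvec, t]))
        rvec, t = pose.x[:3], pose.x[3:]
        # Deformation step: pose fixed, refine the modal coefficients.
        coef = least_squares(
            lambda c: (reproject(rvec, t, mean_shape
                                 + np.tensordot(c, bases, axes=1), K)
                       - obs_2d).ravel(), w)
        w = coef.x
    return rvec, t, w
```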

    The Alignment Between 3-D Data and Articulated Shapes with Bending Surfaces

    In this paper we address the problem of aligning 3-D data with articulated shapes. This problem resides at the core of many motion tracking methods, with applications in human motion capture, action recognition, medical-image analysis, etc. We describe an articulated and bending surface representation well suited for this task, as well as a method which aligns (or registers) such a surface to 3-D data. Articulated objects, e.g., humans and animals, are covered with clothes and skin, which may be seen as textured surfaces. These surfaces are both articulated and deformable, and one realistic way to model them is to assume that they bend in the neighborhood of the shape's joints. We introduce a surface-bending model as a function of the articulated-motion parameters. This combined articulated-motion and surface-bending model better predicts the observed phenomena in the data and is therefore well suited for surface registration. Given a set of sparse 3-D data (gathered with a stereo camera pair) and a textured, articulated, and bending surface, we describe a register-and-fit method that proceeds as follows. First, the data-to-surface registration problem is formalized as a classification problem and carried out using an EM algorithm. Second, the data-to-surface fitting problem is carried out by minimizing the distance from the registered data points to the surface over the joint variables. To illustrate the method, we applied it to the problem of hand tracking. A hand model with 27 degrees of freedom is successfully registered and fitted to a sequence of 3-D data points gathered with a stereo camera pair.
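    The register-and-fit alternation can be illustrated with a compact EM-ICP-style sketch: the E-step softly assigns data points to model points under Gaussian noise with a uniform outlier component, and the M-step updates the transformation. For brevity the M-step below is a weighted rigid Procrustes update, whereas the paper minimizes over the articulated joint variables; everything here is a generic stand-in.

```python
# Sketch: one EM round of point-to-model registration. Illustrative only;
# the paper's M-step optimizes articulated-joint variables instead.
import numpy as np

def e_step(data, model, sigma2, outlier_w=1e-3):
    """Soft data-to-model assignments, (N, M), with an outlier component."""
    d2 = ((data[:, None, :] - model[None, :, :]) ** 2).sum(-1)
    p = np.exp(-d2 / (2.0 * sigma2))
    return p / (p.sum(axis=1, keepdims=True) + outlier_w)

def m_step_rigid(data, model, p):
    """Weighted Procrustes update mapping model points onto the data."""
    w = p.sum()
    mu_d = (p.sum(1)[:, None] * data).sum(0) / w
    mu_m = (p.sum(0)[:, None] * model).sum(0) / w
    C = (data - mu_d).T @ p @ (model - mu_m)        # weighted cross-covariance
    U, _, Vt = np.linalg.svd(C)
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    return R, mu_d - mu_m @ R.T                      # rotation, translation
```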

    On the Calibration of Active Binocular and RGBD Vision Systems for Dual-Arm Robots

    This paper describes a camera and hand-eye calibration methodology for integrating an active binocular robot head within a dual-arm robot. For this purpose, we derive the forward kinematic model of our active robot head and describe our methodology for calibrating and integrating it. This rigid calibration provides a closed-form hand-to-eye solution. We then present an approach for dynamically updating the cameras' extrinsic parameters for optimal 3D reconstruction, which is the foundation for robotic tasks such as grasping and manipulating rigid and deformable objects. Experimental results show that our robot head achieves sub-millimetre accuracy, below 0.3 millimetres, while recovering the 3D structure of a scene. In addition, we report a comparative study between current RGBD cameras and our active stereo head within two dual-arm robotic testbeds that demonstrates the accuracy and portability of our proposed methodology.
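    For reference, the classical hand-eye problem that the paper's closed-form solution addresses can also be solved with stock tooling; the sketch below uses OpenCV's generic solver as a stand-in, not the authors' kinematics-derived method. Inputs are the per-view robot poses (from forward kinematics) and target poses (e.g. from cv2.solvePnP), assumed given.

```python
# Sketch: generic hand-eye calibration (solves AX = XB) with OpenCV.
# Stand-in for the paper's closed-form, kinematics-based solution.
import cv2

def calibrate_hand_eye(R_gripper2base, t_gripper2base,
                       R_target2cam, t_target2cam):
    """Each argument is a list of per-view rotations/translations.
    Returns the camera-to-gripper rotation and translation."""
    return cv2.calibrateHandEye(R_gripper2base, t_gripper2base,
                                R_target2cam, t_target2cam,
                                method=cv2.CALIB_HAND_EYE_TSAI)
```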

    Live Texturing of Augmented Reality Characters from Colored Drawings

    Coloring books capture the imagination of children and provide them with one of their earliest opportunities for creative expression. However, given the proliferation and popularity of digital devices, real-world activities like coloring can seem unexciting, and children become less engaged in them. Augmented reality holds unique potential to impact this situation by providing a bridge between real-world activities and digital enhancements. In this paper, we present an augmented reality coloring book App in which children color characters in a printed coloring book and inspect their work using a mobile device. The drawing is detected and tracked, and the video stream is augmented with an animated 3-D version of the character that is textured according to the child's coloring. This is possible thanks to several novel technical contributions. We present a texturing process that applies the captured texture from a 2-D colored drawing to both the visible and occluded regions of a 3-D character in real time. We develop a deformable surface tracking method designed for colored drawings that uses a new outlier rejection algorithm for real-time tracking and surface deformation recovery. We present a content creation pipeline to efficiently create the 2-D and 3-D content. And, finally, we validate our work with two user studies that examine the quality of our texturing algorithm and the overall App experience
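    The texturing step can be pictured as a per-texel lookup: once the drawing is detected and rectified, each texel of the character's texture atlas fetches a color from a precomputed location in the drawing, covering occluded regions as well. The sketch below assumes such a lookup map exists (in the paper it is authored offline in the content-creation pipeline); it is an illustration, not the published implementation.

```python
# Sketch: copying a child's coloring onto a 3-D character's texture atlas
# via a precomputed texel-to-drawing lookup map. Illustrative only.
import numpy as np

def apply_drawing_texture(drawing_rgb, lookup_xy):
    """drawing_rgb: (H, W, 3) rectified view of the colored page.
    lookup_xy: (h, w, 2) per-texel (x, y) coordinates into the drawing,
    covering both visible and occluded regions of the character."""
    xs = np.clip(lookup_xy[..., 0].astype(int), 0, drawing_rgb.shape[1] - 1)
    ys = np.clip(lookup_xy[..., 1].astype(int), 0, drawing_rgb.shape[0] - 1)
    return drawing_rgb[ys, xs]  # (h, w, 3) texture atlas
```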

    Template-based Monocular 3-D Shape Reconstruction And Tracking Using Laplacian Meshes

    This thesis addresses the problem of recovering the 3-D shape of a deformable object in single images, or in image sequences acquired by a monocular video camera, given that a 3-D template shape and a template image of the object are available. While this is a very challenging problem in computer vision, being able to reconstruct and track 3-D deformable objects in videos enables many potential applications, ranging from sports and entertainment to engineering and medical imaging. This thesis extends the scope of deformable object modeling to real-world applications of fully 3-D modeling of deformable objects from video streams, with a number of contributions. We show that by extending the Laplacian formalism, first introduced in the graphics community to regularize 3-D meshes, we can turn the monocular 3-D shape reconstruction of a deformable object given correspondences with a reference image into a much better-posed problem with far fewer degrees of freedom than the original one. This has proved key to achieving real-time performance while preserving both sufficient flexibility and robustness. Our real-time system for 3-D reconstruction and tracking of deformable objects quickly rejects outlier correspondences and accurately reconstructs the object shape in 3-D. Frame-to-frame tracking is exploited to track the object under difficult settings such as large deformations, occlusions, illumination changes, and motion blur. We present an approach to solving the problem of dense image registration and 3-D shape reconstruction of deformable objects in the presence of occlusions and minimal texture. A key ingredient is the pixel-wise relevancy score that we use to weigh the influence of each pixel's image information in the energy cost function. A careful design of the framework is essential for obtaining state-of-the-art results in recovering 3-D deformations of both well- and poorly-textured objects in the presence of occlusions. We study the problem of reconstructing 3-D deformable objects interacting with rigid ones. Imposing real physical constraints allows us to model the interactions of objects in the real world more accurately and more realistically. In particular, we study the problem of a ball colliding with a bat observed by high-speed cameras. We provide quantitative measurements of the impact that are compared with simulation-based methods to evaluate which simulation predictions most accurately describe a physical quantity of interest, and to improve the models. Based on the diffuse property of the tracked deformable object, we propose a method to estimate the environment irradiance map, represented by a set of low-frequency spherical harmonics. The obtained irradiance map can be used to realistically illuminate 2-D and 3-D virtual content in the context of augmented reality on deformable objects. The results compare favorably with baseline methods. In collaboration with Disney Research, we developed an augmented reality coloring book application that runs in real time on mobile devices. The app allows children to see their coloring come to life: animated characters are displayed with texture lifted from the colors on the drawing. Deformations of the book page are explicitly modeled by our 3-D tracking and reconstruction method. As a result, accurate color information is extracted to synthesize the character's texture.
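    The Laplacian regularization at the heart of the thesis can be illustrated with a small linear least-squares sketch: a sparse umbrella Laplacian keeps each vertex close to the local shape of the template while a few 3-D correspondence constraints pin the mesh down. This is a generic simplification under assumed inputs (edges, anchor points), far simpler than the thesis' reprojection-based formulation.

```python
# Sketch: Laplacian-regularized mesh reconstruction from sparse 3-D anchors.
# Generic illustration of the regularization idea, not the thesis pipeline.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def umbrella_laplacian(n_vertices, edges):
    L = sp.lil_matrix((n_vertices, n_vertices))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L.tocsr()

def reconstruct(template, edges, anchor_ids, anchor_xyz, reg=1.0):
    """template: (n, 3) rest shape; anchor_xyz: (a, 3) observed 3-D points."""
    n = len(template)
    L = umbrella_laplacian(n, edges)
    delta = L @ template                        # template's local detail
    A_anchor = sp.csr_matrix(
        (np.ones(len(anchor_ids)), (range(len(anchor_ids)), anchor_ids)),
        shape=(len(anchor_ids), n))
    A = sp.vstack([reg * L, A_anchor])
    # Solve one sparse least-squares system per coordinate axis.
    return np.column_stack([
        lsqr(A, np.concatenate([reg * delta[:, k], anchor_xyz[:, k]]))[0]
        for k in range(3)])
```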

    Spatiotemporal alignment of in utero BOLD-MRI series

    Purpose: To present a method for spatiotemporal alignment of in-utero magnetic resonance imaging (MRI) time series acquired during maternal hyperoxia, enabling improved quantitative tracking of the blood oxygen level-dependent (BOLD) signal changes that characterize oxygen transport through the placenta to fetal organs. Materials and Methods: The proposed pipeline for spatiotemporal alignment of images acquired with single-shot gradient-echo echo-planar imaging includes 1) signal nonuniformity correction, 2) intra-volume motion correction based on nonrigid registration, 3) correction of motion and nonrigid deformations across volumes, and 4) detection of outlier volumes to be discarded from subsequent analysis. BOLD MRI time series collected from 10 pregnant women during 3T scans were analyzed using this pipeline. To assess pipeline performance, signal fluctuations between consecutive timepoints were examined. In addition, volume overlap and distance between manual region of interest (ROI) delineations in a subset of frames and the delineations obtained through propagation of the ROIs from the reference frame were used to quantify alignment accuracy. A previously demonstrated rigid registration approach was used for comparison. Results: The proposed pipeline improved anatomical alignment of the placenta and fetal organs over state-of-the-art rigid motion correction methods. In particular, unexpected temporal signal fluctuations during the first normoxia period were significantly decreased (P < 0.01), and the volume overlap and distance between region boundaries measures were significantly improved (P < 0.01). Conclusion: The proposed approach to aligning MRI time series enables more accurate quantitative studies of placental function by improving spatiotemporal alignment across the placenta and fetal organs. Funding: National Institutes of Health (NIH), grants U01 HD087211 and R01 EB017337; Consejería de Educación, Juventud y Deporte de la Comunidad de Madrid (Spain), through the Madrid-MIT M+Vision Consortium.
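    The two evaluation measures used above are straightforward to compute once the series is aligned; a minimal sketch (our own helper names, not the paper's code) follows.

```python
# Sketch: volume overlap (Dice) between propagated and manual ROI masks, and
# consecutive-timepoint signal fluctuation as a proxy for residual motion.
import numpy as np

def dice(mask_a, mask_b):
    """Volume overlap between two boolean ROI masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def temporal_fluctuation(series, roi):
    """Mean absolute BOLD change between consecutive volumes within an ROI.
    series: (T, X, Y, Z) aligned time series; roi: (X, Y, Z) boolean mask."""
    vals = series[:, roi]                  # (T, n_voxels) inside the ROI
    return np.abs(np.diff(vals, axis=0)).mean()
```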

    MBW: Multi-view Bootstrapping in the Wild

    Labeling articulated objects in unconstrained settings has a wide variety of applications, including entertainment, neuroscience, psychology, ethology, and many fields of medicine. Large offline labeled datasets do not exist for all but the most common articulated object categories (e.g., humans). Hand labeling these landmarks within a video sequence is a laborious task. Learned landmark detectors can help, but can be error-prone when trained from only a few examples. Multi-camera systems that train fine-grained detectors have shown significant promise in detecting such errors, allowing for self-supervised solutions that only need a small percentage of the video sequence to be hand-labeled. That approach, however, is based on calibrated cameras and rigid geometry, making it expensive, difficult to manage, and impractical in real-world scenarios. In this paper, we address these bottlenecks by combining a non-rigid 3D neural prior with deep flow to obtain high-fidelity landmark estimates from videos with only two or three uncalibrated, handheld cameras. With just a few annotations (representing 1-2% of the frames), we are able to produce 2D results comparable to state-of-the-art fully supervised methods, along with 3D reconstructions that are impossible with other existing approaches. Our Multi-view Bootstrapping in the Wild (MBW) approach demonstrates impressive results on standard human datasets, as well as on tigers, cheetahs, fish, colobus monkeys, chimpanzees, and flamingos from videos captured casually in a zoo. We release the codebase for MBW as well as this challenging zoo dataset, consisting of image frames of tail-end distribution categories with their corresponding 2D and 3D labels generated from minimal human intervention. Comment: NeurIPS 2022 conference. Project webpage and code: https://github.com/mosamdabhi/MB
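    At a high level, the bootstrapping loop alternates between training the detector on the current label set and using multi-view 3D consistency to admit new pseudo-labels. The sketch below is schematic: every name (detector, prior_3d, and their methods) is a hypothetical placeholder for the components the paper describes, not a real API.

```python
# Sketch: multi-view bootstrapping loop. All objects and methods here are
# hypothetical placeholders standing in for the components described above.
def bootstrap(videos, labels, detector, prior_3d, rounds=3, thresh=2.0):
    for _ in range(rounds):
        detector.fit(labels)                   # retrain on current label set
        for frames in videos:                  # frames: one frame per view
            cand = [detector.predict(f) for f in frames]
            recon = prior_3d.lift(cand)        # non-rigid 3-D explanation
            err = prior_3d.reprojection_error(recon, cand)
            if err < thresh:                   # keep multi-view-consistent
                labels.extend(zip(frames, cand))
    return detector, labels
```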