
    MoSculp: Interactive Visualization of Shape and Time

    We present a system that allows users to visualize complex human motion via 3D motion sculptures: a representation that conveys the 3D structure swept by a human body as it moves through space. Given an input video, our system computes the motion sculpture and provides a user interface for rendering it in different styles, including options to insert the sculpture back into the original video, render it in a synthetic scene, or physically print it. To provide this end-to-end workflow, we introduce an algorithm that estimates the human's 3D geometry over time from a set of 2D images, and we develop a 3D-aware image-based rendering approach that embeds the sculpture back into the scene. By automating the process, our system takes motion sculpture creation out of the realm of professional artists and makes it applicable to a wide range of existing video material. By providing viewers with 3D information, motion sculptures reveal space-time motion information that is difficult to perceive with the naked eye, and allow viewers to interpret how different parts of the object interact over time. We validate the effectiveness of this approach with user studies, finding that our motion sculpture visualizations are significantly more informative about motion than existing stroboscopic and space-time visualization methods.
    Comment: UIST 2018. Project page: http://mosculp.csail.mit.edu
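As a rough illustration of the sweeping idea described above, the sketch below merges per-frame body meshes into a single sculpture mesh. It assumes the per-frame geometry has already been estimated; the function name, mesh format, and frame stride are illustrative, not taken from the MoSculp system.

```python
import numpy as np

def sweep_meshes(frames, stride=3):
    """frames: list of (V, F) pairs; V: (n, 3) float vertices, F: (m, 3) int faces."""
    all_v, all_f, offset = [], [], 0
    for V, F in frames[::stride]:      # subsample frames so the sculpture stays readable
        all_v.append(V)
        all_f.append(F + offset)       # re-index faces into the merged vertex array
        offset += len(V)
    return np.vstack(all_v), np.vstack(all_f)
```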

    3D Volumetric Reconstruction and Characterization of Objects from Uncalibrated Images

    Three-dimensional (3D) object reconstruction using only two-dimensional (2D) images has been a major research topic in Computer Vision. However, it remains a complex problem to solve when automation, speed, and precision are required. In the work presented in this paper, we developed a computational platform whose main purpose is to build 3D geometric models from uncalibrated images of objects. Simplicity and automation were our major guidelines in choosing volumetric reconstruction methods such as Generalized Voxel Coloring. This method uses photo-consistency measures to build an accurate 3D geometric model without imposing any restrictions on the relative motion between the camera and the object to be reconstructed. Our final goal is to use our computational platform to build and characterize human external anatomical shapes using a single off-the-shelf camera.
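For readers unfamiliar with photo-consistency, the following minimal sketch shows the test at the core of voxel-coloring methods: a voxel is kept only if the colors it projects to agree across views. The projection-matrix format and the variance threshold are assumptions for illustration; Generalized Voxel Coloring additionally handles visibility reasoning, which is omitted here.

```python
import numpy as np

def project(P, X):
    """Project 3D point X through a 3x4 projection matrix P."""
    x = P @ np.append(X, 1.0)          # homogeneous projection
    return x[:2] / x[2]

def photo_consistent(X, images, Ps, thresh=30.0):
    """Keep voxel center X if its observed colors agree across views."""
    colors = []
    for img, P in zip(images, Ps):
        u, v = project(P, X)
        if 0 <= int(v) < img.shape[0] and 0 <= int(u) < img.shape[1]:
            colors.append(img[int(v), int(u)].astype(float))
    if len(colors) < 2:                # not enough views to decide
        return False
    return np.mean(np.std(np.array(colors), axis=0)) < thresh
```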

    Shape from inconsistent silhouette: Reconstruction of objects in the presence of segmentation and camera calibration error

    Silhouettes are useful features for reconstructing an object's shape when the object is textureless or its shape class is unknown. In this dissertation, we explore the problem of reconstructing the shape of challenging objects from silhouettes under real-world conditions, such as the presence of silhouette and camera calibration error. This problem is called the Shape from Inconsistent Silhouettes problem. We formalize a pseudo-Boolean cost function for this problem, which penalizes differences between the reconstruction images and the silhouette images, and cast the Shape from Inconsistent Silhouettes problem as pseudo-Boolean minimization. We propose a memory- and time-efficient method to find a local minimum of the optimization problem, including heuristics that take into account the geometric nature of the problem. Our methods are demonstrated on a variety of challenging objects, including humans and large, thin objects. We also compare our methods to the state of the art by generating reconstructions of synthetic objects with induced error.

    We also propose a method for correcting camera calibration error given silhouettes with segmentation error. Unlike existing methods, ours corrects camera calibration error without camera placement constraints and allows for silhouette segmentation error. This is accomplished by a modified Iterative Closest Point algorithm that minimizes the difference between an initial reconstruction and the input silhouettes. We characterize the degree of error that can be corrected on synthetic datasets of increasing error, and demonstrate the ability of the camera calibration correction method to improve reconstruction quality on several challenging real-world datasets.
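A minimal sketch of the kind of pseudo-Boolean silhouette-disagreement cost described above: render a binary voxel-occupancy grid into each view and count the pixels where the rendered mask differs from the input silhouette. The naive point-splat rendering and all names are illustrative; the dissertation's actual cost and optimizer are more elaborate.

```python
import numpy as np

def render_mask(occ, centers, P, shape):
    """occ: (n,) bool voxel occupancy; centers: (n, 3) voxel centers; P: 3x4 projection."""
    mask = np.zeros(shape, dtype=bool)
    X = np.hstack([centers[occ], np.ones((occ.sum(), 1))])   # homogeneous coords
    x = (P @ X.T).T
    uv = (x[:, :2] / x[:, 2:3]).astype(int)
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < shape[1]) & \
         (uv[:, 1] >= 0) & (uv[:, 1] < shape[0])
    mask[uv[ok, 1], uv[ok, 0]] = True                        # splat occupied voxels
    return mask

def silhouette_cost(occ, centers, Ps, silhouettes):
    """Total pixel disagreement between rendered masks and input silhouettes."""
    return sum(np.count_nonzero(render_mask(occ, centers, P, s.shape) != s)
               for P, s in zip(Ps, silhouettes))
```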

    Parametric region-based foreground segmentation in planar and multi-view sequences

    Foreground segmentation in video sequences is an important area of image processing that attracts great interest from the scientific community, since it makes possible the detection of the objects that appear in the sequences under analysis and enables the correct performance of high-level applications that use foreground segmentation as an initial step. The present Ph.D. thesis, entitled Parametric Region-Based Foreground Segmentation in Planar and Multi-View Sequences, details the research work carried out in this field. In this investigation, we propose to use parametric probabilistic models at the pixel-wise and region levels to model the different classes involved in classifying the regions of the image: foreground, background and, in some sequences, shadow. The development is presented in the following chapters as a generalization of the techniques proposed for object segmentation in 2D planar sequences to 3D multi-view environments, where we establish a cooperative relationship between all the sensors recording the scene. Hence, different scenarios have been analyzed in this thesis in order to improve foreground segmentation techniques.

    In the first part of this research, we present segmentation methods appropriate for 2D planar scenarios. We start with foreground segmentation in static-camera sequences, where a system that combines a pixel-wise background model with region-based foreground and shadow models is proposed in a Bayesian classification framework. The research continues with the application of this method to moving-camera scenarios, where the Bayesian framework is developed between foreground and background classes, both characterized with region-based models, in order to obtain a robust foreground segmentation for this kind of sequence.

    The second stage of the research is devoted to applying these 2D techniques to multi-view acquisition setups, where several cameras record the scene at the same time. At the beginning of this section, we propose a foreground segmentation system for sequences recorded by color and depth sensors, which combines the probabilistic models created for the background and foreground classes in each of the views, taking into account the reliability of each sensor. The investigation continues by proposing foreground segmentation methods for multi-view smart-room scenarios. In these sections, we design two systems where foreground segmentation and 3D reconstruction are combined in order to improve the results of each process. The proposals end with the presentation of a multi-view segmentation system where a foreground probabilistic model is defined in 3D space to gather all the object information that appears in the views.
    The results presented for each of the proposals show that both foreground segmentation and 3D reconstruction can be improved in these scenarios by using parametric probabilistic models to model the objects to be segmented, thus introducing the available object information into a Bayesian classification framework.
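The Bayesian pixel classification underlying the static-camera system can be sketched as follows: a per-pixel Gaussian background model competes with simple foreground and shadow likelihoods, and each pixel takes the maximum-posterior label. The uniform foreground model, the shadow-as-darkened-background heuristic (attenuation factor 0.6), and the priors are illustrative simplifications of the thesis's region-based models.

```python
import numpy as np

def classify(frame, bg_mean, bg_var, priors=(0.7, 0.2, 0.1)):
    """frame, bg_mean, bg_var: (H, W, 3) float arrays; bg_var is per-channel variance.
    Returns a label map: 0=background, 1=foreground, 2=shadow."""
    norm = np.sqrt((2 * np.pi) ** 3 * bg_var.prod(axis=2))
    d2_bg = ((frame - bg_mean) ** 2 / bg_var).sum(axis=2)
    p_bg = np.exp(-0.5 * d2_bg) / norm
    d2_sh = ((frame - 0.6 * bg_mean) ** 2 / bg_var).sum(axis=2)  # shadow = darkened background
    p_sh = np.exp(-0.5 * d2_sh) / norm
    p_fg = np.full(d2_bg.shape, 1.0 / 255 ** 3)                  # uninformative foreground
    post = np.stack([priors[0] * p_bg, priors[1] * p_fg, priors[2] * p_sh])
    return post.argmax(axis=0)                                   # maximum-posterior label
```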

    Shape Animation with Combined Captured and Simulated Dynamics

    We present a novel volumetric animation generation framework to create new types of animations from raw 3D surface or point-cloud sequences of captured real performances. The framework takes as input time-incoherent 3D observations of a moving shape and is thus particularly suitable for the output of performance capture platforms. In our system, a suitable virtual representation of the actor is built from real captures, allowing seamless combination and simulation with virtual external forces and objects, in which the original captured actor can be reshaped, disassembled, or reassembled under user-specified virtual physics. Instead of using the dominant surface-based geometric representation of the capture, which is less suitable for volumetric effects, our pipeline exploits Centroidal Voronoi tessellation decompositions as a unified volumetric representation of the real captured actor, which we show can be used seamlessly as a building block for all processing stages, from capture and tracking to virtual physics simulation. The representation makes no human-specific assumptions and can be used to capture and re-simulate the actor with props or other moving scenery elements. We demonstrate the potential of this pipeline for the virtual reanimation of a real captured event with various unprecedented volumetric visual effects, such as volumetric distortion, erosion, morphing, gravity pull, and collisions.
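Centroidal Voronoi tessellations are commonly computed by Lloyd relaxation: assign volume samples to their nearest site, move each site to the centroid of its cell, and repeat. The sketch below illustrates this on a sampled volume; sample counts, the iteration budget, and function names are illustrative, not the paper's pipeline.

```python
import numpy as np

def lloyd_cvt(samples, n_sites=64, iters=20, rng=np.random.default_rng(0)):
    """samples: (n, 3) points filling the shape's volume.
    Returns relaxed site positions and the sample-to-cell assignment."""
    sites = samples[rng.choice(len(samples), n_sites, replace=False)].astype(float)
    for _ in range(iters):
        # assign every sample to its nearest site (its Voronoi cell)
        d = np.linalg.norm(samples[:, None] - sites[None], axis=2)  # (n, n_sites)
        nearest = d.argmin(axis=1)
        # move each site to the centroid of its cell
        for k in range(n_sites):
            cell = samples[nearest == k]
            if len(cell):
                sites[k] = cell.mean(axis=0)
    return sites, nearest
```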

    3D Object Reconstruction using Multi-View Calibrated Images

    In this study, two models are proposed: a visual hull model and a 3D object reconstruction model. The proposed visual hull model, based on a bounding-edge representation, achieves high time performance, making it one of the fastest methods. Its main contribution is to provide bounding surfaces over the bounding edges, which yields a complete triangular surface mesh. Moreover, the proposed visual hull model can be computed in a distributed fashion over camera networks. The second model is a depth-map-based 3D object reconstruction model that produces a watertight triangular surface mesh. The proposed model achieves acceptable accuracy as well as high completeness using only stereo matching and triangulation. The contribution of this model is to identify the most reliable 3D points and fit a surface over them.
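The triangulation step mentioned above can be sketched with the standard linear (DLT) method for recovering a 3D point from its projections in two calibrated views; matched pixel pairs would come from stereo matching. Matrix names and conventions are the usual ones, not code from the study.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """P1, P2: 3x4 projection matrices; uv1, uv2: matched pixel coordinates.
    Solves for the 3D point whose projections best match both observations."""
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null-space direction minimizes |A X|
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize
```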
