
    Structure and motion estimation from apparent contours under circular motion

    In this paper, we address the problem of recovering structure and motion from the apparent contours of a smooth surface. Fixed image features under circular motion and their relationships with the intrinsic parameters of the camera are exploited to provide a simple parameterization of the fundamental matrix relating any pair of views in the sequence. Such a parameterization allows a trivial initialization of the motion parameters, which all bear physical meaning. It also greatly reduces the dimension of the search space for the optimization problem, which can now be solved using only two epipolar tangents. In contrast to previous methods, the motion estimation algorithm introduced here can cope with incomplete circular motion and more widely spaced images. Existing techniques for model reconstruction from apparent contours are then reviewed and compared. Experiments on real data have been carried out, and the 3D model reconstructed from the estimated motion is presented. © 2002 Elsevier Science B.V. All rights reserved.
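The abstract does not reproduce the paper's circular-motion parameterization itself; as a generic illustration of the geometry involved, the sketch below builds the standard two-view fundamental matrix F = K^-T [t]x R K^-1 for a synthetic camera circling an object and checks the epipolar constraint on a projected point. All numbers (focal length, circle radius, angles, the 3D point) are illustrative, not taken from the paper.

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix [v]x of a 3-vector."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def circular_pose(theta, radius=5.0):
    """World-to-camera pose for a camera on a circle in the xz-plane,
    looking at the rotation axis (the world y-axis)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, -s],
                  [0.0, -1.0, 0.0],
                  [-s, 0.0, -c]])
    t = np.array([0.0, 0.0, radius])   # t = -R @ C with C = radius*(s, 0, c)
    return R, t

def fundamental(K, R1, t1, R2, t2):
    """F from the relative pose of view 2 w.r.t. view 1."""
    R = R2 @ R1.T
    t = t2 - R @ t1
    Kinv = np.linalg.inv(K)
    return Kinv.T @ skew(t) @ R @ Kinv

K = np.array([[800.0, 0.0, 320.0],     # illustrative intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R1, t1 = circular_pose(0.0)
R2, t2 = circular_pose(np.deg2rad(25))
F = fundamental(K, R1, t1, R2, t2)

X = np.array([0.3, -0.2, 0.5])         # a 3D point near the rotation axis
x1 = K @ (R1 @ X + t1)
x1 /= x1[2]
x2 = K @ (R2 @ X + t2)
x2 /= x2[2]
print(abs(x2 @ F @ x1))                # epipolar residual, should be ~0
```

The point of the paper's parameterization is that, for circular motion, the entries of F depend on only a handful of physically meaningful unknowns (rotation angle, image of the axis, horizon), rather than the seven degrees of freedom of a general F.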

    A Survey of Methods for Volumetric Scene Reconstruction from Photographs

    Scene reconstruction, the task of generating a 3D model of a scene given multiple 2D photographs taken of the scene, is an old and difficult problem in computer vision. Since its introduction, scene reconstruction has found application in many fields, including robotics, virtual reality, and entertainment. Volumetric models are a natural choice for scene reconstruction. Three broad classes of volumetric reconstruction techniques have been developed based on geometric intersections, color consistency, and pair-wise matching. Some of these techniques have spawned a number of variations and undergone considerable refinement. This paper is a survey of techniques for volumetric scene reconstruction.
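Of the three classes the survey names, geometric intersection (shape-from-silhouette, or visual-hull carving) is the simplest to sketch: a voxel survives exactly when it projects inside every silhouette. The toy below uses hypothetical orthographic views of a unit sphere and shows the carved hull containing, and over-estimating, the true object.

```python
import numpy as np

# Synthetic silhouettes: a unit sphere seen orthographically along z and x.
N = 32
axis = np.linspace(-1.5, 1.5, N)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")

def disk(u, v):
    """Silhouette of the unit sphere in an orthographic view: a unit disk."""
    return u**2 + v**2 <= 1.0

sil_z = disk(X, Y)     # the view along z sees coordinates (x, y)
sil_x = disk(Y, Z)     # the view along x sees coordinates (y, z)
hull = sil_z & sil_x   # geometric intersection of the two silhouette cones

inside_sphere = X**2 + Y**2 + Z**2 <= 1.0
print(hull[inside_sphere].all())          # hull contains the object
print(int(hull.sum()), int(inside_sphere.sum()))
```

With only two views the hull is the intersection of two cylinders, strictly larger than the sphere; adding views tightens it toward the visual hull, which is why color-consistency methods exist to recover concavities silhouettes can never see.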

    3D Dynamic Scene Reconstruction from Multi-View Image Sequences

    A confirmation report outlining my PhD research plan is presented. The PhD research topic is 3D dynamic scene reconstruction from multiple view image sequences. Chapter 1 describes the motivation and research aims. An overview of the progress in the past year is included. Chapter 2 is a review of volumetric scene reconstruction techniques and Chapter 3 is an in-depth description of my proposed reconstruction method. The theory behind the proposed volumetric scene reconstruction method is also presented, including topics in projective geometry, camera calibration and energy minimization. Chapter 4 presents the research plan and outlines the future work planned for the next two years.

    Reconstruction of Sculpture From Its Profiles With Unknown Camera Positions


    Single View Modeling and View Synthesis

    This thesis develops new algorithms to produce 3D content from a single camera. Today, amateurs can use hand-held camcorders to capture and display the 3D world in 2D, using mature technologies. However, there is always a strong desire to record and re-explore the 3D world in 3D. To achieve this goal, current approaches usually make use of a camera array, which suffers from tedious setup and calibration processes, as well as a lack of portability, limiting its application to lab experiments. In this thesis, I try to produce 3D content using a single camera, making it as simple as shooting pictures. This requires a new front-end capture device rather than a regular camcorder, as well as more sophisticated algorithms. First, in order to capture highly detailed object surfaces, I designed and developed a depth camera based on a novel technique called light fall-off stereo (LFS). The LFS depth camera outputs color+depth image sequences at 30 fps, which is necessary for capturing dynamic scenes. Based on the output color+depth images, I developed a new approach that builds 3D models of dynamic and deformable objects. While the camera can only capture part of a whole object at any instant, partial surfaces are assembled into a complete 3D model by a novel warping algorithm. Inspired by the success of single-view 3D modeling, I extended my exploration into 2D-3D video conversion that does not use a depth camera. I developed a semi-automatic system that converts monocular videos into stereoscopic videos via view synthesis. It combines motion analysis with user interaction, aiming to transfer as much of the depth-inference work as possible from the user to the computer. I developed two new methods that analyze optical flow in order to provide additional qualitative depth constraints. The automatically extracted depth information is presented in the user interface to assist with user labeling work.
In this thesis, I developed new algorithms to produce 3D content from a single camera. Depending on the input data, my algorithms can build high-fidelity 3D models of dynamic and deformable objects if depth maps are provided; otherwise, they can turn video clips into stereoscopic video.
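The abstract does not spell out how light fall-off stereo recovers depth. A common description of LFS uses two images lit by a point source at two known distances, so that the inverse-square law makes the intensity ratio depend only on depth (the surface albedo cancels). The sketch below assumes that model; the baseline `delta` and all numbers are illustrative, not the thesis's actual setup.

```python
import numpy as np

def lfs_depth(I_near, I_far, delta):
    """Depth from the inverse-square ratio of two point-light images.

    I_near: intensity with the light at (unknown) distance d,
    I_far:  intensity with the light pulled back by a known baseline delta.
    From I ~ albedo / d**2:  I_near / I_far = ((d + delta) / d)**2,
    hence d = delta / (sqrt(I_near / I_far) - 1).
    """
    ratio = np.sqrt(I_near / I_far)
    return delta / (ratio - 1.0)

# Synthetic check: a surface point at depth 2.0, lights offset by 0.5.
d_true, delta, albedo = 2.0, 0.5, 0.7
I1 = albedo / d_true**2                 # albedo cancels in the ratio
I2 = albedo / (d_true + delta)**2
print(lfs_depth(I1, I2, delta))         # recovers 2.0 up to rounding
```

Because the estimate is per pixel and independent of albedo, such a sensor can produce dense depth maps at video rate, which matches the abstract's claim of 30 fps color+depth output.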

    Multi-view dynamic scene modeling

    Modeling dynamic scenes/events from multiple fixed-location vision sensors, such as video camcorders, infrared cameras, and Time-of-Flight sensors, is of broad interest in the computer vision community, with many applications including 3D TV, virtual reality, medical surgery, markerless motion capture, video games, and security surveillance. However, most existing multi-view systems are set up in a strictly controlled indoor environment, with fixed lighting conditions and simple background views. Many challenges prevent extending the technology to outdoor natural environments, including varying sunlight, shadows, reflections, background motion and visual occlusion. In this thesis, I address different aspects of overcoming all of the aforementioned difficulties, so as to reduce human preparation and manipulation and to make a robust outdoor system as automatic as possible. In particular, the main novel technical contributions of this thesis are as follows: a generic heterogeneous sensor-fusion framework for robust 3D shape estimation; a way to automatically recover the 3D shape of a static occluder from dynamic-object silhouette cues, which explicitly models the static visual occlusion event along the viewing rays; a system to model the shapes of multiple dynamic objects and track their identities simultaneously, which explicitly models the inter-occlusion events between dynamic objects; and a scheme to recover an object's dense 3D motion flow over time, without assuming any prior knowledge of the underlying structure of the dynamic object being modeled, which helps to enforce the temporal consistency of natural motions and initializes more advanced shape learning and motion analysis. A unified automatic calibration algorithm for the heterogeneous network of conventional cameras/camcorders and new Time-of-Flight sensors is also proposed.

    Automatic production of three-dimensional models by 3D digitization (Production automatique de modèles tridimensionnels par numérisation 3D)

    The manual 3D digitization process is expensive since it requires a highly trained technician who decides on the different views needed to acquire the object model. The quality of the final result strongly depends, in addition to the complexity of the object's shape, on the selected viewpoints and thus on human expertise. Nowadays, the most developed digitization strategies in industry are based on a teaching approach in which a human operator manually determines a set of poses for the ranging device. The main drawback of this methodology is the influence of the operator's expertise. Moreover, this technique does not fulfill the requirements of industrial applications, which demand reliable, repeatable, and fast programming routines. My thesis project focuses on the definition of a procedure for automatic and intelligent 3D digitization. This procedure is presented as a sequence of processes: view planning, motion planning, acquisition, and post-processing of the acquired data. The advantage of our procedure is that it is generic, since it is neither tied to a specific scanning system nor dependent on the methods used to perform the tasks associated with each elementary process. We also developed three view-planning methods to generate complete 3D models of unknown and complex objects, which we implemented on a robotic cell. These methods make the results independent of the operator's know-how and enable fast, complete 3D reconstruction while moving the scanner efficiently. Additionally, our approaches are applicable to all kinds of range sensors.
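View planning of this kind is often cast as a next-best-view problem: repeatedly pick the candidate scanner pose that would newly observe the most still-unseen surface. The sketch below is a generic greedy loop under that formulation, not any of the thesis's three methods; the visibility matrix is a random stand-in for a real ray-cast visibility test against the partial model.

```python
import numpy as np

# Toy next-best-view planner: greedily pick, from a fixed candidate set,
# the viewpoint that would newly observe the most still-unseen patches.
rng = np.random.default_rng(0)
n_patches, n_views = 200, 12

# visibility[v, p] = True if candidate view v sees surface patch p
# (random stand-in for a real sensor/occlusion model).
visibility = rng.random((n_views, n_patches)) < 0.3

seen = np.zeros(n_patches, dtype=bool)
plan = []
while not seen.all():
    gains = (visibility & ~seen).sum(axis=1)   # new patches per candidate
    if gains.max() == 0:                       # leftovers unreachable from set
        break
    best = int(gains.argmax())
    plan.append(best)
    seen |= visibility[best]

print(len(plan), float(seen.mean()))
```

The greedy loop stops either at full coverage or when no candidate adds anything, which is why real planners also interleave motion planning: the candidate set itself must be regenerated as the partial model grows.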

    Active modelling of virtual humans

    This thesis provides a complete framework that enables the creation of photorealistic 3D human models in real-world environments. The approach allows a non-expert user to use any digital capture device to obtain four images of an individual and create a personalised 3D model, for multimedia applications. To achieve this, it is necessary that the system is automatic and that the reconstruction process is flexible to account for information that is not available or incorrectly captured. In this approach the individual is automatically extracted from the environment using constrained active B-spline templates that are scaled and automatically initialised using only image information. These templates incorporate the energy minimising framework for Active Contour Models, providing a suitable and flexible method to deal with the adjustments in pose an individual can adopt. The final states of the templates describe the individual's shape. The contours in each view are combined to form a 3D B-spline surface that characterises an individual's maximal silhouette equivalent. The surface provides a mould that contains sufficient information to allow for the active deformation of an underlying generic human model. This modelling approach is performed using a novel technique that evolves active-meshes to 3D for deforming the underlying human model, while adaptively constraining it to preserve its existing structure. The active-mesh approach incorporates internal constraints that maintain the structural relationship of the vertices of the human model, while external forces deform the model congruous to the 3D surface mould. The strength of the internal constraints can be reduced to allow the model to adopt the exact shape of the bounding volume or strengthened to preserve the internal structure, particularly in areas of high detail. This novel implementation provides a uniform framework that can be simply and automatically applied to the entire human model.
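The energy-minimising Active Contour Model framework the thesis builds on can be illustrated with a classic greedy snake (not the constrained B-spline templates described here): each control point moves to the neighbouring position that minimises a weighted sum of continuity, curvature, and external image energies. In the sketch below the external term is a hypothetical distance-to-edge function standing in for an image-gradient term, and all weights are illustrative.

```python
import numpy as np

R_EDGE = 20.0   # radius of a synthetic circular "edge" in the image

def external(p):
    """External energy: low on the target circle (stand-in for -|gradient|)."""
    return (np.hypot(p[0], p[1]) - R_EDGE) ** 2

def snake(points, alpha=0.05, beta=0.05, iters=100):
    """Greedy snake: move each point to the best of its 8 neighbours + itself."""
    pts = points.astype(float)
    n = len(pts)
    moves = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    for _ in range(iters):
        for i in range(n):
            prev_pt, next_pt = pts[i - 1], pts[(i + 1) % n]
            best, best_e = pts[i], np.inf
            for dx, dy in moves:
                cand = pts[i] + (dx, dy)
                e = (alpha * (np.sum((cand - prev_pt) ** 2)            # continuity
                              + np.sum((cand - next_pt) ** 2))
                     + beta * np.sum((prev_pt - 2*cand + next_pt) ** 2)  # curvature
                     + external(cand))                                   # image term
                if e < best_e:
                    best, best_e = cand, e
            pts[i] = best
    return pts

# Start from a circle of radius 30 and let the contour contract onto the edge.
t = np.linspace(0, 2 * np.pi, 24, endpoint=False)
init = np.stack([30 * np.cos(t), 30 * np.sin(t)], axis=1)
final = snake(init)
print(np.hypot(final[:, 0], final[:, 1]).mean())   # settles near R_EDGE
```

The thesis's internal constraints play the role of alpha and beta here: weakening them lets the template follow the data exactly, strengthening them preserves the model's prior structure, which is the same trade-off driving its 3D active-mesh deformation.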