    Dynamic shape capture using multi-view photometric stereo

    Reconstruction and analysis of dynamic shapes

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. By Daniel Vlasic. Motion capture has revolutionized entertainment and influenced fields as diverse as the arts, sports, and medicine. This is despite the limitation that it tracks only a small set of surface points. On the other hand, 3D scanning techniques digitize complete surfaces of static objects, but are not applicable to moving shapes. I present methods that overcome both limitations and can obtain the moving geometry of dynamic shapes (such as people and clothes in motion) and analyze it in order to advance computer animation. Further understanding of dynamic shapes will enable various industries to enhance virtual characters, advance robot locomotion, improve sports performance, and aid in medical rehabilitation, thus directly affecting our daily lives. My methods efficiently recover much of the expressiveness of dynamic shapes from silhouettes alone. Furthermore, the reconstruction quality is greatly improved by including surface orientations (normals). To make reconstruction more practical, I strive to capture dynamic shapes in their natural environment, which I do by using hybrid inertial and acoustic sensors. After capture, the reconstructed dynamic shapes are analyzed in order to enhance their utility. My algorithms then allow animators to generate novel motions, such as transferring facial performances from one actor onto another using multi-linear models. The presented research provides some of the first and most accurate reconstructions of complex moving surfaces, and is among the few approaches that establish a relationship between different dynamic shapes.
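
    The abstract notes that reconstruction quality improves greatly once per-pixel surface orientations (normals) are included alongside silhouettes. As a rough illustration of the classical Lambertian photometric-stereo step that recovers such normals, here is a minimal least-squares sketch under standard assumptions, not the pipeline of the thesis; the calibrated light directions and grayscale images are assumed inputs.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Estimate per-pixel surface normals and albedo under a Lambertian model.

    Illustrative sketch only (not the method of the thesis).
    images     : (k, h, w) array of grayscale images lit from k known directions
    light_dirs : (k, 3) array of unit light direction vectors

    Solves I = rho * (L @ n) per pixel in the least-squares sense.
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                            # (k, h*w) intensity matrix
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, h*w), G = rho * n
    albedo = np.linalg.norm(G, axis=0)                   # per-pixel albedo rho
    normals = G / np.maximum(albedo, 1e-8)               # unit normals
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```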

    Robust surface modelling of visual hull from multiple silhouettes

    Reconstructing depth information from images is one of the actively researched themes in computer vision, and its applications span most vision research areas, from object recognition to realistic visualisation. Amongst the useful vision-based reconstruction techniques, this thesis extensively investigates the visual hull (VH) concept for volume approximation and its robust surface modelling when various views of an object are available. Assuming that multiple images are captured under circular motion, projection matrices are generally parameterised in terms of a rotation angle from a reference position in order to facilitate multi-camera calibration. However, this assumption is often violated in practice: a pure planar rotation with an accurately known rotation angle is hard to realise. To address this problem, this thesis first proposes a calibration method for approximately circular motion. With these modified projection matrices, the resulting VH is represented by a hierarchical tree structure of voxels from which surfaces are extracted by the Marching Cubes (MC) algorithm. However, the surfaces may have unexpected artefacts caused by coarse volume reconstruction, the topological ambiguity of the MC algorithm, and imperfect image processing or calibration results. To avoid this sensitivity, this thesis proposes a robust surface construction algorithm which first classifies local convex regions from imperfect MC vertices and then aggregates local surfaces constructed by the 3D convex hull algorithm. Furthermore, this thesis explores the use of wide-baseline images to refine a coarse VH using an affine-invariant region descriptor. This improves the quality of the VH when only a small number of initial views is given. In conclusion, the proposed methods achieve a 3D model with enhanced accuracy, and robust surface modelling is retained when silhouette images are degraded by practical noise.
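
    The voxel-based visual hull that the thesis meshes with Marching Cubes starts from silhouette carving: a voxel belongs to the VH only if it projects inside every silhouette. Below is a minimal brute-force carving sketch; the dense voxel grid, silhouette masks, and projection matrices are assumed inputs, and the hierarchical octree and robust surface construction of the thesis are not reproduced here.

```python
import numpy as np

def carve_visual_hull(voxels, silhouettes, projections):
    """Mark voxels that project inside every silhouette (the visual hull).

    Illustrative sketch only (not the thesis's hierarchical method).
    voxels      : (n, 3) array of voxel centre coordinates (world frame)
    silhouettes : list of (h, w) boolean foreground masks
    projections : list of (3, 4) camera projection matrices
    """
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])   # (n, 4) homogeneous points
    inside = np.ones(len(voxels), dtype=bool)
    for mask, P in zip(silhouettes, projections):
        x = homog @ P.T                                       # (n, 3) projected points
        u = np.round(x[:, 0] / x[:, 2]).astype(int)
        v = np.round(x[:, 1] / x[:, 2]).astype(int)
        h, w = mask.shape
        valid = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (x[:, 2] > 0)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[valid] = mask[v[valid], u[valid]]                 # inside this silhouette?
        inside &= hit                                         # must be inside all of them
    return inside
```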

    Haptic holography: an early computational plastic

    Thesis (Ph.D.), Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2001. By Wendy J. Plesniak. This dissertation introduces haptic holography, a combination of computational modeling and multimodal spatial display, as an early computational plastic. In this work, we combine various holographic displays with a force-feedback device to image free-standing material surfaces with programmatically prescribed behavior. We present three implementations, Touch, Lathe, and Poke, each named for the primitive functional affordance it offers. In Touch, we present static holographic images of simple geometry, reconstructed in front of the hologram plane (in the viewer's space) and precisely co-located with a force model of the same geometry. These images can be visually inspected and haptically explored using a hand-held interface. In Lathe, we again display holo-haptic images of simple geometry, this time allowing those images to be reshaped by haptic interaction in a dynamic but constrained manner. Finally, in Poke, we present a holo-haptic image that permits arbitrary reshaping of its reconstructed surface. As supporting technology, we offer a new technique for incrementally computing and locally updating interference-modeled holographic fringe patterns. This technique permits electronic holograms to be updated arbitrarily and interactively, achieving a long-held goal in display holography. As a broader contribution, we offer a new behavior-based spatial framework, grounded in both perception and action, for informing the design of spatial interactive systems.
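
    The supporting technique mentioned above models holographic fringes as the interference of a reference wave with object wavelets. The following is a minimal sketch of that interference model for a 1-D hologram line and point-source objects; it illustrates standard computer-generated holography, not the dissertation's incremental local-update scheme, and all parameters are illustrative.

```python
import numpy as np

def fringe_pattern(points, amplitudes, hologram_x, wavelength, ref_angle):
    """Interference-model fringe intensity on a 1-D hologram line.

    Illustrative sketch only (not the dissertation's incremental scheme).
    points     : (m, 2) object point positions (x, z), with z > 0 in front of the plane
    amplitudes : (m,) object point amplitudes
    hologram_x : (n,) sample positions along the hologram plane (z = 0)
    wavelength : illumination wavelength
    ref_angle  : incidence angle of the planar reference beam (radians)
    """
    k = 2.0 * np.pi / wavelength
    # Planar reference wave sampled along the hologram line.
    ref = np.exp(1j * k * np.sin(ref_angle) * hologram_x)
    # Object wave: sum of spherical wavelets emitted by the object points.
    obj = np.zeros_like(hologram_x, dtype=complex)
    for (px, pz), a in zip(points, amplitudes):
        r = np.sqrt((hologram_x - px) ** 2 + pz ** 2)
        obj += a * np.exp(1j * k * r) / r
    # The recorded fringe pattern is the intensity of the summed field.
    return np.abs(ref + obj) ** 2
```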

    From Image-based Motion Analysis to Free-Viewpoint Video

    The problems of capturing real-world scenes with cameras and automatically analyzing the visible motion have traditionally been a focus of computer vision research. The photo-realistic rendition of dynamic real-world scenes, on the other hand, is a problem that has been investigated in the field of computer graphics. In this thesis, we demonstrate that the joint solution to all three of these problems enables the creation of powerful new tools that are beneficial for both research disciplines. Analysis and rendition of real-world scenes with human actors are amongst the most challenging problems, and in this thesis we present new algorithmic recipes to attack them. The dissertation consists of three parts: In part I, we present novel solutions to two fundamental problems of human motion analysis. Firstly, we demonstrate a novel hybrid approach for marker-free human motion capture from multiple video streams. Thereafter, a new algorithm for automatic, non-intrusive estimation of kinematic body models of arbitrary moving subjects from video is detailed. In part II of the thesis, we demonstrate that a marker-free motion capture approach makes possible the model-based reconstruction of free-viewpoint videos of human actors from only a handful of video streams. The estimated 3D videos enable the photo-realistic real-time rendition of a dynamic scene from arbitrary novel viewpoints. Texture information from video is not only applied to generate a realistic surface appearance, but also to improve the precision of the motion estimation scheme. The commitment to a generic body model also allows us to reconstruct a time-varying reflectance description of an actor's body surface, which lets us realistically render the free-viewpoint videos under arbitrary lighting conditions. A novel method to capture high-speed, large-scale motion using regular still cameras and the principle of multi-exposure photography is described in part III. The fundamental principles underlying the methods in this thesis are not only applicable to humans but to a much larger class of subjects. It is demonstrated that, in conjunction, our proposed algorithmic recipes serve as building blocks for the next generation of immersive 3D visual media.
    The development of new algorithms for the optical capture and analysis of motion in dynamic scenes is one of the central research topics in computer-based image processing. While computer vision focuses on extracting information, computer graphics concentrates on the inverse problem, the photo-realistic rendering of moving scenes. Recently the two disciplines have steadily converged, since there is a wealth of challenging scientific questions that demand a joint solution of the image acquisition, image analysis, and image synthesis problems. Two of the hardest problems, and ones of great relevance to researchers from both disciplines, are the analysis and synthesis of dynamic scenes in which people are the central subject. This dissertation presents methods that enable the optical capture of this type of scene, the automatic analysis of the motion, and its realistic re-rendering on a computer. It will become clear that integrating algorithms for all three problems into a single system makes it possible to create entirely new kinds of three-dimensional depictions of people in motion.
    The dissertation is divided into three parts: Part I begins by describing the design and construction of a studio for the time-synchronised capture of multiple video streams. The multi-video sequences recorded in the studio serve as input data for the video-based motion analysis methods and the algorithms for generating three-dimensional videos developed in this dissertation. Two newly developed methods are then presented that answer two fundamental questions in the optical capture of human motion: the measurement of motion parameters and the generation of kinematic skeleton models. The first method is a hybrid algorithm for the marker-free optical measurement of motion parameters from multi-video data. Doing without optical markers is made possible by using both volume models reconstructed from the image data and easily detectable body features for the motion analysis. The second method automatically reconstructs a kinematic skeleton model from multi-video data. The algorithm requires neither optical markers in the scene nor a priori information about the body structure, and applies equally to humans, animals, and objects. The topic of the second part of this work is a model-based method for reconstructing three-dimensional videos of people in motion from only a few time-synchronised video streams. The viewer can play back the computed 3D videos on a computer in real time and interactively choose any virtual viewpoint on the action. At the core of our approach is a silhouette-based analysis-through-synthesis algorithm that captures both the shape and the motion of a person without optical markers. Computing time-varying surface textures from the video data guarantees that a person has a photo-realistic appearance from any viewpoint. A first algorithmic extension shows that the texture information can also be used to improve the accuracy of the motion estimation. In addition, the use of a generic body model makes it possible to measure not only dynamic textures but even dynamic reflectance properties of the body surface. Our reflectance model consists of a parametric BRDF for each texel and a dynamic normal map for the entire body surface. In this way, 3D videos can be rendered realistically even under entirely new simulated lighting conditions. Part III of this work describes a novel method for the optical measurement of very fast motion. Until now, optical recordings of high-speed motion required very expensive special cameras with high frame rates. In contrast, the method described here uses ordinary digital still cameras and the principle of multi-exposure flash photography. It is shown that this method can measure both the very fast articulated hand motion of the pitcher and the flight parameters of the ball during a baseball pitch. The highly accurate captured parameters make it possible to visualise the measured motion on a computer in entirely new ways.
    Although the methods presented in this dissertation primarily serve the analysis and rendering of human motion, the underlying principles are applicable to many other scenes as well. Each of the described algorithms first and foremost solves a specific sub-problem, but taken together the methods can be understood as building blocks that will enable the next generation of interactive three-dimensional media.
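
    At the core of the free-viewpoint reconstruction described above is a silhouette-based analysis-through-synthesis loop: candidate body poses are scored by how well the rendered model silhouettes match the observed ones in every camera. Here is a minimal sketch of such an objective; the pose parameterisation and the silhouette renderer `render_silhouette` are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

def silhouette_overlap_score(pose, observed_masks, render_silhouette):
    """Analysis-by-synthesis objective: how well a body pose explains the silhouettes.

    Illustrative sketch only.
    pose              : parameter vector of the articulated body model
    observed_masks    : list of (h, w) boolean silhouettes, one per camera
    render_silhouette : callable (pose, cam_index) -> (h, w) boolean mask
                        (a hypothetical renderer for the generic body model)
    """
    score = 0.0
    for cam, observed in enumerate(observed_masks):
        synthetic = render_silhouette(pose, cam)
        overlap = np.logical_and(synthetic, observed).sum()
        union = np.logical_or(synthetic, observed).sum()
        score += overlap / max(union, 1)          # per-camera intersection over union
    return score / len(observed_masks)            # average over all cameras
```

    In a full system, such a score would be maximised over the pose parameters for every frame; the thesis additionally exploits texture information to refine the estimate.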

    Calibration of non-conventional imaging systems

    Appearance Modelling and Reconstruction for Navigation in Minimally Invasive Surgery

    Minimally invasive surgery is playing an increasingly important role in patient care. Whilst its direct patient benefit in terms of reduced trauma, improved recovery and shortened hospitalisation has been well established, there is a sustained need for improved training of the existing procedures and the development of new smart instruments to tackle the issues of visualisation, ergonomic control, and haptic and tactile feedback. For endoscopic intervention, the small field of view in the presence of a complex anatomy can easily introduce disorientation to the operator, as the tortuous access pathway is not always easy to predict and control with standard endoscopes. Effective training through simulation devices, based on either virtual-reality or mixed-reality simulators, can help to improve the spatial awareness, consistency and safety of these procedures. This thesis examines the use of endoscopic videos for both simulation and navigation purposes. More specifically, it addresses the challenging problem of how to build high-fidelity, subject-specific simulation environments for improved training and skills assessment. Issues related to mesh parameterisation and texture blending are investigated. With the maturity of computer vision in terms of both 3D shape reconstruction and localisation and mapping, vision-based techniques have attracted significant interest in recent years for surgical navigation. The thesis also tackles the problem of how to use vision-based techniques to provide a detailed 3D map and a dynamically expanded field of view to improve spatial awareness and avoid operator disorientation. The key advantage of this approach is that it does not require additional hardware, and thus introduces minimal interference to the existing surgical workflow. The derived 3D map can be effectively integrated with pre-operative data, allowing both global and local 3D navigation by taking into account tissue structural and appearance changes. Both simulation and laboratory-based experiments are conducted throughout this research to assess the practical value of the proposed methods.
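
    One building block mentioned for the subject-specific simulation environments is texture blending, i.e. combining the colour samples that several endoscopic views contribute to the same parameterised texture domain. Below is a minimal weighted-blending sketch; the per-view confidence weights and the NaN convention for unseen texels are assumptions for illustration, not the thesis's method.

```python
import numpy as np

def blend_textures(textures, weights):
    """Feathered blending of per-view texture contributions into one texture map.

    Illustrative sketch only.
    textures : (k, h, w, 3) colour samples projected from k endoscopic views
               into a common parameterised texture domain (NaN where unseen)
    weights  : (k, h, w) per-view confidence (e.g. a viewing-angle or distance term)
    """
    w = np.where(np.isnan(textures[..., 0]), 0.0, weights)   # ignore unseen texels
    colours = np.nan_to_num(textures)                        # NaNs -> 0 so they add nothing
    total = np.maximum(w.sum(axis=0), 1e-8)                  # (h, w) normalisation
    blended = (colours * w[..., None]).sum(axis=0) / total[..., None]
    return blended                                           # (h, w, 3) blended texture
```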

    Efficient acquisition, representation and rendering of light fields

    In this thesis we discuss the representation of three-dimensional scenes using image data (image-based rendering), and more precisely the so-called light field approach. We start with an up-to-date survey of previous work in this young field of research. Then we propose a light field representation based on image data and additional per-pixel depth values. This enables us to reconstruct arbitrary views of the scene efficiently and with high quality. Furthermore, we can use the same representation to determine optimal reference views during the acquisition of a light field. We further present the so-called free-form parameterization, which allows for a relatively free placement of reference views. Finally, we demonstrate a prototype of the Lumi-Shelf system, which acquires, transmits, and renders the light field of a dynamic scene at multiple frames per second.
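
    The per-pixel depth attached to each reference view is what makes arbitrary view reconstruction possible: pixels are back-projected using their depth and re-projected into the novel camera. The following is a minimal forward-warping sketch for one reference view; pinhole cameras with the x_cam = R x_world + t convention are assumed, and hole filling, blending of several reference views, and z-buffering are omitted.

```python
import numpy as np

def warp_reference_view(image, depth, K, R_ref, t_ref, R_new, t_new):
    """Forward-warp one reference light-field view into a novel camera using per-pixel depth.

    Illustrative sketch only.
    image        : (h, w, 3) reference colours
    depth        : (h, w) depth along the reference camera's z axis
    K            : (3, 3) shared intrinsic matrix
    R_ref, t_ref : reference camera rotation (3, 3) and translation (3,)
    R_new, t_new : novel camera rotation (3, 3) and translation (3,)
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
    # Back-project reference pixels to camera space, then to world coordinates.
    cam_pts = (np.linalg.inv(K) @ pix.T) * depth.reshape(1, -1)
    world = R_ref.T @ (cam_pts - t_ref[:, None])
    # Project the world points into the novel view.
    proj = K @ (R_new @ world + t_new[:, None])
    uv = (proj[:2] / proj[2]).T                              # (h*w, 2) target pixel positions
    target = np.zeros_like(image)
    iu, iv = np.round(uv[:, 0]).astype(int), np.round(uv[:, 1]).astype(int)
    ok = (iu >= 0) & (iu < w) & (iv >= 0) & (iv < h) & (proj[2] > 0)
    target[iv[ok], iu[ok]] = image.reshape(-1, 3)[ok]        # naive splat, no z-buffering
    return target
```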

    A Survey of Surface Reconstruction from Point Clouds

    The area of surface reconstruction has seen substantial progress in the past two decades. The traditional problem addressed by surface reconstruction is to recover the digital representation of a physical shape that has been scanned, where the scanned data contains a wide variety of defects. While much of the earlier work has focused on reconstructing a piecewise-smooth representation of the original shape, recent work has taken on more specialized priors to address significantly challenging data imperfections, where the reconstruction can take on different representations, not necessarily explicit geometry. We survey the field of surface reconstruction, and provide a categorization with respect to priors, data imperfections, and reconstruction output. By considering a holistic view of surface reconstruction, we show a detailed characterization of the field, highlight similarities between diverse reconstruction techniques, and provide directions for future work in surface reconstruction.
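
    Many of the smooth-prior reconstruction methods covered by such surveys start from normals estimated directly on the raw point cloud, typically as the smallest-eigenvalue direction of a local covariance. Here is a minimal PCA-based sketch of that preprocessing step, with brute-force neighbour search and unoriented normals; it is purely illustrative and not taken from the survey.

```python
import numpy as np

def estimate_normals(points, k=16):
    """Estimate an (unoriented) normal per point via PCA of its k nearest neighbours.

    Illustrative sketch only.
    points : (n, 3) float array, raw scanned point cloud
    k      : neighbourhood size
    """
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]                 # brute-force kNN (small clouds only)
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)       # 3x3 covariance of the local patch
        eigvals, eigvecs = np.linalg.eigh(cov)
        normals[i] = eigvecs[:, 0]                       # direction of smallest variation
    return normals
```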