
    From Image-based Motion Analysis to Free-Viewpoint Video

    The problems of capturing real-world scenes with cameras and automatically analyzing the visible motion have traditionally been the focus of computer vision research. The photo-realistic rendition of dynamic real-world scenes, on the other hand, is a problem that has been investigated in the field of computer graphics. In this thesis, we demonstrate that the joint solution to all three of these problems enables the creation of powerful new tools that are beneficial for both research disciplines. Analysis and rendition of real-world scenes with human actors are amongst the most challenging problems. In this thesis we present new algorithmic recipes to attack them. The dissertation consists of three parts: In Part I, we present novel solutions to two fundamental problems of human motion analysis. Firstly, we demonstrate a novel hybrid approach for marker-free human motion capture from multiple video streams. Thereafter, a new algorithm for automatic non-intrusive estimation of kinematic body models of arbitrary moving subjects from video is detailed. In Part II of the thesis, we demonstrate that a marker-free motion capture approach makes possible the model-based reconstruction of free-viewpoint videos of human actors from only a handful of video streams. The estimated 3D videos enable the photo-realistic real-time rendition of a dynamic scene from arbitrary novel viewpoints. Texture information from video is not only applied to generate a realistic surface appearance, but also to improve the precision of the motion estimation scheme. The commitment to a generic body model also allows us to reconstruct a time-varying reflectance description of an actor's body surface, which allows us to realistically render the free-viewpoint videos under arbitrary lighting conditions. A novel method to capture high-speed, large-scale motion using regular still cameras and the principle of multi-exposure photography is described in Part III. The fundamental principles underlying the methods in this thesis are not only applicable to humans but to a much larger class of subjects. It is demonstrated that, in conjunction, our proposed algorithmic recipes serve as building blocks for the next generation of immersive 3D visual media.
The development of new algorithms for the optical capture and analysis of motion in dynamic scenes is one of the main research areas of computer vision. While machine vision focuses on extracting information, computer graphics concentrates on the inverse problem, the photo-realistic rendering of moving scenes. In recent years the two disciplines have steadily converged, since a multitude of challenging scientific questions demand a joint solution to the image acquisition, image analysis, and image synthesis problems. Two of the hardest problems, highly relevant to researchers from both disciplines, are the analysis and the synthesis of dynamic scenes in which humans take center stage. This dissertation presents methods that allow such scenes to be captured optically, their motion to be analyzed automatically, and the scenes to be re-rendered realistically on a computer. It will become clear that integrating algorithms for these three problems into one overall system enables the creation of entirely new three-dimensional renderings of humans in motion.
The dissertation is structured in three parts: Part I begins by describing the design and construction of a studio for the time-synchronized acquisition of multiple video streams. The multi-video sequences recorded in the studio serve as input data for the video-based motion analysis methods and the algorithms for generating three-dimensional videos developed in this dissertation. Thereafter, two newly developed methods are presented that answer two fundamental questions in the optical capture of human motion: the measurement of motion parameters and the generation of kinematic skeleton models. The first method is a hybrid algorithm for marker-free optical measurement of motion parameters from multi-video data. Dispensing with optical markers is made possible by using both volume models reconstructed from the image data and easily detectable body features for the motion analysis. The second method serves the automatic reconstruction of a kinematic skeleton model from multi-video data. The algorithm requires neither optical markers in the scene nor a priori information about the body structure, and it is equally applicable to humans, animals, and objects. The topic of the second part of this work is a model-based method for reconstructing three-dimensional videos of humans in motion from only a few time-synchronized video streams. The viewer can play back the computed 3D videos on a computer in real time and interactively assume any virtual viewpoint on the scene. At the core of our approach is a silhouette-based analysis-by-synthesis algorithm that makes it possible to capture both the shape and the motion of a person without optical markers. Computing time-varying surface textures from the video data ensures that a person has a photo-realistic appearance from any viewpoint. In a first algorithmic extension, it is shown that the texture information can also be used to improve the accuracy of the motion estimation. Moreover, the use of a generic body model makes it possible to measure not only dynamic textures but even dynamic reflectance properties of the body surface. Our reflectance model consists of a parametric BRDF for each texel and a dynamic normal map for the entire body surface. In this way, 3D videos can be rendered realistically even under entirely new, simulated lighting conditions. Part III of this work describes a novel method for the optical measurement of very fast motion. Until now, optical recordings of high-speed motion required very expensive special cameras with high frame rates. In contrast, the method described here uses ordinary digital still cameras and the principle of multi-exposure flash photography. It is shown that, with the help of this method, both the very fast articulated hand motion of the pitcher and the flight parameters of the ball during a baseball pitch can be measured. The highly accurate captured parameters make it possible to visualize the measured motion on a computer in entirely new ways.
Although the methods presented in this dissertation primarily serve the analysis and rendering of human motion, the underlying principles are also applicable to many other scenes. Each of the described algorithms primarily solves a specific subproblem, but taken together the methods can be understood as building blocks that will enable the next generation of interactive three-dimensional media.
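To make the silhouette-based analysis-by-synthesis idea above concrete, here is a minimal illustrative sketch, not code from the thesis: a toy two-joint planar "arm" is rendered as a binary silhouette, and a greedy search adjusts the joint angles until the rendered silhouette best overlaps an observed one. The thesis operates on a full 3D body model seen from multiple calibrated cameras; every name, parameter, and simplification below is an assumption made only for illustration.

```python
# Toy silhouette-based analysis-by-synthesis (illustrative sketch only).
import numpy as np

H, W = 64, 64
ROWS, COLS = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")

def render_silhouette(angles, thickness=3.0):
    """Rasterize a two-segment planar 'arm' into a binary H x W mask."""
    mask = np.zeros((H, W), dtype=bool)
    p = np.array([H / 2.0, 10.0])            # shoulder position (row, col)
    a = 0.0
    for ang, length in zip(angles, (20.0, 15.0)):
        a += ang                              # accumulate joint angles along the chain
        q = p + length * np.array([np.sin(a), np.cos(a)])
        for t in np.linspace(0.0, 1.0, 40):   # stamp small discs along the bone
            r, c = p + t * (q - p)
            mask |= (ROWS - r) ** 2 + (COLS - c) ** 2 <= thickness ** 2
        p = q
    return mask

def overlap(a, b):
    """Intersection-over-union of two binary silhouettes."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def fit_pose(observed, n_iters=40, step=0.05):
    """Greedy analysis-by-synthesis: perturb each joint angle and keep any
    change that increases silhouette overlap with the observed image."""
    pose = np.zeros(2)
    for _ in range(n_iters):
        for j in range(len(pose)):
            for delta in (step, -step):
                trial = pose.copy()
                trial[j] += delta
                if overlap(render_silhouette(trial), observed) > \
                   overlap(render_silhouette(pose), observed):
                    pose = trial
    return pose

# Synthetic "camera frame": a silhouette produced by a hidden ground-truth pose.
true_pose = np.array([0.4, -0.3])
observed = render_silhouette(true_pose)
print("recovered:", fit_pose(observed).round(2), " true:", true_pose)
```

In the actual system the overlap term would be evaluated against the silhouettes from every calibrated camera and the pose vector would span the full kinematic body model.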

    ARVISCOPE: Georeferenced Visualization of Dynamic Construction Processes in Three-Dimensional Outdoor Augmented Reality.

    Construction processes can be conceived as systems of discrete, interdependent activities. Discrete Event Simulation (DES) has thus evolved as an effective tool to model operations that compete over available resources (personnel, material, and equipment). A DES model has to be verified and validated to ensure that it reflects a modeler’s intentions, and faithfully represents a real operation. 3D visualization is an effective means of achieving this, and facilitating the process of communicating and accrediting simulation results. Visualization of simulated operations has traditionally been achieved in Virtual Reality (VR). In order to create convincing VR animations, detailed information about an operation and the environment has to be obtained. The data must describe the simulated processes, and provide 3D CAD models of project resources, the facility under construction, and the surrounding terrain (Model Engineering). As the size and complexity of an operation increase, such data collection becomes an arduous, impractical, and often impossible task. This directly translates into loss of financial and human resources that could otherwise be productively used. In an effort to remedy this situation, this dissertation proposes an alternate approach of visualizing simulated operations using Augmented Reality (AR) to create mixed views of real existing jobsite facilities and virtual CAD models of construction resources. The application of AR in animating simulated operations has significant potential in reducing the aforementioned Model Engineering and data collection tasks, and at the same time can help in creating visually convincing output that can be effectively communicated. This dissertation presents the design, methodology, and development of ARVISCOPE, a general purpose AR animation authoring language, and ROVER, a mobile computing hardware framework. When used together, ARVISCOPE and ROVER can create three-dimensional AR animations of any length and complexity from the results of running DES models of engineering operations. ARVISCOPE takes advantage of advanced Global Positioning System (GPS) and orientation tracking technologies to accurately track a user’s spatial context, and georeferences superimposed 3D graphics in an augmented environment. In achieving the research objectives, major technical challenges such as accurate registration, automated occlusion handling, and dynamic scene construction and manipulation have been successfully identified and addressed. Ph.D. Civil Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/60761/1/abehzada_1.pd
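As a hedged illustration of the georeferencing step such a system relies on (not ARVISCOPE's actual implementation), the sketch below converts a virtual object's GPS fix into the viewer's local East-North-Up frame and then into a heading-aligned camera frame. It uses a flat-earth approximation that is only reasonable over short jobsite distances; all coordinates, function names, and conventions are hypothetical.

```python
# Georeferencing sketch: GPS fix -> local ENU offsets -> camera-frame offsets.
import math

EARTH_RADIUS_M = 6_378_137.0  # WGS-84 equatorial radius

def geodetic_to_enu(lat_deg, lon_deg, alt_m, ref_lat_deg, ref_lon_deg, ref_alt_m):
    """Return (east, north, up) in metres of a point relative to a reference
    GPS fix, using a local-tangent-plane (flat-earth) approximation."""
    d_lat = math.radians(lat_deg - ref_lat_deg)
    d_lon = math.radians(lon_deg - ref_lon_deg)
    east = d_lon * EARTH_RADIUS_M * math.cos(math.radians(ref_lat_deg))
    north = d_lat * EARTH_RADIUS_M
    up = alt_m - ref_alt_m
    return east, north, up

def enu_to_camera(east, north, up, heading_deg):
    """Rotate ENU offsets into a camera frame whose forward axis follows the
    user's compass heading (the orientation tracker's input)."""
    h = math.radians(heading_deg)
    forward = north * math.cos(h) + east * math.sin(h)
    right = east * math.cos(h) - north * math.sin(h)
    return right, up, -forward   # a common right/up/back camera convention

# Example: a virtual crane a few tens of metres north-east of the viewer.
viewer = (42.2930, -83.7160, 270.0)   # hypothetical jobsite fix (lat, lon, alt)
crane = (42.2932, -83.7157, 270.0)
e, n, u = geodetic_to_enu(*crane, *viewer)
print("ENU offset  :", round(e, 1), round(n, 1), round(u, 1))
print("camera frame:", [round(v, 1) for v in enu_to_camera(e, n, u, heading_deg=45.0)])
```

A production AR pipeline would additionally correct for GPS error, tracker drift, and occlusion, which is the registration work the dissertation itself addresses.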

    Programming of an educational robot to be applied in STEAM areas

    Double-degree Master's programme with UTFPR - Universidade Tecnológica Federal do Paraná. The world is increasingly digital. Countries around the world strive to attract and prepare future generations for positions that will, for the most part, focus on Science, Technology, Engineering, Arts and Mathematics (STEAM). An approach already consolidated in the literature is the use of robots in education to encourage students to develop essential skills such as critical thinking, problem-solving and computational thinking. This work, linked to the RoboSTEAM project, explores educational robots that can be applied in this context, given that most existing approaches use LEGO's platform, which can be difficult to access because of its high price. The robot used was the mBot, programmed with the mBlock 5 software from MakeBlock Co. Ltd., and it was applied in two educational approaches during the project, which follows a challenge-based learning methodology. A methodology for adding sensors to the mBot was also explored. Finally, the performance of the students who participated in the project was evaluated.

    Trajectory solutions for a game-playing robot using nonprehensile manipulation methods and machine vision

    The need for autonomous systems designed to play games, both strategy-based and physical, arises from the quest to model human behaviour in demanding, competitive environments that call for human skill at its best. In the last two decades, and especially after the 1996 defeat of the world chess champion by a chess-playing computer, physical games have been receiving greater attention. RoboCup, i.e. robotic football, is a well-known example, with the participation of thousands of researchers all over the world. The robots created to play snooker/pool/billiards are placed in this context. Snooker, as well as being a game of strategy, also requires accurate physical manipulation skills from the player, and these two aspects qualify snooker as a potential game for autonomous-system development research. Although research into playing strategy in snooker has made considerable progress using various artificial intelligence methods, the physical manipulation part of the game is not fully addressed by the robots created so far. This thesis examines the different ball-manipulation options snooker players use, such as shots that impart spin to the ball in order to position the balls accurately on the table, by predicting the ball trajectories under the action of various dynamic phenomena, such as impacts. A 3-degree-of-freedom robot is designed and fabricated; it can manipulate the snooker cue at high velocities, on a par with humans, using a servomotor, and position the cue on the ball accurately with the help of a stepper drive. [Continues.]
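The kind of trajectory prediction referred to above can be illustrated with a simple textbook-style friction model (this is not the thesis's model): a struck ball first slides on the cloth, friction converts slip into spin until pure rolling begins, and the remaining travel is governed by rolling resistance. The one-dimensional sketch below, with assumed friction coefficients, shows how top- and back-spin change the roll-out distance.

```python
# Slide-then-roll model of a struck ball, 1D along the shot line (sketch only).
G = 9.81          # gravity, m/s^2
R = 0.026         # snooker ball radius, m
MU_SLIDE = 0.2    # sliding (cloth) friction coefficient, assumed
MU_ROLL = 0.01    # rolling resistance coefficient, assumed
DT = 1e-4         # Euler integration step, s

def roll_out_distance(v0, omega0):
    """Distance travelled before stopping, for launch speed v0 (m/s) and
    initial spin omega0 (rad/s, positive = topspin)."""
    v, omega, x = v0, omega0, 0.0
    while v > 1e-4:
        slip = v - omega * R                          # contact-point velocity
        if abs(slip) > 1e-3:                          # sliding phase
            s = 1.0 if slip > 0 else -1.0
            v -= s * MU_SLIDE * G * DT                # friction acts on the centre of mass
            omega += s * 2.5 * MU_SLIDE * G / R * DT  # and torques the ball toward rolling
        else:                                         # rolling phase
            v -= MU_ROLL * G * DT
            omega = v / R
        x += v * DT
    return x

for label, spin in [("heavy backspin", -80.0), ("no spin", 0.0), ("topspin", 60.0)]:
    print(f"{label:>15}: {roll_out_distance(2.0, spin):.2f} m")
```

Varying the assumed coefficients or the initial spin shows immediately why spin control matters for positional play.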

    LASER Tech Briefs, September 1993

    This edition of LASER Tech Briefs contains a feature on photonics. The other topics include: Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Machinery; Fabrication Technology; Mathematics and Information Sciences; Life Sciences; and Books and Reports.

    Efficient Mission Planning for Robot Networks in Communication Constrained Environments

    Many robotic systems nowadays are remotely operated and require an uninterrupted connection and safe mission planning. Such systems are commonly found in military drones, search and rescue operations, mining robotics, agriculture, and environmental monitoring. Different robotic systems may employ disparate communication modalities such as radio, visible light communication, satellite, infrared, or Wi-Fi. However, in an autonomous mission where the robots are expected to be interconnected, communication-constrained environments frequently arise because robots move out of range or the signal becomes unavailable. Furthermore, several automated projects (building construction, assembly lines) do not guarantee uninterrupted communication, and a safe project plan is required that optimizes collision risk, cost, and duration. In this thesis, we propose a four-pronged approach to alleviate some of these issues: 1) communication-aware world mapping; 2) communication preservation using Line-of-Sight (LoS); 3) communication-aware safe planning; and 4) multi-objective motion planning for navigation. First, we focus on developing a communication-aware world map that integrates traditional world models with the planning of multi-robot placement. Our proposed communication map selects the optimal placement of a chain of intermediate relay vehicles in order to maximize communication quality to a remote unit. We also propose an algorithm to build a min-arborescence tree when there are multiple remote units to be served. Second, in communication-denied environments, we use Line-of-Sight (LoS) to establish communication between mobile robots, control their movements, and relay information to other autonomous units. We formulate and study the complexity of a multi-robot relay network positioning problem and propose approximation algorithms that restore visibility-based connectivity through the relocation of one or more robots. Third, we develop a framework to quantify the safety score of a fully automated robotic mission in which the coexistence of humans and robots may pose collision risks. A number of alternative mission plans are analyzed using motion planning algorithms to select the safest one. Finally, an efficient multi-objective-optimization-based path planner is developed to deal with several Pareto-optimal cost attributes.
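As an illustration of the visibility primitive that Line-of-Sight relaying rests on (a sketch under simplified assumptions, not the dissertation's algorithm), the code below traces the grid cells between two robots in a toy occupancy map, reports whether the segment is obstacle-free, and lists the free cells from which a single relay robot could see both endpoints.

```python
# Line-of-Sight check on an occupancy grid and naive relay-cell search (sketch).
import numpy as np

def cells_on_segment(a, b, n=200):
    """Grid cells sampled along the straight line from a to b."""
    (r0, c0), (r1, c1) = a, b
    ts = np.linspace(0.0, 1.0, n)
    rows = np.round(r0 + ts * (r1 - r0)).astype(int)
    cols = np.round(c0 + ts * (c1 - c0)).astype(int)
    return zip(rows, cols)

def has_los(grid, a, b):
    """True if no occupied cell (value 1) lies between a and b."""
    return all(grid[r, c] == 0 for r, c in cells_on_segment(a, b))

def candidate_relays(grid, a, b):
    """Free cells from which both endpoints are visible -- positions where a
    single relay robot would restore visibility-based connectivity."""
    free = np.argwhere(grid == 0)
    return [tuple(p) for p in free
            if has_los(grid, tuple(p), a) and has_los(grid, tuple(p), b)]

# Toy 10x10 map with a wall separating the two robots (gap at the bottom).
grid = np.zeros((10, 10), dtype=int)
grid[0:8, 5] = 1
robot_a, robot_b = (1, 1), (1, 8)
print("direct LoS:", has_los(grid, robot_a, robot_b))
print("relay cells that see both robots:", candidate_relays(grid, robot_a, robot_b)[:5], "...")
```

A full planner would run such visibility checks inside an optimization over relay placements and motion plans, along the lines described above.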