Capturing Hands in Action using Discriminative Salient Points and Physics Simulation
Hand motion capture is a popular research field, recently gaining more
attention due to the ubiquity of RGB-D sensors. However, even most recent
approaches focus on the case of a single isolated hand. In this work, we focus
on hands that interact with other hands or objects and present a framework that
successfully captures motion in such interaction scenarios for both rigid and
articulated objects. Our framework combines a generative model with
discriminatively trained salient points to achieve a low tracking error and
with collision detection and physics simulation to achieve physically plausible
estimates even in case of occlusions and missing visual data. Since all
components are unified in a single objective function which is almost
everywhere differentiable, it can be optimized with standard optimization
techniques. Our approach works for monocular RGB-D sequences as well as setups
with multiple synchronized RGB cameras. For a qualitative and quantitative
evaluation, we captured 29 sequences with a large variety of interactions and
up to 150 degrees of freedom.
Comment: Accepted for publication by the International Journal of Computer Vision (IJCV) on 16.02.2016 (submitted on 17.10.14). A combination into a single framework of an ECCV'12 multi-camera RGB and a monocular RGB-D GCPR'14 hand tracking paper, with several extensions, additional experiments and details.
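The key design choice above is that all terms live in one almost-everywhere-differentiable objective, so standard optimizers apply. The following is a minimal sketch of that idea with invented placeholder terms (a quadratic data fit, a salient-point term, and a soft collision penalty); it is not the authors' actual energy, only an illustration of optimizing a combined objective by gradient descent.

```python
import numpy as np

def objective(pose, depth_obs, salient_obs, w_salient=0.5, w_collision=2.0):
    e_data = np.sum((pose - depth_obs) ** 2)           # model-to-data fit
    e_salient = np.sum((pose[:2] - salient_obs) ** 2)  # detected salient points
    e_collision = np.sum(np.maximum(0.0, -pose) ** 2)  # soft penalty on "interpenetrating" (negative) values
    return e_data + w_salient * e_salient + w_collision * e_collision

def numerical_gradient(f, x, eps=1e-6):
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def optimize(pose, depth_obs, salient_obs, steps=200, lr=0.05):
    f = lambda p: objective(p, depth_obs, salient_obs)
    for _ in range(steps):
        pose = pose - lr * numerical_gradient(f, pose)
    return pose

depth_obs = np.array([0.8, 0.2, 0.5])
salient_obs = np.array([0.7, 0.3])
start = np.array([-1.0, 1.5, 0.0])
final = optimize(start, depth_obs, salient_obs)
print(objective(final, depth_obs, salient_obs) < objective(start, depth_obs, salient_obs))  # True
```

In the actual system the gradient would come from analytic derivatives of the hand model and physics terms rather than finite differences, but the unified-objective structure is the same.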
Cooperative multitarget tracking with efficient split and merge handling
Copyright © 2006 IEEE. For applications such as behavior recognition it is important to maintain the identity of multiple targets while tracking them in the presence of splits and merges, or occlusion of the targets by background obstacles. Here we propose an algorithm to handle multiple splits and merges of objects based on dynamic programming and a new geometric shape matching measure. We then cooperatively combine Kalman filter-based motion and shape tracking with the efficient and novel geometric shape matching algorithm. The system is fully automatic and requires no manual input of any kind for initialization of tracking. The target track initialization problem is formulated as the computation of shortest paths in a directed and attributed graph using Dijkstra's shortest path algorithm. This scheme correctly initializes multiple target tracks even in the presence of clutter and segmentation errors which may occur in detecting a target. We present results on a large number of real-world image sequences, where up to 17 objects have been tracked simultaneously in real time, despite clutter, splits, and merges in measurements of objects. The complete tracking system, including segmentation of moving objects, works at 25 Hz on 352×288 pixel color image sequences on a 2.8-GHz Pentium-4 workstation.
Pankaj Kumar, Surendra Ranganath, Kuntal Sengupta, and Huang Weimi
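The track-initialization idea above can be sketched compactly: nodes are candidate detections across frames, edge weights encode how unlikely it is that two detections belong to the same target, and Dijkstra's algorithm recovers the cheapest consistent chain. The graph below is invented for illustration; the paper's actual attributes and costs differ.

```python
import heapq

def dijkstra(graph, source):
    """Standard Dijkstra over an adjacency-list graph {u: [(v, weight), ...]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Detections named "f<frame>_<id>"; low-cost chains become initial tracks.
graph = {
    "f0_a": [("f1_a", 0.2), ("f1_b", 1.5)],
    "f1_a": [("f2_a", 0.3)],
    "f1_b": [("f2_a", 0.4)],
}
dist = dijkstra(graph, "f0_a")
print(dist["f2_a"])  # 0.5: the consistent chain f0_a -> f1_a -> f2_a
```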
Crowd Behavior Analysis and Classification using Graph Theoretic Approach
Surveillance systems are commonly used for security and monitoring, and the need to automate them is well understood. To address this issue we introduce the Graph theoretic approach based Crowd Behavior Analysis and Classification System (GCBACS). Crowd behavior is observed based on the motion trajectories of the individuals in the crowd. Optical flow methods are used to obtain the streak lines and path lines of these trajectories, and the streak flow is constructed from the path and streak lines. The individuals and their respective potential vectors, obtained from the streak flows, are used to represent each frame as a graph. The frames of the surveillance videos are then analyzed using graph theoretic approaches: the cumulative variation across all frames is computed, and a threshold-based mechanism is used for classification and activity recognition. The experimental results discussed in the paper demonstrate the efficiency and robustness of the proposed GCBACS for crowd behavior analysis and classification.
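The threshold mechanism described above can be sketched as follows. Here each per-frame graph is reduced to a made-up feature vector, and the cumulative frame-to-frame variation is thresholded to flag abnormal crowd behavior; all numbers and the feature choice are illustrative, not the paper's actual representation.

```python
import numpy as np

def cumulative_variation(frame_features):
    """Sum of distances between consecutive per-frame feature vectors."""
    diffs = [np.linalg.norm(b - a) for a, b in zip(frame_features, frame_features[1:])]
    return float(np.sum(diffs))

def classify(frame_features, threshold=2.0):
    return "abnormal" if cumulative_variation(frame_features) > threshold else "normal"

# A calm crowd drifts slowly; a panicking crowd changes rapidly frame to frame.
calm = [np.array([1.0, 1.0]) + 0.01 * i for i in range(10)]
panic = [np.array([1.0, 1.0]) + 0.5 * i for i in range(10)]
print(classify(calm), classify(panic))  # normal abnormal
```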
Capturing Hand-Object Interaction and Reconstruction of Manipulated Objects
Hand motion capture with an RGB-D sensor has recently gained a lot of research attention; however, even most recent approaches focus on the case of a single isolated hand. We focus instead on hands that interact with other hands or with a rigid or articulated object. Our framework successfully captures motion in such scenarios by combining a generative model with discriminatively trained salient points, collision detection and physics simulation to achieve a low tracking error with physically plausible poses. All components are unified in a single objective function that can be optimized with standard optimization techniques. We initially assume a priori knowledge of the object's shape and skeleton. In case of unknown object shape there are existing 3D reconstruction methods that capitalize on distinctive geometric or texture features. These methods, though, fail for textureless and highly symmetric objects like household articles, mechanical parts or toys. We show that extracting 3D hand motion for in-hand scanning effectively facilitates the reconstruction of such objects, and we fuse the rich additional information of hands into a 3D reconstruction pipeline. Finally, although shape reconstruction is enough for rigid objects, there is a lack of tools that build rigged models of articulated objects that deform realistically using RGB-D data. We propose a method that creates a fully rigged model consisting of a watertight mesh, embedded skeleton and skinning weights by employing a combination of deformable mesh tracking, motion segmentation based on spectral clustering and skeletonization based on mean curvature flow.
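One ingredient of the rigging pipeline above is motion segmentation via spectral clustering. A minimal sketch, on synthetic data: vertices whose trajectories move together get high affinity, and the sign of the Fiedler vector of the graph Laplacian splits them into two rigid parts. The affinity kernel and data are invented for illustration and are not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two groups of vertex trajectories (20 frames x 6 vertices):
# vertices 0-2 translate over time, vertices 3-5 stay still.
t = np.linspace(0, 1, 20)
traj = np.stack([t] * 3 + [np.zeros_like(t)] * 3, axis=1)
traj += 0.01 * rng.standard_normal(traj.shape)  # small measurement noise

# Pairwise trajectory distances -> Gaussian affinity matrix.
d = np.linalg.norm(traj[:, :, None] - traj[:, None, :], axis=0)
A = np.exp(-d ** 2)
np.fill_diagonal(A, 0.0)

L = np.diag(A.sum(axis=1)) - A        # unnormalized graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]               # eigenvector of the second-smallest eigenvalue
labels = (fiedler > 0).astype(int)
print(labels)  # vertices 0-2 land in one cluster, vertices 3-5 in the other
```

For k rigid parts one would cluster the first k eigenvectors (e.g. with k-means) instead of thresholding the Fiedler vector.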
Multiple human tracking in RGB-depth data: A survey
© The Institution of Engineering and Technology. Multiple human tracking (MHT) is a fundamental task in many computer vision applications. Appearance-based approaches, primarily formulated on RGB data, are constrained and affected by problems arising from occlusions and/or illumination variations. In recent years, the arrival of cheap RGB-depth devices has led to many new approaches to MHT, and many of these integrate colour and depth cues to improve each stage of the process. In this survey, the authors present the common processing pipeline of these methods and review their methodology based on (a) how they implement this pipeline and (b) what role depth plays within each stage of it. They identify and introduce existing, publicly available benchmark datasets and software resources that fuse colour and depth data for MHT. Finally, they present a brief comparative evaluation of the performance of those works that have applied their methods to these datasets.
Integrating water-energy-nexus in carbon footprint analysis in water utility company
The purpose of this paper is to highlight the water-energy nexus within the context of carbon footprint methodology and the water utility industry. In particular, carbon management for the water utility industry is crucial in reducing carbon emissions within the upstream water distribution system. The concept of the water-energy nexus alone, however, can be misleading due to the exclusion of indirect and embodied energy involved in water production. The study highlights the total energy use within the water supply system as well as the embedded carbon emissions, through carbon footprint methodology. A case study approach is used as the research method. The carbon footprint analysis includes data collection from a water utility company, and identification of direct and indirect carbon emissions from the corporation's operations. The results indicate that the indirect and embodied energy may not be significant in certain operational areas, but that the overall energy use becomes ambiguous when these elements are excluded. Integrating carbon footprint methodology within the water supply system can improve the understanding of the water-energy nexus when direct and indirect energy use is included in the analysis. This paper aims to benefit academics, government agencies and particularly water utility companies in integrating carbon footprint analysis into water production.
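The point about direct versus indirect energy can be illustrated with a minimal calculation: counting only direct on-site energy understates the footprint of water production. All figures and the emission factor below are invented for illustration, not taken from the case study.

```python
# Assumed grid emission factor (kg CO2e per kWh of purchased electricity).
GRID_FACTOR = 0.7

def footprint(direct_kwh, indirect_kwh, embodied_kgco2e):
    """Aggregate direct, indirect and embodied emissions for a utility."""
    direct = direct_kwh * GRID_FACTOR
    indirect = indirect_kwh * GRID_FACTOR
    return {
        "direct": direct,
        "indirect_plus_embodied": indirect + embodied_kgco2e,
        "total": direct + indirect + embodied_kgco2e,
    }

fp = footprint(direct_kwh=10_000, indirect_kwh=3_000, embodied_kgco2e=1_500)
print(fp["total"])  # 10600.0 -- roughly a third larger than the direct share alone
```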
From Image-based Motion Analysis to Free-Viewpoint Video
The problems of capturing real-world scenes with cameras and automatically analyzing the visible motion have traditionally been a focus of computer vision research. The photo-realistic rendition of dynamic real-world scenes, on the other hand, is a problem that has been investigated in the field of computer graphics. In this thesis, we demonstrate that the joint solution to all three of these problems enables the creation of powerful new tools that are beneficial for both research disciplines. Analysis and rendition of real-world scenes with human actors are amongst the most challenging problems. In this thesis we present new algorithmic recipes to attack them. The dissertation consists of three parts: In part I, we present novel solutions to two fundamental problems of human motion analysis. Firstly, we demonstrate a novel hybrid approach for marker-free human motion capture from multiple video streams. Thereafter, a new algorithm for automatic non-intrusive estimation of kinematic body models of arbitrary moving subjects from video is detailed. In part II of the thesis, we demonstrate that a marker-free motion capture approach makes possible the model-based reconstruction of free-viewpoint videos of human actors from only a handful of video streams. The estimated 3D videos enable the photo-realistic real-time rendition of a dynamic scene from arbitrary novel viewpoints. Texture information from video is not only applied to generate a realistic surface appearance, but also to improve the precision of the motion estimation scheme. The commitment to a generic body model also allows us to reconstruct a time-varying reflectance description of an actor's body surface, which allows us to realistically render the free-viewpoint videos under arbitrary lighting conditions. A novel method to capture high-speed large-scale motion using regular still cameras and the principle of multi-exposure photography is described in part III.
The fundamental principles underlying the methods in this thesis are not only applicable to humans but to a much larger class of subjects. It is demonstrated that, in conjunction, our proposed algorithmic recipes serve as building blocks for the next generation of immersive 3D visual media.

The development of new algorithms for the optical capture and analysis of motion in dynamic scenes is one of the research focuses of computer-based image processing. While machine vision concentrates on the extraction of information, computer graphics focuses on the inverse problem: the photo-realistic rendition of moving scenes. In the recent past the two disciplines have continually converged, since there is a multitude of challenging scientific questions that demand a joint solution to the image capture, image analysis and image synthesis problems.
Two of the most difficult problems, highly relevant for researchers from both disciplines, are the analysis and the synthesis of dynamic scenes in which humans take center stage. This dissertation presents methods that enable the optical capture of this kind of scene, the automatic analysis of the motion, and its realistic re-rendition in the computer. It will become clear that integrating algorithms for solving these three problems into one overall system enables the creation of entirely novel three-dimensional renditions of humans in motion. The dissertation is structured in three parts:
Part I begins with a description of the design and construction of a studio for the time-synchronized acquisition of multiple video streams. The multi-video sequences recorded in the studio serve as input data for the video-based motion analysis methods and the algorithms for generating three-dimensional videos developed in this dissertation.
Subsequently, two newly developed methods are presented that answer two fundamental questions in the optical capture of human motion: the measurement of motion parameters and the generation of kinematic skeleton models. The first method is a hybrid algorithm for the marker-free optical measurement of motion parameters from multi-video data. Dispensing with optical markers is made possible by using, for motion analysis, both volume models reconstructed from the image data and easily detectable body features. The second method serves the automatic reconstruction of a kinematic skeleton model from multi-video data. The algorithm requires neither optical markers in the scene nor a priori information about the body structure, and is equally applicable to humans, animals and objects.
The subject of the second part of this work is a model-based method for reconstructing three-dimensional videos of humans in motion from only a few time-synchronized video streams. The viewer can play back the computed 3D videos on a computer in real time while interactively choosing an arbitrary virtual viewpoint onto the scene. At the heart of our approach is a silhouette-based analysis-by-synthesis algorithm that makes it possible to capture both the shape and the motion of a human without optical markers. Computing time-varying surface textures from the video data ensures that a person has a photo-realistic appearance from any viewpoint. In a first algorithmic extension, it is shown that the texture information can also be used to improve the accuracy of the motion estimation. Furthermore, the use of a generic body model makes it possible to measure not only dynamic textures but even dynamic reflectance properties of the body surface. Our reflectance model consists of a parametric BRDF for each texel and a dynamic normal map for the entire body surface. In this way, 3D videos can be rendered realistically even under entirely new simulated lighting conditions.
Part III of this work describes a novel method for the optical measurement of very fast motion. Until now, optical recordings of high-speed motion required very expensive special cameras with high frame rates. In contrast, the method described here uses simple digital still cameras and the principle of multi-exposure flash photography. It is shown that with this method both the very fast articulated hand motion of the thrower and the flight parameters of the ball during a baseball pitch can be measured. The highly accurate captured parameters make it possible to visualize the measured motion in the computer in entirely new ways.
Although the methods presented in this dissertation primarily serve the analysis and rendition of human motion, the underlying principles are applicable to many other scenes as well. While each of the described algorithms primarily solves a particular subproblem, taken together the methods can be understood as building blocks that will enable the next generation of interactive three-dimensional media.
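The silhouette-based analysis-by-synthesis idea of Part II can be sketched in a toy form: a pose hypothesis is scored by the overlap between the rendered model silhouette and the observed foreground silhouette, and the optimizer keeps the hypothesis that scores best. The "rendering" here is a fake 2D box and the intersection-over-union score is a stand-in for the thesis's actual silhouette error, purely for illustration.

```python
import numpy as np

def render_silhouette(shape, top_left, size):
    """Fake renderer: a square 'body' silhouette on a boolean image."""
    sil = np.zeros(shape, dtype=bool)
    y, x = top_left
    sil[y:y + size, x:x + size] = True
    return sil

def silhouette_score(rendered, observed):
    """Intersection-over-union of two silhouettes; 1.0 is a perfect match."""
    inter = np.logical_and(rendered, observed).sum()
    union = np.logical_or(rendered, observed).sum()
    return inter / union

observed = render_silhouette((32, 32), (10, 10), 8)
good = silhouette_score(render_silhouette((32, 32), (10, 10), 8), observed)
bad = silhouette_score(render_silhouette((32, 32), (2, 2), 8), observed)
print(good, bad)  # the matching hypothesis scores 1.0; the mismatched one scores lower
```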