Scalable 3D Tracking of Multiple Interacting Objects
We consider the problem of tracking multiple interacting objects in 3D, using RGBD input and by considering a hypothesize-and-test approach. Due to their interaction, objects to be tracked are expected to occlude each other in the field of view of the camera observing them. A naive approach would be to employ a Set of Independent Trackers (SIT) and to assign one tracker to each object. This approach scales well with the number of objects but fails as occlusions become stronger due to their disjoint consideration. The solution representing the current state of the art employs a single Joint Tracker (JT) that accounts for all objects simultaneously. This directly resolves ambiguities due to occlusions but has a computational complexity that grows geometrically with the number of tracked objects. We propose a middle ground, namely an Ensemble of Collaborative Trackers (ECT), that combines the best traits of both worlds to deliver a practical and accurate solution to the multi-object 3D tracking problem. We present quantitative and qualitative experiments with several synthetic and real-world sequences of diverse complexity. Experiments demonstrate that ECT manages to track far more complex scenes than JT at a computational time that is only slightly larger than that of SIT.
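The collaboration idea can be illustrated with a toy 1-D sketch (hypothetical names and models, not the paper's implementation): each object keeps its own hypothesize-and-test tracker, but every hypothesis is scored against a scene that includes the current estimates of the other objects.

```python
import random

def ect_step(states, score, n_hypotheses=50, noise=0.5):
    """One ECT update over toy 1-D object states. Each object is searched
    independently (SIT-like linear cost in the number of objects), but
    hypotheses are scored jointly against the other objects' current
    estimates (JT-like handling of mutual occlusion)."""
    estimates = list(states)
    for i, s in enumerate(states):
        others = [x for j, x in enumerate(estimates) if j != i]
        best, best_score = s, score(s, others)
        for _ in range(n_hypotheses):        # hypothesize ...
            h = s + random.gauss(0.0, noise)
            sc = score(h, others)            # ... and test collaboratively
            if sc > best_score:
                best, best_score = h, sc
        estimates[i] = best
    return estimates
```

With a score that rewards matching the observation and penalizes two trackers explaining the same evidence, per-frame cost grows linearly with the number of objects rather than geometrically.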
Learning Explicit Contact for Implicit Reconstruction of Hand-held Objects from Monocular Images
Reconstructing hand-held objects from monocular RGB images is an appealing
yet challenging task. In this task, contacts between hands and objects provide
important cues for recovering the 3D geometry of the hand-held objects. Though
recent works have employed implicit functions to achieve impressive progress,
they do not formulate contacts in their frameworks, which results in
less realistic object meshes. In this work, we explore how to model
contacts in an explicit way to benefit the implicit reconstruction of hand-held
objects. Our method consists of two components: explicit contact prediction and
implicit shape reconstruction. In the first part, we propose a new subtask of
directly estimating 3D hand-object contacts from a single image. The part-level
and vertex-level graph-based transformers are cascaded and jointly learned in a
coarse-to-fine manner for more accurate contact probabilities. In the second
part, we introduce a novel method to diffuse estimated contact states from the
hand mesh surface to nearby 3D space and leverage diffused contact
probabilities to construct the implicit neural representation for the
manipulated object. Benefiting from estimating the interaction patterns between
the hand and the object, our method can reconstruct more realistic object
meshes, especially for object parts that are in contact with hands. Extensive
experiments on challenging benchmarks show that the proposed method outperforms
the current state of the art by a large margin.
Comment: 17 pages, 8 figures
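The diffusion step can be sketched minimally (hypothetical function and parameter names, not the authors' code): per-vertex contact probabilities on the hand mesh are spread into nearby 3D space with a distance kernel, so query points close to contacted vertices receive high values.

```python
import numpy as np

def diffuse_contact(query_pts, hand_verts, contact_probs, sigma=0.01):
    """Spread per-vertex contact probabilities into 3-D space.
    query_pts: (Q, 3) points where the implicit function is evaluated;
    hand_verts: (V, 3) hand mesh vertices; contact_probs: (V,) in [0, 1].
    A Gaussian of the point-to-vertex distance weights each probability,
    so the diffused value decays to zero away from the hand surface."""
    d = np.linalg.norm(query_pts[:, None, :] - hand_verts[None, :, :], axis=-1)
    w = np.exp(-d ** 2 / (2.0 * sigma ** 2))   # (Q, V) distance weights
    return (w * contact_probs[None, :]).max(axis=-1)
```

The diffused probability can then be appended to the per-point feature fed to the implicit decoder, which is the sense in which explicit contact can benefit implicit reconstruction.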
Capturing Hands in Action using Discriminative Salient Points and Physics Simulation
Hand motion capture is a popular research field, recently gaining more
attention due to the ubiquity of RGB-D sensors. However, even the most recent
approaches focus on the case of a single isolated hand. In this work, we focus
on hands that interact with other hands or objects and present a framework that
successfully captures motion in such interaction scenarios for both rigid and
articulated objects. Our framework combines a generative model with
discriminatively trained salient points to achieve a low tracking error and
with collision detection and physics simulation to achieve physically plausible
estimates even in case of occlusions and missing visual data. Since all
components are unified in a single objective function which is almost
everywhere differentiable, it can be optimized with standard optimization
techniques. Our approach works for monocular RGB-D sequences as well as setups
with multiple synchronized RGB cameras. For a qualitative and quantitative
evaluation, we captured 29 sequences with a large variety of interactions and
up to 150 degrees of freedom.
Comment: Accepted for publication by the International Journal of Computer Vision (IJCV) on 16.02.2016 (submitted on 17.10.14). A combination into a single framework of an ECCV'12 multicamera-RGB and a monocular-RGBD GCPR'14 hand tracking paper, with several extensions, additional experiments and details.
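The "single objective, standard optimization" idea can be sketched abstractly. The terms and weights below are toy stand-ins for the data, salient-point, and collision/physics terms, minimized here with finite-difference gradient descent rather than the authors' optimizer.

```python
import numpy as np

def objective(x, depth_target, salient, obstacle_c, obstacle_r):
    """One unified, (almost everywhere) differentiable energy: toy versions
    of a data term, a salient-point term, and a soft collision penalty."""
    e_data = np.sum((x - depth_target) ** 2)            # fit observed data
    e_salient = np.sum((x - salient) ** 2)              # attract to detections
    pen = max(0.0, obstacle_r - np.linalg.norm(x - obstacle_c))
    e_collide = pen ** 2                                # penalize penetration
    return e_data + 0.5 * e_salient + 10.0 * e_collide

def minimize(f, x0, lr=0.05, steps=300, eps=1e-5):
    """Standard local optimization via finite-difference gradient descent."""
    x = np.asarray(x0, float)
    for _ in range(steps):
        g = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                      for e in np.eye(len(x))])
        x = x - lr * g
    return x
```

Because all terms share one energy, a single off-the-shelf optimizer suffices, which is the structural point of the framework described above.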
Generalized Feedback Loop for Joint Hand-Object Pose Estimation
We propose an approach to estimating the 3D pose of a hand, possibly handling
an object, given a depth image. We show that we can correct the mistakes made
by a Convolutional Neural Network trained to predict an estimate of the 3D pose
by using a feedback loop. The components of this feedback loop are also Deep
Networks, optimized using training data. This approach can be generalized to a
hand interacting with an object. Therefore, we jointly estimate the 3D pose of
the hand and the 3D pose of the object. Our approach performs on par with
state-of-the-art methods for 3D hand pose estimation, and outperforms
state-of-the-art methods for joint hand-object pose estimation when using depth
images only. Also, our approach is efficient as our implementation runs in
real-time on a single GPU.
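The feedback idea can be sketched generically; the predictor and updater below are stand-in callables for the deep networks, and the scalar "image" is a toy input, not the paper's architecture.

```python
def feedback_refine(image, predictor, updater, n_iters=3):
    """Predict-then-refine loop: an initial predictor proposes a pose, and a
    learned updater repeatedly outputs a correction computed from the input
    and the current estimate."""
    pose = predictor(image)                  # initial feed-forward estimate
    for _ in range(n_iters):
        pose = pose + updater(image, pose)   # learned correction step
    return pose
```

Because the updater outputs corrections rather than absolute poses, a few iterations can remove systematic errors of the initial predictor.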
Planning Framework for Robotic Pizza Dough Stretching with a Rolling Pin
Stretching a pizza dough with a rolling pin is a nonprehensile manipulation: since the object is deformable, force closure cannot be established, and the dough must be shaped without grasping it. The framework of this pizza dough stretching application, explained in this chapter, consists of four sub-procedures: (i) recognition of the pizza dough on a plate, (ii) planning the necessary steps to shape the pizza dough into the desired form, (iii) path generation for a rolling pin to execute the output of the pizza dough planner, and (iv) inverse kinematics for the bi-manual robot to grasp and control the rolling pin properly. Using the deformable object model described in Chap. 3, each sub-procedure of the proposed framework is explained in turn.
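The data flow between the four sub-procedures can be sketched as a skeleton. Every function below is a hypothetical stand-in (the inverse-kinematics stage (iv) is omitted), so only the staging of the pipeline is meaningful.

```python
def recognize_dough(frame):           # (i) detect the dough on the plate
    return {"center": frame["center"], "radius": frame["radius"]}

def plan_stretching(state, goal_r):   # (ii) plan rolls until the goal radius
    rolls, r = [], state["radius"]
    while r < goal_r:
        r *= 1.2                      # assume each roll grows the radius by 20%
        rolls.append({"radius": r})
    return rolls

def generate_pin_path(roll):          # (iii) one straight roll across the dough
    return [(-roll["radius"], 0.0), (roll["radius"], 0.0)]

def stretch_dough(frame, goal_r):     # glue stages (i)-(iii) into one plan
    state = recognize_dough(frame)
    return [generate_pin_path(roll) for roll in plan_stretching(state, goal_r)]
```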
Computational Learning for Hand Pose Estimation
Rapid advances in human–computer interaction interfaces have promised a realistic environment for gaming and entertainment in the last few years. However, the use of traditional input devices such as trackballs, keyboards, or joysticks has been a bottleneck for natural interaction between human and computer, as the two degrees of freedom of these devices cannot suitably emulate interactions in a three-dimensional space. Consequently, comprehensive hand tracking technology is expected to be a smart and intuitive alternative to these input tools for enhancing virtual and augmented reality experiences. In addition, the recent emergence of low-cost depth sensing cameras has led to the broad use of RGB-D data in computer vision, raising expectations of a full 3D interpretation of hand movements for human–computer interaction interfaces. Although the use of hand gestures or hand postures has become essential for a wide range of applications in computer games and augmented/virtual reality, 3D hand pose estimation is still an open and challenging problem for the following reasons: (i) the hand pose exists in a high-dimensional space because each finger and the palm are associated with several degrees of freedom, (ii) the fingers exhibit self-similarity and often occlude each other, (iii) global 3D rotations make pose estimation more difficult, and (iv) hands occupy only a few pixels in images, and the noise in acquired data coupled with fast finger movement confounds continuous hand tracking. The success of hand tracking naturally depends on synthesizing our knowledge of the hand (i.e., geometric shape, constraints on pose configurations) with latent features about hand poses from the RGB-D data stream (i.e., region of interest, key feature points like finger tips and joints, and temporal continuity).
In this thesis, we propose novel methods to leverage the paradigm of analysis by synthesis and create a prediction model using a population of realistic 3D hand poses. The overall goal of this work is to design a concrete framework so the computers can learn and understand about perceptual attributes of human hands (i.e., self-occlusions or self-similarities of the fingers) and to develop a pragmatic solution to the real-time hand pose estimation problem implementable on a standard computer.
This thesis can be broadly divided into four parts: learning the hand (i) from recommendations of similar hand poses, (ii) from low-dimensional visual representations, (iii) by hallucinating geometric representations, and (iv) from a manipulated object. Each part covers our algorithmic contributions to the 3D hand pose estimation problem. Additionally, the research work in the appendix proposes a pragmatic technique for applying our ideas to mobile devices with low computational power. Following this structure, we first review the most relevant works on depth sensor-based 3D hand pose estimation in the literature, both with and without a manipulated object. The two approaches prevalent for categorizing hand pose estimation, model-based methods and appearance-based methods, are discussed in detail. In this chapter, we also introduce works relevant to deep learning and attempts to achieve efficient compression of network structures. Next, we describe a synthetic 3D hand model and its motion constraints for simulating realistic human hand movements. The primary research work starts in the following chapter. We discuss our attempts to produce a better estimation model for 3D hand pose estimation by learning hand articulations from recommendations of similar poses. Specifically, the unknown pose parameters for input depth data are estimated by collaboratively learning the known parameters of all neighboring poses. Subsequently, we discuss deep-learned, discriminative, and low-dimensional features and a hierarchical solution to the stated problem based on the matrix completion framework. This work is further extended by incorporating a function of geometric properties on the surface of the hand described by heat diffusion, which robustly captures both the local geometry of the hand and global structural representations.
The problem of hand interactions with a physical object is considered in the following chapter. The main insight is that the interacting object can be a source of constraints on hand poses. In this view, we exploit the dependency of the pose on the shape of the object to learn discriminative features of the hand–object interaction, rather than losing hand information to partial or full object occlusions. Subsequently, we present a compressive learning technique in the appendix. Our approach is flexible, enabling us to add more layers and go deeper in the deep learning architecture while keeping the number of parameters the same. Finally, we conclude this thesis by summarizing the presented approaches for hand pose estimation and propose future directions to further improve performance through (i) realistically rendered synthetic hand images, (ii) incorporating RGB images as input, (iii) hand personalization, (iv) use of unstructured point clouds, and (v) embedding sensing techniques.
Capturing Hand-Object Interaction and Reconstruction of Manipulated Objects
Hand motion capture with an RGB-D sensor has recently gained a lot of research attention; however, even the most recent approaches focus on the case of a single isolated hand. We focus instead on hands that interact with other hands or with a rigid or articulated object. Our framework successfully captures motion in such scenarios by combining a generative model with discriminatively trained salient points, collision detection, and physics simulation to achieve a low tracking error with physically plausible poses. All components are unified in a single objective function that can be optimized with standard optimization techniques. We initially assume a-priori knowledge of the object's shape and skeleton. In the case of unknown object shape, there are existing 3D reconstruction methods that capitalize on distinctive geometric or texture features. These methods, though, fail for textureless and highly symmetric objects like household articles, mechanical parts, or toys. We show that extracting 3D hand motion for in-hand scanning effectively facilitates the reconstruction of such objects, and we fuse the rich additional information of hands into a 3D reconstruction pipeline. Finally, although shape reconstruction is enough for rigid objects, there is a lack of tools that build rigged models of articulated objects that deform realistically using RGB-D data. We propose a method that creates a fully rigged model consisting of a watertight mesh, an embedded skeleton, and skinning weights by employing a combination of deformable mesh tracking, motion segmentation based on spectral clustering, and skeletonization based on mean curvature flow.
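The core of the in-hand scanning idea can be sketched in miniature: once the tracked hand gives the object's pose in each frame, every partial scan can be mapped into a common object frame and fused. The 2-D rotations and translations below are toy stand-ins, not the actual registration pipeline.

```python
import numpy as np

def fuse_scans(scans, poses):
    """scans: list of (N_i, 2) point arrays observed in the camera frame;
    poses: per-frame (R, t) mapping object coordinates to camera coordinates,
    i.e. p_cam = R @ x_obj + t, obtained here from the tracked hand motion.
    Returns all points expressed in the object's own frame, ready for fusion."""
    fused = [(pts - t) @ R for pts, (R, t) in zip(scans, poses)]  # x = R^T (p - t)
    return np.vstack(fused)
```

This is why hand motion helps reconstruct textureless, symmetric objects: the registration comes from the hand, not from the object's (missing) geometric or texture features.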