Human-Machine Interface for Remote Training of Robot Tasks
Regardless of their industrial or research application, the streamlining of
robot operations is limited by the proximity of experienced users to the actual
hardware. Be it massive open online robotics courses, crowd-sourcing of robot
task training, or remote research on massive robot farms for machine learning,
the need to create an apt remote Human-Machine Interface is quite prevalent.
This paper proposes a novel solution for programming and training
remote robots via an intuitive and accurate user interface that offers
the benefits of working with real robots without imposing delays or
inefficiency. The system includes: a vision-based 3D hand detection and gesture
recognition subsystem, a simulated digital twin of a robot as visual feedback,
and the "remote" robot learning/executing trajectories using dynamic motion
primitives. Our results indicate that the system is a promising solution to the
problem of remote training of robot tasks.
Comment: Accepted at the IEEE International Conference on Imaging Systems and
Techniques - IST201
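The abstract's trajectory learning rests on dynamic motion primitives (DMPs). As an illustration of the general technique only, and not this paper's implementation, the sketch below fits and replays a minimal one-dimensional discrete DMP; all function names, gains, and basis choices are common illustrative defaults, not values from the paper.

```python
import numpy as np

# Minimal 1-D discrete dynamic motion primitive (DMP) sketch.
# alpha/beta form a critically damped spring-damper pulling toward the goal.

def learn_dmp_weights(traj, dt, n_basis=20, alpha=25.0, beta=6.25, alpha_x=4.0):
    """Fit forcing-term weights of a 1-D DMP to a demonstrated trajectory."""
    y = np.asarray(traj, dtype=float)
    t = np.arange(len(y)) * dt
    tau, g = t[-1], y[-1]
    yd = np.gradient(y, dt)
    ydd = np.gradient(yd, dt)
    x = np.exp(-alpha_x * t / tau)                 # canonical phase, 1 -> ~0
    # Forcing term implied by the demonstration.
    f_target = tau**2 * ydd - alpha * (beta * (g - y) - tau * yd)
    c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))  # basis centers
    h = n_basis / c                                # basis widths
    psi = np.exp(-h * (x[:, None] - c) ** 2)       # Gaussian basis activations
    s = x * (g - y[0])                             # phase-scaled amplitude
    # Locally weighted regression, one weight per basis function.
    w = np.array([(s * psi[:, i] * f_target).sum() /
                  ((s**2 * psi[:, i]).sum() + 1e-10)
                  for i in range(n_basis)])
    return w

def rollout_dmp(w, y0, g, tau, dt, n_steps,
                alpha=25.0, beta=6.25, alpha_x=4.0):
    """Integrate the learned DMP forward with Euler steps."""
    n_basis = len(w)
    c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
    h = n_basis / c
    y, yd, x = float(y0), 0.0, 1.0
    out = []
    for _ in range(n_steps):
        psi = np.exp(-h * (x - c) ** 2)
        f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)
        ydd = (alpha * (beta * (g - y) - tau * yd) + f) / tau**2
        yd += ydd * dt
        y += yd * dt
        x += (-alpha_x * x / tau) * dt
        out.append(y)
    return np.array(out)
```

Because the forcing term vanishes with the canonical phase, the replayed trajectory converges to the goal even if the learned weights are imperfect, which is what makes DMPs attractive for encoding demonstrated robot motions.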
Simultaneous Hand Pose and Skeleton Bone-Lengths Estimation from a Single Depth Image
Articulated hand pose estimation is a challenging task for human-computer
interaction. The state-of-the-art hand pose estimation algorithms work only
with one or a few subjects for which they have been calibrated or trained.
Particularly, hybrid methods based on learning followed by model fitting, or on
model-based deep learning, do not explicitly consider varying hand shapes and
sizes. In this work, we introduce a novel hybrid algorithm for estimating the
3D hand pose as well as bone-lengths of the hand skeleton at the same time,
from a single depth image. The proposed CNN architecture learns hand pose
parameters and scale parameters associated with the bone-lengths
simultaneously. Subsequently, a new hybrid forward kinematics layer employs
both parameters to estimate 3D joint positions of the hand. For end-to-end
training, we combine three public datasets NYU, ICVL and MSRA-2015 in one
unified format to achieve large variation in hand shapes and sizes. Among
hybrid methods, our method shows improved accuracy over the state-of-the-art on
the combined dataset and the ICVL dataset, both of which contain multiple
subjects. Our algorithm is also demonstrated to work well on unseen images.
Comment: This paper has been accepted and presented at the 3DV-2017 conference
held at Qingdao, China. http://irc.cs.sdu.edu.cn/3dv
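The forward kinematics layer described above maps predicted pose parameters and bone lengths to 3D joint positions. A much-simplified, non-differentiable sketch of that mapping for a single planar finger chain follows; the parameterization (per-joint flexion angles, per-bone lengths) is an assumption for illustration, not the paper's exact layer.

```python
import numpy as np

def forward_kinematics_chain(angles, bone_lengths, root=(0.0, 0.0, 0.0)):
    """3D joint positions of a planar kinematic chain (e.g. one finger).

    `angles` are per-joint flexion angles in radians and `bone_lengths`
    the per-bone scale parameters; a differentiable version of this map
    is what a forward kinematics layer would compute inside a network.
    """
    positions = [np.asarray(root, dtype=float)]
    theta = 0.0
    for ang, length in zip(angles, bone_lengths):
        theta += ang                       # flexion accumulates along the chain
        step = length * np.array([np.cos(theta), np.sin(theta), 0.0])
        positions.append(positions[-1] + step)
    return np.stack(positions)             # shape: (n_joints + 1, 3)
```

The key property such a layer provides is that joint positions become a deterministic function of pose and bone-length parameters, so both can be supervised jointly through the same positional loss.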
Structure-Aware Shape Synthesis
We propose a new procedure to guide training of a data-driven shape
generative model using a structure-aware loss function. Complex 3D shapes can
often be summarized by a coarsely defined structure that is consistent and
robust across a variety of observations. However, existing synthesis techniques
do not account for structure during training and thus often generate
implausible, structurally unrealistic shapes. During training, we impose
structural constraints that promote consistency across the entire shape
manifold. We propose a novel methodology for training 3D generative
models that incorporates structural information into an end-to-end training
pipeline.
Comment: Accepted to 3DV 201
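A structure-aware loss of the kind described can be sketched as a fine-grained reconstruction term plus a weighted penalty on coarse structure parameters. The names, shapes, and plain mean-squared-error terms below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def structure_aware_loss(pred_pts, target_pts, pred_struct, target_struct,
                         lam=0.5):
    """Reconstruction error plus a weighted structural penalty.

    `pred_struct`/`target_struct` stand in for coarse structure
    parameters (e.g. per-part box parameters); `lam` trades off
    fine-grained fit against structural consistency.
    """
    recon = np.mean((pred_pts - target_pts) ** 2)         # fine-grained fit
    struct = np.mean((pred_struct - target_struct) ** 2)  # structural term
    return recon + lam * struct
```

Training against such a combined objective penalizes samples whose coarse structure drifts from plausible configurations, even when their point-wise reconstruction error is low.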