Optimization Model for Planning Precision Grasps with Multi-Fingered Hands
Precision grasps with multi-fingered hands are important for precise placement and in-hand manipulation tasks. Searching for precision grasps on an object represented by a point cloud is challenging due to complex object shapes, high dimensionality, collision avoidance, and imperfect sensing and positioning. This paper proposes an optimization model to search for precision grasps with multi-fingered hands. The model takes a noisy point cloud of the object as input and optimizes the grasp quality by iteratively searching for the palm pose and finger joint positions. Collision between the hand and the object is approximated and penalized by a series of least-squares terms. This collision approximation can handle point cloud representations of objects with complex shapes. The proposed optimization model locates collision-free optimal precision grasps efficiently, with an average computation time of 0.50 sec/grasp. The search is robust to incompleteness and noise in the point cloud, and the effectiveness of the algorithm is demonstrated by experiments.
Comment: Submitted to IROS 2019, experiments on a BarrettHand, 8 pages
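The core idea of the abstract, iteratively optimizing a grasp quality objective while penalizing hand-object collision with least-squares terms, can be sketched in a minimal form. This is not the paper's actual solver or hand model: `fingertip_model` is a hypothetical kinematics stand-in, the quality term is a crude proxy, the `margin` and `w_collision` values are arbitrary, and a simple random search replaces the paper's optimization method.

```python
import numpy as np

def collision_penalty(hand_points, cloud, margin=0.01):
    """Least-squares-style collision penalty: squared violation of a
    clearance margin between hand sample points and the point cloud.
    (Illustrative stand-in for the paper's collision approximation.)"""
    d = np.linalg.norm(hand_points[:, None, :] - cloud[None, :, :], axis=2)
    nearest = d.min(axis=1)                      # distance to closest cloud point
    violation = np.maximum(margin - nearest, 0)  # positive only when too close
    return float(np.sum(violation ** 2))

def grasp_objective(x, cloud, fingertip_model, w_collision=100.0):
    """Grasp quality proxy plus weighted collision penalty. `x` packs the
    palm pose / joint variables; `fingertip_model` (hypothetical kinematics)
    maps `x` to hand sample points."""
    tips = fingertip_model(x)
    centroid = cloud.mean(axis=0)
    quality = float(np.sum((tips - centroid) ** 2))  # proxy: tips near object
    return quality + w_collision * collision_penalty(tips, cloud)

def refine_grasp(x0, cloud, fingertip_model, iters=300, step=0.02, seed=0):
    """Simple random-search refinement standing in for the paper's
    iterative palm-pose / finger-joint optimization."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    best = grasp_objective(x, cloud, fingertip_model)
    for _ in range(iters):
        cand = x + rng.normal(scale=step, size=x.shape)
        f = grasp_objective(cand, cloud, fingertip_model)
        if f < best:
            x, best = cand, f
    return x, best
```

Because the penalty is a sum of squared margin violations, it stays zero for collision-free configurations and grows smoothly as the hand approaches the cloud, which is what makes a least-squares treatment convenient inside an iterative search.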
Visuo-Haptic Grasping of Unknown Objects through Exploration and Learning on Humanoid Robots
This thesis addresses the grasping of unknown objects by humanoid robots. Visual information is combined with haptic exploration to generate grasp hypotheses. In addition, a grasp metric is learned from simulated training data; it rates the success probability of the grasp hypotheses and selects the one with the highest estimated probability of success. This selected hypothesis is then used to grasp the object by means of a reactive control strategy. The two core contributions of this work are the haptic exploration of unknown objects and the grasping of unknown objects using a novel data-driven grasp metric.
Understanding egocentric human actions with temporal decision forests
Understanding human actions is a fundamental task in computer vision with a wide range of applications including pervasive health-care, robotics and game control. This thesis focuses on the problem of egocentric action recognition from RGB-D data, wherein the world is viewed through the eyes of the actor whose hands describe the actions.
The main contributions of this work are its findings regarding egocentric actions as described by hands in two application scenarios and a proposal of a new technique that is based on temporal decision forests.

The thesis first introduces a novel framework to recognise fingertip writing in mid-air in the context of human-computer interaction. This framework detects whether the user is writing and tracks the fingertip over time to generate spatio-temporal trajectories that are recognised by using a Hough forest variant that encourages temporal consistency in prediction. A problem with using such a forest approach for action recognition is that the learning of temporal dynamics is limited to hand-crafted temporal features and temporal regression, which may break the temporal continuity and lead to inconsistent predictions. To overcome this limitation, the thesis proposes transition forests. Besides any temporal information that is encoded in the feature space, the forest automatically learns the temporal dynamics during training, and this is exploited in inference in an online and efficient manner, achieving state-of-the-art results.

The last contribution of this thesis is its introduction of the first RGB-D benchmark to allow for the study of egocentric hand-object actions with both hand and object pose annotations. This study conducts an extensive evaluation of different baselines, state-of-the-art approaches and temporal decision forest models using colour, depth and hand pose features. Furthermore, it extends the transition forest model to incorporate data from different modalities and demonstrates the benefit of using hand pose features to recognise egocentric human actions. The thesis concludes by discussing and analysing the contributions and proposing a few ideas for future work.
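The temporal-consistency idea behind transition forests can be illustrated with a generic stand-in: per-frame class probabilities (from any frame-level classifier, e.g. a forest) are fused online with a transition matrix via forward filtering, so that a single noisy frame does not flip the prediction. This is not the thesis's transition-forest training procedure; `frame_probs` and `trans` are assumed inputs here.

```python
import numpy as np

def smooth_with_transitions(frame_probs, trans, prior=None):
    """Forward filtering: combine per-frame action probabilities (T x K)
    with a K x K transition matrix so that online predictions stay
    temporally consistent."""
    T, K = frame_probs.shape
    belief = np.full(K, 1.0 / K) if prior is None else prior.copy()
    out = np.empty_like(frame_probs)
    for t in range(T):
        belief = frame_probs[t] * (belief @ trans)  # predict, then update
        belief /= belief.sum()                      # renormalise to a distribution
        out[t] = belief
    return out
```

With a "sticky" transition matrix (high self-transition probability), a single frame whose raw scores slightly favour the wrong action is outvoted by the temporal context, which is the kind of consistent online prediction the abstract describes.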