
    Smart rehabilitation

    This thesis is born from a collaboration project between the HEIG-VD and the CHUV hospital in Lausanne, Switzerland. We study the problem of human grasp recognition from first-person RGB video input data. Grasping is the action of seizing and holding an object firmly, and many different grasp types exist. The objective is to use grasp recognition to automate the monitoring of rehabilitation sessions for patients with upper-limb neurological disorders. We compared three Deep Learning approaches: first, a naive image model trained on entire frames; second, a video model that exploits the temporal dimension in addition to spatial features; and third, an image model trained on crops around the hands, so that it focuses on the region that determines the grasp. We used the Yale Grasping Dataset for training the models. To enhance the interpretability of the results, we proposed a coarse-grained grasp grouping based on the Feix grasp taxonomy. We also captured our own small first-person video grasp dataset to test the applicability of the models to our setup, which differs from the training dataset in camera location and angle. Considering the intrinsic challenges of the data, such as frequent hand-object occlusions, and the dataset's difficulties, such as its real-world setting and low video quality, the results are relatively good. Nevertheless, they are insufficient for deploying a satisfactory system at the hospital and highlight the difficulty of grasp recognition from egocentric RGB data alone. It would be interesting to further research other data modalities, such as depth data, or to study the problem from the perspective of hand pose estimation and object detection. It is also clear that the field lacks a larger, more modern dataset.
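
    A minimal sketch of the coarse-grained grouping idea, in Python. The fine-grained labels below are standard Feix taxonomy grasp names and the power/precision/intermediate split follows Feix's opposition types, but the thesis's actual grouping is not reproduced here:

        # Map fine-grained Feix grasp labels to coarse groups
        # (illustrative grouping only; the thesis's mapping may differ).
        FEIX_TO_COARSE = {
            "large diameter": "power",
            "small diameter": "power",
            "medium wrap": "power",
            "precision disk": "precision",
            "precision sphere": "precision",
            "tripod": "precision",
            "lateral": "intermediate",
            "lateral tripod": "intermediate",
        }

        def coarsen(label: str) -> str:
            """Collapse a fine-grained grasp label into its coarse group."""
            return FEIX_TO_COARSE.get(label, "other")

        # Example: scoring predictions at the coarse level no longer
        # penalises confusions between similar fine-grained grasps.
        y_true = ["large diameter", "tripod", "lateral"]
        y_pred = ["medium wrap", "precision sphere", "tripod"]
        coarse_acc = sum(
            coarsen(t) == coarsen(p) for t, p in zip(y_true, y_pred)
        ) / len(y_true)
        print(coarse_acc)  # 2/3: the two within-group confusions vanish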

    A Taxonomy of Freehand Grasping Patterns in Virtual Reality

    Grasping is the most natural and primary interaction paradigm people perform every day; it allows us to pick up and manipulate objects around us, such as drinking from a cup of coffee or writing with a pen. Grasping has been explored extensively in real environments to understand and structure the way people grasp and interact with objects, producing categories, models and theories of grasping. Due to the complexity of the human hand, classifying grasping knowledge into meaningful insights is challenging, which has led researchers to develop grasp taxonomies that systematically guide emerging grasping work (in fields such as anthropology, robotics and hand surgery). While this body of work exists for real grasping, the nuances of how grasping transfers to virtual environments remain unexplored. The emergence of robust hand tracking sensors for virtual devices now allows the development of grasp models that enable VR to simulate real grasping interactions. However, existing work has not examined the differences between virtual and real object grasping, so virtual systems that build grasping models on real grasping knowledge may rest on unverified assumptions about how users intuitively grasp and interact with virtual objects. To address this, this thesis presents the first user elicitation studies to explore grasping patterns directly in VR. The first study identifies the main similarities and differences between real and virtual object grasping; the second explores how virtual object shape influences grasping patterns; the third focuses on visual thermal cues and how they influence grasp metrics; and the fourth examines how other object characteristics, such as stability and complexity, influence grasps in VR. To provide structured insights into grasping interactions in VR, the results are synthesized in the first VR Taxonomy of Grasp Types, developed following current methods for building grasping and HCI taxonomies and iterated to present an updated, more complete taxonomy. Results show that users appear to mimic real grasping behaviour in VR, but they also have difficulty estimating object size and use a lower variability of grasp types overall. The taxonomy shows that only five grasps account for the majority of grasp data in VR, which computer systems can exploit to achieve natural and intuitive interactions at lower computational cost. Further, findings show that virtual object characteristics such as shape, stability and complexity, as well as visual cues for temperature, influence grasp metrics such as aperture, category, type, location and dimension. These changes in grasping patterns, together with virtual object categorisation methods, can inform design decisions when developing intuitive interactions, virtual objects and environments, taking a step towards natural grasping interaction in VR.
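
    Of the grasp metrics listed, aperture is the simplest to make concrete. A minimal sketch assuming the hand tracker exposes 3-D fingertip positions (function and variable names are illustrative, not from the thesis):

        import numpy as np

        def grasp_aperture(thumb_tip: np.ndarray, index_tip: np.ndarray) -> float:
            """Grasp aperture as the Euclidean distance between the thumb
            and index fingertips, a common proxy for hand opening."""
            return float(np.linalg.norm(thumb_tip - index_tip))

        # Example with made-up tracker coordinates in metres.
        print(grasp_aperture(np.array([0.02, 0.10, 0.30]),
                             np.array([0.05, 0.14, 0.31])))  # ~0.051 m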

    Task analysis of discrete and continuous skills: a dual methodology approach to human skills capture for automation

    There is a growing requirement within the field of intelligent automation for a formal methodology to capture and classify the explicit and tacit skills deployed by operators during complex task performance. This paper describes the development of a dual methodology approach that recognises the inherent differences between continuous and discrete tasks and proposes a separate methodology for each. Both methodologies emphasise capturing operators' physical, perceptual, and cognitive skills; however, they fundamentally differ in approach. The continuous task analysis recognises the non-arbitrary nature of operation ordering and that identifying suitable cues for each subtask is a vital component of the skill. The discrete task analysis is a more traditional, chronologically ordered methodology, intended to increase the resolution of skill classification and to be practical for assessing complex tasks involving multiple unique subtasks, through a taxonomy of generic physical, perceptual, and cognitive actions.
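
    Such a taxonomy of generic actions could be encoded as a small annotation structure. A minimal sketch with illustrative category and action names (the paper's actual taxonomy entries are not reproduced here):

        from dataclasses import dataclass
        from enum import Enum
        from typing import Optional

        class ActionCategory(Enum):
            PHYSICAL = "physical"      # e.g. reach, grasp, insert
            PERCEPTUAL = "perceptual"  # e.g. locate, inspect, detect cue
            COGNITIVE = "cognitive"    # e.g. decide, recall, plan

        @dataclass
        class SubtaskAction:
            subtask: str
            category: ActionCategory
            action: str                 # generic action label from the taxonomy
            cue: Optional[str] = None   # triggering cue, vital for continuous tasks

        # Example annotation of one discrete subtask.
        step = SubtaskAction(
            subtask="fit bearing",
            category=ActionCategory.PHYSICAL,
            action="insert",
            cue="audible click on seating",
        )
        print(step.category.value, step.action)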

    Planning hand-arm grasping motions with human-like appearance

    Finalist for the Best Application Paper Award at the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). This paper addresses the problem of obtaining human-like motions in hand-arm robotic systems performing pick-and-place actions. The focus is on the coordinated movements of the robotic arm and the anthropomorphic mechanical hand with which the arm is equipped. To this end, human movements performing different grasps are captured and mapped to the robot in order to compute the human hand synergies. These synergies are used to reduce the complexity of the planning phase by reducing the dimension of the search space. In addition, the paper proposes a sampling-based planner that guides the motion planning following the synergies. The introduced approach is tested in an application example and thoroughly compared with other state-of-the-art planning algorithms, obtaining better results.
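
    Hand synergies of this kind are commonly obtained with PCA over captured hand joint angles, and the planner then samples in the low-dimensional synergy space. A minimal sketch of that general idea with stand-in data, not the authors' actual pipeline:

        import numpy as np
        from sklearn.decomposition import PCA

        # Rows: captured hand postures; columns: joint angles (e.g. 20 DoF).
        rng = np.random.default_rng(0)
        postures = rng.normal(size=(500, 20))  # stand-in for mocap data

        # A few principal components ("synergies") capture most grasp
        # variance, so the planner can sample in this reduced space.
        pca = PCA(n_components=3).fit(postures)
        z = rng.normal(size=(1, 3))                # sample in synergy space
        hand_config = pca.inverse_transform(z)[0]  # back to full joint space
        print(hand_config.shape)                   # (20,)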

    A Whole-Body Pose Taxonomy for Loco-Manipulation Tasks

    Exploiting interaction with the environment is a promising and powerful way to enhance the stability and robustness of humanoid robots while executing locomotion and manipulation tasks. Some recent works have started to show advances in this direction by considering humanoid locomotion with multiple contacts, but to develop such abilities more autonomously, we first need to understand and classify the variety of possible poses a humanoid robot can use to balance. To this end, we propose adapting a successful idea widely used in the field of robot grasping to the field of humanoid balance with multi-contacts: a whole-body pose taxonomy classifying the set of whole-body robot configurations that use the environment to enhance stability. We have revised the classification criteria used to develop grasping taxonomies, focusing on structuring and simplifying the large number of possible poses the human body can adopt. We propose a taxonomy with 46 poses, organised into three main categories according to the number and type of supports, together with the possible transitions between poses. The taxonomy induces a classification of motion primitives based on the pose used for support, and a set of rules to store and generate new motions. We present preliminary results that apply known segmentation techniques to motion data from the KIT whole-body motion database. Using motion capture data with multi-contacts, we can identify support poses, providing a segmentation that distinguishes between the locomotion and manipulation parts of an action.
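
    The support-pose segmentation idea can be sketched compactly: a new segment starts whenever the set of supporting contacts changes. A minimal example with hypothetical per-frame contact sets (the paper's pipeline on the KIT database is more involved):

        # Each frame lists which end-effectors touch the environment.
        frames = [
            {"left_foot", "right_foot"},
            {"left_foot", "right_foot"},
            {"left_foot", "right_foot", "right_hand"},  # hand support added
            {"left_foot", "right_foot", "right_hand"},
            {"left_foot", "right_hand"},                # right foot lifts
        ]

        def segment_by_support(contact_sets):
            """Split a motion into segments of constant support pose."""
            segments, start = [], 0
            for i in range(1, len(contact_sets)):
                if contact_sets[i] != contact_sets[i - 1]:
                    segments.append((start, i - 1, contact_sets[start]))
                    start = i
            segments.append((start, len(contact_sets) - 1, contact_sets[start]))
            return segments

        for s, e, pose in segment_by_support(frames):
            print(f"frames {s}-{e}: support = {sorted(pose)}")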

    A quantitative taxonomy of human hand grasps

    Background: A proper modeling of human grasping and of hand movements is fundamental for robotics, prosthetics, physiology and rehabilitation. The taxonomies of hand grasps proposed in the scientific literature so far are based on qualitative analyses of the movements and are thus usually not quantitatively justified. Methods: This paper presents, to the best of our knowledge, the first quantitative taxonomy of hand grasps based on biomedical data measurements. The taxonomy is based on electromyography and kinematic data recorded from 40 healthy subjects performing 20 unique hand grasps. For each subject, a set of hierarchical trees is computed for several signal features. The trees are then combined, first into modality-specific (i.e. muscular and kinematic) taxonomies of hand grasps and then into a general quantitative taxonomy of hand movements. The modality-specific taxonomies provide similar results despite describing different parameters of hand movement, one muscular and the other kinematic. Results: The general taxonomy merges the kinematic and muscular descriptions into a comprehensive hierarchical structure. The results clarify what has been proposed in the literature so far and partially confirm the qualitative parameters used to create previous taxonomies of hand grasps. According to the results, hand movements can be divided into five movement categories defined by overall grasp shape, finger positioning and muscular activation. Part of the results appears qualitatively in accordance with previous descriptions of kinematic hand grasping synergies. Conclusions: The taxonomy of hand grasps proposed in this paper clarifies with quantitative measurements what has previously been proposed in the field on a qualitative basis, and thus has a potential impact on several scientific fields.
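
    The tree construction at the heart of such a method maps naturally onto agglomerative hierarchical clustering. A minimal sketch assuming one feature vector per grasp (the paper combines per-subject and per-modality trees, which is not shown here):

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        # One feature vector per grasp, e.g. averaged EMG or kinematic features.
        rng = np.random.default_rng(1)
        grasp_features = rng.normal(size=(20, 12))  # 20 grasps x 12 features
        grasp_names = [f"grasp_{i}" for i in range(20)]

        # Build the hierarchical tree (dendrogram) over grasps.
        tree = linkage(grasp_features, method="average", metric="euclidean")

        # Cut the tree into five movement categories, mirroring the paper's
        # five-category result (the cut level here is arbitrary).
        labels = fcluster(tree, t=5, criterion="maxclust")
        for name, lab in zip(grasp_names, labels):
            print(name, "-> category", lab)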