
    A quantitative taxonomy of human hand grasps

    Background: Proper modeling of human grasping and of hand movements is fundamental for robotics, prosthetics, physiology and rehabilitation. The taxonomies of hand grasps proposed in the scientific literature so far are based on qualitative analyses of the movements and are thus usually not quantitatively justified.

    Methods: This paper presents, to the best of our knowledge, the first quantitative taxonomy of hand grasps based on biomedical data measurements. The taxonomy is based on electromyography and kinematic data recorded from 40 healthy subjects performing 20 unique hand grasps. For each subject, a set of hierarchical trees is computed for several signal features. Afterwards, the trees are combined, first into modality-specific (i.e. muscular and kinematic) taxonomies of hand grasps and then into a general quantitative taxonomy of hand movements. The modality-specific taxonomies provide similar results despite describing different parameters of hand movements, one muscular and the other kinematic.

    Results: The general taxonomy merges the kinematic and muscular descriptions into a comprehensive hierarchical structure. The results clarify what has been proposed in the literature so far and partially confirm the qualitative parameters used to create previous taxonomies of hand grasps. According to the results, hand movements can be divided into five movement categories defined by overall grasp shape, finger positioning and muscular activation. Part of the results appears qualitatively in accordance with previous descriptions of kinematic hand grasping synergies.

    Conclusions: The taxonomy of hand grasps proposed in this paper clarifies, with quantitative measurements, what has been proposed in the field on a qualitative basis, and thus has a potential impact on several scientific fields.
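    The pipeline described above (per-subject features, pairwise distances, hierarchical trees, a cut into five categories) can be sketched with standard agglomerative clustering. This is a minimal illustration, not the paper's actual implementation: the feature matrix here is random placeholder data, and the distance metric and linkage method are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
# Hypothetical feature matrix: one row per grasp type (20 grasps),
# columns are averaged signal features (e.g. EMG amplitudes or joint angles).
features = rng.normal(size=(20, 12))

# Pairwise distances between grasps, then agglomerative clustering
# to build a hierarchical tree (dendrogram) of hand grasps.
distances = pdist(features, metric="euclidean")
tree = linkage(distances, method="average")

# Cut the tree into at most five clusters, mirroring the five movement
# categories reported in the paper.
labels = fcluster(tree, t=5, criterion="maxclust")
print(len(set(labels)))
```

    In the paper, one such tree is computed per subject and per signal feature; the combination of trees across subjects and modalities is the non-trivial step that this sketch omits.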

    Scylax of Caryanda, Pseudo-Scylax, and the Paris Periplus: Reconsidering the Ancient Tradition of a Geographical Text

    The Periplus preserved in the manuscript Parisinus suppl. gr. 443, and erroneously ascribed to Scylax of Caryanda (sixth century BC), is the oldest extant specimen of ancient Greek periplography: it belongs to the second half of the fourth century. In the present article, all the testimonies on the ancient tradition of both Scylax and the Paris Periplus are carefully evaluated. The aim is to determine when and why the Paris Periplus was mistakenly ascribed to Scylax and to dispel any doubts about the alleged authorship of this ancient geographic work. The confusion, or the wilful falsification, is evident in Strabo: he knew of Scylax’s voyage in the East and at the same time was acquainted with the text of the Paris Periplus, which he ascribed to this famous ancient seafarer. Greek and Latin authors of the Roman Imperial age knew the Paris Periplus, but many slavishly followed the erroneous ascription to Scylax of Caryanda. When Marcianus of Heraclea in the early Byzantine age collected his corpus of ancient Greek geographers, he also ascribed the Paris Periplus to Scylax, thus handing down the error to the copyist of the Paris. suppl. gr. 443.

    Gaze, behavioral, and clinical data for phantom limbs after hand amputation from 15 amputees and 29 controls

    Despite recent advances in prosthetics, many upper limb amputees still use prostheses with some reluctance. They often do not feel able to incorporate the artificial hand into their bodily self. Furthermore, prosthesis fitting is not usually tailored to accommodate the characteristics of an individual's phantom limb sensations. These are experienced by almost all persons with an acquired amputation and comprise the motor and postural properties of the lost limb. This article presents and validates a multimodal dataset including an extensive qualitative and quantitative assessment of phantom limb sensations in 15 transradial amputees, surface electromyography and accelerometry data of the forearm, and measurements of gaze behavior during exercises requiring pointing or repositioning of the forearm and the phantom hand. The data also include acquisitions from 29 able-bodied participants, matched for gender and age. Special emphasis was given to tracking the visuomotor coupling between eye and hand, and between eye and phantom, during these exercises.

    Gaze, visual, myoelectric, and inertial data of grasps for intelligent prosthetics

    A hand amputation is a highly disabling event, having severe physical and psychological repercussions on a person’s life. Despite extensive efforts devoted to restoring the missing functionality via dexterous myoelectric hand prostheses, natural and robust control usable in everyday life is still challenging. Novel techniques have been proposed to overcome the current limitations, among them the fusion of surface electromyography with other sources of contextual information. We present a dataset to investigate the inclusion of eye tracking and first-person video to provide more stable intent recognition for prosthetic control. This multimodal dataset contains surface electromyography and accelerometry of the forearm, and gaze, first-person video, and inertial measurements of the head, recorded from 15 transradial amputees and 30 able-bodied subjects performing grasping tasks. Besides the intended application for upper-limb prosthetics, we also foresee uses for this dataset in studying eye-hand coordination in the context of psychophysics, neuroscience, and assistive robotics.

    Experimental validation of Xsens inertial sensors during clinical and sport motion capture applications

    Nowadays the stereophotogrammetric system is the most widely employed in biomechanical laboratories: it is considered the gold standard for its accuracy, even though it presents some limitations. Interest in inertial sensors is growing, both for entertainment applications and for biomechanical ones. The main advantage of such instruments is that they can be used outdoors with no limit on the operating volume, making it possible to record real movements in their proper environment. The first aim of the present work was therefore to evaluate the accuracy of the MTw inertial system developed by Xsens Technologies in clinical and sport applications; the approach followed was to compare the technical frames of the MTws with those of an optoelectronic system. The second aim was to define the anatomical rotation axes needed to obtain the most important data in clinical applications: the anatomical angles calculated by the joint coordinate system.
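    Once each body segment has an anatomical frame (whether from optical markers or from an inertial sensor such as the MTw), a joint angle follows from the relative rotation between the proximal and distal frames. The sketch below illustrates the geometry with the total rotation angle extracted from the rotation matrix trace; the frame names and the single-angle convention are illustrative assumptions, not the full joint coordinate system decomposition used in clinical practice.

```python
import numpy as np

def relative_joint_angle(R_proximal, R_distal):
    """Total rotation angle (degrees) between two segment frames.

    R_proximal, R_distal: 3x3 rotation matrices expressing each
    segment's (hypothetical) anatomical frame in a common global frame,
    as delivered by an orientation sensor or marker-based system.
    """
    # Express the distal frame relative to the proximal frame.
    R_rel = R_proximal.T @ R_distal
    # The rotation angle comes from the trace: tr(R) = 1 + 2*cos(theta).
    cos_theta = (np.trace(R_rel) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Example: distal segment flexed 30 degrees about the proximal x-axis.
theta = np.radians(30.0)
Rx = np.array([[1.0, 0.0, 0.0],
               [0.0, np.cos(theta), -np.sin(theta)],
               [0.0, np.sin(theta),  np.cos(theta)]])
print(round(relative_joint_angle(np.eye(3), Rx), 1))  # 30.0
```

    A clinical joint coordinate system would instead decompose R_rel into three anatomically meaningful angles (e.g. flexion/extension, ab/adduction, internal/external rotation) about non-orthogonal axes, which is the convention the abstract refers to.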

    MeganePro Script Dataset (MDSScript)

    Scripts used to process the raw data, to validate the data contained in MDS2 and MDS4, and to calibrate the accelerometers and gyroscopes of the devices.

    Head-mounted eye gaze tracking devices: an overview of modern devices and recent advances

    An increasing number of wearable devices performing eye gaze tracking have been released in recent years. Such devices can lead to unprecedented opportunities in many applications. However, staying up to date with the continuous advances and gathering the technical features that allow one to choose the best device for a specific application is not trivial. The last eye gaze tracker overview was written more than 10 years ago, while more recent devices are substantially improved in both hardware and software; an overview of current eye gaze trackers is therefore needed. This review fills the gap by providing an overview of the current level of advancement of both techniques and devices, leading to an analysis of 20 essential features in six commercially available head-mounted eye gaze trackers. The analyzed characteristics provide a useful overview of the technology currently implemented. The results show that many technical advances have been made in this field since the last survey. Current wearable devices can capture and exploit visual information unobtrusively and in real time, leading to new applications in wearable technologies that can also be used to improve rehabilitation and enable more active living for impaired persons.

    Improving Robotic Hand Prosthesis Control With Eye Tracking and Computer Vision: A Multimodal Approach Based on the Visuomotor Behavior of Grasping

    The complexity and dexterity of the human hand make the development of natural and robust control of hand prostheses challenging. Although a large number of control approaches have been developed and investigated in recent decades, limited robustness in real-life conditions has often prevented their application in clinical settings and in commercial products. In this paper, we investigate a multimodal approach that exploits eye-hand coordination to improve the control of myoelectric hand prostheses. The analyzed data are from the publicly available MeganePro Dataset 1, which includes multimodal data from transradial amputees and able-bodied subjects grasping numerous household objects with ten grasp types. A continuous grasp-type classification based on surface electromyography served as both intent detector and classifier. At the same time, the information provided by eye-hand coordination parameters, gaze data and object recognition in first-person videos allowed the identification of the object a person aims to grasp. The results show that the inclusion of visual information significantly increases the average offline classification accuracy, by up to 15.61 ± 4.22% for the transradial amputees and by up to 7.37 ± 3.52% for the able-bodied subjects, allowing transradial amputees to reach average classification accuracies comparable to those of intact subjects. This suggests that the robustness of hand prosthesis control based on grasp-type recognition can be significantly improved by including visual information extracted by leveraging natural eye-hand coordination behavior, without placing additional cognitive burden on the user.
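    The fusion idea described above (an sEMG grasp-type posterior combined with an object-conditioned prior derived from gaze and object recognition) can be illustrated with a simple product-of-probabilities rule. Everything here is a hypothetical sketch: the grasp labels, probability values, and the renormalized product fusion are illustrative assumptions, not the classifier actually used in the paper.

```python
import numpy as np

GRASPS = ["power", "pinch", "lateral", "tripod"]  # hypothetical subset

def fuse(emg_probs, object_prior):
    """Combine an sEMG grasp-type posterior with a gaze-derived prior.

    emg_probs: classifier posterior over grasp types from sEMG.
    object_prior: probability of each grasp type given the object the
    user is fixating (hypothetical values from object recognition).
    Returns the renormalized elementwise product (Bayesian-style fusion).
    """
    fused = np.asarray(emg_probs) * np.asarray(object_prior)
    return fused / fused.sum()

# sEMG alone is ambiguous between "power" and "pinch" ...
emg_probs = [0.40, 0.38, 0.12, 0.10]
# ... but the fixated object (say, a mug) strongly favors a power grasp.
object_prior = [0.70, 0.10, 0.10, 0.10]

fused = fuse(emg_probs, object_prior)
print(GRASPS[int(np.argmax(fused))])  # power
```

    The key property this sketch captures is that the visual channel disambiguates grasp types that look similar in the myoelectric signal, which is how the paper's accuracy gains arise.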