
    A Synergy-Based Optimally Designed Sensing Glove for Functional Grasp Recognition

    Achieving accurate and reliable kinematic hand pose reconstruction is a challenging task, mainly because of the complexity of hand biomechanics, where several degrees of freedom are distributed along a continuous deformable structure. Wearable sensing can represent a viable solution, since it enables more natural kinematic monitoring. However, the intrinsic accuracy (as well as the number of sensing elements) of wearable hand pose reconstruction (HPR) systems can be severely limited by ergonomics and cost considerations. In this paper, we combined the theoretical foundations of the optimal design of HPR devices based on hand synergy information, i.e., inter-joint covariation patterns, with textile goniometers based on knitted piezoresistive fabric (KPF) technology to develop, for the first time, an optimally designed under-sensed glove for measuring hand kinematics. We used only five sensors, optimally placed on the hand, and reconstructed the full hand pose (described by a kinematic model with 19 degrees of freedom) by leveraging synergistic information. The reconstructions obtained from five different subjects were used to implement an unsupervised method for the recognition of eight functional grasps, showing a high degree of accuracy and robustness.
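    The under-sensed reconstruction step described above can be sketched as a least-squares fit in the synergy subspace: five readings determine the few synergy coefficients, which in turn determine all 19 joint angles. The sketch below is illustrative only; the sensor placement, synergy basis, and dimensions are stand-ins, not the paper's optimized design.

```python
import numpy as np

rng = np.random.default_rng(0)

n_dofs, n_syn, n_sens = 19, 3, 5          # 19-DoF model, 3 synergies, 5 sensors (illustrative)
mu = rng.normal(size=n_dofs)              # mean hand pose (prior)
S = np.linalg.qr(rng.normal(size=(n_dofs, n_syn)))[0]  # synergy basis, orthonormal columns
H = np.zeros((n_sens, n_dofs))            # measurement matrix: each sensor reads one joint
H[np.arange(n_sens), [0, 4, 8, 12, 16]] = 1.0          # hypothetical sensor placement

# Simulate a true pose lying in the synergy subspace and its 5 sensor readings.
c_true = rng.normal(size=n_syn)
x_true = mu + S @ c_true
y = H @ x_true

# Least-squares estimate of the synergy coefficients from the under-sensed
# readings, then reconstruction of all 19 joint angles.
c_hat, *_ = np.linalg.lstsq(H @ S, y - H @ mu, rcond=None)
x_hat = mu + S @ c_hat
```

Because the simulated pose lies exactly in the synergy subspace and `H @ S` has full column rank, the reconstruction is exact here; real glove data would add model and measurement error.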

    Synergy-based Hand Pose Sensing: Reconstruction Enhancement

    Low-cost sensing gloves for posture reconstruction provide measurements that are limited in several regards: they are generated through an imperfectly known model, are subject to noise, and may be fewer than the number of degrees of freedom (DoFs) of the hand. Under these conditions, direct reconstruction of the hand posture is an ill-posed problem, and performance can be very poor. This paper examines the problem of estimating the posture of a human hand using (low-cost) sensing gloves, and how to improve their performance by exploiting knowledge of how humans most frequently use their hands. To increase the accuracy of pose reconstruction without modifying the glove hardware, hence basically at no extra cost, we propose to collect, organize, and exploit information on the probabilistic distribution of human hand poses in common tasks. We discuss how a database of such a priori information can be built, represented in a hierarchy of correlation patterns or postural synergies, and fused with glove data in a consistent way, so as to provide a good hand pose reconstruction in spite of insufficient and inaccurate sensing data. Simulations and experiments on a low-cost glove are reported which demonstrate the effectiveness of the proposed techniques. (Submitted to the International Journal of Robotics Research, 2012.)
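    In the Gaussian case, the fusion of a pose prior with noisy, incomplete glove readings described above is a minimum-variance (MAP) estimate. A minimal sketch, with made-up dimensions and a random prior standing in for the grasp database:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 15                       # hand DoFs (illustrative)
m = 6                        # glove measurements, fewer than the DoFs
mu = rng.normal(size=n)      # prior mean pose from the grasp database
A = rng.normal(size=(n, n))
P = A @ A.T / n              # prior covariance (encodes postural-synergy correlations)
H = rng.normal(size=(m, n))  # imperfectly known measurement model
R = 0.01 * np.eye(m)         # measurement noise covariance

x_true = rng.multivariate_normal(mu, P)
y = H @ x_true + rng.multivariate_normal(np.zeros(m), R)

# Minimum-variance (Gaussian MAP) fusion of prior and glove data.
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # gain
x_hat = mu + K @ (y - H @ mu)
P_post = P - K @ H @ P        # posterior covariance: uncertainty shrinks relative to P
```

The posterior covariance trace is strictly smaller than the prior's, which is the sense in which even fewer-than-DoF measurements improve on the database prior alone.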

    Synergy-driven performance enhancement of vision-based 3D hand pose reconstruction

    In this work we propose, for the first time, to improve the performance of a Hand Pose Reconstruction (HPR) technique based on RGBD camera data, which is affected by self-occlusions, by leveraging postural synergy information, i.e., a priori information on how humans most commonly use and shape their hands in everyday tasks. More specifically, in our approach we discard joint angle values estimated with low confidence by the vision-based HPR technique and fuse synergistic information with the remaining incomplete measurements. Preliminary experiments are reported showing the effectiveness of the proposed integration.
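    Discarding low-confidence joints and filling them from synergistic information can be sketched as conditioning a Gaussian prior on the confidently estimated joints. All dimensions and indices below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 10                                   # joint angles in the hand model (illustrative)
A = rng.normal(size=(n, n))
P = A @ A.T / n                          # synergy prior covariance over joint angles
mu = np.zeros(n)                         # prior mean pose

x_true = rng.multivariate_normal(mu, P)
obs = np.array([0, 2, 3, 6, 7])          # joints the camera estimated with high confidence
miss = np.setdiff1d(np.arange(n), obs)   # self-occluded / low-confidence joints: discarded

# Conditional Gaussian: fill the discarded joints from the confident ones and the prior.
P_oo = P[np.ix_(obs, obs)]
P_mo = P[np.ix_(miss, obs)]
x_fill = mu[miss] + P_mo @ np.linalg.solve(P_oo, x_true[obs] - mu[obs])

x_hat = x_true.copy()                    # keep the confident estimates as-is
x_hat[miss] = x_fill                     # replace the discarded ones with the conditional mean
```

The confident joints pass through untouched; only the discarded ones are replaced by their prior-conditioned expectation.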

    Kinematic synergies of hand grasps: a comprehensive study on a large publicly available dataset

    Background: Hand grasp patterns require complex coordination. Reducing the kinematic dimensionality is a key step in studying the patterns underlying hand usage and grasping: it makes it possible to define metrics for motor assessment and rehabilitation, and to develop assistive devices and prosthesis control methods. Several studies have been presented in this field, but most of them targeted a limited number of subjects, focused on postures rather than entire grasping movements, and did not perform separate analyses for tasks and subjects, which can limit their impact on rehabilitation and assistive applications. This paper provides a comprehensive mapping of synergies for hand grasps targeting activities of daily living. It clarifies several current limits of the field and fosters the development of applications in rehabilitation and assistive robotics. Methods: Hand kinematic data of 77 subjects, performing up to 20 hand grasps, were acquired with a 22-sensor CyberGlove II data glove and analyzed. Principal Component Analysis (PCA) and hierarchical cluster analysis were used to extract and group the kinematic synergies that summarize the coordination patterns available for hand grasps. Results: Twelve synergies were found to account for more than 80% of the overall variation. The first three synergies accounted for more than 50% of the total variance and consisted of: flexion and adduction of the metacarpophalangeal (MCP) joints of fingers 3 to 5 (synergy #1), palmar arching and flexion of the wrist (synergy #2), and opposition of the thumb (synergy #3). Further synergies refine movements and show higher variability among subjects. Conclusion: Kinematic synergies were extracted from a large number of subjects (77) performing grasps related to activities of daily living (20). The number of motor modules required to perform the motor tasks is higher than previously described. Twelve synergies are responsible for most of the variation in hand grasping. The first three act as primary synergies, while the remaining ones target finer movements (e.g., independence of the thumb and index finger). The results generalize the description of hand kinematics, clarifying several limits of the field and fostering the development of applications in rehabilitation and assistive robotics.
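    The PCA step described in the Methods can be sketched as follows. The data here are synthetic stand-ins for the CyberGlove recordings, generated from a few latent synergies plus noise, so the resulting component count is illustrative rather than the paper's twelve:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for the recordings: samples x 22 joint angles, driven by
# three latent "synergies" (the real study used 77 subjects and a 22-sensor glove).
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 22))
X = latent @ mixing + 0.1 * rng.normal(size=(500, 22))

# PCA via SVD of the centred data matrix.
Xc = X - X.mean(axis=0)
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_ratio = s**2 / np.sum(s**2)           # variance explained by each component

# Number of synergies (principal components) needed to explain 80% of variance.
k80 = int(np.searchsorted(np.cumsum(var_ratio), 0.80) + 1)
print(k80)  # small here, since only three latent directions dominate
```

The rows of `Vt[:k80]` are the synergy loadings; the paper additionally groups such loadings across subjects with hierarchical clustering.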

    Synergy-Based Sensor Reduction for Recording the Whole Hand Kinematics

    Simultaneous measurement of the kinematics of all hand segments is cumbersome due to sensor placement constraints, occlusions, and environmental disturbances. The aim of this study is to reduce the number of sensors required by exploiting kinematic synergies, which are considered the basic building blocks underlying hand motions. Synergies were identified from the public KIN-MUS UJI database (22 subjects, 26 representative daily activities). Ten synergies per subject were extracted as the principal components explaining at least 95% of the total variance of the angles recorded across all tasks. The 220 resulting synergies were clustered, and candidate angles for estimating the remaining angles were obtained from these groups. Different combinations of candidates were tested, the one providing the lowest error was selected, and its performance was evaluated against kinematic data from another dataset (KINE-ADL BE-UJI). As a result, the original 16 joint angles were reduced to eight: carpometacarpal flexion and abduction of the thumb, metacarpophalangeal and interphalangeal flexion of the thumb, proximal interphalangeal flexion of the index and ring fingers, metacarpophalangeal flexion of the ring finger, and the palmar arch. Average estimation errors across joints were below 10% of each joint's range of motion for all activities; across activities, errors ranged between 3.1% and 16.8%.
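    The sensor-reduction idea, estimating the discarded joint angles from the retained ones, can be sketched with a simple affine regression scored as a percentage of each joint's range of motion. The joint indices and data below are hypothetical stand-ins for the KIN-MUS UJI recordings:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in: samples x 16 joint angles driven by a few shared synergies.
latent = rng.normal(size=(400, 5))
X = latent @ rng.normal(size=(5, 16)) + 0.05 * rng.normal(size=(400, 16))

kept = [0, 1, 2, 3, 5, 9, 11, 15]                 # the 8 retained angles (hypothetical)
rest = [i for i in range(16) if i not in kept]    # the 8 angles to be estimated

# Fit an affine map from the kept angles to the remaining ones on one half of
# the data, then evaluate on the other half.
train, test = X[:200], X[200:]
Xk = np.column_stack([train[:, kept], np.ones(200)])
W, *_ = np.linalg.lstsq(Xk, train[:, rest], rcond=None)

pred = np.column_stack([test[:, kept], np.ones(200)]) @ W
rom = test[:, rest].max(axis=0) - test[:, rest].min(axis=0)   # range of motion per joint
pct_err = 100 * np.mean(np.abs(pred - test[:, rest]), axis=0) / rom
```

Because the synthetic angles share a low-dimensional synergy structure, a subset of them predicts the rest well, which is exactly the property the study exploits.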

    Optimal Reconstruction of Human Motion From Scarce Multimodal Data

    Wearable sensing has emerged as a promising solution for enabling unobtrusive and ergonomic measurement of human motion. However, the reconstruction performance of these devices strongly depends on the quality and number of sensors, which are typically limited by wearability and economic constraints. A promising approach to minimizing the number of sensors is to exploit dimensionality reduction techniques that fuse prior information with insufficient sensing signals through minimum variance estimation. These methods have been used successfully for static hand pose reconstruction, but their translation to motion reconstruction has not been attempted yet. In this work, we propose the use of functional principal component analysis to decompose multimodal, time-varying motion profiles into linear combinations of basis functions. Functional decomposition enables the estimation of the a priori covariance matrix, and hence the fusion of scarce and noisy measured data with a priori information. We also consider the problem of identifying which elemental variables to measure as the most informative for a given class of tasks. We applied our method to two different datasets of upper limb motion, D1 (joint trajectories) and D2 (joint trajectories + EMG data), considering an optimal set of measures (four joints out of seven for D1; three joints out of seven and eight EMGs out of twelve for D2). We found that our approach enables the reconstruction of upper limb motion with a median error of 0.013 ± 0.006 rad for D1 (relative median error 0.9%), and 0.038 ± 0.023 rad and 0.003 ± 0.002 mV for D2 (relative median errors 2.9% and 5.1%, respectively).

    On the Role of Haptic Synergies in Modelling the Sense of Touch and in Designing Artificial Haptic Systems

    This thesis aims at defining strategies to reduce the complexity of haptic information, with minimum loss of information, in order to design more effective haptic interfaces and artificial systems. Haptic device design can be complex, and the artificial reproduction of the full spectrum of haptic information is a daunting task that is still far from being achieved. The central idea of this work is to simplify this information by exploiting the concept of synergies, which was developed to describe the covariation patterns in multi-digit movements and forces in common motor tasks. Here I extend and exploit it in the perceptual domain as well, to find projections between the heterogeneous information manifold generated by the mechanics of touch and what can actually be perceived by humans. In this manner, design trade-offs between cost, feasibility, and quality of the rendered perception can be identified. With this as motivation, referring to cutaneous sensing, I discuss the development of a fabric-based softness display inspired by the "Contact Area Spread Rate" hypothesis, as well as the characterization of an air-jet lump display method for robot-assisted minimally invasive surgery. Considering kinaesthesia, I analyze the problem of hand posture estimation from the noisy and limited measurements provided by low-cost hand pose sensing devices. By using information about how humans most frequently use their hands, system performance is enhanced and optimal system design is enabled. Finally, an integrated device, in which a conventional kinaesthetic haptic display is combined with a cutaneous softness display, is proposed, showing that the fidelity with which softness is artificially rendered increases.

    Hand Pose Estimation with Mems-Ultrasonic Sensors

    Hand tracking is an important aspect of human-computer interaction and has a wide range of applications in extended-reality devices. However, current hand motion capture methods suffer from various limitations. For instance, vision-based hand pose estimation is susceptible to self-occlusion and changes in lighting conditions, while IMU-based tracking gloves experience significant drift and are not resistant to external magnetic field interference. To address these issues, we propose a novel, low-cost hand-tracking glove that utilizes several MEMS-ultrasonic sensors attached to the fingers to measure the distance matrix among the sensors. A lightweight deep network then reconstructs the hand pose from the distance matrix. Our experimental results demonstrate that this approach is accurate, size-agnostic, and robust to external interference. We also present the design rationale for the sensor selection, sensor configuration, circuit design, and model architecture.
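    The sensing pipeline can be sketched as: measure pairwise distances among the finger-mounted sensors, flatten the informative upper triangle, and feed it to a small network. The network below is untrained and its sizes are made up; it only illustrates the data flow, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical 3-D positions of 6 ultrasonic emitters/receivers on the fingers.
sensors = rng.normal(size=(6, 3))

# Pairwise distance matrix, as measured acoustically between every sensor pair.
diff = sensors[:, None, :] - sensors[None, :, :]
D = np.linalg.norm(diff, axis=-1)                    # (6, 6), symmetric, zero diagonal

# Only the upper triangle is informative; flatten it as the network input.
x = D[np.triu_indices(6, k=1)]                       # 15 distinct distances

# A minimal untrained MLP standing in for the paper's "lightweight deep network":
# 15 distances -> hidden layer -> 19 joint angles (all sizes illustrative).
W1, b1 = rng.normal(size=(15, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 19)), np.zeros(19)
pose = np.tanh(x @ W1 + b1) @ W2 + b2                # predicted joint angles
```

Working from inter-sensor distances rather than absolute positions is what makes the representation invariant to hand size and global placement.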

    DeepDynamicHand: A Deep Neural Architecture for Labeling Hand Manipulation Strategies in Video Sources Exploiting Temporal Information

    Humans are capable of complex manipulation interactions with the environment, relying on the intrinsic adaptability and compliance of their hands. Recently, soft robotic manipulation has attempted to reproduce such extraordinary behavior through the design of deformable yet robust end-effectors. To this end, the investigation of human behavior has become crucial to correctly inform the technological development of robotic hands that can successfully exploit environmental constraints as humans do. Among the tools robotics can leverage to achieve this objective, deep learning has emerged as a promising approach for studying, and then implementing, neuroscientific observations on the artificial side. However, current approaches tend to neglect the dynamic nature of hand pose recognition, limiting the effectiveness of these techniques in identifying the sequences of manipulation primitives underpinning action generation, e.g., during purposeful interaction with the environment. In this work, we propose a vision-based supervised Hand Pose Recognition method which, for the first time, takes temporal information into account to identify meaningful sequences of actions in grasping and manipulation tasks. More specifically, we apply deep neural networks to automatically learn features from hand posture images, consisting of frames extracted from videos of grasping and manipulation tasks with objects and external environmental constraints. For training purposes, the videos are divided into intervals, each associated with a specific action by a human supervisor. The proposed algorithm combines a Convolutional Neural Network to detect the hand within each video frame and a Recurrent Neural Network to predict the hand action in the current frame while taking into consideration the history of actions performed in previous frames. 
    Experimental validation was performed on two datasets of dynamic, hand-centric strategies in which subjects regularly interact with objects and the environment. The proposed architecture achieved very good classification accuracy on both datasets, reaching performance of up to 94% and outperforming state-of-the-art techniques. The outcomes of this study can be applied to robotics, e.g., for the planning and control of soft anthropomorphic manipulators.
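    The per-frame CNN-plus-RNN scheme can be sketched with an untrained Elman-style recurrent step; the CNN features are simulated and all sizes are illustrative, not the architecture actually used:

```python
import numpy as np

rng = np.random.default_rng(7)

n_feat, n_hid, n_cls = 64, 32, 5   # CNN feature size, RNN state size, action classes (made up)

# Untrained weights standing in for the learned model.
Wx = 0.1 * rng.normal(size=(n_feat, n_hid))
Wh = 0.1 * rng.normal(size=(n_hid, n_hid))
Wo = 0.1 * rng.normal(size=(n_hid, n_cls))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Per-frame loop: a CNN would produce `feat` from the detected hand crop; the
# recurrent state h carries the history of previous frames into each prediction.
h = np.zeros(n_hid)
preds = []
for _ in range(10):                          # 10 video frames
    feat = rng.normal(size=n_feat)           # stand-in for CNN features of this frame
    h = np.tanh(feat @ Wx + h @ Wh)          # simple Elman-style recurrent update
    preds.append(int(np.argmax(softmax(h @ Wo))))
```

The key point the abstract makes is visible in the loop: the prediction at each frame depends on `h`, which accumulates the history of earlier frames rather than classifying each frame in isolation.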

    Machine learning-based dexterous control of hand prostheses

    Upper-limb myoelectric prostheses are controlled using muscle activity information recorded on the skin surface via electromyography (EMG). Intuitive prosthetic control can be achieved by deploying statistical and machine learning (ML) tools to decipher the user's movement intent from EMG signals. This thesis proposes various means of advancing the capabilities of non-invasive, ML-based control of myoelectric hand prostheses. Two main directions are explored, namely classification-based hand grip selection and proportional finger position control using regression methods. Several practical aspects are considered with the aim of maximising the clinical impact of the proposed methodologies, which are evaluated with offline analyses as well as real-time experiments involving both able-bodied and transradial amputee participants. It has been generally accepted that the EMG signal may not always be a reliable source of control information for prostheses, mainly due to its stochastic and non-stationary properties. One particular issue associated with the use of surface EMG signals for upper-extremity myoelectric control is the limb position effect, that is, the lack of decoding generalisation under novel arm postures. To address this challenge, it is proposed to make concurrent use of EMG sensors and inertial measurement units (IMUs). It is demonstrated that this can lead to a significant improvement in both classification accuracy (CA) and real-time prosthetic control performance. Additionally, the relationship between surface EMG and inertial measurements is investigated, and it is found that these modalities are partially related, as they reflect different manifestations of the same underlying phenomenon, namely muscular activity. In the field of upper-limb myoelectric control, the linear discriminant analysis (LDA) classifier has arguably been the most popular choice for movement intent decoding. 
This is mainly attributable to its ease of implementation, low computational requirements, and acceptable decoding performance. Nevertheless, this particular method makes a strong fundamental assumption, that is, data observations from different classes share a common covariance structure. Although this assumption may often be violated in practice, it has been found that the performance of the method is comparable to that of more sophisticated algorithms. In this thesis, it is proposed to remove this assumption by making use of general class-conditional Gaussian models and appropriate regularisation to avoid overfitting issues. By performing an exhaustive analysis on benchmark datasets, it is demonstrated that the proposed approach based on regularised discriminant analysis (RDA) can offer an impressive increase in decoding accuracy. By combining the use of RDA classification with a novel confidence-based rejection policy that intends to minimise the rate of unintended hand motions, it is shown that it is feasible to attain robust myoelectric grip control of a prosthetic hand by making use of a single pair of surface EMG-IMU sensors. Most present-day commercial prosthetic hands offer the mechanical abilities to support individual digit control; however, classification-based methods can only produce pre-defined grip patterns, a feature which results in prosthesis under-actuation. Although classification-based grip control can provide a great advantage over conventional strategies, it is far from being intuitive and natural to the user. A potential way of approaching the level of dexterity enjoyed by the human hand is via continuous and individual control of multiple joints. To this end, an exhaustive analysis is performed on the feasibility of reconstructing multidimensional hand joint angles from surface EMG signals. 
    A supervised method based on the eigenvalue formulation of multiple linear regression (MLR) is then proposed to simultaneously reduce the dimensionality of the input and output variables, and its performance is compared to that of the typically used unsupervised methods, which may produce suboptimal results in this context. An experimental paradigm is finally designed to evaluate the efficacy of the proposed finger position control scheme during real-time prosthesis use. This thesis provides insight into the capacity of deploying a range of computational methods for non-invasive myoelectric control. It contributes towards developing intuitive interfaces for the dexterous control of multi-articulated prosthetic hands by transradial amputees.
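    The RDA idea discussed above, relaxing LDA's shared-covariance assumption while regularising the class-conditional covariances, can be sketched with Friedman-style regularisation; the data, dimensions, and hyperparameter values below are made up:

```python
import numpy as np

rng = np.random.default_rng(8)

# Illustrative two-class EMG-feature data (dimensions and counts are invented).
X0 = rng.normal(size=(40, 8))
X1 = 1.5 * rng.normal(size=(40, 8)) + 0.5

def rda_cov(Xk, S_pooled, lam, gamma):
    """Friedman-style RDA covariance: blend the class covariance with the
    pooled one (lam), then shrink toward a scaled identity (gamma)."""
    Sk = np.cov(Xk, rowvar=False)
    S = (1 - lam) * Sk + lam * S_pooled
    return (1 - gamma) * S + gamma * np.trace(S) / S.shape[0] * np.eye(S.shape[0])

S_pooled = np.cov(np.vstack([X0, X1]), rowvar=False)
S0 = rda_cov(X0, S_pooled, lam=0.5, gamma=0.1)   # lam=1 recovers LDA's shared covariance
S1 = rda_cov(X1, S_pooled, lam=0.5, gamma=0.1)   # lam=0 gives a fully class-specific QDA model

# Gaussian class-conditional log-likelihood, used as the classification score.
def log_gauss(x, mu, S):
    d = x - mu
    return -0.5 * (np.log(np.linalg.det(S)) + d @ np.linalg.solve(S, d))

x = X1[0]
scores = [log_gauss(x, Xc.mean(axis=0), Sc) for Xc, Sc in [(X0, S0), (X1, S1)]]
pred = int(np.argmax(scores))
```

Sweeping `lam` interpolates between LDA and class-specific Gaussian models, while `gamma` keeps the covariance estimates well-conditioned when training data are scarce.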