
    Understanding the hand: robotics, virtual reality and the neuroscience of control

    Our primary interaction with the world is through the spontaneous generation of movement. The hand in particular is a critical tool that allows humans to perform a myriad of dexterous manipulations. The brain's ability to control and coordinate movement across the many joints of the hand seems effortless, and it remains unrivalled by any artificial system confronted with such a high-dimensional structure. Replicating this naturalistic control in a prosthetic hand therefore poses a major challenge. Whereas previous research studied the hand under constrained laboratory conditions, this thesis analyses hand movements recorded during free behaviour. Using various statistical analyses, the kinematic structure of hand movements is examined across participants. By exploiting the kinematic correlations of natural hand movements, we demonstrated that a data-driven method can efficiently reconstruct missing digits from the information carried by the intact fingers. To overcome the laborious process of labelling free behaviour, we then developed a semi-supervised approach that allowed us to accurately measure the frequency and duration of different grasps throughout everyday life. Further analysis revealed no significant differences in movement complexity between the dominant and non-dominant hand. We also found that a small number of kinematic synergies are consistent across participants, while the remainder of each participant's kinematic space is dominated by individuality.

    To address these findings, we developed an ecologically valid Virtual Reality (VR) platform (EthoPlatform) that places the human directly in the control loop. Using EthoPlatform, participants' performance was objectively quantified while their joint angles were transformed by dimensionality reduction algorithms and reconstructed on the VR hand in real time. With this paradigm, we compared personalised models against models trained on databases of aggregate data, and found that models trained on the dominant and the non-dominant hand yield similar performance, in contrast to models trained on aggregate data. We then examined the impact of non-linear dimensionality reduction algorithms on human performance and found that reconstruction error is directly related to participants' performance. These findings are important for the development of prosthetic hand controllers: a model tailored to the individual can now be built for controlling the prosthesis from a dataset acquired with the non-amputated hand, and non-linear models can be deployed that better capture the kinematic structure of hand movements while requiring less information.

    To validate the results obtained in VR and to close the gap between experimental paradigms and real-world applications, a dexterous artificial hand (EthoHand) was developed. In contrast to other robot hands that focus on grasping, EthoHand was designed to facilitate complex in-hand manipulation; an appropriate thumb articulation allows it to go beyond grasping and perform multi-planar thumb movements similar to those of the real hand. Finally, the ability of dimensionality reduction algorithms to control a dexterous artificial hand was demonstrated on the physical hand. Together, the findings of this thesis present a coherent approach to analysing the complex structure of natural hand movements, and offer an alternative avenue towards superior prosthetic hand controllers through personalised models.
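    The two data-driven steps described above (extracting kinematic synergies and reconstructing a missing digit from the intact fingers) can be illustrated with a minimal sketch. This is not the thesis code: the 19-joint glove layout, the synthetic data, and the choice of PCA plus linear regression are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for a data-glove recording: N samples of D joint angles.
# The 19-joint layout and the low-rank generative model are assumptions.
rng = np.random.default_rng(0)
N, D = 5000, 19
latent = rng.standard_normal((N, 4))           # a few underlying "synergies"
mixing = rng.standard_normal((4, D))
angles = latent @ mixing + 0.1 * rng.standard_normal((N, D))

# 1) Kinematic synergies: PCA on the joint-angle time series, counting how
#    many components are needed to explain 90% of the variance.
pca = PCA().fit(angles)
explained = np.cumsum(pca.explained_variance_ratio_)
n_synergies = int(np.searchsorted(explained, 0.9) + 1)
print(f"{n_synergies} synergies explain 90% of the variance")

# 2) Missing-digit reconstruction: exploit inter-digit correlation by
#    regressing the absent digit's joints on the intact fingers' joints.
missing = [0, 1, 2, 3]                         # one finger's joints (assumed layout)
intact = [c for c in range(D) if c not in missing]

model = LinearRegression().fit(angles[:, intact], angles[:, missing])
reconstructed = model.predict(angles[:, intact])
rmse = np.sqrt(np.mean((reconstructed - angles[:, missing]) ** 2))
print(f"reconstruction RMSE: {rmse:.3f} (arbitrary units)")
```

    In practice the regression would be trained on recordings from the intact hand and applied to drive the missing digit of a prosthesis; the low reconstruction error achievable with such simple models reflects the strong inter-digit correlations reported in the thesis.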

    Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking.

    Eye movements are the only directly observable behavioural signals that are highly correlated with actions at the task level; they are proactive, preceding body movements, and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders that lead to paralysis, including stroke, spinal cord injury, Parkinson's disease, multiple sclerosis and muscular dystrophy, as well as in amputees. Despite this advantage, eye tracking is not widely used to control robotic systems for movement-impaired patients, largely because of poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking, using our GT3D binocular eye tracker, with a custom-designed 3D head-tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. Users can move their own hand to any location in the workspace simply by looking at the target and winking once. This purely eye-tracking-based system lets the end-user retain free head movement while achieving high spatial end-point accuracy, on the order of 6 cm RMSE in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a three-dimensional space-filling Peano curve while the user tracks it with their eyes. This yields a fully automated calibration procedure with several thousand calibration points, versus the dozen or so points of standard approaches, resulting in beyond state-of-the-art 3D accuracy and precision.
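    The calibration idea (pairing thousands of eye-feature samples, collected while the robot traces a space-filling curve, with the robot's known 3D positions, then fitting a regression and reporting per-dimension RMSE) can be sketched as follows. The feature layout, the polynomial ridge regression and the synthetic data are assumptions for illustration and do not reflect the actual GT3D implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Synthetic stand-in for the calibration data. In the real setup each sample
# would pair binocular eye features (here: four values standing in for
# left/right pupil coordinates; this layout is an assumption, not the GT3D
# format) with the robot's known 3D end-point as it traces the curve.
rng = np.random.default_rng(1)
n_samples = 3000                          # thousands of points vs. ~a dozen
eye = rng.uniform(-1.0, 1.0, size=(n_samples, 4))
xyz = np.column_stack([
    0.4 * eye[:, 0] + 0.1 * eye[:, 2],
    0.4 * eye[:, 1] + 0.1 * eye[:, 3],
    0.3 * (eye[:, 0] - eye[:, 2]),        # crude vergence-like depth cue
]) + 0.02 * rng.standard_normal((n_samples, 3))

# Fit a smooth mapping from eye features to 3D end-points on the first part
# of the recording, then report per-dimension RMSE on held-out samples
# (the paper reports errors on the order of 6 cm per dimension).
calib = make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=1e-3))
calib.fit(eye[:2000], xyz[:2000])

pred = calib.predict(eye[2000:])
rmse = np.sqrt(np.mean((pred - xyz[2000:]) ** 2, axis=0))
print("per-dimension RMSE:", rmse)
```

    The dense sampling along the curve is what makes a richer regression model feasible; with only a dozen fixation targets, as in standard calibration routines, a mapping of this form would be badly underdetermined.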