6 research outputs found

    A two DoF finger for a biomechatronic artificial hand

    Current prosthetic hands are basically simple grippers with one or two degrees of freedom, which barely restore the capability of the thumb-index pinch. Although most amputees consider this performance acceptable for everyday tasks, there is ample room for improvement by exploiting recent progress in mechatronic design and technology. We are developing a novel prosthetic hand featuring multiple degrees of freedom, tactile sensing capabilities, and distributed control. Our main goal is to pursue an integrated design approach in order to fulfil critical requirements such as cosmetics, controllability, low weight, low energy consumption and noiselessness. This approach can be summarized by the definition "biomechatronic design", which means developing mechatronic systems inspired by living beings and able to work harmoniously with them. This paper describes the first implementation of a single finger of a future biomechatronic hand. The finger has a modular design, which makes it possible to obtain hands with different degrees of freedom and grasping capabilities. Current developments include the implementation of a hand comprising three fingers (opposing thumb, index and middle) and an embedded controller.

    On the development of a cybernetic prosthetic hand

    The human hand is the end organ of the upper limb, which in humans serves the important function of prehension, as well as being an important organ for sensation and communication. It is a marvellous example of how a complex mechanism can be implemented, capable of realizing very complex and useful tasks through a very effective combination of mechanisms, sensing, actuation and control functions. This thesis presents the road towards the realization of a cybernetic hand. After a detailed analysis of the model, the human hand, a thorough review of the state of the art of artificial hands has been carried out. In particular, the performance of prosthetic hands used in clinical practice has been compared with that of research prototypes, for both prosthetic and robotic applications. By following a biomechatronic approach, i.e. by comparing the characteristics of these hands with the natural model, the limitations of current artificial devices are highlighted, thus outlining the design goals for a new cybernetic device. Three hand prototypes with a high number of degrees of freedom have been realized and tested: the first uses microactuators embedded inside the structure of the fingers, while the second and third exploit the concept of microactuation to increase the dexterity of the hand while keeping control simple. In particular, a framework for the definition and realization of closed-loop electromyographic control of these devices has been presented and implemented. The results are quite promising, suggesting that in the future there could be two different approaches to the realization of artificial devices. On one side there could be EMG-controlled hands, with compliant fingers but only one active degree of freedom. On the other side, higher-performance artificial hands could be directly interfaced with the peripheral nervous system, thus establishing bi-directional communication with the human brain.

    Sensory Feedback for Upper-Limb Prostheses: Opportunities and Barriers

    The addition of sensory feedback to upper-limb prostheses has been shown to improve control, increase embodiment, and reduce phantom limb pain. However, most commercial prostheses do not incorporate sensory feedback, for several reasons. This paper focuses on the major challenges: a lack of deep understanding of user needs, the unavailability of tailored, realistic outcome measures, and the segregation between research on control and research on sensory feedback. Methods such as the Person-Based Approach and co-creation can improve the design and testing process, and stronger collaboration between researchers can integrate the different areas of prosthetics research to accelerate the translation process.

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator as the supervisor with a machine as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as a lack of portability and robustness, and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capturing human intent/commands. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement).
    As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors which cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography sensors (MMGs). The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6% respectively were achieved for 12-gesture classification. In real-time tests, an accuracy of 95.6% was achieved for 5-gesture classification. It has previously been noted that MMG sensors are susceptible to motion-induced interference; the thesis also establishes that arm pose changes the measured signal. This thesis introduces a new method of fusing IMU and MMG data to provide a classification that is robust to both of these sources of interference. Additionally, an improvement in orientation estimation and a new orientation estimation algorithm are proposed.
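    To make the classification step concrete: the thesis's actual features and pipeline are not given in this abstract, but a minimal sketch of Linear Discriminant Analysis on synthetic MMG-like feature vectors (six channels, twelve gesture classes, all data and parameters illustrative) could look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n_gestures, n_channels, n_per = 12, 6, 40

# Synthetic per-window feature vectors (standing in for e.g. per-channel
# MMG signal energy): one Gaussian cluster per gesture class.
X = np.vstack([rng.normal(loc=g, scale=0.5, size=(n_per, n_channels))
               for g in range(n_gestures)])
y = np.repeat(np.arange(n_gestures), n_per)

# Random 75/25 train/test split.
idx = rng.permutation(len(y))
split = int(0.75 * len(y))
tr, te = idx[:split], idx[split:]

# LDA with a shared covariance and equal priors: linear discriminant
# d_k(x) = x . (Sigma^-1 mu_k) - 0.5 mu_k^T Sigma^-1 mu_k
means = np.array([X[tr][y[tr] == k].mean(axis=0) for k in range(n_gestures)])
centered = X[tr] - means[y[tr]]
cov_inv = np.linalg.inv(centered.T @ centered / len(tr))
W = means @ cov_inv                          # one weight vector per class
b = -0.5 * np.einsum("kd,kd->k", W, means)   # per-class bias term

pred = np.argmax(X[te] @ W.T + b, axis=1)
accuracy = (pred == y[te]).mean()
print(f"LDA accuracy on synthetic 12-gesture data: {accuracy:.3f}")
```

    A real system would replace the synthetic clusters with features extracted from windowed MMG recordings; an SVM would be a drop-in alternative at the classification stage.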
    These improvements to the robustness of the system provide the first solution able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb which is naturally indicative of intent to perform a specific hand pose, and trigger this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent, and as the desire for a simple, universal interface increases. Such systems have the potential to impact significantly on the quality of life of prosthetic users and others.
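    The abstract does not describe the proposed orientation estimation algorithm itself, but the general idea of fusing a drifting gyroscope with a noisy accelerometer can be illustrated with a minimal complementary filter (a standard baseline technique; all values below are simulated, not taken from the thesis):

```python
import random

def complementary_tilt(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse gyroscope angular rate (rad/s) with accelerometer-derived
    tilt (rad): the gyro term tracks fast motion, while the small
    accelerometer term continually corrects slow gyro drift."""
    angle = accel_angles[0]
    estimates = []
    for w, a in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + w * dt) + (1 - alpha) * a
        estimates.append(angle)
    return estimates

# Simulated stationary arm held at 0.5 rad: the gyro has a constant
# bias (pure integration would drift), the accelerometer is noisy.
random.seed(1)
true_angle = 0.5
gyro = [0.02] * 2000                                        # bias, rad/s
accel = [true_angle + random.gauss(0, 0.05) for _ in range(2000)]
est = complementary_tilt(gyro, accel)
print(f"final estimate: {est[-1]:.3f} rad (true {true_angle} rad)")
```

    Integrating the biased gyro alone would accumulate 0.4 rad of drift over these 2000 steps; the filtered estimate stays close to the true angle while remaining far smoother than the raw accelerometer signal.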