
    Continuous Semi-autonomous Prosthesis Control using a Depth Sensor on the Hand


    Cognitive vision system for control of dexterous prosthetic hands: Experimental evaluation

    Abstract

    Background: Recently developed dexterous prosthetic hands, such as the SmartHand and i-LIMB, are highly sophisticated: they have individually controllable fingers and a thumb that can abduct/adduct. This flexibility allows many different grasping strategies, but it also requires new control algorithms that can exploit the many available degrees of freedom. The current study presents and tests a new control method for dexterous prosthetic hands.

    Methods: The central component of the proposed method is an autonomous controller comprising a vision system with rule-based reasoning mounted on a dexterous hand (CyberHand). The controller, termed the cognitive vision system (CVS), mimics biological control and generates commands for prehension. The CVS was integrated into a hierarchical control structure: 1) the user triggers the system and controls the orientation of the hand; 2) a high-level controller automatically selects the grasp type and size; and 3) an embedded hand controller implements the selected grasp using closed-loop position/force control. The operation of the control system was tested in 13 healthy subjects who used the CyberHand, attached to the forearm, to grasp and transport 18 objects placed at two different distances.

    Results: The system correctly estimated grasp type and size (nine commands in total) in about 84% of the trials. In an additional 6% of the trials, the grasp type and/or size differed from the optimal ones but were still good enough for the grasp to succeed. When the control task was simplified by decreasing the number of possible commands, the classification accuracy increased (e.g., to 93% when guessing the grasp type only).

    Conclusions: The original outcome of this research is a novel controller, empowered by vision and reasoning, that is capable of high-level analysis (i.e., determining object properties) and autonomous decision making (i.e., selecting the grasp type and size). The automatic control eases the burden on the user, who can therefore concentrate on what to do rather than on how to do it. The tests showed that the performance of the controller was satisfactory and that the users were able to operate the system with minimal prior training.
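
    The three-level hierarchy lends itself to a compact illustration. The Python sketch below assumes invented object properties, thresholds, and rules; it is not the authors' implementation, only the division of labor between the user trigger, the high-level grasp selection, and the embedded execution layer.

        # Level 2: rule-based reasoning maps vision-estimated object
        # properties to a grasp-type/size command (rules are assumed).
        def select_grasp(width_mm: float, is_flat: bool) -> tuple[str, str]:
            if is_flat:
                grasp = "lateral"
            elif width_mm < 30:
                grasp = "tridigital"
            else:
                grasp = "palmar"
            size = "small" if width_mm < 60 else "large"
            return grasp, size

        def on_user_trigger(estimate_object, execute_grasp) -> None:
            # Level 1: the user triggers the system and orients the hand.
            width_mm, is_flat = estimate_object()
            # Level 2: the high-level controller selects grasp type and size.
            grasp, size = select_grasp(width_mm, is_flat)
            # Level 3: the embedded hand controller implements the grasp
            # with closed-loop position/force control (abstracted here).
            execute_grasp(grasp, size)

        # Usage with a mocked vision estimate and executor:
        on_user_trigger(lambda: (25.0, False),
                        lambda g, s: print(f"executing {s} {g} grasp"))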

    Semi-autonomous control of prosthetic hands based on multimodal sensing, human grasp demonstration and user intention

    Semi-autonomous control strategies for prosthetic hands provide a promising way to simplify and improve the grasping process for the user by adopting techniques usually applied in robotic grasping. Such strategies endow prosthetic hands with the ability to autonomously select and execute grasps while keeping the user in the loop to intervene at any time, triggering, accepting or rejecting decisions taken by the controller in an intuitive and easy way. In this paper, we present a semi-autonomous control strategy that allows the user to perform fluent grasping of everyday objects based on a single EMG channel and a multimodal sensor system embedded in the hand for object perception and autonomous grasp execution. We conduct a user study with 20 subjects to assess the effectiveness and intuitiveness of our semi-autonomous control strategy and compare it to a conventional electromyography-based control strategy. The results show that, compared to conventional electromyographic control, the workload is reduced by 25.9%, the physical demand is reduced by 60%, and the grasping process is accelerated by 19.4%.
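
    As a rough sketch of how a single EMG channel can keep the user in the loop, the state machine below assumes that a short muscle pulse triggers or accepts a grasp proposal while a sustained hold rejects it; the phases, signals, and this accept/reject convention are assumptions for illustration, not the paper's protocol.

        from enum import Enum, auto

        class Phase(Enum):
            IDLE = auto()       # waiting for the user
            PROPOSED = auto()   # autonomous controller proposed a grasp
            EXECUTING = auto()  # grasp is executed autonomously

        def step(phase, rising_edge, long_hold, propose, execute, cancel):
            # One control tick. `rising_edge`: the EMG signal just crossed
            # its activation threshold; `long_hold`: activation has been
            # sustained past a reject timeout (both computed upstream).
            if phase is Phase.IDLE and rising_edge:
                propose()             # perception proposes a grasp
                return Phase.PROPOSED
            if phase is Phase.PROPOSED:
                if long_hold:
                    cancel()          # sustained hold rejects the proposal
                    return Phase.IDLE
                if rising_edge:
                    execute()         # a second short pulse accepts it
                    return Phase.EXECUTING
            return phase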

    Impact of Shared Control Modalities on Performance and Usability of Semi-autonomous Prostheses

    Semi-autonomous (SA) control of upper-limb prostheses can improve performance and decrease the cognitive burden on the user. In this approach, the prosthesis is equipped with additional sensors (e.g., computer vision) that provide contextual information and enable the system to accomplish some tasks automatically. Autonomous control is fused with the volitional input of the user to compute the commands sent to the prosthesis. Although several promising prototypes demonstrating the potential of this approach have been presented, methods to integrate the two control streams (i.e., autonomous and volitional) have not been systematically investigated. In the present study, we implemented three shared control modalities (sequential, simultaneous, and continuous) and compared their performance, as well as the cognitive and physical burdens they impose on the user. In the sequential approach, the volitional input disabled the autonomous control. In the simultaneous approach, volitional input to a specific degree of freedom (DoF) activated autonomous control of the other DoFs, whereas in the continuous approach, autonomous control was always active except for the DoFs controlled by the user. The experiment was conducted with ten able-bodied subjects who used an SA prosthesis to perform reach-and-grasp tasks while reacting to audio cues (dual tasking). The results demonstrated that, compared to the manual baseline (volitional control only), all three SA modalities accomplished the task in a shorter time and with less volitional control input. The simultaneous SA modality performed worse than the sequential and continuous SA approaches. When systematic errors were introduced in the autonomous controller to generate a mismatch between the goals of the user and the controller, the performance of the SA modalities decreased substantially, even below the manual baseline; the sequential SA scheme was the least affected by these errors. The present study demonstrates that the specific approach used to integrate volitional and autonomous control significantly affects performance as well as the physical and cognitive load, and it should therefore be considered carefully when designing SA prostheses.
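
    Since the three modalities differ only in how per-DoF commands are arbitrated, a short sketch makes the distinction concrete; the dictionary interface and the convention that None means "no input" are assumptions for illustration, not the study's implementation.

        def fuse(volitional, autonomous, modality):
            # Arbitrate per-DoF commands (DoF name -> command, None =
            # inactive) under the three shared-control modalities above.
            user_active = any(v is not None for v in volitional.values())
            cmd = {}
            for dof, auto_cmd in autonomous.items():
                v = volitional.get(dof)
                if modality == "sequential":
                    # Any volitional input disables autonomous control.
                    cmd[dof] = v if user_active else auto_cmd
                elif modality == "simultaneous":
                    # Volitional input on one DoF activates autonomy
                    # on the remaining DoFs.
                    cmd[dof] = v if v is not None else (
                        auto_cmd if user_active else None)
                elif modality == "continuous":
                    # Autonomy always active except user-controlled DoFs.
                    cmd[dof] = v if v is not None else auto_cmd
            return cmd

        # Example: the user drives wrist rotation; aperture is automated.
        print(fuse({"wrist_rot": 0.4, "aperture": None},
                   {"wrist_rot": 0.0, "aperture": -0.2},
                   "continuous"))  # {'wrist_rot': 0.4, 'aperture': -0.2}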

    Quantifying prosthetic and intact limb use in upper limb amputees via egocentric video: an unsupervised, at-home study

    Analysis of the manipulation strategies employed by users of upper-limb prosthetic devices can provide valuable insights into the shortcomings of current prosthetic technology or therapeutic interventions. Typically, this problem has been approached with survey- or lab-based studies, whose prehensile-grasp-focused results do not necessarily give accurate representations of daily activity. In this work, we capture prosthesis-user behavior in the unstructured and familiar environments of the participants' own homes. Compact head-mounted video cameras recorded egocentric views of the hands during self-selected household chores. Over 60 hours of video were recorded from 8 persons with unilateral amputation or limb difference (6 transradial, 1 transhumeral, 1 shoulder). Of this, almost 16 hours of video data were analyzed by human experts using the 22-category ‘TULIP’ custom manipulation taxonomy, producing the type and duration of over 27,000 prehensile and non-prehensile manipulation tags on both upper limbs and permitting a level of objective analysis not previously possible with this population. Our analysis included unique observations on the occurrence of non-prehensile manipulation: 79% of transradial body-powered device manipulations were non-prehensile, compared to 60% for transradial myoelectric devices. Conversely, only 16-19% of intact-limb activity was non-prehensile. Additionally, multi-grasp terminal devices did not lead to increased activity compared to single-DoF devices.
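
    As an illustration of how such proportions fall out of the tagged data, the sketch below aggregates per-limb tag durations; the tuple format and the per-tag prehensile flag are assumptions standing in for the 22-category TULIP labels.

        from collections import defaultdict

        def nonprehensile_share(tags):
            # tags: iterable of (limb, category, is_prehensile, duration_s).
            # Returns each limb's fraction of tagged manipulation time
            # that was non-prehensile.
            total = defaultdict(float)
            nonpreh = defaultdict(float)
            for limb, _category, is_prehensile, dur in tags:
                total[limb] += dur
                if not is_prehensile:
                    nonpreh[limb] += dur
            return {limb: nonpreh[limb] / total[limb] for limb in total}

        # Toy data: 12 of 15 prosthesis-seconds are non-prehensile (0.8).
        print(nonprehensile_share([
            ("prosthesis", "push", False, 12.0),
            ("prosthesis", "power grasp", True, 3.0),
            ("intact", "precision grasp", True, 8.0),
            ("intact", "push", False, 2.0),
        ]))  # {'prosthesis': 0.8, 'intact': 0.2}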

    Robotic Grasping of Unknown Objects
