
    A Biomechanical Model for the Development of Myoelectric Hand Prosthesis Control Systems

    Advanced myoelectric hand prostheses aim to reproduce as much of the human hand's functionality as possible. Development of the control system of such a prosthesis is strongly connected to its mechanical design; the control system requires accurate information on the prosthesis's structure and the surrounding environment, which can make development difficult without a finalized mechanical prototype. This paper presents a new framework for the development of electromyographic hand control systems, consisting of a prosthesis model based on the biomechanical structure of the human hand. The model's dynamic structure uses an ellipsoidal representation of the phalanges. Other features include underactuation in the fingers and thumb modeled with bond graphs, and a viscoelastic contact model. The model's functions are demonstrated by the execution of lateral and tripod grasps, and evaluated with regard to joint dynamics and applied forces. Finally, additions are suggested with which this model can also be of use in mechanical design and patient training.
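
    A viscoelastic contact model of the kind mentioned in this abstract is typically a spring-damper (Kelvin-Voigt) force law on the penetration depth. The sketch below uses assumed stiffness and damping values and is not the paper's exact formulation.

```python
def contact_force(penetration, penetration_rate, k=2000.0, c=5.0):
    """Kelvin-Voigt style viscoelastic contact force (illustrative sketch).

    penetration      -- overlap depth between phalanx and object [m]
    penetration_rate -- rate of change of the penetration [m/s]
    k, c             -- assumed stiffness [N/m] and damping [N*s/m]
    """
    if penetration <= 0.0:
        return 0.0                     # no contact, no force
    f = k * penetration + c * penetration_rate
    return max(f, 0.0)                 # contact can only push, never pull

# Example: 1 mm penetration approaching at 10 mm/s -> ~2.05 N
print(contact_force(1e-3, 1e-2))
```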

    Learning and Acting in Peripersonal Space: Moving, Reaching, and Grasping

    The young infant explores its body, its sensorimotor system, and the immediately accessible parts of its environment, over the course of a few months creating a model of peripersonal space useful for reaching and grasping objects around it. Drawing on constraints from the empirical literature on infant behavior, we present a preliminary computational model of this learning process, implemented and evaluated on a physical robot. The learning agent explores the relationship between the configuration space of the arm, sensing joint angles through proprioception, and its visual perceptions of the hand and grippers. The resulting knowledge is represented as the peripersonal space (PPS) graph, where nodes represent states of the arm, edges represent safe movements, and paths represent safe trajectories from one pose to another. In our model, the learning process is driven by intrinsic motivation. When repeatedly performing an action, the agent learns the typical result, but also detects unusual outcomes, and is motivated to learn how to make those unusual results reliable. Arm motions typically leave the static background unchanged, but occasionally bump an object, changing its static position. The reach action is learned as a reliable way to bump and move an object in the environment. Similarly, once a reliable reach action is learned, it typically makes a quasi-static change in the environment, moving an object from one static position to another. The unusual outcome is that the object is accidentally grasped (thanks to the innate Palmar reflex), and thereafter moves dynamically with the hand. Learning to make grasps reliable is more complex than for reaches, but we demonstrate significant progress. Our current results are steps toward autonomous sensorimotor learning of motion, reaching, and grasping in peripersonal space, based on unguided exploration and intrinsic motivation.
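
    Structurally, the peripersonal space (PPS) graph described here is a weighted graph over arm configurations whose edges record moves found to be safe. A minimal sketch of such a data structure, with assumed joint-angle tuples as nodes and a Dijkstra path search (not the authors' implementation), could look like this:

```python
import heapq
from collections import defaultdict

class PPSGraph:
    """Toy peripersonal-space graph: nodes are arm configurations
    (joint-angle tuples), edges are moves observed to be safe."""

    def __init__(self):
        self.neighbors = defaultdict(dict)   # node -> {node: cost}

    def add_safe_move(self, q_from, q_to, cost=1.0):
        self.neighbors[q_from][q_to] = cost
        self.neighbors[q_to][q_from] = cost  # treat moves as reversible

    def safe_path(self, start, goal):
        """Dijkstra search for a safe trajectory from start to goal."""
        frontier = [(0.0, start, [start])]
        visited = set()
        while frontier:
            dist, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            for nxt, cost in self.neighbors[node].items():
                if nxt not in visited:
                    heapq.heappush(frontier, (dist + cost, nxt, path + [nxt]))
        return None                          # no safe trajectory known

# Usage: three arm poses connected by two safe moves
g = PPSGraph()
g.add_safe_move((0.0, 0.0), (0.5, 0.1))
g.add_safe_move((0.5, 0.1), (0.9, 0.4))
print(g.safe_path((0.0, 0.0), (0.9, 0.4)))
```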

    A dynamic model for action understanding and goal-directed imitation

    The understanding of other individuals' actions is a fundamental cognitive skill for all species living in social groups. Recent neurophysiological evidence suggests that an observer may achieve this understanding by mapping visual information onto his own motor repertoire to reproduce the action effect. However, due to differences in embodiment, environmental constraints or motor skills, this mapping very often cannot be direct. In this paper, we present a dynamic network model which represents in its layers the functionality of neurons in different interconnected brain areas known to be involved in action observation/execution tasks. The model aims at substantiating the idea that action understanding is a continuous process which combines sensory evidence, prior task knowledge and a goal-directed matching of action observation and action execution. The model is tested in variations of an imitation task in which an observer with dissimilar embodiment tries to reproduce the perceived or inferred end-state of a grasping-placing sequence. We also propose and test a biologically plausible learning scheme which allows a goal-directed organization of the distributed network to be established during practice. The modeling results are discussed with respect to recent experimental findings in action observation/execution studies. (European Commission JAST project IST-2-003747-I)
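
    The layered dynamic network described here can be illustrated with a generic dynamic neural field update. The single-field Amari-type step below (assumed kernel, gain, and resting level) only shows the basic mechanism of combining sensory input with lateral interaction, not the paper's multi-layer architecture.

```python
import numpy as np

def field_step(u, s, dt=0.01, tau=0.1, h=-1.0, w_exc=2.0, w_inh=0.5):
    """One Euler step of a generic Amari-type dynamic neural field.

    u -- current field activation over a 1-D feature dimension
    s -- external (e.g., visual) input to the field
    Lateral interaction: local excitation minus global inhibition.
    """
    f = 1.0 / (1.0 + np.exp(-u))                     # sigmoidal output
    excitation = w_exc * np.convolve(f, np.ones(5) / 5.0, mode="same")
    inhibition = w_inh * f.sum() / f.size
    du = (-u + h + s + excitation - inhibition) / tau
    return u + dt * du

u = np.zeros(100)
s = np.zeros(100); s[45:55] = 3.0                    # localized input bump
for _ in range(200):
    u = field_step(u, s)
print(u.argmax())                                    # activation peak forms under the input
```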

    A Continuous Grasp Representation for the Imitation Learning of Grasps on Humanoid Robots

    Models and methods are presented which enable a humanoid robot to learn reusable, adaptive grasping skills. Mechanisms and principles in human grasp behavior are studied. The findings are used to develop a grasp representation capable of retaining specific motion characteristics and of adapting to different objects and tasks. Based on this representation, a framework is proposed which enables the robot to observe human grasping, learn grasp representations, and infer executable grasping actions.

    Robust control of flexible space vehicles with minimum structural excitation: On-off pulse control of flexible space vehicles

    Both feedback and feedforward control approaches for uncertain dynamical systems (in particular, with uncertainty in structural mode frequency) are investigated. The control objective is to achieve a fast settling time (high performance) and robustness (insensitivity) to plant uncertainty. Preshaping of an ideal, time-optimal control input using a tapped-delay filter is shown to provide a fast settling time with robust performance. A robust, non-minimum-phase feedback controller is synthesized with particular emphasis on its proper implementation for a non-zero set-point control problem. It is shown that a properly designed feedback controller performs well, as compared with a time-optimal open-loop controller with special preshaping for performance robustness. Also included are two separate papers by the same authors on this subject.
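
    Preshaping a command with a tapped-delay filter amounts to convolving it with a short impulse sequence timed to the flexible mode. The sketch below is a generic two-impulse zero-vibration shaper with an assumed mode frequency and damping ratio, not the specific filter designed in the paper.

```python
import numpy as np

def zv_shaper(freq_hz, zeta, dt):
    """Two-impulse zero-vibration shaper for one flexible mode.

    freq_hz -- assumed natural frequency of the structural mode [Hz]
    zeta    -- assumed damping ratio
    dt      -- sample period of the command sequence [s]
    """
    wd = 2.0 * np.pi * freq_hz * np.sqrt(1.0 - zeta**2)   # damped frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    amps = np.array([1.0, K]) / (1.0 + K)                  # impulse amplitudes
    delay = int(round((np.pi / wd) / dt))                  # half damped period
    taps = np.zeros(delay + 1)
    taps[0], taps[-1] = amps
    return taps

dt = 0.001
shaper = zv_shaper(freq_hz=1.0, zeta=0.02, dt=dt)
bang_bang = np.concatenate([np.ones(500), -np.ones(500)])  # crude time-optimal input
shaped = np.convolve(bang_bang, shaper)                    # preshaped command
print(len(shaper), shaped.max())
```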

    Quantifying prehension in persons with stroke post rehabilitation

    This study describes the analysis of reaching and grasping abilities of the hemiparetic arm and hand of patients post stroke after a series of interactive virtual reality (VR) simulated training sessions and conventional physical therapy of similar intensity. Six subjects participated in VR training and five subjects in clinical rehabilitation for two weeks. Subjects' finger joint angles were measured during a kinematic reach-to-grasp test using a CyberGlove®, and arm joint angles were measured using the trackSTAR™ system, prior to training and after training. Downward force applied to the object during grasping was assessed using a Nano17™ force/torque sensor system added to the reach-to-grasp test paradigm for the VR-trained subjects. Results from total movement time, grasping time, and average applied force show that subjects significantly decreased their average kinematic times and the force applied to the object during reaching and grasping tasks. Classification of hand postures using Linear Discriminant Analysis (LDA) during the reaching phase of movement shows an improvement in subjects' accuracy and ability to preshape their fingers post training in both groups. A system utilizing magnetic trackers, a data glove, and a force sensor is sensitive to changes in motor performance elicited by a robotically facilitated, virtually simulated motor intervention and physical therapy of similar intensity.
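
    An LDA posture classification step of the kind used here can be sketched as follows. The joint-angle data are synthetic stand-ins for glove measurements, and the two posture classes and scikit-learn pipeline are illustrative assumptions rather than the study's actual analysis.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for glove data: 200 samples x 14 joint angles,
# drawn around two assumed posture templates (cylindrical vs. pinch).
template_cyl = rng.uniform(20, 60, size=14)
template_pinch = rng.uniform(0, 30, size=14)
X = np.vstack([
    template_cyl + rng.normal(0, 5, size=(100, 14)),
    template_pinch + rng.normal(0, 5, size=(100, 14)),
])
y = np.array([0] * 100 + [1] * 100)         # 0 = cylindrical, 1 = pinch

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)   # per-fold classification accuracy
print(scores.mean())
```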

    Cognitive vision system for control of dexterous prosthetic hands: Experimental evaluation

    Background: Dexterous prosthetic hands that were developed recently, such as SmartHand and i-LIMB, are highly sophisticated; they have individually controllable fingers and a thumb that is able to abduct/adduct. This flexibility allows implementation of many different grasping strategies, but also requires new control algorithms that can exploit the many degrees of freedom available. The current study presents and tests the operation of a new control method for dexterous prosthetic hands.
    Methods: The central component of the proposed method is an autonomous controller comprising a vision system with rule-based reasoning mounted on a dexterous hand (CyberHand). The controller, termed cognitive vision system (CVS), mimics biological control and generates commands for prehension. The CVS was integrated into a hierarchical control structure: 1) the user triggers the system and controls the orientation of the hand; 2) a high-level controller automatically selects the grasp type and size; and 3) an embedded hand controller implements the selected grasp using closed-loop position/force control. The operation of the control system was tested in 13 healthy subjects who used the CyberHand, attached to the forearm, to grasp and transport 18 objects placed at two different distances.
    Results: The system correctly estimated grasp type and size (nine commands in total) in about 84% of the trials. In an additional 6% of the trials, the grasp type and/or size were different from the optimal ones, but they were still good enough for the grasp to be successful. If the control task was simplified by decreasing the number of possible commands, the classification accuracy increased (e.g., 93% for guessing the grasp type only).
    Conclusions: The original outcome of this research is a novel controller empowered by vision and reasoning and capable of high-level analysis (i.e., determining object properties) and autonomous decision making (i.e., selecting the grasp type and size). The automatic control eases the burden on the user and, as a result, the user can concentrate on what he/she does, not on how he/she should do it. The tests showed that the performance of the controller was satisfactory and that the users were able to operate the system with minimal prior training.
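
    The rule-based reasoning that maps estimated object properties to a grasp command can be illustrated with a small decision function. The object features, thresholds, and grasp/aperture labels below are assumptions chosen for illustration, not the CVS rule set.

```python
def select_grasp(diameter_mm, height_mm):
    """Toy rule-based mapping from estimated object size to a grasp command.

    Thresholds and grasp/aperture names are illustrative assumptions.
    Returns (grasp_type, aperture_size).
    """
    if diameter_mm < 30 and height_mm < 30:
        return ("pinch", "small")
    if diameter_mm < 30:
        return ("lateral", "small")
    if diameter_mm < 70:
        return ("palmar", "medium")
    return ("palmar", "large")

# Example: a 55 mm diameter can -> palmar grasp, medium aperture
print(select_grasp(diameter_mm=55, height_mm=110))
```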