6,539 research outputs found

    Inter-Joint Coordination Deficits Revealed in the Decomposition of Endpoint Jerk During Goal-Directed Arm Movement After Stroke

    It is well documented that neurological deficits after stroke can disrupt the motor control processes that determine the smoothness of reaching movements. The smoothness of hand trajectories during multi-joint reaching depends on shoulder and elbow joint angular velocities and their successive derivatives, as well as on the instantaneous arm configuration and its rate of change. Right-handed survivors of unilateral hemiparetic stroke and neurologically intact control participants held the handle of a two-joint robot and made horizontal planar reaching movements. We decomposed endpoint jerk into components related to shoulder and elbow joint angular velocity, acceleration, and jerk. In the most severely impaired stroke survivors, we observed an abnormal decomposition pattern consistent with deficits of inter-joint coordination. We then used numerical simulations of reaching movements to test whether the specific pattern of inter-joint coordination deficits observed experimentally could be explained by a general increase in motor noise related to weakness or by an impaired ability to compensate for multi-joint interaction torques. Simulation results suggest that the observed deficits in movement smoothness after stroke more likely reflect an impaired ability to compensate for multi-joint interaction torques than the mere presence of elevated motor noise.
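
    The decomposition follows from a kinematic identity: with hand position x = f(q) and Jacobian J(q), differentiating x' = J q' twice more gives x''' = J q''' + 2 J' q'' + J'' q', so endpoint jerk splits into joint-jerk, joint-acceleration, and joint-velocity components. Below is a minimal sketch of that split for a planar two-link arm; the link lengths and the finite-difference scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def jacobian(q, l1=0.30, l2=0.33):
    """Jacobian of a planar two-link arm (shoulder angle q[0], elbow q[1])."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def endpoint_jerk_terms(q, qd, qdd, qddd, dt=1e-4):
    """Split endpoint jerk via x''' = J qddd + 2 Jdot qdd + Jddot qd.

    Jdot and Jddot are estimated by central differences of J along the
    local quadratic joint path q(t) = q + qd*t + 0.5*qdd*t**2.
    """
    Jp = jacobian(q + qd * dt + 0.5 * qdd * dt**2)
    J0 = jacobian(q)
    Jm = jacobian(q - qd * dt + 0.5 * qdd * dt**2)
    Jdot = (Jp - Jm) / (2 * dt)
    Jddot = (Jp - 2 * J0 + Jm) / dt**2
    return {"joint_jerk": J0 @ qddd,        # angular-jerk contribution
            "joint_accel": 2 * Jdot @ qdd,  # angular-acceleration contribution
            "joint_vel": Jddot @ qd}        # angular-velocity / configuration term
```

    Comparing the relative magnitudes of the three returned terms over a reach is the kind of signature such a decomposition yields for contrasting stroke survivors with controls.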

    Toward a self-organizing pre-symbolic neural model representing sensorimotor primitives

    The acquisition of symbolic and linguistic representations of sensorimotor behavior is a cognitive process performed by an agent when it is executing and/or observing its own and others' actions. According to Piaget's theory of cognitive development, these representations develop during the sensorimotor stage and the pre-operational stage. We propose a model that relates the conceptualization of higher-level information from visual stimuli to the development of the ventral/dorsal visual streams. The model employs a neural network architecture incorporating a predictive sensory module based on an RNNPB (Recurrent Neural Network with Parametric Biases) and a horizontal product model. We exemplify this model with a robot that passively observes an object to learn its features and movements. During the learning process of observing sensorimotor primitives, i.e., a set of arm-movement trajectories and the object features toward which they are oriented, the pre-symbolic representation self-organizes in the parametric units. These representational units act as bifurcation parameters, guiding the robot to recognize and predict the various learned sensorimotor primitives. The pre-symbolic representation also accounts for the learning of sensorimotor primitives in a latent-learning context.
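
    The core mechanism of an RNNPB is a small parametric-bias vector injected into the recurrent dynamics at every time step and itself shaped by backpropagation, one vector per training sequence, so that the PB space self-organizes into a compact code for the learned primitives. A minimal PyTorch sketch of that mechanism follows; dimensions and names are hypothetical, and the paper's horizontal product component is omitted.

```python
import torch
import torch.nn as nn

class RNNPB(nn.Module):
    """Elman-style RNN whose input is augmented with a learnable per-sequence
    parametric-bias (PB) vector; weights and PB codes are trained jointly."""

    def __init__(self, obs_dim, hid_dim, pb_dim, n_seqs):
        super().__init__()
        self.cell = nn.RNNCell(obs_dim + pb_dim, hid_dim)
        self.out = nn.Linear(hid_dim, obs_dim)
        self.pb = nn.Parameter(torch.zeros(n_seqs, pb_dim))  # one PB code per sequence

    def forward(self, seq, seq_id):
        """seq: (T, obs_dim) tensor; predicts seq[1:] from seq[:-1] and the PB."""
        h = torch.zeros(1, self.cell.hidden_size)
        pb = self.pb[seq_id].unsqueeze(0)
        preds = []
        for x in seq[:-1]:
            h = self.cell(torch.cat([x.unsqueeze(0), pb], dim=1), h)
            preds.append(self.out(h))
        return torch.cat(preds)

# training sketch: minimize nn.functional.mse_loss(model(seq, i), seq[1:])
# with Adam, updating the shared weights and self-organizing the PB codes.
```

    After training, nearby PB vectors index similar primitives, and holding a PB fixed while running the network forward regenerates the corresponding predicted trajectory, which is how such units can act as bifurcation parameters.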

    Human Motion Trajectory Prediction: A Survey

    With growing numbers of intelligent autonomous systems in human environments, the ability of such systems to perceive, understand, and anticipate human behavior becomes increasingly important. Specifically, predicting the future positions of dynamic agents, and planning with such predictions in mind, are key tasks for self-driving vehicles, service robots, and advanced surveillance systems. This paper provides a survey of human motion trajectory prediction. We review, analyze, and structure a large selection of work from different communities and propose a taxonomy that categorizes existing methods based on the motion modeling approach and the level of contextual information used. We provide an overview of existing datasets and performance metrics, discuss limitations of the state of the art, and outline directions for further research.
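
    As a concrete anchor for the taxonomy's simplest class of physics-based motion models, a constant-velocity extrapolation is the usual baseline in this literature. In the sketch below, the sampling rate and the 8-observed/12-predicted protocol are illustrative of common pedestrian benchmarks, not prescribed by the survey.

```python
import numpy as np

def constant_velocity_predict(track, horizon=12, dt=0.4):
    """Constant-velocity baseline: extrapolate the last observed velocity.

    track: (T, 2) array of observed x, y positions sampled every dt seconds
    (e.g. T = 8 frames at 2.5 Hz on pedestrian benchmarks).
    Returns a (horizon, 2) array of predicted positions.
    """
    v = (track[-1] - track[-2]) / dt            # last finite-difference velocity
    steps = np.arange(1, horizon + 1)[:, None]  # prediction steps as a column
    return track[-1] + steps * v * dt
```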

    Spatio-Temporal Avoidance of Predicted Occupancy in Human-Robot Collaboration

    This paper addresses the human-robot collaboration (HRC) challenge of integrating predictions of human activity so that the robot can respond both proactively and reactively. Prior works that treat current or predicted human poses as static obstacles are either too nearsighted or too conservative in planning, potentially delaying robot paths. Time-varying predictions of human poses, by contrast, enable robot paths that avoid anticipated human poses and are synchronized dynamically in time and space. Herein, a proactive path planning method, denoted STAP, is presented that uses spatiotemporal human occupancy maps to find robot trajectories that anticipate human movements, allowing the robot to pass without stopping. STAP also anticipates delays from the robot speed restrictions required by ISO/TS 15066 speed and separation monitoring (SSM), and it uses a sampling-based planning algorithm based on RRT* to solve the spatiotemporal motion planning problem and find paths of minimum expected duration. Experimental results show that STAP generates paths of shorter duration and greater average human-robot separation distance throughout tasks. Additionally, STAP more accurately estimates robot trajectory durations in HRC, which is useful for proactively sequencing robot tasks.
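
    For the SSM side, ISO/TS 15066 in effect imposes a separation-dependent speed cap that a time-aware planner can anticipate. A simplified sketch of such a cap, solving the protective-separation inequality for the robot speed, is given below; the parameter values are illustrative, and the standard's full formula carries additional uncertainty terms.

```python
import math

def ssm_max_speed(d, v_h=1.6, t_r=0.1, t_s=0.2, a=2.0, c=0.2):
    """Max robot speed satisfying a simplified SSM constraint:
        d >= v_h * (t_r + t_s) + v_r * t_r + v_r**2 / (2 * a) + c
    where d is the current human-robot separation [m], v_h an assumed human
    approach speed [m/s], t_r/t_s reaction and stop-signal times [s],
    a the robot braking deceleration [m/s^2], and c a safety margin [m].
    """
    slack = d - c - v_h * (t_r + t_s)
    if slack <= 0.0:
        return 0.0  # human too close: the robot must stop
    # solve v**2/(2*a) + v*t_r - slack = 0 for the positive root
    return a * (math.sqrt(t_r * t_r + 2.0 * slack / a) - t_r)
```

    A spatiotemporal planner can evaluate this bound along candidate trajectories to estimate realistic traversal times instead of assuming the robot travels at full speed.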

    Learning Multimodal Latent Dynamics for Human-Robot Interaction

    This article presents a method for learning well-coordinated Human-Robot Interaction (HRI) from Human-Human Interactions (HHI). We devise a hybrid approach that uses Hidden Markov Models (HMMs) as latent-space priors for a Variational Autoencoder to model a joint distribution over the interacting agents. We leverage the interaction dynamics learned from HHI to learn HRI, and we incorporate the conditional generation of robot motions from human observations into training, thereby predicting more accurate robot trajectories. The generated robot motions are further adapted with inverse kinematics to ensure the desired physical proximity to the human, combining the ease of joint-space learning with accurate task-space reachability. For contact-rich interactions, we modulate the robot's stiffness using the HMM segmentation to yield compliant interaction. We verify the effectiveness of our approach, deployed on a humanoid robot, via a user study. Our method generalizes well to various humans despite being trained on data from just two humans, and users perceive it as more human-like, timely, and accurate, ranking it with a higher degree of preference over other baselines.
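
    One standard way to generate robot motions conditioned on human observations with an HMM over a joint latent space is per-state Gaussian conditioning. The abstract does not state that the paper uses exactly this recipe, so treat the sketch below as the generic construction.

```python
import numpy as np

def condition_robot_on_human(z_h, mu, sigma, d_h):
    """Predict the robot part of one HMM state's joint Gaussian over the
    concatenated [human, robot] latent, given the observed human part:
        mu_r|h = mu_r + S_rh @ S_hh^{-1} @ (z_h - mu_h)
    mu: (d_h + d_r,) state mean; sigma: full state covariance.
    """
    mu_h, mu_r = mu[:d_h], mu[d_h:]
    s_hh = sigma[:d_h, :d_h]
    s_rh = sigma[d_h:, :d_h]
    return mu_r + s_rh @ np.linalg.solve(s_hh, z_h - mu_h)

# At run time, HMM forward probabilities select (or weight) the active
# state's mu and sigma; a decoder maps the predicted robot latent to a
# posture, which inverse kinematics then adapts for physical proximity.
```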

    The separate neural control of hand movements and contact forces

    To manipulate an object, we must simultaneously control the contact forces exerted on the object and the movements of our hand. Two alternative views of manipulation have been proposed: one in which motions and contact forces are represented and controlled by separate neural processes, and one in which motions and forces are controlled jointly by a single process. To evaluate these alternatives, we designed three tasks in which subjects maintained a specified contact force while their hand was moved by a robotic manipulandum. The prescribed contact force and hand motions were selected in each task to induce the subject to attain one of three goals: (1) exerting a regulated contact force, (2) tracking the motion of the manipulandum, and (3) attaining both force and motion goals concurrently. By comparing subjects' performance across these three tasks, we found that behavior was captured by the summed actions of two independent control systems: one applying the desired force, and the other guiding the hand along the predicted path of the manipulandum. Furthermore, applying transcranial magnetic stimulation pulses to the posterior parietal cortex selectively disrupted the control of motion but did not affect the regulation of static contact force. Together, these findings are consistent with the view that object manipulation is performed by independent brain control of hand motions and interaction forces.
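
    The two-process account can be stated as a simple superposition: the commanded hand force is the sum of a force regulator and a motion controller that tracks the predicted manipulandum path. The sketch below is a hypothetical illustration of that superposition; the controller forms and gains are assumptions, not the authors' model.

```python
import numpy as np

def hybrid_command(f_des, f_meas, x_pred, x, v, kf=0.8, kp=400.0, kd=40.0):
    """Sum of two independent controllers, per the two-process view.

    All quantities are planar (2,) vectors: f_des/f_meas are the desired and
    measured contact forces, x_pred is the predicted manipulandum position,
    x and v are hand position and velocity. Gains kf, kp, kd are illustrative.
    """
    f_force = f_des + kf * (f_des - f_meas)  # contact-force regulation
    f_motion = kp * (x_pred - x) - kd * v    # PD servo along the predicted path
    return f_force + f_motion
```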