
    The Arm Motion Detection (AMD) Test

    Stroke can lead to sensory deficits that impair functional control of arm movements. Here we describe a simple test of arm motion detection (AMD) that provides an objective, quantitative measure of the proprioceptive capabilities underlying movement perception in the arm. Seven stroke survivors and thirteen neurologically intact control subjects performed the AMD test. In a series of ten trials that took less than 15 minutes to complete, participants used a two-button user interface to adjust the magnitude of hand displacements produced by a horizontal planar robot until the motions were just perceptible (i.e., on the threshold of detection). The standard deviation of the movement detection threshold was plotted against its mean, and a normative range was determined from the data collected with control subjects. Within this normative space, subjects with and without intact proprioception could be discriminated on a ratio scale that is meaningful for ongoing studies of degraded motor function. Thus, the AMD test provides a relatively fast, objective, and quantitative measure of upper extremity proprioception of limb movement (i.e., kinesthesia).
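    The two-button adjustment procedure can be sketched as a small simulation. This is a hypothetical illustration, not the authors' implementation: the function names, step size, and the simulated noisy observer are all assumptions.

```python
import random

def amd_trial(true_threshold, step=0.2, start=5.0, noise=0.1, rng=None):
    """Simulate one adjustment trial: the simulated subject presses the
    'down' button while the displacement is perceptible and 'up' when it
    is not, so the magnitude settles near the detection threshold."""
    rng = rng or random.Random(0)
    magnitude = start
    for _ in range(100):
        perceived = magnitude + rng.gauss(0, noise) > true_threshold
        magnitude += -step if perceived else step  # two-button adjustment
        magnitude = max(magnitude, 0.0)
    return magnitude

def amd_test(true_threshold, n_trials=10, seed=1):
    """Run a ten-trial session and summarize it as (mean, SD) of the
    per-trial thresholds, the two quantities plotted in the AMD test."""
    rng = random.Random(seed)
    thresholds = [amd_trial(true_threshold, rng=rng) for _ in range(n_trials)]
    mean = sum(thresholds) / n_trials
    sd = (sum((t - mean) ** 2 for t in thresholds) / n_trials) ** 0.5
    return mean, sd
```

    A session with a simulated threshold of 2.0 yields a trial mean near 2.0 and a small SD, the point that would be placed in the normative mean-vs-SD space.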

    Computational neurorehabilitation: modeling plasticity and learning to predict recovery

    Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery in individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling – regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.
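    As a minimal illustration of the kind of mechanistic model this field builds on, the sketch below implements a single-state, error-based learning rule: each session, performance is pulled toward the target by a fraction of the remaining error, with partial retention between sessions. The parameter values and names are illustrative assumptions, not taken from any specific model in the review.

```python
def simulate_recovery(a=0.99, b=0.1, target=1.0, x0=0.0, n_sessions=50):
    """Single-state error-based learning model of motor recovery.

    a  -- retention factor (how much performance carries over per session)
    b  -- learning rate (fraction of the remaining error corrected)
    Returns the performance trajectory across sessions."""
    x = x0
    trajectory = [x]
    for _ in range(n_sessions):
        error = target - x
        x = a * x + b * error  # retain, then learn from error
        trajectory.append(x)
    return trajectory
```

    Even this two-parameter model produces the decelerating recovery curves seen clinically: performance rises quickly at first and asymptotes below the target when retention is imperfect (here the fixed point is b·target / (1 − a + b) ≈ 0.91).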

    On Neuromechanical Approaches for the Study of Biological Grasp and Manipulation

    Biological and robotic grasp and manipulation are undeniably similar at the level of mechanical task performance. However, their underlying fundamental biological vs. engineering mechanisms are, by definition, dramatically different and can even be antithetical. Even our approach to each is diametrically opposite: inductive science for the study of biological systems vs. engineering synthesis for the design and construction of robotic systems. The past 20 years have seen several conceptual advances in both fields and the quest to unify them. Chief among these is the reluctant recognition that their underlying fundamental mechanisms may actually share limited common ground, while exhibiting many fundamental differences. This recognition is particularly liberating because it allows us to resolve and move beyond multiple paradoxes and contradictions that arose from the initial reasonable assumption of a large common ground. Here, we begin by introducing the perspective of neuromechanics, which emphasizes that real-world behavior emerges from the intimate interactions among the physical structure of the system, the mechanical requirements of a task, the feasible neural control actions to produce it, and the ability of the neuromuscular system to adapt through interactions with the environment. This allows us to articulate a succinct overview of a few salient conceptual paradoxes and contradictions regarding under-determined vs. over-determined mechanics, under- vs. over-actuated control, prescribed vs. emergent function, learning vs. implementation vs. adaptation, prescriptive vs. descriptive synergies, and optimal vs. habitual performance. We conclude by presenting open questions and suggesting directions for future research. We hope this frank assessment of the state of the art will encourage and guide these communities to continue to interact and make progress in these important areas.

    Fast human motion prediction for human-robot collaboration with wearable interfaces

    In this paper, we aim at improving human motion prediction during human-robot collaboration in industrial facilities by exploiting contributions from both physical and physiological signals. Improved human-machine collaboration could prove useful in several areas, and it is crucial for interacting robots to understand human movement as soon as possible to avoid accidents and injuries. In this perspective, we propose a novel human-robot interface capable of anticipating the user's intention while performing reaching movements on a working bench in order to plan the action of a collaborative robot. The proposed interface can find many applications in the Industry 4.0 framework, where autonomous and collaborative robots will be an essential part of innovative facilities. A motion intention prediction level and a motion direction prediction level have been developed to improve detection speed and accuracy. A Gaussian Mixture Model (GMM) has been trained with IMU and EMG data following an evidence accumulation approach to predict reaching direction. Novel dynamic stopping criteria have been proposed to flexibly adjust the trade-off between early anticipation and accuracy according to the application. The output of the two predictors has been used as external inputs to a Finite State Machine (FSM) to control the behaviour of a physical robot according to the user's action or inaction. Results show that our system outperforms previous methods, achieving a real-time classification accuracy of 94.3 ± 2.9% after 160.0 ± 80.0 ms from movement onset.
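    The evidence-accumulation idea with a dynamic stopping criterion can be sketched as follows. This is a simplified stand-in, not the paper's method: it accumulates per-class log-likelihoods from one-dimensional Gaussians rather than a full GMM over IMU and EMG features, and all names, class parameters, and the threshold are assumptions.

```python
import math
import random

def gauss_logpdf(x, mu, sigma):
    """Log-density of a 1-D Gaussian, used as per-sample evidence."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def accumulate_and_decide(stream, classes, threshold=10.0):
    """Evidence accumulation with dynamic stopping: add each incoming
    sample's log-likelihood to every class and decide as soon as the
    best class leads the runner-up by `threshold` log-units. A larger
    threshold favors accuracy; a smaller one favors early anticipation.
    Returns (predicted_class, samples_consumed)."""
    evidence = {name: 0.0 for name in classes}
    decision, used = None, 0
    for t, x in enumerate(stream, start=1):
        for name, (mu, sigma) in classes.items():
            evidence[name] += gauss_logpdf(x, mu, sigma)
        ranked = sorted(evidence.items(), key=lambda kv: -kv[1])
        decision, used = ranked[0][0], t
        if ranked[0][1] - ranked[1][1] >= threshold:
            break  # dynamic stopping: enough evidence accumulated
    return decision, used
```

    With two well-separated direction classes, the margin grows by several log-units per sample, so a decision is typically reached after only a handful of samples, which is the mechanism behind early anticipation.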

    Wearable MIMUs for the identification of upper limbs motion in an industrial context of human-robot interaction

    The automatic identification of human gestures is gaining increasing importance in manufacturing. Indeed, robots support operators by simplifying their tasks in a shared workspace. However, human-robot collaboration can be improved by identifying human actions and then developing adaptive control algorithms for the robot. Accordingly, the aim of this study was to classify industrial tasks based on acceleration signals of human upper limbs. Two magnetic inertial measurement units (MIMUs) on the upper limb of ten healthy young subjects acquired pick-and-place gestures at three different heights. Peaks were detected from MIMU acceleration signals and were adopted to classify gestures through a Linear Discriminant Analysis. The method was applied first including both MIMUs and then one at a time. Results demonstrated that the placement of at least one MIMU on the upper arm or forearm is sufficient to achieve good recognition performance. Overall, features extracted from MIMU signals can be used to define and train a prediction algorithm reliable for the context of collaborative robotics.
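    A simplified sketch of the peak-based classification pipeline is below. A nearest-centroid classifier stands in for the paper's Linear Discriminant Analysis (with equal class covariances and uniform priors the two coincide up to whitening), and the peak detector, feature choices, and thresholds are all illustrative assumptions.

```python
def detect_peaks(signal, min_height=0.5):
    """Indices of local maxima above min_height in an acceleration trace."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > min_height
            and signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]]

def peak_features(signal):
    """Two peak-derived features per gesture: (max peak height, peak count)."""
    peaks = detect_peaks(signal)
    if not peaks:
        return (0.0, 0.0)
    return (max(signal[i] for i in peaks), float(len(peaks)))

def train_centroids(labelled):
    """Per-class mean feature vector from (features, label) training pairs."""
    sums, counts = {}, {}
    for feats, label in labelled:
        s = sums.setdefault(label, [0.0] * len(feats))
        for j, v in enumerate(feats):
            s[j] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def classify(feats, centroids):
    """Assign the class whose centroid is nearest in feature space."""
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(feats, centroids[lab])))
```

    Training on gestures whose acceleration peaks differ in height (as pick-and-place at different shelf heights plausibly would) lets the classifier separate new gestures from the same features.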