121 research outputs found
Coupling angle variability in healthy and patellofemoral pain runners
Background Patellofemoral pain is hypothesized to reduce joint coordination variability. The ability to relate coordination variability to patellofemoral pain pathology could have many clinical uses; however, evidence to support its clinical application is lacking. The aim was to determine whether vector coding's coupling angle variability, as a measure of joint coordination variability, was lower for runners with patellofemoral pain than for healthy controls, as is commonly postulated. Methods Nineteen female recreational runners with patellofemoral pain and eleven healthy controls performed a treadmill acclimation protocol, then ran at a self-selected pace for 15 min. 3-D kinematics, force plate kinetics, knee pain and rating of perceived exertion were recorded each minute. Data were selected for the pain group at the highest pain reached (pain ≥ 3/10) in a non-exerted state (exertion < 14/20), and for the non-exerted healthy group from the eleventh minute. Coupling angle variability was calculated over several portions of the stride for six knee-ankle combinations during five non-consecutive strides. Findings 46 of 48 coupling angle variability measures were greater for the pain group, with 7 significantly greater (P < .05). Interpretation These findings oppose the theory that reduced coupling angle variability indicates a pathological coordinative state during running. Greater coupling angle variability may be characteristic of patellofemoral pain in female treadmill running when a larger threshold of pain is reached than previously observed. A predictable and directional response of coupling angle variability measures in relation to knee pathology is not yet clear and requires further investigation before considering clinical utility. © 2013 Elsevier Ltd
Biomimetic rehabilitation engineering: the importance of somatosensory feedback for brain-machine interfaces.
Brain-machine interfaces (BMIs) re-establish communication channels between the nervous system and an external device. The use of BMI technology has generated significant developments in rehabilitative medicine, promising new ways to restore lost sensory-motor functions. However, despite high-caliber basic research, only a few prototypes have successfully left the laboratory and are currently home-deployed.
The failure of this laboratory-to-user transfer likely relates to the absence of BMI solutions that provide naturalistic feedback about the consequences of the BMI's actions. To overcome this limitation, today's cutting-edge BMI advances are guided by the principle of biomimicry, i.e. the artificial reproduction of normal neural mechanisms.
Here, we focus on the importance of somatosensory feedback in BMIs devoted to reproducing movements, with the goal of serving as a reference framework for future research on innovative rehabilitation procedures. First, we address the correspondence between users' needs and BMI solutions. Then, we describe the main features of invasive and non-invasive BMIs, including their degree of biomimicry and their respective advantages and drawbacks. Furthermore, we explore the prevalent approaches for providing quasi-natural sensory feedback in BMI settings. Finally, we cover special situations that can promote biomimicry and present future directions in basic research and clinical applications.
The continued incorporation of biomimetic features into the design of BMIs will further improve their realism, as well as their actuation, acceptance, and use.
Ipsilesional trajectory control is related to contralesional arm paralysis after left hemisphere damage
We have recently shown that ipsilateral dynamic deficits in trajectory control are present in left hemisphere damaged (LHD) patients with paresis, as evidenced by impaired modulation of torque amplitude as response amplitude increases. The purpose of the current study was to determine whether these ipsilateral deficits are more common with contralateral hemiparesis and greater damage to the motor system, as evidenced by structural imaging. Three groups of right-handed subjects (healthy controls, LHD stroke patients with and without upper extremity paresis) performed single-joint elbow movements of varying amplitudes with their left arm in the left hemispace. Only the paretic group demonstrated dynamic deficits characterized by decreased modulation of peak torque (reflected by peak acceleration changes) as response amplitude increased. These results could not be attributed to lesion volume or peak velocity, as neither variable differed across the groups. However, the paretic group had damage to a larger number of areas within the motor system than the non-paretic group, suggesting that such damage increases the probability of ipsilesional deficits in dynamic control for modulating torque amplitude after left hemisphere damage.
Aging Affects the Mental Rotation of Left and Right Hands
BACKGROUND: Normal aging significantly influences motor and cognitive performance. Little is known about age-related changes in action simulation. Here, we investigated the influence of aging on implicit motor imagery. METHODOLOGY/PRINCIPAL FINDINGS: Twenty young (mean age: 23.9 ± 2.8 years) and nineteen elderly (mean age: 78.3 ± 4.5 years) subjects, all right-handed, were required to determine the laterality of hands presented in various positions. To do so, they mentally rotated their own hands to match them with the hand-stimuli. We showed that: (1) elderly subjects were impaired in their ability to implicitly simulate movements of the upper limbs, especially those requiring the largest amplitude of displacement and/or involving strong biomechanical constraints; (2) this decline was greater for movements of the non-dominant arm than of the dominant arm. CONCLUSIONS/SIGNIFICANCE: These results extend recent findings showing age-related alterations of the explicit side of motor imagery. They suggest that a general decline in action simulation occurs with normal aging, in particular for the non-dominant side of the body.
The Role of Motor Learning in Spatial Adaptation near a Tool
Some visual-tactile (bimodal) cells have visual receptive fields (vRFs) that overlap and extend moderately beyond the skin of the hand. Neurophysiological evidence suggests, however, that a vRF will grow to encompass a hand-held tool following active tool use but not after passive holding. Why does active tool use, and not passive holding, lead to spatial adaptation near a tool? We asked whether spatial adaptation could be the result of motor or visual experience with the tool, and we distinguished between these alternatives by isolating motor from visual experience with the tool. Participants learned to use a novel, weighted tool. The active training group received both motor and visual experience with the tool; the passive training group received visual experience with the tool but no motor experience; and a no-training control group received neither visual nor motor experience with the tool. After training, we used a cueing paradigm to measure how quickly participants detected targets, varying whether the tool was placed near or far from the target display. Only the active training group detected targets more quickly when the tool was placed near, rather than far from, the target display. This effect of tool location was not present for either the passive training or control groups. These results suggest that motor learning influences how visual space around the tool is represented.
The Proprioceptive Map of the Arm Is Systematic and Stable, but Idiosyncratic
Visual and somatosensory signals participate together in providing an estimate of the hand's spatial location. While the ability of subjects to identify the spatial location of their hand based on visual and proprioceptive signals has previously been characterized, relatively few studies have examined in detail the spatial structure of the proprioceptive map of the arm. Here, we reconstructed and analyzed the spatial structure of the estimation errors that resulted when subjects reported the location of their unseen hand across a 2D horizontal workspace. Hand position estimation was mapped under four conditions: with and without tactile feedback, and with the right and left hands. In the task, we moved each subject's hand to one of 100 targets in the workspace while their eyes were closed. Then, we either a) applied tactile stimulation to the fingertip by allowing the index finger to touch the target, or b) as a control, hovered the fingertip 2 cm above the target. After returning the hand to a neutral position, subjects opened their eyes to verbally report where their fingertip had been. We measured and analyzed both the direction and magnitude of the resulting estimation errors. Tactile feedback reduced the magnitude of these estimation errors but did not change their overall structure. In addition, the spatial structure of these errors was idiosyncratic: each subject had a unique pattern of errors that was stable between hands and over time. Finally, we found that at the population level the magnitude of the estimation errors had a characteristic distribution over the workspace: errors were smallest closer to the body. The stability of estimation errors across conditions and time suggests the brain constructs a proprioceptive map that is reliable, even if it is not necessarily accurate. The idiosyncrasy across subjects emphasizes that each individual constructs a map that is unique to their own experiences.
Eye-Hand Coordination during Dynamic Visuomotor Rotations
Background
For many technology-driven visuomotor tasks such as tele-surgery, human operators face situations in which the frames of reference for vision and action are misaligned and must be compensated for in order to perform the tasks with the necessary precision. The cognitive mechanisms for the selection of appropriate frames of reference are still not fully understood. This study investigated the effect of changing visual and kinesthetic frames of reference during wrist pointing, simulating activities typical of tele-operations.
Methods
Using a robotic manipulandum, subjects performed center-out pointing movements to visual targets presented on a computer screen by coordinating wrist flexion/extension with abduction/adduction. We compared movements in which the frames of reference were aligned (unperturbed condition) with movements performed under different combinations of visual/kinesthetic dynamic perturbations. The visual frame of reference was centered on the computer screen, while the kinesthetic frame was centered on the wrist joint. Both frames changed their orientation dynamically (angular velocity = 36°/s) with respect to the head-centered frame of reference (the eyes). Perturbations were either unimodal (visual or kinesthetic) or bimodal (visual + kinesthetic). As expected, pointing performance was best in the unperturbed condition. The spatial pointing error worsened dramatically during both unimodal and most bimodal conditions. However, in the bimodal condition in which both disturbances were in phase, adaptation was very fast and kinematic performance indicators approached the values of the unperturbed condition.
Conclusions
This result suggests that subjects learned to exploit an “affordance” made available by the invariant phase relation between the visual and kinesthetic frames. It seems that after detecting this invariance, subjects used the kinesthetic input as an informative signal rather than a disturbance, compensating for the visual rotation without going through the lengthy process of building an internal adaptation model. Practical implications are discussed as regards the design of advanced, high-performance man-machine interfaces.
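The advantage of the in-phase bimodal condition has a simple geometric reading: if the visual and kinesthetic frames rotate by the same time-varying angle, the composite map from wrist coordinates to screen coordinates is constant, so no ongoing re-learning is required. A toy numpy sketch of that cancellation (a deliberate simplification of the experimental setup, not the authors' analysis):

```python
import numpy as np

def rot(theta):
    """2-D rotation matrix for an angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# A fixed wrist movement expressed in the kinesthetic (wrist-centered) frame.
movement_wrist = np.array([1.0, 0.0])

# The kinesthetic frame maps the movement into head-centered space via
# rot(theta_k); the rotating visual display applies rot(-theta_v) before
# drawing the cursor, so the wrist-to-screen map is rot(theta_k - theta_v).
for t in np.linspace(0.0, 2.0 * np.pi, 8):
    theta_v = theta_k = t   # in-phase perturbation: both frames share one angle
    on_screen = rot(-theta_v) @ rot(theta_k) @ movement_wrist
    assert np.allclose(on_screen, movement_wrist)  # identity map at every instant
```

With the frames out of phase (e.g. only the visual frame rotating), the composite map varies over time and the cursor no longer tracks the wrist movement, which is consistent with the degraded performance in the unimodal conditions.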
Vestibular signal processing in a subject with somatosensory deafferentation: The case of sitting posture
Background
The vestibular system of the inner ear provides information about head translation/rotation in space and about the orientation of the head with respect to the gravitoinertial vector. It also contributes largely to the control of posture through vestibulospinal pathways. Testing an individual severely deprived of somatosensory information below the nose, we investigated whether equilibrium can be maintained while seated on the sole basis of this information.
Results
Although she was unstable, the deafferented subject (DS) was able to remain seated with the eyes closed in the absence of feet, arm and back supports. However, with the head unconsciously rotated towards the left or right shoulder, the DS's instability markedly increased. Small electrical stimulations of the vestibular apparatus produced large body tilts in the DS, contrary to control subjects, who did not show clear postural responses to the stimulations.
Conclusion
The results of the present experiment show that in the absence of vision and somatosensory information, vestibular signal processing allows the maintenance of an active sitting posture (i.e. without back or side rests). When head orientation changes with respect to the trunk, in the absence of vision, the lack of cervical information prevents the transformation of the head-centered vestibular information into a trunk-centered frame of reference of body motion. For normal subjects, this latter frame of reference enables proper postural adjustments through vestibular signal processing, irrespective of the orientation of the head with respect to the trunk.
Proprioceptive loss and the perception, control and learning of arm movements in humans: evidence from sensory neuronopathy
It is uncertain how vision and proprioception contribute to adaptation of voluntary arm movements. In normal participants, adaptation to imposed forces is possible with or without vision, suggesting that proprioception is sufficient; in participants with proprioceptive loss (PL), adaptation is possible with visual feedback, suggesting that proprioception is unnecessary. In Experiment 1, adaptation to, and retention of, perturbing forces were evaluated in three chronically deafferented participants. They made rapid reaching movements to move a cursor toward a visual target, and a planar robot arm applied orthogonal velocity-dependent forces. Trial-by-trial error correction was observed in all participants. Such adaptation has been characterized with a dual-rate model: a fast process that learns quickly but retains poorly, and a slow process that learns slowly and retains well. Experiment 2 showed that the PL participants had large individual differences in learning and retention rates compared to normal controls. Experiment 3 tested participants’ perception of applied forces. With visual feedback, the PL participants could report the perturbation’s direction as well as controls; without visual feedback, thresholds were elevated. Experiment 4 showed, in healthy participants, that force direction could be estimated from head motion, at levels close to the no-vision threshold for the PL participants. Our results show that proprioceptive loss influences perception, motor control and adaptation, but that proprioception from the moving limb is not essential for adaptation to, or detection of, force fields. The differences in learning and retention seen between the three deafferented participants suggest that they achieve these tasks in idiosyncratic ways after proprioceptive loss, possibly integrating visual and vestibular information with individual cognitive strategies. © 2018 The Author(s)
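The dual-rate model mentioned in Experiment 2 is typically written as two error-driven states with different learning and retention rates, whose sum gives the net adaptation. A minimal simulation sketch (the function name is hypothetical, and the parameter values are merely illustrative of the healthy-adult range reported for two-state models, not values from this study):

```python
import numpy as np

def dual_rate_adaptation(perturbation, a_fast=0.59, b_fast=0.21,
                         a_slow=0.992, b_slow=0.02):
    """Simulate a two-state (dual-rate) model of motor adaptation.

    The net adaptation x = x_fast + x_slow tracks the perturbation f
    through trial-by-trial error correction:
        e(n)        = f(n) - x(n)
        x_fast(n+1) = a_fast * x_fast(n) + b_fast * e(n)
        x_slow(n+1) = a_slow * x_slow(n) + b_slow * e(n)
    with a_fast < a_slow (the fast process retains poorly) and
    b_fast > b_slow (the fast process learns quickly).
    Returns the net adaptation on each trial.
    """
    x_fast = x_slow = 0.0
    net = []
    for f in perturbation:
        x = x_fast + x_slow
        net.append(x)
        e = f - x                             # trial error drives both states
        x_fast = a_fast * x_fast + b_fast * e
        x_slow = a_slow * x_slow + b_slow * e
    return np.array(net)
```

Under a constant perturbation, the fast state dominates the early trials while the slow state accounts for most of what is retained, which is how the model separates learning rate from retention.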
- …