
    Interactive visuo-motor therapy system for stroke rehabilitation

    We present a virtual reality (VR)-based motor neurorehabilitation system for stroke patients with upper limb paresis. It is based on two hypotheses: (1) observed actions correlated with self-generated or intended actions engage cortical motor observation, planning and execution areas ("mirror neurons"); (2) activation in damaged parts of motor cortex can be enhanced by viewing mirrored movements of non-paretic limbs. We postulate that our approach, applied during the acute post-stroke phase, facilitates motor re-learning and improves functional recovery. The patient controls a first-person view of virtual arms in tasks varying from simple (hitting objects) to complex (grasping and moving objects). The therapist adjusts weighting factors in the non-paretic limb to move the paretic virtual limb, thereby stimulating the mirror neuron system and optimizing patient motivation through graded task success. We present the system's neuroscientific background, technical details and preliminary results.
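    The weighted mirroring described above can be illustrated as a linear blend of tracked joint angles. This is a minimal sketch under that assumption; `blend_virtual_limb` and its inputs are illustrative names, not the authors' implementation:

    ```python
    # Hypothetical sketch: a therapist-set weighting factor w in [0, 1] lets
    # tracked motion of the non-paretic limb drive the paretic *virtual* limb.
    # w = 0 shows the patient's own paretic-limb motion; w = 1 fully mirrors
    # the non-paretic limb.

    def blend_virtual_limb(paretic_input, nonparetic_input, w):
        """Return joint angles for the paretic virtual limb as a per-joint
        linear blend of the two tracked limbs."""
        return [(1.0 - w) * p + w * n
                for p, n in zip(paretic_input, nonparetic_input)]

    # Example: three joint angles (degrees), half-weighted mirroring.
    angles = blend_virtual_limb([10.0, 0.0, 5.0], [30.0, 20.0, 15.0], w=0.5)
    print(angles)  # [20.0, 10.0, 10.0]
    ```

    Raising `w` as task difficulty increases would let the virtual limb succeed at graded tasks even when the paretic limb moves little, which is one plausible reading of the motivation mechanism the abstract describes.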

    Observing Virtual Arms that You Imagine Are Yours Increases the Galvanic Skin Response to an Unexpected Threat

    Multi-modal visuo-tactile stimulation of the type performed in the rubber hand illusion can induce the brain to temporarily incorporate external objects into the body image. In this study we show that audio-visual stimulation combined with mental imagery more rapidly elicits an elevated physiological response (skin conductance) after an unexpected threat to a virtual limb, compared to audio-visual stimulation alone. Two groups of subjects seated in front of a monitor watched a first-person perspective view of slow movements of two virtual arms intercepting virtual balls rolling towards the viewer. One group was instructed to simply observe the movements of the two virtual arms, while the other group was instructed to observe the virtual arms and imagine that the arms were their own. After 84 seconds the right virtual arm was unexpectedly “stabbed” by a knife and began “bleeding”. This aversive stimulus caused both groups to show a significant increase in skin conductance. In addition, the observation-with-imagery group showed a significantly higher skin conductance (p < 0.05) than the observation-only group over a 2-second period shortly after the aversive stimulus onset. No corresponding change was found in subjects' heart rates. Our results suggest that simple visual input combined with mental imagery may induce the brain to measurably, if temporarily, incorporate external objects into its body image.
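    The group comparison above rests on averaging skin conductance over a short window after stimulus onset. A minimal sketch of that windowing step, with an assumed sampling rate and made-up signal (not the study's data or analysis code):

    ```python
    # Illustrative sketch: mean skin conductance in a fixed window after an
    # aversive-stimulus onset. Sampling rate and values are assumptions.

    def window_mean(signal, fs, onset_s, window_s):
        """Mean of `signal` over [onset_s, onset_s + window_s) seconds,
        given sampling rate `fs` in Hz."""
        start = int(onset_s * fs)
        stop = int((onset_s + window_s) * fs)
        chunk = signal[start:stop]
        return sum(chunk) / len(chunk)

    fs = 10                              # Hz (assumed)
    sig = [2.0] * 840 + [5.0] * 40       # conductance jumps at t = 84 s
    print(window_mean(sig, fs, onset_s=84, window_s=2.0))  # 5.0
    ```

    In the study's design, this per-subject window mean would then be compared between the observation-only and observation-with-imagery groups.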

    Testing the potential of a virtual reality neurorehabilitation system during performance of observation, imagery and imitation of motor actions recorded by wireless functional near-infrared spectroscopy (fNIRS)

    Background: Several neurorehabilitation strategies have been introduced over the last decade based on the so-called simulation hypothesis. This hypothesis states that a neural network located in primary and secondary motor areas is activated not only during overt motor execution, but also during observation or imagery of the same motor action. Based on this hypothesis, we investigated the combination of a virtual reality (VR) based neurorehabilitation system together with a wireless functional near infrared spectroscopy (fNIRS) instrument. This combination is particularly appealing from a rehabilitation perspective as it may allow minimally constrained monitoring during neurorehabilitative training.
    Methods: fNIRS was applied over F3 of healthy subjects during task performance in a VR environment: 1) 'unilateral' group (N = 15), contralateral recording during observation, motor imagery, observation & motor imagery, and imitation of a grasping task performed by a virtual limb (first-person perspective view) using the right hand; 2) 'bilateral' group (N = 8), bilateral recording during observation and imitation of the same task using the right and left hand alternately.
    Results: In the unilateral group, significant within-condition oxy-hemoglobin concentration Δ[O2Hb] changes (mean ± SD, μmol/l) were found for motor imagery (0.0868 ± 0.5201 μmol/l) and imitation (0.1715 ± 0.4567 μmol/l). In addition, the bilateral group showed a significant within-condition Δ[O2Hb] change for observation (0.0924 ± 0.3369 μmol/l) as well as between-conditions with lower Δ[O2Hb] amplitudes during observation compared to imitation, especially in the ipsilateral hemisphere (p < 0.001). Further, in the bilateral group, imitation using the non-dominant (left) hand resulted in larger Δ[O2Hb] changes in both the ipsi- and contralateral hemispheres as compared to using the dominant (right) hand.
    Conclusions: This study shows that our combined VR-fNIRS based neurorehabilitation system can activate the action-observation system as described by the simulation hypothesis during performance of observation, motor imagery and imitation of hand actions elicited by a VR environment. Further, in accordance with previous studies, the findings of this study revealed that both inter-subject variability and handedness need to be taken into account when recording in untrained subjects. These findings are of relevance for demonstrating the potential of the VR-fNIRS instrument in neurofeedback applications.
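    The within-condition Δ[O2Hb] values reported above are, at their core, baseline-corrected task means of an oxy-hemoglobin concentration trace. A minimal sketch of that step, with made-up values; the study's actual preprocessing (modified Beer-Lambert conversion, filtering) is not shown:

    ```python
    # Illustrative sketch: within-condition oxy-hemoglobin change as the
    # task-period mean minus the baseline-period mean of a concentration
    # trace in umol/l. Indices and values are assumptions for the example.

    def delta_o2hb(trace, baseline_idx, task_idx):
        """Mean concentration change (task minus baseline), same units as
        the input trace."""
        base = sum(trace[i] for i in baseline_idx) / len(baseline_idx)
        task = sum(trace[i] for i in task_idx) / len(task_idx)
        return task - base

    trace = [0.00, 0.01, -0.01, 0.15, 0.20, 0.16]   # umol/l (made up)
    print(delta_o2hb(trace, baseline_idx=range(0, 3), task_idx=range(3, 6)))
    ```

    Averaging such per-trial changes within each condition, then across subjects, yields group values of the kind quoted in the Results.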

    Designing neuromorphic interactive spaces


    A Miniature, One-Handed 3D Motion Controller

    Abstract. Users of three-dimensional computer-aided design (CAD) and gaming applications need to manipulate virtual objects in up to six degrees of rotational and translational freedom (DOF). To date, no 3D controller provides one-handed 6DOF input with miniature size and low cost. This paper presents a prototype of the first one-handed 6DOF motion controller suitable for use in portable platforms such as laptop computers, mobile telephones and hand-held game consoles. It is based on an optical sensor combined with novel planar spring mechanics, and can be easily manufactured using low-cost materials and processes.

    Getting to know your neighbors: Unsupervised learning of topography from real-world, event-based input

    Biological neural systems must grow their own connections and maintain topological relations between elements that are related to the sensory input surface. Artificial systems have traditionally prewired such maps, but the sensor arrangement is not always known and can be expensive to specify before run time. Here we present a method for learning and updating topographic maps in systems comprising modular, event-based elements. Using an unsupervised neural spike-timing-based learning rule combined with Hebbian learning, our algorithm uses the spatiotemporal coherence of the external world to train its network. It improves on existing algorithms by not assuming a known topography of the target map and includes a novel method for automatically detecting edge elements. We show how, for stimuli that are small relative to the sensor resolution, the temporal learning window parameters can be determined without using any user-specified constants. For stimuli that are larger relative to the sensor resolution, we provide a parameter extraction method that generally outperforms the small-stimulus method but requires one user-specified constant. The algorithm was tested on real data from a 64 × 64-pixel section of an event-based temporal contrast silicon retina and a 360-tile tactile luminous floor. It learned 95.8% of the correct neighborhood relations for the silicon retina within about 400 seconds of real-world input from a driving scene and 98.1% correct for the sensory floor after about 160 minutes of human pedestrian traffic. Residual errors occurred in regions receiving little or ambiguous input, and the learned topological representations were able to update automatically in response to simulated damage. Our algorithm has applications in the design of modular autonomous systems in which the interfaces between components are learned during operation rather than at design time.
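    The core intuition above — moving stimuli make neighboring event-based elements fire close together in time — can be sketched as coincidence counting. This is a hedged illustration of the idea, not the paper's algorithm (which also handles edge detection and window-parameter extraction):

    ```python
    # Hypothetical sketch: count near-coincident event pairs and keep each
    # element's k strongest partners as its learned neighbors. The window
    # length and event stream below are illustrative.
    from collections import defaultdict

    def learn_neighbors(events, window, k):
        """events: list of (timestamp, element_id) sorted by timestamp.
        Returns element_id -> list of the k most co-active partner ids."""
        counts = defaultdict(lambda: defaultdict(int))
        for i, (t_i, a) in enumerate(events):
            j = i + 1
            while j < len(events) and events[j][0] - t_i <= window:
                b = events[j][1]
                if b != a:
                    counts[a][b] += 1
                    counts[b][a] += 1
                j += 1
        return {e: sorted(c, key=c.get, reverse=True)[:k]
                for e, c in counts.items()}

    # A stimulus sweeping across elements 0..3: adjacent ids co-fire.
    events = [(0.0, 0), (0.1, 1), (1.0, 1), (1.1, 2), (2.0, 2), (2.1, 3)]
    print(learn_neighbors(events, window=0.2, k=1))
    ```

    Rerunning the counting as events arrive would let the map update after simulated damage, in the spirit of the result reported in the abstract.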

    Brain Activation During Visually Guided Finger Movements

    Computer interaction via visually guided hand movements often employs either abstract cursor-based feedback or virtual hand (VH) representations of varying degrees of realism. The effect of changing this visual feedback in virtual reality settings is currently unknown. In this study, 19 healthy right-handed adults performed index finger movements ("action") and observed movements ("observation") with four different types of visual feedback: a simple circular cursor (CU), a point light (PL) pattern indicating finger joint positions, a shadow cartoon hand (SH) and a realistic VH. Finger movements were recorded using a data glove, and eye movements were tracked optically. We measured brain activity using functional magnetic resonance imaging (fMRI). Both action and observation conditions showed stronger fMRI signal responses in the occipitotemporal cortex compared to baseline. The action conditions additionally elicited elevated bilateral activations in motor, somatosensory, parietal, and cerebellar regions. For both conditions, feedback of a hand with a moving finger (SH, VH) led to higher activations than CU or PL feedback, specifically in early visual regions and the occipitotemporal cortex. Our results show the stronger recruitment of a network of cortical regions during visually guided finger movements with human hand feedback when compared to a visually incomplete hand and abstract feedback. This information could have implications for the design of visually guided tasks involving human body parts in both research and application- or training-related paradigms.

    Virtual hand feedback reduces reaction time in an interactive finger reaching task

    Computer interaction via visually guided hand or finger movements is a ubiquitous part of daily computer usage in work or gaming. Surprisingly, however, little is known about the performance effects of using virtual limb representations versus simpler cursors. In this study, 26 healthy right-handed adults performed cued index finger flexion-extension movements towards an on-screen target while wearing a data glove. They received each of four different types of real-time visual feedback: a simple circular cursor, a point light pattern indicating finger joint positions, a cartoon hand and a fully shaded virtual hand. We found that participants initiated the movements faster when receiving feedback in the form of a hand than when receiving circular cursor or point light feedback. This overall difference was robust for three out of four hand versus circle pairwise comparisons. The faster movement initiation for hand feedback was accompanied by a larger movement amplitude and a larger movement error. We suggest that the observed effect may be related to priming of hand information during action perception and execution affecting motor planning and execution. The results may have applications in the use of body representations in virtual reality applications.
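    The reaction-time comparison described above boils down to aggregating per-trial initiation times by feedback condition. A minimal sketch with fabricated numbers (not the study's data or statistics, which used pairwise comparisons across 26 participants):

    ```python
    # Illustrative only: mean reaction time per visual-feedback condition.
    # Condition names and RT values are made up for the example.

    def mean_rt(trials):
        """trials: list of (condition, rt_seconds).
        Returns condition -> mean reaction time in seconds."""
        sums, counts = {}, {}
        for cond, rt in trials:
            sums[cond] = sums.get(cond, 0.0) + rt
            counts[cond] = counts.get(cond, 0) + 1
        return {c: sums[c] / counts[c] for c in sums}

    trials = [("virtual_hand", 0.42), ("virtual_hand", 0.40),
              ("cursor", 0.48), ("cursor", 0.50)]
    print(mean_rt(trials))
    ```

    In the study, such condition means would feed into the hand-versus-cursor pairwise comparisons reported in the abstract.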
