
    A multilevel model for movement rehabilitation in Traumatic Brain Injury (TBI) using virtual environments

    This paper presents a conceptual model for movement rehabilitation of traumatic brain injury (TBI) using virtual environments. This hybrid model integrates principles from ecological systems theory with recent advances in cognitive neuroscience, and supports a multilevel approach to both assessment and treatment. Performance outcomes at any stage of recovery are determined by the interplay of task, individual, and environmental/contextual factors. We argue that any system of rehabilitation should provide enough flexibility for task and context factors to be varied systematically, based on the current neuromotor and biomechanical capabilities of the performer or patient. Thus, in order to understand how treatment modalities are to be designed and implemented, there is a need to understand the function of brain systems that support learning at a given stage of recovery, and the inherent plasticity of the system. Virtual reality (VR) systems allow training environments to be presented in a highly automated, reliable, and scalable way. Presentation of these virtual environments (VEs) should permit movement analysis at three fundamental levels of behaviour: (i) the neurocognitive bases of performance (we focus in particular on the development and use of internal models for action, which support adaptive, on-line control); (ii) the movement forms and patterns that describe the patient's movement signature at a given stage of recovery (i.e., kinetic and kinematic markers of movement proficiency); and (iii) the functional outcomes of the movement. Each level of analysis also maps quite seamlessly to different modes of treatment. At the neurocognitive level, for example, semi-immersive VEs can help retrain internal modelling processes by reinforcing the patient's sense of multimodal space (via augmented feedback), their position within it, and their ability to predict and control actions flexibly (via movement simulation and imagery training). More specifically, we derive four key therapeutic environment concepts (or Elements) presented using VR technologies: Embodiment (simulation and imagery), Spatial Sense (augmenting position sense), Procedural (automaticity and dual-task control), and Participatory (self-initiated action). The use of tangible media/objects, force transduction, and vision-based tracking systems for the augmentation of gestures and physical presence will be discussed in this context.

    Augmented visual, auditory, haptic, and multimodal feedback in motor learning: A review

    It is generally accepted that augmented feedback, provided by a human expert or a technical display, effectively enhances motor learning. However, how to provide augmented feedback most effectively remains controversial. Related studies have focused primarily on simple or artificial tasks enhanced by visual feedback. Recently, technical advances have made it possible to also investigate more complex, realistic motor tasks and to implement not only visual, but also auditory, haptic, or multimodal augmented feedback. The aim of this review is to address the potential of augmented unimodal and multimodal feedback within the framework of motor learning theories. The review addresses the reasons for the different impacts of feedback strategies within or between the visual, auditory, and haptic modalities, and the challenges that need to be overcome to provide appropriate feedback in these modalities, either in isolation or in combination. Accordingly, design criteria for successful visual, auditory, haptic, and multimodal feedback are elaborated.

    Learning force patterns with a multimodal system using contextual cues

    Previous studies on learning force patterns (fine motor skills) have focused on providing “punctual information”, meaning that users receive information about their performance only at the current time step. This work proposes a new approach based on “contextual information”, in which users receive information not only about the current time step, but also about the past (how the target force has changed over time) and the future (how the target force will change). An experiment was run in which each participant had to memorize and then reproduce a force pattern after training with a multimodal system, comparing the contextual approach against the punctual one. The findings suggest that the contextual approach is a useful strategy for force pattern learning. Its advantage over the punctual approach is that users receive information about the evolution of their performance (helping them to correct errors) as well as about the forces to be exerted next (giving them a better understanding of the target force profile). Finally, the contextual approach could be implemented in medical training platforms or surgical robots to extend the capabilities of these systems.
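    To make the distinction between the two feedback strategies concrete, the following minimal Python sketch contrasts a punctual feedback signal (the error at the current time step only) with a contextual one (the current error plus a window of past and upcoming target values). The data structure, function names, and window size are illustrative assumptions for this listing, not the authors' implementation.

```python
# Minimal sketch contrasting "punctual" and "contextual" feedback for
# force-pattern training. All names here are hypothetical.

from dataclasses import dataclass
from typing import List


@dataclass
class Feedback:
    error: float                   # deviation from the target at the current step
    past_targets: List[float]      # how the target force evolved (contextual only)
    future_targets: List[float]    # how the target force will evolve (contextual only)


def punctual_feedback(target_profile: List[float], applied_force: float, t: int) -> Feedback:
    """Return only the error at time step t."""
    return Feedback(error=applied_force - target_profile[t],
                    past_targets=[], future_targets=[])


def contextual_feedback(target_profile: List[float], applied_force: float,
                        t: int, window: int = 5) -> Feedback:
    """Return the current error plus a window of past and upcoming target values."""
    return Feedback(error=applied_force - target_profile[t],
                    past_targets=target_profile[max(0, t - window):t],
                    future_targets=target_profile[t + 1:t + 1 + window])


if __name__ == "__main__":
    # Toy target force profile (e.g., in newtons) rising and then falling over time.
    profile = [0.0, 0.5, 1.0, 1.5, 2.0, 1.5, 1.0, 0.5, 0.0]
    print(punctual_feedback(profile, applied_force=1.2, t=3))
    print(contextual_feedback(profile, applied_force=1.2, t=3))
```

    In this reading, the contextual variant simply exposes more of the target trajectory per step; how that window is rendered (visually, haptically, or via audio) is a separate design choice in the multimodal system.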