120 research outputs found

    Review of control strategies for robotic movement training after neurologic injury

    There is increasing interest in using robotic devices to assist in movement training following neurologic injuries such as stroke and spinal cord injury. This paper reviews control strategies for robotic therapy devices. Several categories of strategies have been proposed, including assistive, challenge-based, haptic simulation, and coaching. The greatest amount of work has been done on developing assistive strategies, and thus the majority of this review summarizes techniques for implementing assistive strategies, including impedance-, counterbalance-, and EMG-based controllers, as well as adaptive controllers that modify control parameters based on ongoing participant performance. Clinical evidence regarding the relative effectiveness of different types of robotic therapy controllers is limited, but there is initial evidence that some control strategies are more effective than others. It is also now apparent that there may be mechanisms by which some robotic control approaches might actually decrease the recovery possible with comparable, non-robotic forms of training. In future research, there is a need for head-to-head comparisons of control algorithms in randomized, controlled clinical trials, and for improved models of human motor recovery to provide a more rational framework for designing robotic therapy control strategies.
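
    Many of the assistive strategies reviewed here pair an impedance-style restoring force toward a reference trajectory with an adaptive law that raises or lowers assistance based on ongoing performance. The following Python sketch is a minimal, hypothetical illustration of that combination; the function names, gains, and the error-driven update rule are illustrative assumptions rather than a controller taken from any of the reviewed studies.

        import numpy as np

        def impedance_assist_force(gain, target_pos, actual_pos, velocity, damping=0.1):
            """Impedance-style assistive force pulling the limb toward the reference trajectory."""
            return gain * (target_pos - actual_pos) - damping * velocity

        def update_assist_gain(gain, tracking_error, forgetting=0.95, learning_rate=5.0,
                               gain_limits=(0.0, 50.0)):
            """Assist-as-needed update: assistance decays when errors are small (forgetting term)
            and grows when errors are large (error term)."""
            new_gain = forgetting * gain + learning_rate * abs(tracking_error)
            return float(np.clip(new_gain, *gain_limits))

        # Illustrative use: the gain rises after a large error, then slowly fades away.
        gain = 10.0
        for error in [0.10, 0.02, 0.01, 0.005]:
            gain = update_assist_gain(gain, error)
            print(f"assist gain: {gain:.2f}")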

    Tactile Weight Rendering: A Review for Researchers and Developers

    Haptic rendering of weight plays an essential role in naturalistic object interaction in virtual environments. While kinesthetic devices have traditionally been used for this aim by applying forces on the limbs, tactile interfaces acting on the skin have recently offered potential solutions to enhance or substitute kinesthetic ones. Here, we aim to provide an in-depth overview and comparison of existing tactile weight rendering approaches. We categorized these approaches based on their type of stimulation into asymmetric vibration and skin stretch, further divided according to the working mechanism of the devices. Then, we compared these approaches using various criteria, including physical, mechanical, and perceptual characteristics of the reported devices and their potential applications. We found that asymmetric vibration devices have the smallest form factor, while skin stretch devices relying on the motion of flat surfaces, belts, or tactors present numerous mechanical and perceptual advantages for scenarios requiring more accurate weight rendering. Finally, we discussed the selection of the proposed categorization of devices and their application scopes, together with the limitations and opportunities for future research. We hope this study guides the development and use of tactile interfaces to achieve a more naturalistic object interaction and manipulation in virtual environments.
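
    As a rough illustration of the asymmetric-vibration principle discussed in the review, the sketch below generates a zero-mean drive signal in which a brief, strong phase alternates with a long, weak phase; because the skin responds nonlinearly, such signals can create an illusory net pull toward the strong phase. The waveform shape, frequency, and amplitude are illustrative assumptions and do not reproduce any specific device covered by the review.

        import numpy as np

        def asymmetric_vibration(freq_hz=75.0, duration_s=1.0, sample_rate=2000,
                                 strong_fraction=0.25, amplitude=1.0):
            """Zero-mean actuator drive: a short, strong phase opposed by a long, weak phase."""
            t = np.arange(0.0, duration_s, 1.0 / sample_rate)
            phase = (t * freq_hz) % 1.0
            strong = phase < strong_fraction
            # Weak-phase level chosen so the signal has no DC component (no steady push).
            weak_level = -amplitude * strong_fraction / (1.0 - strong_fraction)
            return t, np.where(strong, amplitude, weak_level)

        t, drive = asymmetric_vibration()
        print(f"mean drive level: {drive.mean():+.4f}")  # close to zero: asymmetry, not a net offset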

    The effect of haptic guidance, aging, and initial skill level on motor learning of a steering task

    In a previous study, we found that haptic guidance from a robotic steering wheel can improve short-term learning of steering of a simulated vehicle, in contrast to several studies of other tasks that had found that the guidance either impairs or does not aid motor learning. In this study, we examined whether haptic guidance-as-needed can improve long-term retention (across 1 week) of the steering task, with age and initial skill level as independent variables. Training with guidance-as-needed allowed all participants to learn to steer without experiencing large errors. For young participants (age 18–30), training with guidance-as-needed produced better long-term retention of driving skill than did training without guidance. For older participants (age 65–92), training with guidance-as-needed improved long-term retention in tracking error, but not significantly. However, for a subset of less skilled, older subjects, training with guidance-as-needed significantly improved long-term retention. The benefits of guidance-based training were most evident as an improved ability to straighten the vehicle direction when coming out of turns. In general, older participants not only systematically performed worse at the task than younger subjects (errors approximately 3 times greater), but also apparently learned more slowly, forgetting a greater percentage of the learned task during the 1-week layoffs between the experimental sessions. This study demonstrates that training with haptic guidance can benefit long-term retention of a driving skill for young and for some older drivers. Training with haptic guidance is more useful for people with less initial skill.
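
    The guidance-as-needed scheme studied here can be illustrated with a simple deadband controller that applies corrective steering torque only once the lateral tracking error exceeds a tolerance, so participants who stay close to the target path feel little or no assistance. The Python sketch below is a hypothetical illustration; the tolerance, gain, and saturation values are assumptions, not the parameters of the controller used in the study.

        def guidance_torque(lane_error, tolerance=0.2, gain=3.0, max_torque=2.0):
            """Corrective steering torque applied only outside the error deadband.

            lane_error: signed lateral deviation from the target path (e.g., in meters).
            Returns zero inside the tolerance band; otherwise a torque proportional to the
            excess error, saturated so the guidance stays gentle, directed back toward the path.
            """
            excess = abs(lane_error) - tolerance
            if excess <= 0.0:
                return 0.0
            direction = -1.0 if lane_error > 0 else 1.0
            return direction * min(gain * excess, max_torque)

        # Example: no help near the path, growing (then saturated) help for larger deviations.
        for err in [0.1, 0.3, 1.5]:
            print(err, guidance_torque(err))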

    Enhancing touch sensibility by sensory retraining in a sensory discrimination task via haptic rendering

    Stroke survivors are commonly affected by somatosensory impairment, hampering their ability to interpret somatosensory information. Somatosensory information has been shown to critically support movement execution in healthy individuals and stroke survivors. Despite the detrimental effect of somatosensory impairments on performing activities of daily living, somatosensory training—in stark contrast to motor training—does not represent standard care in neurorehabilitation. Reasons for the neglected somatosensory treatment are the lack of high-quality research demonstrating the benefits of somatosensory interventions on stroke recovery, the unavailability of reliable quantitative assessments of sensorimotor deficits, and the labor-intensive nature of somatosensory training that relies on therapists guiding the hands of patients with motor impairments. To address this clinical need, we developed a virtual reality-based robotic texture discrimination task to assess and train touch sensibility. Our system incorporates the possibility to robotically guide the participants' hands during texture exploration (i.e., passive touch) or to allow non-guided free texture exploration (i.e., active touch). We ran a 3-day experiment with thirty-six healthy participants who were asked to discriminate the odd texture among three visually identical textures—haptically rendered with the robotic device—following the method of constant stimuli. All participants trained with the passive and active conditions in randomized order on different days. We investigated the reliability of our system using the Intraclass Correlation Coefficient (ICC). We also evaluated the enhancement of participants' touch sensibility via somatosensory retraining and compared whether this enhancement differed between training with active vs. passive conditions. Our results showed that participants significantly improved their task performance after training. Moreover, we found that training effects were not significantly different between active and passive conditions, yet passive exploration seemed to increase participants' perceived competence. The reliability of our system ranged from poor (in the active condition) to moderate and good (in the passive condition), probably due to the dependence of the ICC on the between-subject variability, which in a healthy population is usually small. Together, our virtual reality-based robotic haptic system may be a key asset for evaluating and retraining sensory loss with minimal supervision, especially for brain-injured patients who require guidance to move their hands.
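
    The reliability analysis referred to above rests on the Intraclass Correlation Coefficient computed from repeated measurements of each participant. For readers unfamiliar with how such a coefficient is obtained, the sketch below implements the standard two-way random-effects, absolute-agreement, single-measurement form, ICC(2,1), following Shrout and Fleiss; the example data are invented and the function is a generic illustration rather than the authors' analysis code.

        import numpy as np

        def icc_2_1(scores):
            """ICC(2,1): two-way random effects, absolute agreement, single measurement.

            scores: array of shape (n_subjects, n_sessions), e.g., one task-performance
            value per participant per test day.
            """
            x = np.asarray(scores, dtype=float)
            n, k = x.shape
            grand = x.mean()
            ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between-subject variability
            ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between-session variability
            ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
            msr = ss_rows / (n - 1)
            msc = ss_cols / (k - 1)
            mse = ss_err / ((n - 1) * (k - 1))
            return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

        # Hypothetical example: four participants measured on two days.
        print(round(icc_2_1([[0.60, 0.65], [0.70, 0.72], [0.55, 0.50], [0.80, 0.83]]), 2))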

    Enhancing stroke rehabilitation with whole-hand haptic rendering: development and clinical usability evaluation of a novel upper-limb rehabilitation device

    Introduction: There is currently a lack of easy-to-use and effective robotic devices for upper-limb rehabilitation after stroke. Importantly, most current systems lack the provision of somatosensory information that is congruent with the virtual training task. This paper introduces a novel haptic robotic system designed for upper-limb rehabilitation, focusing on enhancing sensorimotor rehabilitation through comprehensive haptic rendering. Methods: We developed a novel haptic rehabilitation device with a unique combination of degrees of freedom that allows the virtual training of functional reach and grasp tasks, where we use a physics engine-based haptic rendering method to render whole-hand interactions between the patients’ hands and virtual tangible objects. To evaluate the feasibility of our system, we performed a clinical mixed-method usability study with seven patients and seven therapists working in neurorehabilitation. We employed standardized questionnaires to gather quantitative data and performed semi-structured interviews with all participants to gain qualitative insights into the perceived usability and usefulness of our technological solution. Results: The device demonstrated ease of use and adaptability to various hand sizes without extensive setup. Therapists and patients reported high satisfaction levels, with the system facilitating engaging and meaningful rehabilitation exercises. Participants provided notably positive feedback, particularly emphasizing the system’s available degrees of freedom and its haptic rendering capabilities. Therapists expressed confidence in the transferability of sensorimotor skills learned with our system to activities of daily living, although further investigation is needed to confirm this. Conclusion: The novel haptic robotic system effectively supports upper-limb rehabilitation post-stroke, offering high-fidelity haptic feedback and engaging training tasks. Its clinical usability, combined with positive feedback from both therapists and patients, underscores its potential to enhance robotic neurorehabilitation.
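
    Physics engine-based haptic rendering of the kind described above typically derives fingertip forces from how deeply a finger proxy penetrates a virtual object, using a spring-damper (penalty) contact model. The sketch below is a minimal, hypothetical Python illustration of one such contact force; the stiffness and damping values are arbitrary and the function is not taken from the paper's implementation.

        import numpy as np

        def penalty_contact_force(finger_pos, finger_vel, surface_point, surface_normal,
                                  stiffness=500.0, damping=2.0):
            """Spring-damper (penalty) force for one fingertip against a virtual surface.

            Positions in meters, velocities in m/s, force in newtons. Penetration is measured
            along the outward surface normal; no force is applied when there is no contact.
            """
            normal = np.asarray(surface_normal, dtype=float)
            normal /= np.linalg.norm(normal)
            penetration = float(np.dot(np.asarray(surface_point, dtype=float)
                                       - np.asarray(finger_pos, dtype=float), normal))
            if penetration <= 0.0:                       # fingertip is outside the object
                return np.zeros(3)
            normal_velocity = float(np.dot(np.asarray(finger_vel, dtype=float), normal))
            magnitude = max(stiffness * penetration - damping * normal_velocity, 0.0)
            return magnitude * normal                    # always pushes out of the surface

        # Example: fingertip 2 mm inside a horizontal surface, still moving inward at 5 cm/s.
        print(penalty_contact_force([0, 0, -0.002], [0, 0, -0.05], [0, 0, 0], [0, 0, 1]))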

    Enhancing touch sensibility with sensory electrical stimulation and sensory retraining

    A large proportion of stroke survivors suffer from sensory loss, negatively impacting their independence, quality of life, and neurorehabilitation prognosis. Despite the high prevalence of somatosensory impairments, our understanding of somatosensory interventions such as sensory electrical stimulation (SES) in neurorehabilitation is limited. We aimed to study the effectiveness of SES combined with a sensory discrimination task in a well-controlled virtual environment in healthy participants, setting a foundation for its potential application in stroke rehabilitation. We employed electroencephalography (EEG) to gain a better understanding of the underlying neural mechanisms and dynamics associated with sensory training and SES. We conducted a single-session experiment with 26 healthy participants who explored a set of three visually identical virtual textures—haptically rendered by a robotic device and that differed in their spatial period—while physically guided by the robot to identify the odd texture. The experiment consisted of three phases: pre-intervention, intervention, and post-intervention. Half the participants received subthreshold whole-hand SES during the intervention, while the other half received sham stimulation. We evaluated changes in task performance—assessed by the probability of correct responses—before and after intervention and between groups. We also evaluated differences in the exploration behavior, e.g., scanning speed. EEG was employed to examine the effects of the intervention on brain activity, particularly in the alpha frequency band (8–13 Hz) associated with sensory processing. We found that participants in the SES group improved their task performance after intervention and their scanning speed during and after intervention, while the sham group did not improve their task performance. However, the differences in task performance improvements between groups only approached significance. Furthermore, we found that alpha power was sensitive to the effects of SES; participants in the stimulation group exhibited enhanced brain signals associated with improved touch sensitivity likely due to the effects of SES on the central nervous system, while the increase in alpha power for the sham group was less pronounced. Our findings suggest that SES enhances texture discrimination after training and has a positive effect on sensory-related brain areas. Further research involving brain-injured patients is needed to confirm the potential benefit of our solution in neurorehabilitation.
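
    The alpha-band analysis mentioned above amounts to estimating spectral power between 8 and 13 Hz for each EEG channel. The sketch below shows one common way to do this with a Welch power spectral density estimate from SciPy; the sampling rate, window length, and synthetic test signal are illustrative assumptions, not the authors' processing pipeline.

        import numpy as np
        from scipy.signal import welch

        def alpha_band_power(eeg_channel, fs=250.0, band=(8.0, 13.0)):
            """Mean power spectral density of one EEG channel within the alpha band.

            eeg_channel: 1-D array of voltage samples; fs: sampling rate in Hz.
            """
            freqs, psd = welch(eeg_channel, fs=fs, nperseg=int(2 * fs))
            mask = (freqs >= band[0]) & (freqs <= band[1])
            return float(psd[mask].mean())

        # Synthetic check: a 10 Hz oscillation buried in noise should show clear alpha power.
        fs = 250.0
        t = np.arange(0.0, 10.0, 1.0 / fs)
        signal = np.sin(2 * np.pi * 10.0 * t) + 0.5 * np.random.randn(t.size)
        print(alpha_band_power(signal, fs))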

    Evaluating tactile feedback in addition to kinesthetic feedback for haptic shape rendering: a pilot study

    In current virtual reality settings for motor skill training, only visual information is usually provided regarding the virtual objects the trainee interacts with. However, information gathered through cutaneous (tactile feedback) and muscle mechanoreceptors (kinesthetic feedback) regarding, e.g., object shape, is crucial to successfully interact with those objects. To provide this essential information, previous haptic interfaces have aimed to render either tactile or kinesthetic feedback, while the effectiveness of multimodal tactile and kinesthetic feedback on the perception of the characteristics of virtual objects remains largely unexplored. Here, we present the results from an experiment we conducted with sixteen participants to evaluate the effectiveness of multimodal tactile and kinesthetic feedback on shape perception. Using a within-subject design, participants were asked to reproduce virtual shapes after exploring them without visual feedback and with either congruent tactile and kinesthetic feedback or with only kinesthetic feedback. Tactile feedback was provided with a cable-driven platform mounted on the fingertip, while kinesthetic feedback was provided using a haptic glove. To measure the participants’ ability to perceive and reproduce the rendered shapes, we measured the time participants spent exploring and reproducing the shapes and the error between the rendered and reproduced shapes after exploration. Furthermore, we assessed the participants’ workload and motivation using well-established questionnaires. We found that concurrent tactile and kinesthetic feedback during shape exploration resulted in lower reproduction errors and longer reproduction times. The longer reproduction times for the combined condition may indicate that participants could learn the shapes better and, thus, were more careful when reproducing them. We did not find differences between conditions in the time spent exploring the shapes or the participants’ workload and motivation. The lack of differences in workload between conditions could be attributed to the reported minimal-to-intermediate workload levels, suggesting that there was little room to further reduce the workload. Our work highlights the potential advantages of multimodal congruent tactile and kinesthetic feedback when interacting with tangible virtual objects, with applications in virtual simulators for hands-on training.
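
    A simple way to score how closely a reproduced shape matches the rendered one, in the spirit of the error measure described above, is the mean distance from each point of the reproduced trace to the nearest point of the reference shape. The Python sketch below implements that generic metric; it is an illustration, not the specific error definition used in the study.

        import numpy as np

        def mean_shape_error(reproduced, reference):
            """Mean nearest-neighbor distance from reproduced points to the reference shape.

            reproduced, reference: arrays of shape (n_points, dims), e.g., fingertip
            positions sampled along each trace.
            """
            reproduced = np.asarray(reproduced, dtype=float)
            reference = np.asarray(reference, dtype=float)
            # Pairwise distances between every reproduced point and every reference point.
            dists = np.linalg.norm(reproduced[:, None, :] - reference[None, :, :], axis=-1)
            return float(dists.min(axis=1).mean())

        # Hypothetical example: a reproduced circle slightly offset from the reference circle.
        theta = np.linspace(0.0, 2.0 * np.pi, 100)
        reference = np.stack([np.cos(theta), np.sin(theta)], axis=1)
        reproduced = reference + np.array([0.02, 0.0])
        print(round(mean_shape_error(reproduced, reference), 3))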
