
    Optimizing User Integration for Individualized Rehabilitation

    User integration with assistive devices or rehabilitation protocols to improve movement function is a key principle for developers to consider in order to truly optimize performance gains. Better integration may entail customizing the operation of devices and training programs according to several user characteristics during the execution of functional tasks. These characteristics may include physical dimensions, residual capabilities, restored sensory feedback, cognitive perception, and stereotypical actions.

    Haptic Interaction with a Guide Robot in Zero Visibility

    Search and rescue operations are often undertaken in dark and noisy environments in which rescue teams must rely on haptic feedback for exploration and safe exit. However, little attention has been paid specifically to haptic sensitivity in such contexts, or to the possibility of enhancing communicational proficiency in the haptic mode as a life-preserving measure. The potential of robot swarms for search and rescue was shown by the Guardians project (EU, 2006-2010); however, the project also exposed the problem of human-robot interaction in smoky (zero-visibility) and noisy conditions. The REINS project (UK, 2011-2015) focused on human-robot interaction in such conditions. This research is a body of work (done as part of the REINS project) that investigates the haptic interaction of a person with a guide robot in zero visibility. The thesis first reflects upon real-world scenarios in which people use the haptic sense to interact in zero visibility (such as interaction among firefighters and the symbiotic relationship between visually impaired people and guide dogs). In addition, it reflects on the sensitivity and trainability of the haptic sense for use in such interaction. The thesis presents an analysis and evaluation of the design of a physical interface (designed by the consortium of the REINS project) connecting the human and the robotic guide in poor visibility conditions. Finally, it lays a foundation for the design of test cases to evaluate human-robot haptic interaction, taking into consideration the two aspects of the interaction, namely locomotion guidance and environmental exploration.

    Effects of sensory cueing in virtual motor rehabilitation. A review.

    Objectives: To critically identify studies that evaluate the effects of cueing in virtual motor rehabilitation in patients with different neurological disorders, and to make recommendations for future studies. Methods: MEDLINE®, IEEE Xplore, Science Direct, the Cochrane Library, and Web of Science were searched up to February 2015. We included studies that investigate the effects of cueing in virtual motor rehabilitation, related to interventions for the upper or lower extremities, using auditory, visual, and tactile cues on motor performance in non-immersive, semi-immersive, or fully immersive virtual environments. These studies compared virtual cueing with an alternative intervention or no intervention. Results: Ten studies with a total of 153 patients were included in the review. All of them address the impact of cueing in virtual motor rehabilitation, regardless of the pathological condition. After selecting the articles, the following variables were extracted: year of publication, sample size, study design, type of cueing, intervention procedures, outcome measures, and main findings. In most of the studies, outcomes were evaluated at baseline and at the end of treatment. All studies except one showed improvements in some or all outcomes after the intervention or, in some cases, in favor of the virtual rehabilitation group compared to the control group. Conclusions: Virtual cueing seems to be a promising approach to improve motor learning, providing a channel for non-pharmacological therapeutic intervention in different neurological disorders. However, further studies using larger and more homogeneous groups of patients are required to confirm these findings.

    Data analytics for image visual complexity and kinect-based videos of rehabilitation exercises

    With the recent advances in computer vision and pattern recognition, methods from these fields are successfully applied to solve problems in various domains, including health care and the social sciences. In this thesis, two such problems, from different domains, are discussed. First, an application of computer vision and broader pattern recognition in physical therapy is presented. Home-based physical therapy is an essential part of the recovery process, in which the patient is prescribed specific exercises in order to improve symptoms and the daily functioning of the body. However, poor adherence to the prescribed exercises is a common problem. In our work, we explore methods for improving the home-based physical therapy experience. We begin by proposing DyAd, a dynamic difficulty adjustment system that captures the trajectory of the hand movement, evaluates the user's performance quantitatively, and adjusts the difficulty level for the next trial of the exercise based on the performance measurements. Next, we introduce ExerciseCheck, a remote monitoring and evaluation platform for home-based physical therapy. ExerciseCheck is capable of capturing exercise information, evaluating performance, providing therapeutic feedback to the patient and the therapist, checking the progress of the user over the course of the physical therapy, and supporting the patient throughout this period. In our experiments, patients with Parkinson's disease have tested our system at a clinic and in their homes during their physical therapy period. Our results suggest that ExerciseCheck is a user-friendly application and can assist patients by providing motivation and guidance to ensure correct execution of the required exercises. As the second application, within the computer vision paradigm, we focus on visual complexity, an image attribute that humans can subjectively evaluate based on the level of detail in the image.
Visual complexity has been studied in psychophysics, cognitive science, and, more recently, computer vision, for the purposes of product design, web design, advertising, etc. We first introduce a diverse visual complexity dataset that comprises seven image categories. We collect the ground-truth scores by comparing the pairwise relationships of images and then convert the pairwise scores to absolute scores using mathematical methods. Furthermore, we propose a method to measure visual complexity that uses unsupervised information extraction from intermediate convolutional layers of deep neural networks. We derive an activation energy metric that combines convolutional layer activations to quantify visual complexity. The high correlations between ground-truth labels and computed energy scores in our experiments show the superiority of our method compared to previous work. Finally, as an example of the relationship between visual complexity and other image attributes, we demonstrate that, within the context of a category, visually more complex images are more memorable to human observers.
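The pairwise-to-absolute score conversion mentioned in the abstract can be illustrated with a Bradley-Terry model; this is a sketch under an assumption, since the abstract does not name the exact mathematical method used in the thesis:

```python
import numpy as np

def bradley_terry(wins, n_iter=200):
    """Estimate one absolute score per item from a pairwise win-count matrix.

    wins[i, j] = how often image i was judged more complex than image j.
    Uses the standard minorization-maximization update; returns scores
    normalized to sum to 1 (higher = more complex)."""
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(n_iter):
        comparisons = wins + wins.T                     # total i-vs-j trials
        denom = comparisons / (p[:, None] + p[None, :])
        p = wins.sum(axis=1) / denom.sum(axis=1)
        p /= p.sum()                                    # fix the arbitrary scale
    return p

# Three images; image 2 wins most pairwise "more complex" judgments.
wins = np.array([[0, 2, 1],
                 [8, 0, 3],
                 [9, 7, 0]], dtype=float)
scores = bradley_terry(wins)
```

The resulting scores preserve the dominance order implied by the win counts, which is the property an absolute complexity scale needs.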

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
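A software-only fuzzy estimate of player emotion, in the spirit of the FLAME-based model described above, can be sketched as follows. The input variables, membership functions, and rule set here are illustrative assumptions, not the paper's actual design:

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def estimate_satisfaction(health, success_rate):
    """Map two in-game signals (both in [0, 1]) to a satisfaction score in
    [0, 1] using Mamdani-style rules and weighted-average defuzzification."""
    low_health = tri(health, -0.5, 0.0, 0.6)
    high_health = tri(health, 0.4, 1.0, 1.5)
    losing = tri(success_rate, -0.5, 0.0, 0.6)
    winning = tri(success_rate, 0.4, 1.0, 1.5)

    # Each rule: (firing strength via fuzzy AND = min, output level).
    rules = [
        (min(low_health, losing), 0.1),    # frustrated
        (min(high_health, winning), 0.9),  # satisfied
        (min(low_health, winning), 0.6),   # tense but engaged
        (min(high_health, losing), 0.4),   # coasting / bored
    ]
    total = sum(w for w, _ in rules)
    return sum(w * v for w, v in rules) / total if total else 0.5
```

The point of the technique is that such rules fire on quantities the game engine already tracks, so no physiological sensors are required.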

    ROBOT-ASSISTED PEDIATRIC REHABILITATION OF UPPER LIMB FUNCTIONS

    Ph.D. thesis (Doctor of Philosophy)

    EEG-based brain-computer interface with visual and haptic feedback

    Efficient training of subjects with feedback is essential to brain-computer interface (BCI) research. In most previous studies, subjects have received visual feedback; other feedback modalities could, however, better serve patients with visual impairments and users whose visual attention is needed elsewhere. In previous studies, auditory feedback was significantly worse than visual feedback for subject training. Haptic feedback (vibrotactile stimulation) could be better suited for brain-computer communication than auditory feedback. Six able-bodied subjects without previous BCI experience received haptic or visual feedback, or both, in separate sessions while learning to control a two-class BCI using imagery of left- and right-hand movements. A BCI system was designed and implemented for the experiments: the TKK BCI consists of components capable of real-time signal acquisition, signal processing, feedback presentation, and control of applications. Feedback was presented once every second, either on a screen or with haptic elements attached to the base of the subject's neck. The subjects achieved average classification accuracies of 67% with haptic and 68% with visual feedback. One subject achieved as high as 88.8% accuracy in a single session. Stable features selected from sensorimotor rhythms within the 8-12 Hz and 18-26 Hz frequency bands provided the highest accuracies. Only minor interference from the haptic stimulation was observed within the 8-30 Hz frequency band. The results indicate no clear difference between learning with haptic and with visual feedback. Most subjects found the haptic feedback natural and comfortable. Haptic feedback could thus substitute for visual feedback and free vision for other concurrent tasks. Further studies, especially with motor-disabled patients in real home environments, will be necessary to confirm these results.
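The sensorimotor-rhythm features mentioned in the abstract (8-12 Hz and 18-26 Hz bands) are commonly computed as per-channel spectral band power. A minimal sketch, assuming a 250 Hz sampling rate and a plain FFT periodogram; the thesis's actual feature-extraction pipeline may differ:

```python
import numpy as np

def bandpower(signal, fs, band):
    """Mean spectral power of `signal` within `band` (Hz), via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def mi_features(eeg, fs=250.0):
    """One row per channel: mu (8-12 Hz) and beta (18-26 Hz) band power."""
    return np.array([[bandpower(ch, fs, (8, 12)),
                      bandpower(ch, fs, (18, 26))] for ch in eeg])

# Synthetic 1-s, 2-channel example: channel 0 carries a strong 10 Hz rhythm,
# channel 1 only a weak 22 Hz rhythm.
t = np.arange(0, 1, 1 / 250.0)
eeg = np.vstack([np.sin(2 * np.pi * 10 * t),
                 0.1 * np.sin(2 * np.pi * 22 * t)])
feats = mi_features(eeg)
```

Feature vectors of this form are what a two-class motor-imagery classifier would then separate into left- versus right-hand imagery.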

    Development of a Wearable Mechatronic Elbow Brace for Postoperative Motion Rehabilitation

    This thesis describes the development of a wearable mechatronic brace for upper limb rehabilitation that can be used at any stage of motion training after surgical reconstruction of the brachial plexus nerves. The results of the mechanical design and the work completed towards finding the best torque transmission system are presented herein. As part of this mechatronic system, a customized control system was designed, tested, and modified. The control strategy was improved by replacing a PID controller with a cascade controller. Although the experiments have shown that the proposed device can be successfully used for muscle training, further assessment of the device, with the help of data from patients with brachial plexus injury (BPI), is required to improve the control strategy. Unique features of this device include the combination of adjustability and modularity, as well as the passive adjustment required to compensate for the carrying angle.
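A cascade controller of the kind mentioned above typically nests an inner velocity loop inside an outer position loop. The sketch below makes that structure concrete; the PI form, the gains, and the unit-inertia joint model are illustrative assumptions, not the thesis's actual controller:

```python
class PI:
    """Minimal proportional-integral controller."""
    def __init__(self, kp, ki):
        self.kp, self.ki, self.integral = kp, ki, 0.0

    def step(self, error, dt):
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral

def cascade_step(pos_ref, pos, vel, outer, inner, dt):
    """Outer loop turns position error into a velocity setpoint;
    inner loop turns velocity error into a torque command."""
    vel_ref = outer.step(pos_ref - pos, dt)
    return inner.step(vel_ref - vel, dt)

# Drive a unit-inertia elbow joint from 0 rad to a 1 rad setpoint.
outer, inner = PI(2.0, 0.0), PI(5.0, 0.0)
pos, vel, dt = 0.0, 0.0, 0.01
for _ in range(500):                      # 5 s of simulated time
    torque = cascade_step(1.0, pos, vel, outer, inner, dt)
    vel += torque * dt                    # unit inertia: acceleration = torque
    pos += vel * dt
```

The appeal of the cascade structure is that the fast inner loop rejects torque-level disturbances before they reach the slower position loop, which a single PID cannot do.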