368 research outputs found

    Detecting affective states in virtual rehabilitation

    Virtual rehabilitation supports motor training following stroke by means of tailored virtual environments. To optimize therapy outcome, virtual rehabilitation systems automatically adapt to each patient's changing needs. Adaptation decisions should ideally be guided by both the observable performance and the hidden mind state of the user. We hypothesize that some affective aspects can be inferred from observable metrics. Here we present preliminary results of a classification exercise to decide on four states: tiredness, tension, pain, and satisfaction. Descriptors of 3D hand movement and finger pressure were collected from two post-stroke participants while they practiced on a virtual rehabilitation platform. Linear Support Vector Machine models were learnt to uncover a predictive relation between these observations and the affective states considered. Initial results are promising (ROC area under the curve, mean ± std: 0.713 ± 0.137). Confirmation of these results would open the door to incorporating surrogates of mind state into the algorithm that decides on therapy adaptation.
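    As a rough illustration of the per-state classification pipeline described above, the sketch below trains one linear SVM per affective state on placeholder movement/pressure features and reports cross-validated ROC AUC; it assumes scikit-learn and synthetic data, not the study's actual features or labels.

```python
# Minimal sketch, assuming scikit-learn: one linear SVM per affective state,
# evaluated with cross-validated ROC AUC. Features and labels are synthetic
# placeholders standing in for movement/pressure descriptors.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))           # placeholder movement/pressure features
labels = {                               # one binary label vector per state
    "tiredness": rng.integers(0, 2, 200),
    "tension": rng.integers(0, 2, 200),
    "pain": rng.integers(0, 2, 200),
    "satisfaction": rng.integers(0, 2, 200),
}

for state, y in labels.items():
    clf = LinearSVC(C=1.0)
    # Decision-function scores allow ROC AUC evaluation, as reported in the abstract.
    scores = cross_val_predict(clf, X, y, cv=5, method="decision_function")
    print(f"{state}: ROC AUC = {roc_auc_score(y, scores):.3f}")
```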

    Multi-label and multimodal classifier for affective states recognition in virtual rehabilitation

    Computational systems that process multiple affective states may benefit from explicitly considering the interaction between the states to enhance their recognition performance. This work proposes the combination of a multi-label classifier, the Circular Classifier Chain (CCC), with a multimodal classifier, Fusion using a Semi-Naive Bayesian classifier (FSNBC), to explicitly include the dependencies between multiple affective states during the automatic recognition process. This combination of classifiers is applied to a virtual rehabilitation context with post-stroke patients. We collected data from post-stroke patients, including finger pressure, hand movements, and facial expressions, during ten longitudinal sessions. Videos of the sessions were labelled by clinicians to recognize four states: tiredness, anxiety, pain, and engagement. Each state was modelled by an FSNBC receiving the finger pressure, hand movement, and facial expression information. The four FSNBCs were linked in the CCC to exploit the dependency relationships between the states. The CCC converged within at most five iterations for all patients. Results (ROC AUC) of the CCC with the FSNBC exceed 0.940 ± 0.045 (mean ± std. deviation) for the four states. Relationships of mutual exclusion between engagement and all the other states, and co-occurrences between pain and anxiety, were detected and discussed.
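    The sketch below illustrates the circular-chain idea described above: each state's classifier receives the current predictions for the other states as extra inputs, and the chain is re-run until the joint labelling stops changing. A plain Gaussian naive Bayes classifier stands in for the paper's semi-naive Bayesian fusion (FSNBC), and all features and labels are synthetic placeholders.

```python
# Illustrative circular classifier chain over four affective states. GaussianNB
# is a stand-in for the FSNBC described in the abstract; data are placeholders.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
states = ["tiredness", "anxiety", "pain", "engagement"]
X = rng.normal(size=(300, 10))                   # fused multimodal features
Y = rng.integers(0, 2, size=(300, len(states)))  # placeholder state labels

# Train one classifier per state, conditioned on the other states' labels.
models = []
for i, _ in enumerate(states):
    others = np.delete(Y, i, axis=1)
    models.append(GaussianNB().fit(np.hstack([X, others]), Y[:, i]))

def predict_ccc(x, max_iters=5):
    """Cycle around the chain until the joint prediction converges
    (the abstract reports convergence within at most 5 iterations)."""
    y = np.zeros(len(states), dtype=int)         # initial guess for all states
    for _ in range(max_iters):
        prev = y.copy()
        for i, model in enumerate(models):
            others = np.delete(y, i).reshape(1, -1)
            y[i] = model.predict(np.hstack([x.reshape(1, -1), others]))[0]
        if np.array_equal(y, prev):
            break
    return dict(zip(states, y.tolist()))

print(predict_ccc(X[0]))
```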

    Unobtrusive inference of affective states in virtual rehabilitation from upper limb motions: a feasibility study

    Virtual rehabilitation environments could afford greater patient personalization if they were able to harness the patient's affective state. Four states, namely anxiety, pain, engagement, and tiredness (either physical or psychological), were hypothesized to be inferable from observable metrics of hand location and gripping strength that are relevant for rehabilitation. The contributions are: (a) a multiresolution classifier built from semi-naïve Bayesian classifiers, and (b) the establishment of predictive relations for the considered states from the motor proxies, capitalizing on the proposed classifier, with recognition levels sufficient for exploitation. Streams of 3D hand locations and gripping strength were recorded from five post-stroke patients whilst they underwent motor rehabilitation therapy administered through virtual rehabilitation across 10 sessions over 4 weeks. Features from the streams characterized the motor dynamics, while spontaneous manifestations of the states were labelled from concomitant videos by experts for supervised classification. The new classifier was compared against baseline support vector machine (SVM) and random forest (RF) classifiers, with all three exhibiting comparable performance. Inference of the aforementioned states from the chosen motor surrogates appears feasible, paving the way for increased personalization of virtual motor neurorehabilitation therapies.
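    As a hedged sketch of the baseline comparison mentioned above (the paper's own multiresolution semi-naïve Bayesian classifier is not reproduced here), the following compares SVM and random forest classifiers by cross-validated ROC AUC on placeholder features standing in for hand-location and grip-strength summaries.

```python
# Baseline comparison sketch: SVM vs. random forest on synthetic stand-in
# features; one affective state (binary label) at a time.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(250, 20))   # e.g. per-window statistics of 3D hand position and grip force
y = rng.integers(0, 2, 250)      # placeholder labels for one state

baselines = {
    "SVM": SVC(kernel="rbf", probability=True),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, clf in baselines.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: ROC AUC = {auc.mean():.3f} ± {auc.std():.3f}")
```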

    Effect of the level of task abstraction on the transfer of knowledge from virtual environments in cognitive and motor tasks

    Introduction: Virtual environments are increasingly being used for training. It is not fully understood which elements of virtual environments have the most impact on the sought-after transfer of skills to the real environment, nor how virtual training is integrated by the brain. We analyzed how the level of abstraction of a virtually trained task modulates brain activity and the subsequent ability to execute the task in the real environment, and how this learning generalizes to other tasks. Training a task under a low level of abstraction should lead to higher transfer of skills to similar tasks but compromised generalization of learning, whereas a higher level of abstraction should facilitate generalization of learning to different tasks at the cost of task-specific effectiveness. Methods: A total of 25 participants were trained and subsequently evaluated on a cognitive and a motor task following four training regimes, considering real vs. virtual training and low vs. high task abstraction. Performance scores, cognitive load, and electroencephalography signals were recorded. Transfer of knowledge was assessed by comparing performance scores in the virtual vs. real environment. Results: Transfer of the trained skills was reflected in higher scores on the same task under low abstraction, whereas the ability to generalize the trained skills manifested as higher scores under high abstraction, in agreement with our hypothesis. Spatiotemporal analysis of the electroencephalography revealed higher initial demands on brain resources, which decreased as skills were acquired. Discussion: Our results suggest that the level of task abstraction during virtual training influences how skills are assimilated at the brain level and modulates their manifestation at the behavioral level. We expect this research to provide supporting evidence for improving the design of virtual training tasks.
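    A toy sketch of how the transfer comparison described above might be tabulated is given below: performance per participant under the 2x2 design (training environment by task abstraction), with transfer read off the real-environment score. Column names and numbers are invented for illustration only.

```python
# Toy tabulation of a 2x2 training design; all values are invented placeholders.
import pandas as pd

scores = pd.DataFrame({
    "participant":    [1, 1, 2, 2, 3, 3],
    "training":       ["virtual", "virtual", "virtual", "virtual", "real", "real"],
    "abstraction":    ["low", "high", "low", "high", "low", "high"],
    "real_env_score": [0.82, 0.74, 0.79, 0.71, 0.88, 0.80],
})

# Mean real-environment performance per training regime: low abstraction is expected
# to favour same-task transfer, high abstraction to favour generalization.
print(scores.groupby(["training", "abstraction"])["real_env_score"].mean())
```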

    Collagen sequence analysis reveals evolutionary history of extinct West Indies Nesophontes (‘island shrews’)

    Ancient biomolecule analyses are proving increasingly useful in the study of evolutionary patterns, including those of extinct organisms. Proteomic sequencing techniques complement genomic approaches, having the potential to examine lineages further back in time than is achievable using ancient DNA, given their less stringent preservation requirements. In this study, we demonstrate the ability to use collagen sequence analyses via proteomics to provide species delimitation as a foundation for informing evolutionary patterns. We uncover biogeographic information on an enigmatic and recently extinct lineage, Nesophontes, across its range on the Caribbean islands. First, evolutionary relationships reconstructed from collagen sequences reaffirm the affinity of Nesophontes and Solenodon as sister taxa within Solenodonota. This relationship helps lay the foundation for testing geographical isolation hypotheses across islands within the Greater Antilles, including movement from Cuba towards Hispaniola. Second, our results are consistent with Cuba having just two species of Nesophontes (N. micrus and N. major) that exhibit intrapopulation morphological variation. Finally, analysis of the recently described species from the Cayman Islands (N. hemicingulus) indicates that it is more closely related to the Cuban species N. major than to N. micrus, as previously speculated. Our proteomic sequencing improves our understanding of the origin, evolution, and distribution of this extinct mammal lineage, particularly with respect to the approximate timing of speciation. Such knowledge is vital for this biodiversity hotspot, where the magnitude of recent extinctions may obscure true estimates of past species richness.
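    Purely as a generic illustration of distance-based tree building from aligned collagen peptide sequences (the study's actual phylogenetic analysis is not reproduced here, and the alignment file name is hypothetical), a minimal Biopython sketch could look like this:

```python
# Generic distance-based tree from an aligned set of collagen sequences using
# Biopython; treat this as a sketch only, not the study's pipeline.
from Bio import AlignIO
from Bio.Phylo import draw_ascii
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("collagen_col1a1_alignment.fasta", "fasta")  # hypothetical file
distances = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distances)   # neighbour-joining topology
draw_ascii(tree)
```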

    Collaborative Gaze Channelling for Improved Cooperation During Robotic Assisted Surgery

    The use of multiple robots for performing complex tasks is becoming a common practice in many robot applications. When different operators are involved, effective cooperation with anticipated manoeuvres is important for seamless, synergistic control of all the end-effectors. In this paper, the concept of Collaborative Gaze Channelling (CGC) is presented for improved control of surgical robots during a shared task. Through eye tracking, the fixations of each operator are monitored and presented in a shared surgical workspace. CGC permits remote or physically separated collaborators to share their intention by visualising the eye gaze of their counterparts, and thus recovers, to a certain extent, the information about mutual intent that we rely upon in a face-to-face working setting. In this study, the efficiency of surgical manipulation with and without CGC for controlling a pair of bimanual surgical robots is evaluated by analysing the level of coordination of two independent operators. Fitts' law is used to compare the quality of movement with and without CGC. A total of 40 subjects were recruited for this study, and the results show that the proposed CGC framework exhibits significant improvement (p < 0.05) on all the motion indices used for quality assessment. This study demonstrates that visual guidance is an implicit yet effective way of communicating during collaborative tasks in robotic surgery. Detailed experimental validation results demonstrate the potential clinical value of the proposed CGC framework. © 2012 Biomedical Engineering Society.
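    A small sketch of the Fitts'-law style comparison referenced above, using the Shannon formulation ID = log2(D/W + 1) and throughput = ID / movement time; the distances, target widths, and movement times are invented placeholders, not data from the study.

```python
# Fitts'-law indices for hypothetical reaching movements with and without CGC.
import math

def index_of_difficulty(distance_mm: float, width_mm: float) -> float:
    """Shannon formulation of Fitts' index of difficulty (bits)."""
    return math.log2(distance_mm / width_mm + 1.0)

def throughput(distance_mm: float, width_mm: float, movement_time_s: float) -> float:
    """Throughput in bits per second for one movement."""
    return index_of_difficulty(distance_mm, width_mm) / movement_time_s

# Hypothetical trials: (distance, target width, movement time) per condition.
trials = {
    "with CGC":    [(120, 10, 0.9), (80, 8, 0.8), (150, 12, 1.0)],
    "without CGC": [(120, 10, 1.2), (80, 8, 1.0), (150, 12, 1.4)],
}
for condition, runs in trials.items():
    tp = [throughput(d, w, t) for d, w, t in runs]
    print(f"{condition}: mean throughput = {sum(tp) / len(tp):.2f} bits/s")
```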