
    EEG classifier cross-task transfer to avoid training sessions in robot-assisted rehabilitation

    Background: For individualized support of patients during rehabilitation, individual machine learning models must be learned from the human electroencephalogram (EEG). Our approach allows labeled training data to be recorded without the need for a dedicated training session. The planned exoskeleton-assisted rehabilitation enables bilateral mirror therapy, in which movement intentions can be inferred from the activity of the unaffected arm. During this therapy, labeled EEG data can be collected to enable movement predictions for the affected arm alone. Methods: A study was conducted with 8 healthy subjects to evaluate the performance of the classifier transfer approach. Each subject performed 3 runs of 40 self-intended unilateral and bilateral reaching movements toward a target while EEG data was recorded from 64 channels. A support vector machine (SVM) classifier was trained under both movement conditions to make predictions for the same type of movement. Furthermore, the classifier was evaluated on its ability to predict unilateral movements after being trained only on data from the bilateral movement condition. Results: The results show that the performance of a classifier trained on selected EEG channels evoked by bilateral movement intentions is not significantly reduced compared to a classifier trained directly on EEG data including unilateral movement intentions. Moreover, the results show that our approach also works with only 8 or even 4 channels. Conclusion: The proposed classifier transfer approach enables motion prediction without explicit collection of training data. Since the approach can be applied even with a small number of EEG channels, this speaks for its feasibility in real therapy sessions with patients and motivates further investigations with stroke patients. Comment: 11 pages, 6 figures, 1 table
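    As a rough illustration of the transfer idea, the sketch below trains a linear SVM on features from the bilateral condition and scores it on unilateral trials for shrinking channel subsets. It is a minimal sketch assuming random placeholder data, toy mean-amplitude features, and naive channel subsets; it is not the authors' pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder data: trials x channels x samples per condition (assumption).
X_bilateral = rng.standard_normal((120, 64, 200))
y_bilateral = rng.integers(0, 2, 120)      # movement vs. rest labels
X_unilateral = rng.standard_normal((120, 64, 200))
y_unilateral = rng.integers(0, 2, 120)

def features(X, channels):
    """Toy features: per-channel mean amplitude over the trial window."""
    return X[:, channels, :].mean(axis=2)

# Train on the bilateral condition, test on unilateral movements,
# for decreasing channel subsets (64, 8, 4 channels).
for chans in (list(range(64)), list(range(8)), list(range(4))):
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    clf.fit(features(X_bilateral, chans), y_bilateral)
    acc = clf.score(features(X_unilateral, chans), y_unilateral)
    print(f"{len(chans)} channels: transfer accuracy {acc:.2f}")
```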

    Continuous ErrP detections during multimodal human-robot interaction

    Human-in-the-loop approaches are of great importance for robot applications. In the presented study, we implemented a multimodal human-robot interaction (HRI) scenario in which a simulated robot communicates with its human partner through speech and gestures. The robot announces its intention verbally and selects the appropriate action using pointing gestures. The human partner, in turn, evaluates whether the robot's verbal announcement (intention) matches the action (pointing gesture) chosen by the robot. For cases in which the robot's verbal announcement does not match its action choice, we expect error-related potentials (ErrPs) in the human electroencephalogram (EEG). These intrinsic evaluations of robot actions by humans, evident in the EEG, were recorded in real time, continuously segmented online, and classified asynchronously. For feature selection, we propose an approach that combines forward and backward sliding windows to train a classifier. We achieved an average classification performance of 91% across 9 subjects. As expected, we also observed relatively high variability between subjects. In the future, the proposed feature selection approach will be extended to allow for customization of feature selection. To this end, the best combinations of forward and backward sliding windows will be selected automatically to account for inter-subject variability in classification performance. In addition, we plan to use the intrinsic human error evaluation, evident in the error case by the ErrP, in interactive reinforcement learning to improve multimodal human-robot interaction.
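    A minimal sketch of how forward and backward sliding windows might be combined for feature selection, assuming placeholder epochs and labels and a simple exhaustive search; the authors' actual segmentation and classifier details are not reproduced here.

```python
import numpy as np
from itertools import product
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, 8, 100))    # trials x channels x samples
labels = rng.integers(0, 2, 200)               # ErrP vs. no-ErrP (placeholder)

win = 20                                       # window length in samples
forward_starts = range(0, 40, 10)              # windows sliding from epoch start
backward_starts = range(60, 100 - win + 1, 10) # windows sliding toward epoch end

best_combo, best_score = None, 0.0
for f, b in product(forward_starts, backward_starts):
    # Concatenate mean-amplitude features from one forward and one backward window.
    feats = np.hstack([
        epochs[:, :, f:f + win].mean(axis=2),
        epochs[:, :, b:b + win].mean(axis=2),
    ])
    score = cross_val_score(SVC(kernel="linear"), feats, labels, cv=5).mean()
    if score > best_score:
        best_combo, best_score = (f, b), score

print("best window starts (forward, backward):", best_combo,
      "accuracy:", round(best_score, 2))
```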

    EEG and EMG dataset for the detection of errors introduced by an active orthosis device

    This paper presents a dataset containing recordings of the electroencephalogram (EEG) and the electromyogram (EMG) from eight subjects who were assisted in moving their right arm by an active orthosis device. The supported movements were elbow joint movements, i.e., flexion and extension of the right arm. While the orthosis was actively moving the subject's arm, some errors were deliberately introduced for short periods of time, during which the orthosis moved in the opposite direction. In this paper, we explain the experimental setup and present some behavioral analyses across all subjects. Additionally, we present an average event-related potential analysis for one subject to offer insights into the data quality and the EEG activity caused by the error introduction. The dataset described herein is openly accessible. The aim of this study was to provide a dataset to the research community, particularly for the development of new methods for the asynchronous detection of erroneous events from the EEG. We are especially interested in the tactile and haptic-mediated recognition of errors, which has not yet been sufficiently investigated in the literature. We hope that the detailed description of the orthosis and the experiment will enable its reproduction and facilitate a systematic investigation of the factors influencing the detection of erroneous behavior of assistive systems by a large community. Comment: Revised references to our datasets, general corrections of typos, and LaTeX template format changes; overall content unchanged
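    The snippet below shows one plausible way to epoch such recordings around the introduced errors and inspect the average ERP with MNE-Python. The file name and the "error_onset" annotation label are invented for illustration; the dataset's actual format and event coding may differ.

```python
import mne

# Hypothetical file name; the dataset's actual format may differ.
raw = mne.io.read_raw_edf("subject01_orthosis.edf", preload=True)
raw.filter(0.1, 30.0)  # typical ERP band-pass

# Assume an annotation marks the moment the orthosis reverses direction;
# the label "error_onset" is an invented placeholder.
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events,
                    event_id={"error_onset": event_id["error_onset"]},
                    tmin=-0.2, tmax=1.0, baseline=(-0.2, 0.0), preload=True)

evoked = epochs.average()  # average ERP across error trials
evoked.plot()              # inspect error-related deflections
```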

    Empirical Comparison of Distributed Source Localization Methods for Single-Trial Detection of Movement Preparation

    The development of technologies for the treatment of movement disorders, such as stroke, remains of particular interest in brain-computer interface (BCI) research. In this context, source localization methods (SLMs), which reconstruct the cerebral origin of brain activity measured outside the head, e.g., via electroencephalography (EEG), can add valuable insight into the current state and progress of treatment. However, in BCI research SLMs have often been considered solely as advanced signal processing methods, compared against other methods on the basis of classification performance alone. This approach does not guarantee physiologically meaningful results. We present an empirical comparison of three established distributed SLMs with the aim of using one for single-trial movement prediction. The SLMs wMNE, sLORETA, and dSPM were applied to data acquired from eight subjects performing voluntary arm movements. Besides classification performance as a quality measure, a distance metric was used to assess the physiological plausibility of the methods. For the distance metric, which is usually measured to the source position of maximum activity, we further propose a cluster-based variant that is better suited to the single-trial case, in which several sources are likely and the actual maximum is unknown. The two metrics showed different results. The classification performance revealed no significant differences across subjects, indicating that all three methods are equally well suited for single-trial movement prediction. On the other hand, we obtained significant differences in the distance measure, favoring wMNE even after correcting the distance by the number of reconstructed clusters. Further, the distance results were inconsistent with the traditional method using the maximum, indicating that for wMNE the point of maximum source activity often did not coincide with the nearest activation cluster. In summary, the presented comparison may help users select an appropriate SLM and understand the implications of that selection. The proposed methodology pays attention to the particular properties of distributed SLMs and can serve as a framework for further comparisons.
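    The following sketch shows one possible reading of the cluster-based distance metric: cluster all sufficiently active sources and measure the distance from a reference location to the nearest cluster centroid, reporting the cluster count for a possible correction. The activation threshold, the DBSCAN clustering, and the reference position are assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_distance(src_pos, src_act, ref_pos, act_thresh=0.5, eps=0.02):
    """Distance from ref_pos to the nearest cluster of active sources.

    src_pos: (n_sources, 3) source positions in meters.
    src_act: (n_sources,) activation values.
    ref_pos: (3,) reference location, e.g., an expected motor-cortex site.
    Returns (distance, number of clusters).
    """
    active = src_pos[src_act > act_thresh * src_act.max()]
    labels = DBSCAN(eps=eps, min_samples=3).fit(active).labels_
    centroids = np.array([active[labels == k].mean(axis=0)
                          for k in set(labels) if k != -1])  # skip noise points
    if len(centroids) == 0:
        return np.inf, 0
    dists = np.linalg.norm(centroids - ref_pos, axis=1)
    # A correction by the number of clusters, e.g. dists.min() * len(centroids),
    # could be applied here (assumption).
    return dists.min(), len(centroids)

# Toy usage on a fake source grid.
rng = np.random.default_rng(0)
pos = rng.uniform(-0.07, 0.07, size=(500, 3))
act = rng.random(500)
print(cluster_distance(pos, act, ref_pos=np.array([0.03, 0.0, 0.06])))
```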

    Feel-Good Requirements: Neurophysiological and Psychological Design Criteria of Affective Touch for (Assistive) Robots

    Previous research has shown the value of the sense of embodiment, i.e., being able to integrate objects into one's bodily self-representation, and its connection to (assistive) robotics. In particular, tactile interfaces seem essential to integrating assistive robots into one's body model. Beyond functional feedback, such as tactile force sensing, the human sense of touch comprises specialized nerves for affective signals, which transmit positive sensations during slow and low-force tactile stimulation. Since these signals are highly relevant for body experience as well as social and emotional contact, but scarcely considered in recent assistive devices, this review provides a requirement analysis for considering affective touch in engineering design. By analyzing quantitative and qualitative information from engineering, cognitive psychology, and neuroscientific research, requirements are gathered and structured. The resulting requirements comprise technical data, such as desired motion or force/torque patterns, and an evaluation of potential stimulation modalities as well as their relation to overall user experience, e.g., the pleasantness and realism of the sensations. This review systematically considers the very specific characteristics of affective touch and the corresponding parts of the neural system to define design goals and criteria. Based on the analysis, design recommendations for interfaces mediating affective touch are derived. This includes a consideration of biological principles and human perception thresholds, complemented by an analysis of technical possibilities. Finally, we outline which psychological factors can be satisfied by the mediation of affective touch to increase the acceptance of assistive devices, and outline demands for further research and development.
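    As a toy example of turning such requirements into design criteria, the validator below checks candidate stroking patterns against commonly reported C-tactile afferent preferences (stroking velocities of roughly 1-10 cm/s at light contact force). The numeric bounds are assumptions drawn from the affective-touch literature, not values stated in this review.

```python
from dataclasses import dataclass

@dataclass
class StrokePattern:
    velocity_cm_s: float  # stroking velocity
    force_n: float        # normal contact force

def is_ct_optimal(p: StrokePattern,
                  v_range=(1.0, 10.0), max_force=0.4) -> bool:
    """True if the pattern falls in the assumed CT-optimal range
    (bounds are literature-based assumptions, not values from this review)."""
    return v_range[0] <= p.velocity_cm_s <= v_range[1] and p.force_n <= max_force

print(is_ct_optimal(StrokePattern(velocity_cm_s=3.0, force_n=0.2)))   # True
print(is_ct_optimal(StrokePattern(velocity_cm_s=30.0, force_n=0.2)))  # False
```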

    Feel-good robotics: requirements on touch for embodiment in assistive robotics

    The feeling of embodiment, i.e., experiencing the body as belonging to oneself and being able to integrate objects into one's bodily self-representation, is a key aspect of human self-consciousness and has been shown to importantly shape human cognition. Extending such feelings toward robots has been argued to be crucial for assistive technologies aiming at restoring, extending, or simulating sensorimotor functions. Empirical and theoretical work illustrates the importance of sensory feedback for the feeling of embodiment and immersion; here we focus on the perceptual level of touch and the role of tactile feedback in various assistive robotic devices. We critically review how different facets of tactile perception in humans, i.e., affective, social, and self-touch, might influence embodiment. This is particularly important as current assistive robotic devices, such as prostheses, orthoses, exoskeletons, and devices for teleoperation, often limit touch to low-density and spatially constrained haptic feedback, i.e., the mere touch sensation linked to an action. Here, we analyze, discuss, and propose how and to what degree tactile feedback might increase the embodiment of certain robotic devices, e.g., prostheses, and the feeling of immersion in human-robot interaction, e.g., in teleoperation. Based on recent findings from cognitive psychology on interactive processes between touch and embodiment, we discuss technical solutions for specific applications that might be used to enhance embodiment and facilitate the study of how embodiment might alter human-robot interactions. We postulate that high-density and large-surface sensing and stimulation are required to foster embodiment of such assistive devices.

    Deep and Surface Sensor Modalities for Myo-intent Detection

    Electromyography is the gold standard among sensors for prosthetic control. However, stable and reliable myocontrol remains an unsolved problem in the community. Among the improvements currently under investigation is the use of alternative or complementary sensors. In this study, we compare different techniques for recording surface and deep muscle activity. Ten subjects took part in an experiment in which sensors of three different modalities were attached to their forearm: force myography, electro-impedance tomography, and ultrasound. They were asked to perform wrist and grasp movements. For the first time, we evaluate and compare these three modalities in an offline analysis of several recorded hand gestures.
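    A hedged sketch of this kind of offline comparison: the same cross-validated classifier scored separately per modality. The feature dimensions, the classifier choice, and the random data are placeholders, not the study's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 300
labels = rng.integers(0, 6, n_trials)  # placeholder wrist/grasp gesture classes

# Placeholder feature matrices; real dimensions would depend on the sensors.
modalities = {
    "force myography": rng.standard_normal((n_trials, 10)),
    "electro-impedance tomography": rng.standard_normal((n_trials, 32)),
    "ultrasound": rng.standard_normal((n_trials, 64)),
}

for name, X in modalities.items():
    score = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean()
    print(f"{name}: mean CV accuracy {score:.2f}")
```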