
    Adaptive modality selection algorithm in robot-assisted cognitive training

    © 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

    Interaction of socially assistive robots with users is based on social cues coming from different interaction modalities, such as speech or gestures. However, using all modalities at all times may be inefficient, as it can overload the user with redundant information and increase the task completion time. Additionally, users may favor certain modalities over others as a result of a disability or personal preference. In this paper, we propose an Adaptive Modality Selection (AMS) algorithm that chooses modalities depending on the state of the user and the environment, as well as user preferences. The variables that describe the environment and the user state are defined as resources, and we posit that a modality succeeds if certain resources hold specific values during its use. Besides the resources, the proposed algorithm takes into account user preferences, which it learns while interacting with users. We tested our algorithm in simulations, and we implemented it on a robotic system that provides cognitive training, specifically sequential memory exercises. Experimental results show that it is possible to use only a subset of the available modalities without compromising the interaction. Moreover, we see a trend for users to perform better when interacting with a system that implements the AMS algorithm.

    Peer Reviewed. Postprint (author's final draft).
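The resource-gated selection described above can be sketched as follows; all modality names, resources, and preference weights here are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of resource-gated modality selection with learned
# user preferences (hypothetical names and values).

# Each modality lists the resource values it needs in order to succeed.
MODALITY_REQUIREMENTS = {
    "speech":  {"noise_level": "low", "user_attention": "free"},
    "gesture": {"line_of_sight": "clear"},
    "display": {"line_of_sight": "clear", "user_gaze": "screen"},
}

def select_modalities(resources, preferences):
    """Return modalities whose required resources are met,
    ranked by learned user preference (highest first)."""
    available = [
        m for m, req in MODALITY_REQUIREMENTS.items()
        if all(resources.get(r) == v for r, v in req.items())
    ]
    # Preference weights would be updated online from interaction outcomes.
    return sorted(available, key=lambda m: -preferences.get(m, 0.0))

resources = {"noise_level": "low", "user_attention": "free",
             "line_of_sight": "blocked"}
preferences = {"speech": 0.7, "display": 0.2, "gesture": 0.1}
print(select_modalities(resources, preferences))  # ['speech']
```

With line of sight blocked, gesture and display are filtered out, so only speech remains: the system uses a subset of modalities rather than all of them at once.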

    Adaptable multimodal interaction framework for robot-assisted cognitive training

    The size of the population with cognitive impairment is increasing worldwide, and socially assistive robotics offers a solution to the growing demand for professional carers. Adaptation to users generates more natural, human-like behavior that may be crucial for wider robot acceptance. The focus of this work is on robot-assisted cognitive training of patients who suffer from mild cognitive impairment (MCI) or Alzheimer's disease. We propose a framework that adjusts the level of robot assistance and the way the robot's actions are executed according to the user input. The actions can be performed using any of the following modalities: speech, gesture, and display, or their combination. The choice of modalities depends on the availability of the required resources. The memory state of the user was implemented as a Hidden Markov Model, and it was used to determine the level of robot assistance. A pilot user study was performed to evaluate the effects of the proposed framework on the quality of interaction with the robot.

    Peer Reviewed. Postprint (author's final draft).
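Tracking a hidden memory state with an HMM and mapping the belief to an assistance level can be sketched minimally as below; the states, probabilities, and thresholds are purely illustrative assumptions, not the paper's actual model:

```python
# Hedged sketch: a two-state HMM over the user's memory state,
# filtered forward after each answer (illustrative parameters).

STATES = ("remembers", "forgot")
TRANS = {"remembers": {"remembers": 0.8, "forgot": 0.2},
         "forgot":    {"remembers": 0.3, "forgot": 0.7}}
# P(observation | state): users who remember mostly answer correctly.
EMIT = {"remembers": {"correct": 0.9, "wrong": 0.1},
        "forgot":    {"correct": 0.2, "wrong": 0.8}}

def filter_belief(belief, observation):
    """One forward-filter step: predict via transitions,
    then update on the observed answer and renormalise."""
    predicted = {s: sum(belief[p] * TRANS[p][s] for p in STATES)
                 for s in STATES}
    unnorm = {s: predicted[s] * EMIT[s][observation] for s in STATES}
    z = sum(unnorm.values())
    return {s: unnorm[s] / z for s in STATES}

def assistance_level(belief):
    """Map the belief over memory states to a coarse assistance level."""
    p_forgot = belief["forgot"]
    return "high" if p_forgot > 0.6 else "medium" if p_forgot > 0.3 else "low"

belief = {"remembers": 0.5, "forgot": 0.5}
for obs in ("wrong", "wrong"):
    belief = filter_belief(belief, obs)
print(assistance_level(belief))  # high
```

Two wrong answers in a row push the belief strongly towards "forgot", so the robot would escalate its assistance.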

    Advances in Human-Robot Interaction

    Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, if at all possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers.

    Introducing CARESSER: A framework for in situ learning robot social assistance from expert knowledge and demonstrations

    Socially assistive robots have the potential to augment and enhance therapists' effectiveness in repetitive tasks such as cognitive therapies. However, their contribution has generally been limited, as domain experts have not been fully involved in the entire pipeline of the design process or in the automatisation of the robots' behaviour. In this article, we present aCtive leARning agEnt aSsiStive bEhaviouR (CARESSER), a novel framework that actively learns robotic assistive behaviour by leveraging the therapist's expertise (knowledge-driven approach) and their demonstrations (data-driven approach). By exploiting this hybrid approach, the presented method enables fast, in situ learning, in a fully autonomous fashion, of personalised patient-specific policies. To evaluate our framework, we conducted two user studies in a daily care centre in which older adults affected by mild dementia and mild cognitive impairment (N = 22) were asked to solve cognitive exercises with the support of a therapist and, later on, of a robot endowed with CARESSER. Results showed that: (i) the robot managed to keep the patients' performance stable during the sessions, even more so than the therapist; (ii) the assistance offered by the robot during the sessions eventually matched the therapist's preferences. We conclude that CARESSER, with its stakeholder-centric design, can pave the way to new AI approaches that learn by leveraging human–human interactions along with human expertise, which has the benefits of speeding up the learning process, eliminating the need to design complex reward functions, and avoiding undesired states.

    Peer Reviewed. Postprint (published version).

    Analysis of Human Gait Using Hybrid EEG-fNIRS-Based BCI System: A Review

    Human gait is a complex activity that requires high coordination between the central nervous system, the limbs, and the musculoskeletal system. More research is needed to understand the complexity of this coordination in order to design better and more effective rehabilitation strategies for gait disorders. Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) are among the most widely used technologies for monitoring brain activity due to their portability, non-invasiveness, and relatively low cost compared to alternatives. Fusing EEG and fNIRS is a well-established methodology proven to enhance brain–computer interface (BCI) performance in terms of classification accuracy, number of control commands, and response time. Although significant research has explored hybrid BCI (hBCI) involving both EEG and fNIRS for different types of tasks and human activities, human gait remains underinvestigated. In this article, we aim to shed light on recent developments in the analysis of human gait using a hybrid EEG-fNIRS-based BCI system. This review followed the guidelines of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) during the data collection and selection phase. We put a particular focus on the commonly used signal processing and machine learning algorithms, and survey the potential applications of gait analysis. We distill some of the critical findings of this survey as follows. First, hardware specifications and experimental paradigms should be carefully considered because of their direct impact on the quality of gait assessment. Second, since both modalities, EEG and fNIRS, are sensitive to motion artifacts and to instrumental and physiological noise, more robust and sophisticated signal processing algorithms are needed. Third, hybrid temporal and spatial features, obtained by fusing EEG and fNIRS and associated with cortical activation, can help better identify the correlation between brain activation and gait. In conclusion, the hBCI (EEG + fNIRS) system is not yet widely explored for the lower limb due to its complexity compared to the upper limb. Existing BCI systems for gait monitoring tend to focus on only one modality. We foresee vast potential in adopting hBCI for gait analysis. Imminent technical breakthroughs are expected in using hybrid EEG-fNIRS-based BCIs for gait to control assistive devices and monitor neuroplasticity in neurorehabilitation. However, although these hybrid systems perform well in controlled experimental environments, there is still a long way to go before they can be adopted as certified medical devices in real-life clinical applications.
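Feature-level fusion, a common way hybrid EEG-fNIRS features are combined before classification, can be illustrated as follows; the feature choices and per-modality normalisation are assumptions for the sketch, not a pipeline taken from any of the reviewed studies:

```python
# Illustrative feature-level fusion for a hybrid EEG-fNIRS BCI:
# features from each modality are z-scored independently (so neither
# modality's scale dominates) and concatenated into one vector that
# feeds a single classifier.

def zscore(features):
    """Standardise a feature list to zero mean, unit variance."""
    n = len(features)
    mean = sum(features) / n
    var = sum((x - mean) ** 2 for x in features) / n
    std = var ** 0.5 or 1.0  # guard against zero variance
    return [(x - mean) / std for x in features]

def fuse(eeg_feats, fnirs_feats):
    """Normalise each modality separately, then concatenate."""
    return zscore(eeg_feats) + zscore(fnirs_feats)

# e.g. EEG band powers (mu, beta) plus mean HbO per fNIRS channel
fused = fuse([12.0, 8.0], [0.4, 0.1, 0.3])
print(len(fused))  # 5 features for one classifier
```

Normalising per modality before concatenation matters because raw EEG band powers and fNIRS haemoglobin changes live on very different numeric scales.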

    Enhancement of Robot-Assisted Rehabilitation Outcomes of Post-Stroke Patients Using Movement-Related Cortical Potential

    Post-stroke rehabilitation is essential for stroke survivors to help them regain independence and improve their quality of life. Among various rehabilitation strategies, robot-assisted rehabilitation is an efficient method that is used more and more in clinical practice for the motor recovery of post-stroke patients. However, excessive assistance from robotic devices during rehabilitation sessions can make patients perform motor training passively, with minimal outcome. Towards the development of an efficient rehabilitation strategy, it is necessary to ensure the active participation of subjects during training sessions. This thesis uses the electroencephalography (EEG) signal to extract the Movement-Related Cortical Potential (MRCP) pattern, which serves as an indicator of the active engagement of stroke patients during rehabilitation training sessions. The MRCP pattern is also utilized in designing an adaptive rehabilitation training strategy that maximizes patients' engagement. This project focuses on the hand motor recovery of post-stroke patients using the AMADEO rehabilitation device (Tyromotion GmbH, Austria). AMADEO is specifically developed for patients with finger and hand motor deficits. The variations in brain activity are analyzed by extracting the MRCP pattern from the EEG data acquired during training sessions. Physical improvement in hand motor abilities is determined by two methods. The first is clinical tests, namely the Fugl-Meyer Assessment (FMA) and the Motor Assessment Scale (MAS), which include the FMA-wrist, FMA-hand, MAS-hand movements, and MAS-advanced hand movements tests. The second is the measurement of hand kinematic parameters using the AMADEO assessment tool, which includes hand strength measurements during flexion (force-flexion) and extension (force-extension), and Hand Range of Movement (HROM).
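A core step in MRCP extraction is averaging EEG epochs time-locked to movement onset, so the slow cortical potential survives while uncorrelated noise cancels. A toy sketch of that averaging step (illustrative values, not the thesis pipeline):

```python
# Minimal sketch of MRCP extraction by epoch averaging: EEG segments
# time-locked to movement onset are averaged sample by sample, so the
# movement-related negativity common to all trials remains while
# trial-to-trial noise averages out (toy data, illustrative only).

def extract_mrcp(epochs):
    """Average movement-locked EEG epochs sample by sample."""
    n_epochs = len(epochs)
    n_samples = len(epochs[0])
    return [sum(e[t] for e in epochs) / n_epochs for t in range(n_samples)]

# Three toy epochs sharing a negative deflection near movement onset.
epochs = [
    [0.1, -0.5, -1.0, -0.4],
    [-0.1, -0.7, -1.2, -0.2],
    [0.0, -0.6, -1.1, -0.3],
]
mrcp = extract_mrcp(epochs)
print(mrcp)  # averaged negativity is deepest at the third sample
```

In practice this averaging is preceded by low-pass filtering and artifact rejection, since MRCP is a low-frequency potential easily masked by noise.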