8,991 research outputs found
Training modalities in robot-mediated upper limb rehabilitation in stroke: A framework for classification based on a systematic review
© 2014 Basteris et al.; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The work described in this manuscript was partially funded by the European project 'SCRIPT', Grant agreement no: 288698 (http://scriptproject.eu). SN has been hosted at University of Hertfordshire in a short-term scientific mission funded by the COST Action TD1006 European Network on Robotics for NeuroRehabilitation.
Robot-mediated post-stroke therapy for the upper extremity dates back to the 1990s. Since then, a number of robotic devices have become commercially available. There is clear evidence that robotic interventions improve upper limb motor scores and strength, but these improvements often do not transfer to the performance of activities of daily living. We wish to better understand why. Our systematic review of 74 papers focuses on the targeted stage of recovery, the part of the limb trained, the different modalities used, and the effectiveness of each. The review shows that most studies so far focus on training of the proximal arm in chronic stroke patients. Regarding training modalities, studies typically refer to active, active-assisted, and passive interaction. Robot therapy in active-assisted mode was associated with consistent improvements in arm function. More specifically, the use of HRI features stressing active contribution by the patient, such as EMG-modulated forces or a pushing force in combination with spring-damper guidance, may be beneficial. Our work also highlights that the current literature frequently lacks information regarding the mechanisms of physical human-robot interaction (HRI).
It is often unclear how the different modalities are implemented by different research groups (using different robots and platforms). In order to obtain better and more reliable evidence of the usefulness of these technologies, it is recommended that the HRI be better described and documented, so that the work of various teams can be grouped and categorised together, allowing more suitable approaches to be inferred. We propose a framework for the categorisation of HRI modalities and features that will allow their therapeutic benefits to be compared. Peer reviewed. Final published version.
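The spring-damper guidance with active patient contribution mentioned in the abstract can be sketched as follows. This is a minimal 1-D illustration: the gains, the scalar state, and the effort-based assist-as-needed scaling are assumptions for clarity, not a controller taken from any of the reviewed studies.

```python
# Minimal 1-D sketch of active-assisted, spring-damper guidance with an
# assist-as-needed scaling (illustrative assumptions throughout).

def guidance_force(x, v, x_target, k=50.0, b=5.0, patient_effort=0.0):
    """Assistive force toward x_target, reduced as the patient contributes.

    x, v           : current hand position and velocity (scalar for simplicity)
    k, b           : illustrative stiffness and damping gains
    patient_effort : 0..1 estimate of active contribution (e.g. EMG-derived)
    """
    spring = k * (x_target - x)                     # pull toward the target
    damper = -b * v                                 # damp fast motion
    assist_scale = max(0.0, 1.0 - patient_effort)   # assist-as-needed
    return assist_scale * (spring + damper)
```

With full estimated patient effort the assistance vanishes, which is the "stressing active contribution" property the review associates with better outcomes.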
The Brera Multi-scale Wavelet (BMW) ROSAT HRI source catalog. II: application to the HRI and first results
The wavelet detection algorithm (WDA) described in the accompanying paper by
Lazzati et al. is made suited for a fast and efficient analysis of images taken
with the High Resolution Imager (HRI) instrument on board the ROSAT satellite.
Extensive testing is carried out on the detection pipeline: HRI fields with
different exposure times are simulated and analysed in the same fashion as the
real data. Positions are recovered with errors of a few arcseconds, whereas fluxes
are within a factor of two from their input values in more than 90% of the
cases in the deepest images. Unlike ``sliding-box'' detection algorithms, the
WDA also provides a reliable description of the source extent, allowing for a
complete search for, e.g., supernova remnants or clusters of galaxies in the
HRI fields. A completeness analysis on simulated fields
shows that for the deepest exposures considered (~120 ks) a limiting flux of
~3x10^{-15} erg/cm2/s can be reached over the entire field of view. We test
the algorithm on real HRI fields selected for their crowding and/or the
presence of extended or bright sources (e.g. clusters of galaxies, star
clusters, and supernova remnants). We show that our algorithm compares favorably with other X-ray
detection algorithms such as XIMAGE and EXSAS. A complete catalog will result
from our analysis: it will consist of the Brera Multi-scale Wavelet Bright
Source Catalog (BMW-BSC) with sources detected with a significance >4.5 sigma
and of the Faint Source Catalog (BMW-FSC) with sources at >3.5 sigma. A
conservative estimate based on the extragalactic log(N)-log(S) indicates that
at least 16000 sources will be revealed in the complete analysis of the whole
HRI dataset.
Comment: 6 pages, 11 PostScript figures, 1 GIF figure, ApJ in press
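As a toy illustration of the idea behind wavelet-based detection (an assumed 1-D simplification, not the actual WDA), the sketch below correlates a signal with a Mexican-hat kernel and keeps samples whose coefficient exceeds an n-sigma significance threshold; the kernel width, the scatter-based threshold, and the 1-D setting are all simplifying assumptions.

```python
import math

def mexican_hat(t, scale):
    """Mexican-hat (Ricker) wavelet sampled at offset t for a given scale."""
    u = t / scale
    return (1.0 - u * u) * math.exp(-u * u / 2.0)

def wavelet_detect(signal, scale, n_sigma=4.5):
    """Return indices whose wavelet coefficient is more than n_sigma
    coefficient-scatters above the mean -- a toy stand-in for the
    per-scale significance test of a real detection pipeline."""
    half = int(3 * scale)
    kernel = [mexican_hat(k, scale) for k in range(-half, half + 1)]
    coeffs = []
    for i in range(len(signal)):
        c = 0.0
        for k, w in enumerate(kernel):
            j = i + k - half
            if 0 <= j < len(signal):
                c += w * signal[j]
        coeffs.append(c)
    mean = sum(coeffs) / len(coeffs)
    sigma = (sum((c - mean) ** 2 for c in coeffs) / len(coeffs)) ** 0.5
    return [i for i, c in enumerate(coeffs) if c - mean > n_sigma * sigma]
```

Because the kernel has (near) zero mean, a flat background largely cancels, while a point-like excess produces a strong coefficient at its position; running the test at several scales is what lets such pipelines also characterise source extent.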
Incremental Learning for Robot Perception through HRI
Scene understanding and object recognition are crucial yet difficult-to-achieve
skills for robots. Recently, Convolutional Neural Networks (CNNs) have shown
success in this task. However, there is still a gap between their
performance on image datasets and real-world robotics scenarios. We present a
novel paradigm for incrementally improving a robot's visual perception through
active human interaction. In this paradigm, the user introduces novel objects
to the robot by means of pointing and voice commands. Given this information,
the robot visually explores the object and adds images of it to re-train the
perception module. Our base perception module is based on recent development in
object detection and recognition using deep learning. Our method leverages
state-of-the-art CNNs from off-line batch learning, human guidance, robot
exploration, and incremental on-line learning.
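The teach-then-retrain interaction loop described above can be sketched as follows. The class name, the feature-vector interface, and the nearest-centroid "perception module" are placeholders for illustration, not the paper's CNN pipeline.

```python
# Toy sketch of incremental learning through HRI: the user introduces a
# labelled object (pointing + voice), the robot stores its observations
# and re-fits its recognizer from the growing sample set.

class IncrementalPerception:
    def __init__(self):
        self.samples = {}          # label -> list of feature vectors

    def teach(self, label, features):
        """Called when the user points at an object and names it."""
        self.samples.setdefault(label, []).extend(features)

    def _centroid(self, vecs):
        n = len(vecs)
        return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

    def predict(self, x):
        """Nearest-centroid stand-in for the re-trained recognizer."""
        best, best_d = None, float("inf")
        for label, vecs in self.samples.items():
            c = self._centroid(vecs)
            d = sum((a - b) ** 2 for a, b in zip(c, x))
            if d < best_d:
                best, best_d = label, d
        return best
```

Each `teach` call plays the role of one pointing-plus-voice interaction; in the real system the retraining step would fine-tune a deep detector rather than recompute centroids.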
Learning robot policies using a high-level abstraction persona-behaviour simulator
© 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Collecting data in Human-Robot Interaction for training learning agents can be a hard task to accomplish. This is especially true when the target users are older adults with dementia, since this usually requires hours of interactions and puts quite a lot of workload on the user. This paper addresses the problem of importing the Personas technique into HRI to create fictional patients' profiles. We propose a Persona-Behaviour Simulator tool that provides, at a high level of abstraction, the user's actions during an HRI task, and we apply it to cognitive training exercises for older adults with dementia. It consists of a Persona Definition that characterizes a patient along four dimensions and a Task Engine that provides information regarding the task complexity. We build a simulated environment where the high-level user's actions are provided by the simulator and the robot's initial policy is learned using a Q-learning algorithm. The results show that the current simulator provides a reasonable initial policy for a defined Persona profile. Moreover, the learned robot assistance has proved to be robust to potential changes in the user's behaviour. In this way, we can speed up the fine-tuning of the rough policy during the real interactions to tailor the assistance to the given user. We believe the presented approach can be easily extended to account for other types of HRI tasks; for example, when input data is required to train a learning algorithm, but data collection is very expensive or unfeasible.
We advocate that simulation is a convenient tool in these cases. Peer reviewed. Postprint (author's final draft).
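Learning an initial policy against a simulated user, as described above, can be sketched with tabular Q-learning. The three-state "exercise", the two assistance actions, the rewards, and the toy persona function are invented placeholders: the abstract does not specify the actual Persona/Task Engine interface.

```python
import random

def train_initial_policy(simulate_user, states, actions,
                         episodes=500, alpha=0.1, gamma=0.9, eps=0.2):
    """Tabular Q-learning against a simulated user; returns the Q-table."""
    q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = states[0]
        while s != states[-1]:                      # run until the exercise ends
            if random.random() < eps:               # epsilon-greedy exploration
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda a_: q[(s, a_)])
            s2, r = simulate_user(s, a)             # persona-simulator step
            target = r + gamma * max(q[(s2, a_)] for a_ in actions)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q

def toy_persona(state, action):
    """Hypothetical persona: progresses through the exercise only when prompted."""
    if action == "prompt":
        return ("mid" if state == "start" else "done"), 1.0
    return state, -0.1

q = train_initial_policy(toy_persona, ["start", "mid", "done"],
                         ["wait", "prompt"])
```

The resulting Q-table is the "rough" initial policy; per the abstract, it would then be fine-tuned during real interactions with the user.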
Multimodal Signal Processing and Learning Aspects of Human-Robot Interaction for an Assistive Bathing Robot
We explore new aspects of assistive living on smart human-robot interaction
(HRI) that involve automatic recognition and online validation of speech and
gestures in a natural interface, providing social features for HRI. We
introduce a whole framework and resources of a real-life scenario for elderly
subjects supported by an assistive bathing robot, addressing health and hygiene
care issues. We contribute a new dataset and a suite of tools used for data
acquisition and a state-of-the-art pipeline for multimodal learning within the
framework of the I-Support bathing robot, with emphasis on audio and RGB-D
visual streams. We consider privacy issues by evaluating the depth visual
stream along with the RGB, using Kinect sensors. The audio-gestural recognition
task on this new dataset yields up to 84.5%, while the online validation of the
I-Support system on elderly users achieves up to 84% when the two
modalities are fused together. The results are promising enough to support
further research in the area of multimodal recognition for assistive social
HRI, considering the difficulties of the specific task. Upon acceptance of the
paper, part of the data will be made publicly available.
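Fusing the two modalities, as reported above, can be sketched as late (score-level) fusion. The equal 0.5/0.5 weights, the command names, and the score dictionaries are illustrative assumptions; the abstract does not state the actual fusion scheme or weights.

```python
# Hedged sketch of score-level fusion of the speech and gesture streams.

def fuse_scores(audio_scores, gesture_scores, w_audio=0.5):
    """Combine per-command scores from the two recognizers and return
    the best-scoring command together with the fused score table."""
    fused = {}
    for cmd in set(audio_scores) | set(gesture_scores):
        a = audio_scores.get(cmd, 0.0)
        g = gesture_scores.get(cmd, 0.0)
        fused[cmd] = w_audio * a + (1.0 - w_audio) * g
    return max(fused, key=fused.get), fused
```

A command that is ambiguous in one stream (e.g. noisy audio in the bathroom) can still win if the other stream supports it, which is the usual motivation for multimodal fusion in assistive HRI.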
On the Integration of Adaptive and Interactive Robotic Smart Spaces
© 2015 Mauro Dragone et al. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License (CC BY-NC-ND 3.0).
Enabling robots to seamlessly operate as part of smart spaces is an important and extended challenge for robotics R&D and a key enabler for a range of advanced robotic applications, such as Ambient Assisted Living (AAL) and home automation. The integration of these technologies is currently being pursued from two largely distinct viewpoints. On the one hand, people-centred initiatives focus on improving the user's acceptance by tackling human-robot interaction (HRI) issues, often adopting a social robotics approach, and by giving the designer and, to a limited degree, the final user(s) control over personalization and product customisation features. On the other hand, technologically driven initiatives are building impersonal but intelligent systems that are able to pro-actively and autonomously adapt their operations to fit changing requirements and evolving users' needs, but which largely ignore and do not leverage human-robot interaction and may thus lead to poor user experience and user acceptance. In order to inform the development of a new generation of smart robotic spaces, this paper analyses and compares different research strands with a view to proposing possible integrated solutions with both advanced HRI and online adaptation capabilities. Peer reviewed.
Speech-Gesture Mapping and Engagement Evaluation in Human Robot Interaction
A robot needs contextual awareness, effective speech production and
complementing non-verbal gestures for successful communication in society. In
this paper, we present our end-to-end system that tries to enhance the
effectiveness of non-verbal gestures. For achieving this, we identified
prominently used gestures in performances by TED speakers and mapped them to
their corresponding speech context and modulated speech based upon the
attention of the listener. The proposed method utilized a Convolutional Pose
Machine [4] to detect human gestures. The dominant gestures of TED speakers
were used to learn the gesture-to-speech mapping, and their speeches were used
to train the model. We also evaluated the robot's engagement with
people by conducting a social survey. The effectiveness of the performance was
monitored by the robot and it self-improvised its speech pattern on the basis
of the attention level of the audience, which was calculated using visual
feedback from the camera. The effectiveness of the interaction, as well as the
decisions made during improvisation, was further evaluated based on head-pose
detection and an interaction survey.
Comment: 8 pages, 9 figures, Under review in IRC 201
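The gesture-to-speech mapping and the attention-driven improvisation described above can be sketched as follows. The keyword-to-gesture table, the attention threshold, and the delivery styles are invented examples, not the learned TED-speaker mapping.

```python
# Illustrative sketch: pick gestures for context keywords and adapt
# delivery to the measured audience attention (assumed 0..1 scale).

GESTURE_FOR_KEYWORD = {          # assumed keyword -> gesture table
    "welcome": "open_arms",
    "big": "wide_hands",
    "you": "point_forward",
}

def plan_utterance(text, attention):
    """Select gestures for keywords in the text and choose a delivery
    style from the audience's attention level."""
    words = text.lower().split()
    gestures = [GESTURE_FOR_KEYWORD[w] for w in words
                if w in GESTURE_FOR_KEYWORD]
    # Self-improvisation: switch to a more energetic delivery when
    # the visually estimated attention drops below a threshold.
    style = "energetic" if attention < 0.4 else "neutral"
    return gestures, style
```

In the real system the attention value would come from head-pose estimation on the camera feed, and the mapping would be learned from the pose-annotated TED performances rather than hand-written.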
- …