
    VIMES: A Wearable Memory Assistance System for Automatic Information Retrieval

    The advancement of artificial intelligence and wearable computing is triggering radical innovation in cognitive applications. In this work, we propose VIMES, an augmented reality-based memory assistance system that helps users recall declarative memory, such as whom they met and what they talked about. Through a collaborative design process with 20 participants, we designed VIMES, a system that runs on smartglasses, takes first-person audio and video as input, and extracts personal profiles and event information to display on the embedded display or a smartphone. We performed an extensive evaluation with 50 participants to show the effectiveness of VIMES for memory recall. VIMES outperforms (90% memory accuracy) traditional methods such as self-recall (34%) while offering the best memory experience (Vividness, Coherence, and Visual Perspective all score over 4/5). The user study results show that most participants find VIMES useful (3.75/5) and easy to use (3.46/5).
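
    The "personal profile and event information" extracted from first-person audio and video can be pictured as a small structured record assembled from recogniser outputs. Below is a minimal, hypothetical Python sketch of such a record and the assembly step; the class names, fields, and stub inputs are illustrative assumptions, not the VIMES implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class PersonProfile:
    # Identity as guessed by a face/voice recogniser (stubbed here).
    name: str
    affiliation: str = "unknown"

@dataclass
class EventRecord:
    # One encounter the user may later want to recall.
    timestamp: datetime
    people: List[PersonProfile] = field(default_factory=list)
    conversation_summary: str = ""

def build_event(face_names: List[str], transcript: str) -> EventRecord:
    """Assemble a recallable event from recogniser outputs.

    `face_names` would come from face recognition on the smartglasses video,
    `transcript` from speech recognition on the first-person audio.
    """
    people = [PersonProfile(name=n) for n in sorted(set(face_names))]
    # A real system would summarise the transcript; here we simply truncate it.
    summary = transcript[:120]
    return EventRecord(timestamp=datetime.now(), people=people,
                       conversation_summary=summary)

if __name__ == "__main__":
    event = build_event(["Alice", "Alice", "Bob"], "We discussed the demo schedule.")
    print(event)
```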

    Study and development of sensorimotor interfaces for robotic human augmentation

    This thesis presents my research contribution to robotics and haptics in the context of human augmentation. In particular, in this document, we are interested in bodily or sensorimotor augmentation, that is, the augmentation of humans by supernumerary robotic limbs (SRLs). The field of sensorimotor augmentation is new in robotics, and thanks to its combination with neuroscience, great leaps forward have already been made in the past 10 years. All of the research work I produced during my Ph.D. focused on the development and study of a fundamental technology for human augmentation by robotics: the sensorimotor interface. This new concept denotes a wearable device with two main purposes: the first is to extract the input generated by the movement of the user's body, and the second is to provide the somatosensory system of the user with haptic feedback. This thesis starts with an exploratory study of integration between robotic and haptic devices, intending to combine state-of-the-art devices. This allowed us to realize that we still need to understand how to improve the interface that will allow the user to feel agency when using an augmentative robot. At this point, the path of this thesis forks into two alternative ways that have been adopted to improve the interaction between the human and the robot. The first path tackles two aspects concerning the haptic feedback of sensorimotor interfaces: the choice of its positioning and the effectiveness of discrete haptic feedback. On the second path, we attempted to lighten a supernumerary finger, focusing on agility of use and the lightness of the device. One of the main findings of this thesis is that haptic feedback is considered helpful by stroke patients, but this does not mitigate the fact that the cumbersomeness of the devices is a deterrent to their use. The preliminary results presented here show that both of the paths we chose to improve sensorimotor augmentation worked: the presence of haptic feedback improves the performance of sensorimotor interfaces, co-positioning the haptic feedback and the input taken from the human body can improve the effectiveness of these interfaces, and creating a lightweight version of an SRL is a viable solution for recovering the grasping function.

    TeLeMan: Teleoperation for Legged Robot Loco-Manipulation using Wearable IMU-based Motion Capture

    Human life is invaluable. When dangerous or life-threatening tasks need to be completed, robotic platforms could be ideal replacements for human operators. One such task, which we focus on in this work, is Explosive Ordnance Disposal. Robot telepresence has the potential to provide safety solutions, given that mobile robots have shown robust capabilities when operating in several environments. However, full autonomy remains challenging and risky at this stage compared to human operation. Teleoperation can be a compromise between full robot autonomy and human presence. In this paper, we present a relatively low-cost solution for telepresence and robot teleoperation to assist with Explosive Ordnance Disposal, using a legged manipulator (i.e., a quadruped robot equipped with a manipulator and RGB-D sensing). We propose a novel system integration for the non-trivial problem of quadruped-manipulator whole-body control. Our system is based on a wearable IMU-based motion capture system used for teleoperation and a VR headset for visual telepresence. We experimentally validate our method in the real world on loco-manipulation tasks that require whole-body robot control and visual telepresence.
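
    As a rough illustration of how wearable IMU motion capture can drive a manipulator, the sketch below maps a tracked human wrist displacement to an end-effector position command using a simple workspace scaling factor. It is a simplified retargeting step under assumed coordinate frames and parameters, not the whole-body controller described in the paper.

```python
import numpy as np

def retarget_wrist_to_end_effector(human_wrist_pos: np.ndarray,
                                   human_rest_pos: np.ndarray,
                                   robot_rest_pos: np.ndarray,
                                   scale: float = 1.2) -> np.ndarray:
    """Map the operator's wrist displacement (from an IMU mocap suit)
    to a target end-effector position for the legged manipulator.

    All positions are 3-vectors expressed in a shared world-aligned frame
    (an assumption made for this sketch).
    """
    displacement = human_wrist_pos - human_rest_pos
    return robot_rest_pos + scale * displacement

if __name__ == "__main__":
    human_rest = np.array([0.0, 0.0, 1.0])    # wrist position at calibration
    robot_rest = np.array([0.4, 0.0, 0.6])    # end-effector home position
    wrist_now = np.array([0.1, -0.05, 1.1])   # current mocap reading
    target = retarget_wrist_to_end_effector(wrist_now, human_rest, robot_rest)
    print("end-effector target:", target)
```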

    A Modular Mobile Robotic Platform to Assist People with Different Degrees of Disability

    Robots that support elderly people in living independently and assist disabled people in carrying out the activities of daily living have demonstrated good results. There are basically two approaches: one is based on mobile robot assistants, such as Care-O-bot, PR2, and Tiago, among others; the other uses an external robotic arm or a robotic exoskeleton, fixed or mounted on a wheelchair. In this paper, a modular mobile robotic platform to assist moderately and severely impaired people, based on an upper-limb robotic exoskeleton mounted on a robotized wheelchair, is presented. This mobile robotic platform can be customized for each user's needs by exploiting its modularity. Finally, experimental results in a simulated home environment with a living room and a kitchen area, designed to reproduce the interaction of the user with different elements of a home, are presented. In this experiment, a subject suffering from multiple sclerosis performed different activities of daily living (ADLs) using the platform in front of a group of clinicians composed of nurses, doctors, and occupational therapists. Afterwards, the subject and the clinicians answered a usability questionnaire. The results were quite good, but two key factors that need improvement emerged: the complexity and the cumbersomeness of the platform. This work was supported by the AIDE project through Grant Agreement No. 645322 of the European Commission, by the Conselleria d’Educacio, Cultura i Esport of Generalitat Valenciana, by the European Social Fund – Investing in your future, through the grant ACIF 2018/214, and by the Promoción de empleo joven e implantación de garantía juvenil en I+D+I 2018 through the grant PEJ2018-002670-A.

    I hear you eat and speak: automatic recognition of eating condition and food type, use-cases, and impact on ASR performance

    We propose a new recognition task in the area of computational paralinguistics: automatic recognition of eating conditions in speech, i.e., whether people are eating while speaking, and what they are eating. To this end, we introduce the audio-visual iHEARu-EAT database featuring 1.6k utterances of 30 subjects (mean age: 26.1 years, standard deviation: 2.66 years, gender balanced, German speakers), six types of food (Apple, Nectarine, Banana, Haribo Smurfs, Biscuit, and Crisps), and read as well as spontaneous speech, which is made publicly available for research purposes. We start by demonstrating that for automatic speech recognition (ASR), it pays off to know whether speakers are eating or not. We also propose automatic classification based both on brute-forced low-level acoustic features and on higher-level features related to intelligibility, obtained from an automatic speech recogniser. Prediction of the eating condition was performed with a Support Vector Machine (SVM) classifier in a leave-one-speaker-out evaluation framework. Results show that the binary prediction of the eating condition (i.e., eating or not eating) can be easily solved independently of the speaking condition; the obtained average recalls are all above 90%. Low-level acoustic features provide the best performance on spontaneous speech, reaching up to 62.3% average recall for multi-way classification of the eating condition, i.e., discriminating the six types of food as well as not eating. Early fusion of the intelligibility-related features with the brute-forced acoustic feature set improves performance on read speech, reaching a 66.4% average recall for the multi-way classification task. Analysing features and classifier errors leads to a suitable ordinal scale for eating conditions, on which automatic regression can be performed with a coefficient of determination of up to 56.2%.
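
    The evaluation protocol named above (SVM classifier, leave-one-speaker-out cross-validation, unweighted average recall) can be reproduced in outline with scikit-learn. The sketch below uses synthetic stand-in features, labels, and speaker IDs purely for illustration; the real iHEARu-EAT features would replace them.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n_utterances, n_features, n_speakers = 300, 64, 30
X = rng.normal(size=(n_utterances, n_features))            # stand-in acoustic features
y = rng.integers(0, 7, size=n_utterances)                   # 6 food classes + "not eating"
speakers = rng.integers(0, n_speakers, size=n_utterances)   # speaker ID per utterance

logo = LeaveOneGroupOut()
recalls = []
for train_idx, test_idx in logo.split(X, y, groups=speakers):
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
    clf.fit(X[train_idx], y[train_idx])
    y_pred = clf.predict(X[test_idx])
    # Unweighted average recall over the seven classes, matching the paper's metric.
    recalls.append(recall_score(y[test_idx], y_pred, average="macro", zero_division=0))

print(f"mean UAR over {logo.get_n_splits(groups=speakers)} held-out speakers: "
      f"{np.mean(recalls):.3f}")
```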

    Intelligent Knee Sleeves: A Real-time Multimodal Dataset for 3D Lower Body Motion Estimation Using Smart Textile

    The kinematics of human movements and locomotion are closely linked to the activation and contraction of muscles. To investigate this, we present a multimodal dataset with benchmarks collected using a novel pair of Intelligent Knee Sleeves (Texavie MarsWear Knee Sleeves) for human pose estimation. Our system uses synchronized datasets that comprise time-series data from the Knee Sleeves and the corresponding ground-truth labels from a motion capture camera system. We employ these to generate 3D human models based solely on the wearable data of individuals performing different activities. We demonstrate the effectiveness of this camera-free system and machine learning algorithms in the assessment of various movements and exercises, including generalization to unseen exercises and individuals. The results show an average error of 7.21 degrees across all eight lower-body joints when compared to the ground truth, indicating the effectiveness and reliability of the Knee Sleeve system for predicting lower-body joints beyond the knees. The results enable human pose estimation in a seamless manner without being limited by visual occlusion or the field of view of cameras. Our results show the potential of multimodal wearable sensing in a variety of applications, from home fitness to sports, healthcare, and physical rehabilitation focusing on pose and movement estimation. Accepted at the Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track.
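
    To make the camera-free estimation idea concrete, the sketch below maps a window of wearable sensor channels to eight joint angles with a plain regressor and reports the mean absolute error in degrees, the same style of metric as the 7.21-degree figure above. The sensor count, window length, synthetic data, and ridge regressor are assumptions for illustration, not the authors' model or dataset.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_windows, window_len, n_channels, n_joints = 2000, 50, 16, 8

# Stand-in knee-sleeve time-series windows and mocap joint-angle labels (degrees).
X = rng.normal(size=(n_windows, window_len, n_channels))
true_W = rng.normal(size=(window_len * n_channels, n_joints))
y = X.reshape(n_windows, -1) @ true_W * 0.1 + rng.normal(scale=2.0, size=(n_windows, n_joints))

X_flat = X.reshape(n_windows, -1)              # flatten each window into one feature vector
X_tr, X_te, y_tr, y_te = train_test_split(X_flat, y, test_size=0.2, random_state=0)

model = Ridge(alpha=1.0).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Mean absolute angular error per joint and averaged over all eight joints.
per_joint_mae = np.mean(np.abs(pred - y_te), axis=0)
print("per-joint MAE (deg):", np.round(per_joint_mae, 2))
print("average MAE (deg):", round(float(per_joint_mae.mean()), 2))
```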

    Experimental evaluation of a multi-modal user interface for a robotic service

    This paper reports the experimental evaluation of a Multi-Modal User Interface (MMUI) designed to enhance the user experience in terms of service usability and to increase the acceptability of assistive robot systems by elderly users. The MMUI system offers users two main modalities for sending commands: a GUI, usually running on a tablet attached to the robot, and a SUI, using a wearable microphone on the user. The study involved fifteen participants, aged between 70 and 89 years old, who were invited to interact with a robotic platform customized for providing everyday care and services to the elderly. The experimental task for the participants was to order a meal from three different menus using any interaction modality they liked. Quantitative and qualitative data analyses demonstrate a positive evaluation by users and show that multi-modal means of interaction can help make elderly-robot interaction more flexible and natural.

    Towards The Development of A Wearable Feedback System for Monitoring the Activities of the Upper-Extremities

    Background: Body motion data registered by wearable sensors can provide objective feedback to patients on the effectiveness of the rehabilitation interventions they undergo. Such feedback may motivate patients to keep increasing the amount of exercise they perform, thus facilitating their recovery during physical rehabilitation therapy. In this work, we propose a novel wearable and affordable system which can predict different postures of the upper extremities by classifying force myographic (FMG) signals of the forearm in real time.
    Methods: An easy-to-use force sensing resistor (FSR) strap to extract upper-extremity FMG signals was prototyped. The FSR strap was designed to be placed on the proximal portion of the forearm and capture the activities of the main muscle groups with eight force input channels. A non-kernel-based extreme learning machine (ELM) classifier with a sigmoid activation function was implemented for real-time classification due to its fast learning characteristics. A test protocol was designed to classify in real time six upper-extremity postures needed to successfully complete a drinking task, a functional exercise often used in constraint-induced movement therapy. Six healthy volunteers participated in the test. Each participant repeated the drinking task three times. FMG data and classification results were recorded for analysis.
    Results: The obtained results confirmed that the FMG data captured from the FSR strap produced distinct patterns for the selected upper-extremity postures of the drinking task. With the non-kernel-based ELM, the postures associated with the drinking task were predicted in real time with an average overall accuracy of 92.33% and a standard deviation of 3.19%.
    Conclusions: This study showed that the proposed wearable FSR strap was able to capture eight FMG signals from the forearm. In addition, the implemented ELM algorithm was able to correctly classify in real time the six postures associated with the drinking task. The obtained results therefore indicate that the proposed system has potential for providing instant feedback during functional rehabilitation exercises.
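
    The non-kernel ELM used here has a compact closed-form training rule: random input weights, a sigmoid hidden layer, and output weights obtained by a single pseudoinverse solve, which is what makes it fast to train. The NumPy sketch below implements that rule with stand-in FMG data (eight force channels, six postures); the dimensions and data are assumptions, not the study's recordings.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SigmoidELM:
    """Non-kernel extreme learning machine with a sigmoid hidden layer."""

    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y, n_classes):
        n_features = X.shape[1]
        # Input weights and biases are drawn at random and never trained.
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = sigmoid(X @ self.W + self.b)
        T = np.eye(n_classes)[y]               # one-hot targets
        # Output weights come from a single least-squares solve (pseudoinverse).
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        H = sigmoid(X @ self.W + self.b)
        return np.argmax(H @ self.beta, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(600, 8))              # 8 FMG force channels per sample
    y = rng.integers(0, 6, size=600)           # 6 drinking-task postures
    elm = SigmoidELM(n_hidden=80).fit(X[:500], y[:500], n_classes=6)
    acc = np.mean(elm.predict(X[500:]) == y[500:])
    print(f"held-out accuracy on random stand-in data: {acc:.2f}")
```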

    Using Deep Learning for Task and Tremor Type Classification in People with Parkinson’s Disease

    Hand tremor is one of the dominant symptoms of Parkinson’s disease (PD) and significantly limits activities of daily living. Along with medications, wearable devices have been proposed to suppress tremor. However, suppressing tremor without interfering with voluntary motion remains challenging, and improvements are needed. The main goal of this work was to design algorithms for the automatic identification of the tremor type and voluntary motions, using only surface electromyography (sEMG) data. Towards this goal, a bidirectional long short-term memory (BiLSTM) algorithm was implemented that uses sEMG data to identify the motion and tremor type of people living with PD while performing a task. Moreover, in order to automate the training process, hyperparameter selection was performed using a regularized evolutionary algorithm. The results show that the accuracy of task classification among 15 people living with PD was (Formula presented.), and the accuracy of tremor classification was (Formula presented.). Both models performed significantly above chance levels (20% and 33% for task and tremor classification, respectively). Thus, it was concluded that the trained models, based purely on sEMG signals, could successfully identify the task and tremor types.
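
    A minimal BiLSTM classifier of the kind described, consuming windows of multi-channel sEMG and emitting a task label, could look like the PyTorch sketch below. The channel count, window length, hidden size, and the five task classes (matching the 20% chance level above) are illustrative assumptions, and the hyperparameters are fixed here rather than selected by an evolutionary search.

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Bidirectional LSTM over sEMG windows followed by a linear classifier."""

    def __init__(self, n_channels=8, hidden_size=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden_size,
                            num_layers=1, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, time, channels)
        _, (h, _) = self.lstm(x)
        # Concatenate the final forward and backward hidden states.
        feats = torch.cat([h[-2], h[-1]], dim=1)
        return self.head(feats)

if __name__ == "__main__":
    model = BiLSTMClassifier(n_channels=8, n_classes=5)
    semg_window = torch.randn(4, 200, 8)       # 4 windows, 200 time steps, 8 sEMG channels
    logits = model(semg_window)
    print(logits.shape)                        # torch.Size([4, 5])
```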