
    Multimodal Signal Processing and Learning Aspects of Human-Robot Interaction for an Assistive Bathing Robot

    We explore new aspects of assistive living through smart human-robot interaction (HRI), involving automatic recognition and online validation of speech and gestures in a natural interface and providing social features for HRI. We introduce a complete framework and resources for a real-life scenario in which elderly subjects are supported by an assistive bathing robot, addressing health and hygiene care issues. We contribute a new dataset, a suite of data-acquisition tools, and a state-of-the-art pipeline for multimodal learning within the framework of the I-Support bathing robot, with emphasis on audio and RGB-D visual streams. We address privacy concerns by evaluating the depth visual stream alongside the RGB stream, using Kinect sensors. The audio-gestural recognition task on this new dataset reaches accuracy of up to 84.5%, while online validation of the I-Support system with elderly users achieves up to 84% when the two modalities are fused. Given the difficulty of the task, these results are promising enough to support further research in multimodal recognition for assistive social HRI. Upon acceptance of the paper, part of the data will be made publicly available.
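
    The abstract does not spell out the fusion scheme; a minimal sketch of weighted late fusion, a common baseline for combining per-class scores from a speech recognizer and a gesture recognizer, might look like the following (the weight w_audio and the three command classes are illustrative assumptions, not values from the paper):

        import numpy as np

        def late_fusion(audio_scores, gesture_scores, w_audio=0.5):
            # Weighted sum of per-class posteriors from the two modalities.
            # w_audio is a hypothetical fusion weight; in practice it would
            # be tuned on held-out validation data.
            fused = (w_audio * np.asarray(audio_scores)
                     + (1.0 - w_audio) * np.asarray(gesture_scores))
            return int(np.argmax(fused))  # index of the recognized command

        # Example with three hypothetical commands: "wash", "stop", "repeat".
        audio = [0.2, 0.5, 0.3]    # the speech recognizer is uncertain
        gesture = [0.7, 0.2, 0.1]  # the gesture recognizer favors "wash"
        print(late_fusion(audio, gesture))  # -> 0 ("wash")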

    Introduction: The Fourth International Workshop on Epigenetic Robotics

    As in previous editions, this workshop aims to be a forum for multi-disciplinary research ranging from developmental psychology to the neural sciences (in the widest sense) and robotics, including computational studies. The aim is two-fold: on the one hand, understanding the brain through engineering embodied systems and, on the other, building artificial epigenetic systems. The term "epigenetic" carries the idea that we are interested in studying development through interaction with the environment. This idea entails the embodiment of the system, situatedness in the environment, and of course a prolonged period of postnatal development during which this interaction can actually take place. This is still a relatively new endeavor, although the seeds of the developmental robotics community have been in the air since the nineties (Berthouze and Kuniyoshi, 1998; Metta et al., 1999; Brooks et al., 1999; Breazeal, 2000; Kozima and Zlatev, 2000). A few had the intuition (see Lungarella et al., 2003, for a comprehensive review) that intelligence could not possibly be engineered simply by copying systems that are "ready made", but rather that the development of the system plays a major role. This integration of disciplines raises the important issue of learning on the multiple scales of developmental time: that is, how to build systems that can eventually learn in any environment, rather than programming them for a specific environment. On the other hand, the hope is that robotics might become a new tool for brain science, much as simulation and modeling have become for the study of the motor system. Our community is still very much evolving and "under construction", and for this reason we tried to encourage submissions from the psychology community. Additionally, we invited four neuroscientists and no roboticists for the keynote lectures. We received a record number of submissions (more than 50), and given the overall size and duration of the workshop, together with our desire to maintain a single-track format, we had to be more selective than ever in the review process (a 20% acceptance rate on full papers). This is, if not an index of quality, at least an index of the interest that gravitates around this still-new discipline.

    Field test of multi-hop image sensing network prototype on a city-wide scale

    Wireless multimedia sensor networks drastically stretch the horizon of traditional monitoring and surveillance systems, but most existing research has used Zigbee or WiFi as the communication technology. Both technologies use ultra-high frequencies (mainly 2.4 GHz) and suffer from a relatively short transmission range (around 100 m line-of-sight). The objective of this paper is to assess the feasibility and potential of transmitting image information using RF modules at lower frequencies (e.g. 433 MHz) in order to achieve larger-scale deployment, such as a city scenario. The Arduino platform is used for its low cost and simplicity. The hardware properties are detailed in the article, followed by an investigation of optimum configurations for the system. After initial range testing showed a line-of-sight transmission distance of over 2000 m, the prototype network was installed in a real-life city plot for further examination of its performance. A range of suitable applications is proposed, along with suggestions for future research.
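
    The abstract does not describe the frame format; as a rough illustration of the kind of chunking such a low-rate 433 MHz link requires, the sketch below splits an image into small sequence-numbered frames so a multi-hop sink can reorder them and detect losses (the 32-byte payload and the header layout are assumptions, not the paper's protocol):

        import struct

        PAYLOAD = 32  # bytes per frame; small payloads suit low-rate RF modules

        def frames(image_bytes):
            # Split the image into sequence-numbered frames. The 4-byte
            # header (sequence number, total count) lets the sink reorder
            # frames and spot gaps on a multi-hop path.
            total = (len(image_bytes) + PAYLOAD - 1) // PAYLOAD
            for seq in range(total):
                chunk = image_bytes[seq * PAYLOAD:(seq + 1) * PAYLOAD]
                yield struct.pack(">HH", seq, total) + chunk

        def reassemble(received_frames):
            # received_frames: iterable of frames, possibly out of order.
            parts, total = {}, 0
            for frame in received_frames:
                seq, total = struct.unpack(">HH", frame[:4])
                parts[seq] = frame[4:]
            if len(parts) != total:
                raise ValueError("missing frames; request retransmission")
            return b"".join(parts[i] for i in range(total))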

    Tangible user interfaces : past, present and future directions

    In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real, non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy, and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from cognitive science, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.

    Teaching humanoid robotics by means of human teleoperation through RGB-D sensors

    This paper presents a graduate course project on humanoid robotics offered by the University of Padova. The goal is to safely lift an object by teleoperating a small humanoid robot. Students have to map human limbs onto robot joints, guarantee the robot's stability during motion, and teleoperate the robot to perform the correct movement. We introduce the following innovative aspects with respect to classical robotics classes: i) the use of humanoid robots as teaching tools; ii) the simplification of the stable-locomotion problem by exploiting the potential of teleoperation; iii) the adoption of a Project-Based Learning constructivist approach as the teaching methodology. The learning objectives of both the course and the project are introduced and compared with the students' background. The design and constraints students have to deal with are reported, together with the amount of time they and their instructors dedicated to solving the tasks. A set of evaluation results is provided to validate the authors' aims, including the students' personal feedback. Possible future improvements are discussed, in the hope of encouraging the further spread of educational robotics in schools at all levels.
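
    As a flavor of the limb-to-joint mapping the students implement, the sketch below computes an elbow flexion angle from three 3-D joint positions of the kind an RGB-D skeleton tracker provides, then clamps it to a robot joint range (the joint limits and coordinate conventions are illustrative assumptions, not the course's actual mapping):

        import numpy as np

        def elbow_angle(shoulder, elbow, wrist):
            # Angle between the upper-arm and forearm vectors, in radians.
            upper = np.asarray(shoulder, dtype=float) - np.asarray(elbow, dtype=float)
            fore = np.asarray(wrist, dtype=float) - np.asarray(elbow, dtype=float)
            cos_a = np.dot(upper, fore) / (np.linalg.norm(upper) * np.linalg.norm(fore))
            return float(np.arccos(np.clip(cos_a, -1.0, 1.0)))

        def to_joint_command(angle, lo=0.0, hi=2.5):
            # Clamp to the robot's (hypothetical) joint limits before sending.
            return min(max(angle, lo), hi)

        # Example with made-up tracker coordinates (meters):
        print(to_joint_command(elbow_angle([0, 0, 0], [0.3, 0, 0], [0.3, 0.25, 0])))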

    Multi-Platform Intelligent System for Multimodal Human-Computer Interaction

    We present a flexible human-robot interaction architecture that incorporates emotions and moods to provide a natural experience for humans. To determine the emotional state of the user, information representing eye gaze and facial expression is combined with other contextual information, such as whether the user is asking questions or has been quiet for some time. Subsequently, an appropriate robot behaviour is selected from a multi-path scenario. This architecture can easily be adapted to interactions with non-embodied agents, such as avatars on a mobile device or a PC. We present the outcome of evaluating an implementation of our proposed architecture as a whole, as well as of its modules for detecting emotions and questions. The results are promising and provide a basis for further development.
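
    The abstract leaves the behaviour-selection logic unspecified; a toy rule-based selector over the fused signals it mentions (emotional state, question asking, silence duration) could look like the sketch below, where the state names, thresholds, and behaviours are all hypothetical:

        def select_behaviour(emotion, asked_question, quiet_seconds):
            # Order matters: direct requests take priority over mood handling.
            if asked_question:
                return "answer_question"
            if quiet_seconds > 30.0:        # user has been silent for a while
                return "prompt_engagement"  # proactively re-engage the user
            if emotion in ("sad", "frustrated"):
                return "comfort"
            if emotion == "happy":
                return "continue_activity"
            return "neutral_idle"

        print(select_behaviour("sad", False, 5.0))  # -> "comfort"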