
    Active learning based on computer vision and human-robot interaction for the user profiling and behavior personalization of an autonomous social robot

    Social robots coexist with humans in situations where they have to exhibit proper communication skills. Since users may have different features and communicative procedures, personalizing human-robot interactions is essential for the success of these interactions. This manuscript presents Active Learning based on computer vision and human-robot interaction for user recognition and profiling to personalize robot behavior. The system identifies people using Intel-face-detection-retail-004 and FaceNet for face recognition and obtains users' information through interaction. The system aims to improve human-robot interaction by (i) using online learning to allow the robot to identify the users and (ii) retrieving users' information to fill out their profiles and adapt the robot's behavior. Since user information is necessary for adapting the robot for each interaction, we hypothesized that users would consider creating their profile by interacting with the robot more entertaining and easier than taking a survey. We validated our hypothesis with three scenarios: the participants completed their profiles using an online survey, by interacting with a dull robot, or by interacting with a cheerful robot. The results show that participants gave the cheerful robot a higher usability score (82.14/100 points) and were more entertained while creating their profiles with the cheerful robot than in the other scenarios. Statistically significant differences in usability were found between the scenarios using the robot and the scenario that involved the online survey. Finally, we show two scenarios in which the robot interacts with a known user and an unknown user to demonstrate how it adapts to the situation. The research leading to these results has received funding from the projects: Robots Sociales para Estimulación Física, Cognitiva y Afectiva de Mayores (ROSES), RTI2018-096338-B-I00, funded by the Spanish Ministry of Science, Innovation and Universities; Robots sociales para mitigar la soledad y el aislamiento en mayores (SOROLI), PID2021-123941OA-I00, funded by Agencia Estatal de Investigación (AEI), Spanish Ministry of Science and Innovation. This publication is part of the R&D&I project PLEC2021-007819 funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR.
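
    A minimal sketch of the embedding-based user identification and online profile building described above, assuming a face detector and a FaceNet-style encoder have already produced a face embedding; the similarity threshold, the update rule, and the profile fields are illustrative placeholders, not the authors' implementation:

        import numpy as np

        class UserProfileStore:
            """Toy store of known users: one reference embedding plus a profile per user."""

            def __init__(self, threshold: float = 0.7):
                self.embeddings = {}   # user_id -> reference face embedding
                self.profiles = {}     # user_id -> info collected through dialogue
                self.threshold = threshold

            def identify(self, embedding: np.ndarray):
                """Return the best-matching known user id, or None if the face is new."""
                query = embedding / np.linalg.norm(embedding)
                best_id, best_sim = None, -1.0
                for user_id, ref in self.embeddings.items():
                    sim = float(np.dot(query, ref / np.linalg.norm(ref)))
                    if sim > best_sim:
                        best_id, best_sim = user_id, sim
                return best_id if best_sim >= self.threshold else None

            def enroll(self, user_id: str, embedding: np.ndarray):
                """Online learning step: add a new user or refine an existing reference."""
                if user_id in self.embeddings:
                    self.embeddings[user_id] = 0.8 * self.embeddings[user_id] + 0.2 * embedding
                else:
                    self.embeddings[user_id] = embedding
                    self.profiles[user_id] = {}

            def update_profile(self, user_id: str, field: str, value):
                """Store an answer gathered while chatting (e.g. favourite activities)."""
                self.profiles[user_id][field] = value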

    A Systematic Review of Adaptivity in Human-Robot Interaction

    As the field of social robotics grows, a consensus has emerged on designing and implementing robotic systems that are capable of adapting based on user actions. Such adaptation may be driven by users' emotions, personality, or memory of past interactions. We therefore believe it is important to review past research on adaptive robots used in various social environments. In this paper, we present a systematic review of the adaptive interactions reported across a number of domain areas in Human-Robot Interaction and give future directions that can guide the design of future adaptive social robots. We conjecture that this will help towards achieving long-term applicability of robots in various social domains.

    Pepper4Museum: Towards a Human-like Museum Guide

    With the recent advances in technology, new ways to engage visitors in a museum have been proposed. Relevant examples range from the simple use of mobile apps and interactive displays to virtual and augmented reality settings. Recently, social robots have been used to engage visitors in museum tours, thanks to their ability to interact with humans in a natural and familiar way. In this paper, we present our preliminary work on the use of a social robot, Pepper in this case, as an innovative approach to engaging people during museum visits. To this aim, we endowed Pepper with a vision module that allows it to perceive the visitor and the artwork they are looking at, as well as to estimate their age and gender. These data are used to provide the visitor with recommendations about artworks they might like to see during the visit. We tested the proposed approach in our research lab, and preliminary experiments show its feasibility.
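
    A minimal sketch of how perceived visitor attributes could be turned into artwork recommendations, under stated assumptions: the catalogue, interest tags, and the age-based mapping below are hypothetical placeholders, not the Pepper4Museum vision or recommendation modules.

        from dataclasses import dataclass, field

        @dataclass
        class Artwork:
            title: str
            tags: set = field(default_factory=set)   # e.g. {"modern", "sculpture"}

        def recommend(artworks, age: int, top_k: int = 3):
            """Rank artworks by overlap with interest tags guessed from the visitor's age."""
            # Toy demographic-to-interest mapping; a real system would learn this.
            interests = {"classic", "painting"} if age >= 50 else {"modern", "interactive"}
            ranked = sorted(artworks, key=lambda a: len(a.tags & interests), reverse=True)
            return [a.title for a in ranked[:top_k]]

        catalogue = [
            Artwork("Water Lilies", {"classic", "painting"}),
            Artwork("Balloon Dog", {"modern", "sculpture"}),
            Artwork("Rain Room", {"modern", "interactive"}),
        ]
        print(recommend(catalogue, age=34, top_k=2))   # -> ['Rain Room', 'Balloon Dog']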

    Service robots in hospitals: new perspectives on niche evolution and technology affordances

    Changing demands in society and the limited capabilities of health systems have paved the way for robots to move out of industrial contexts and enter more human-centered environments such as health care. We explore the shared beliefs and concerns of health workers about the introduction of autonomously operating service robots in hospitals and professional care facilities. By means of Q-methodology, a mixed research approach specifically designed for studying subjective thought patterns, we identify five potential end-user niches, each of which perceives different affordances and outcomes from using service robots in its working environment. Our findings allow for a better understanding of the resistance and susceptibility of different users in a hospital and encourage managerial awareness of the varying demands, needs, and surrounding conditions that a service robot must contend with. We also discuss general insights into presenting Q-methodology results and how an affordance-based view could inform the adoption, appropriation, and adaptation of emerging technologies.

    Developing an Autonomous Mobile Robotic Device for Monitoring and Assisting Older People

    The progressive increase of the elderly population worldwide calls for technological solutions capable of improving the life prospects of people suffering from dementias such as Alzheimer's disease. Socially Assistive Robotics (SAR) applied to elderly care is a solution that can ensure, through observation and monitoring of behaviors, their safety and improve their physical and cognitive health. A social robot can autonomously and tirelessly monitor a person daily, providing assistive tasks such as reminding them to take medication and suggesting activities to keep the assisted person active both physically and cognitively. However, many projects in this area have not considered the preferences, needs, personality, and cognitive profiles of older people. Moreover, other projects have developed specific robotic applications that are difficult to reuse and adapt to other hardware devices and other functional contexts. This thesis presents the development of a scalable, modular, multi-tenant robotic application and its testing in real-world environments. This work is part of the UPA4SAR project "User-centered Profiling and Adaptation for Socially Assistive Robotics", which aimed to develop a low-cost robotic application for faster deployment among the elderly population. The architecture of the proposed robotic system is modular, robust, and scalable because its functionality is implemented as microservices with event-based communication. To improve robot acceptance, these functionalities adapt the robot's behaviors based on the preferences and personality of the assisted person. A key part of the assistance is the monitoring of activities, which are recognized through the deep neural network models proposed in this work. The final experimentation of the project, carried out in the homes of elderly volunteers, was performed with the robotic system operating in complete autonomy. Daily care plans customized to the person's needs and preferences were executed, including notification tasks to remind when to take medication, tasks to check whether basic nutrition activities were accomplished, and entertainment and companionship tasks with games, videos, and music for the cognitive and physical stimulation of the patient.
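
    A minimal sketch of the event-based microservice communication pattern mentioned above, reduced to an in-process publish/subscribe bus; the topic names, services, and messages are illustrative assumptions, not the UPA4SAR system's actual broker or components.

        from collections import defaultdict

        class EventBus:
            """Minimal in-process publish/subscribe bus standing in for a message broker."""

            def __init__(self):
                self._subscribers = defaultdict(list)   # topic -> list of handler callbacks

            def subscribe(self, topic: str, handler):
                self._subscribers[topic].append(handler)

            def publish(self, topic: str, payload: dict):
                for handler in self._subscribers[topic]:
                    handler(payload)

        bus = EventBus()

        # A care-plan service reacts to recognized activities by requesting a notification...
        def on_activity(event):
            if event["activity"] == "medication_due":
                bus.publish("speech/say", {"text": "It is time to take your medication."})

        bus.subscribe("activity/recognized", on_activity)

        # ...and a speech microservice handles the resulting request.
        bus.subscribe("speech/say", lambda event: print("Robot says:", event["text"]))

        bus.publish("activity/recognized", {"activity": "medication_due"})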

    Towards player’s affective and behavioral visual cues as drives to game adaptation

    Recent advances in emotion and affect recognition can play a crucial role in game technology. Moving from typical game controls to controls generated from free gestures is already on the market. Higher-level controls, however, can also be driven by the player's own affective and cognitive behavior during gameplay. In this paper, we explore the player's behavior, as captured by computer vision techniques, together with the player's self-reported details regarding their experience and profile. The objective of the current research is game adaptation aimed at maximizing player enjoyment. To this aim, we explore the ability to infer player engagement and frustration, along with the degree of challenge imposed by the game. The estimated levels of these metrics can feed an engine's artificial intelligence, allowing for game adaptation. This research was supported by the FP7 ICT project SIREN (project no: 258453).
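
    A minimal sketch of how estimated engagement, frustration, and challenge levels could feed an adaptation rule; the thresholds, weights, and adjustment step are hypothetical, not the SIREN project's adaptation engine.

        def adapt_difficulty(difficulty: float, engagement: float,
                             frustration: float, challenge: float) -> float:
            """All inputs are assumed normalized to [0, 1]; returns the new difficulty."""
            if frustration > 0.7 or challenge > 0.8:
                difficulty -= 0.1          # back off when the player is overwhelmed
            elif engagement < 0.4 and challenge < 0.3:
                difficulty += 0.1          # raise the stakes when the player seems bored
            return round(min(max(difficulty, 0.0), 1.0), 2)

        print(adapt_difficulty(0.5, engagement=0.3, frustration=0.2, challenge=0.2))   # -> 0.6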

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is key to any product's acceptance; computer applications and video games offer a unique opportunity to provide a tailored environment for each user that better suits their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed: we show that it is possible to use a software-only method to estimate user emotion.
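
    A minimal sketch of a FLAME-style fuzzy appraisal, mapping the desirability of a game event and the player's expectation of it onto a coarse emotion label; the membership functions, rules, and labels below are toy assumptions, not the authors' model.

        def tri(x, a, b, c):
            """Triangular membership function on [a, c] peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def appraise(desirability: float, expectation: float) -> str:
            """desirability and expectation in [0, 1]; return the strongest emotion label."""
            good = tri(desirability, 0.4, 1.0, 1.6)     # outcome was good for the player
            bad = tri(desirability, -0.6, 0.0, 0.6)     # outcome was bad for the player
            expected = tri(expectation, 0.4, 1.0, 1.6)
            unexpected = tri(expectation, -0.6, 0.0, 0.6)
            # Fuzzy rules combined with min (AND); the strongest activation wins.
            rules = {
                "joy": min(good, expected),
                "pleasant_surprise": min(good, unexpected),
                "distress": min(bad, expected),
                "frustration": min(bad, unexpected),
            }
            return max(rules, key=rules.get)

        print(appraise(desirability=0.9, expectation=0.2))   # -> "pleasant_surprise"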

    An emotion and memory model for social robots: a long-term interaction

    In this thesis, we investigate the role of emotions and memory in social robotic companions. In particular, our aim is to study the effect of an emotion and memory model on sustaining engagement and promoting learning in long-term interaction. Our Emotion and Memory model was based on how humans create memories under various emotional events and states. The model enabled the robot to build a memory account of the user's emotional events during a long-term child-robot interaction. The robot then adapted its behaviour in subsequent interactions by drawing on this memory. The model also had an autonomous decision-making mechanism based on reinforcement learning to select behaviour according to user preference, measured through the user's engagement and learning during the task. The model was implemented on the NAO robot in two different educational setups: first, to promote vocabulary learning and, second, to teach how to calculate the area and perimeter of regular and irregular shapes. We conducted multiple long-term evaluations of our model with children at primary schools to verify its impact on their social engagement and learning. Our results showed that the behaviour generated by our model was able to sustain social engagement and also helped children to improve their learning. Overall, the results highlight the benefits of incorporating memory into child-robot interaction over extended periods of time: it promoted personalisation and contributed to creating a child-robot social relationship in long-term interaction.
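
    A minimal sketch of a reinforcement-learning behaviour selector of the kind described above, as an epsilon-greedy bandit whose reward combines observed engagement and learning gain; the behaviour set, epsilon, learning rate, and reward weights are illustrative assumptions, not the thesis's exact mechanism.

        import random

        class BehaviourSelector:
            def __init__(self, behaviours, epsilon=0.1, alpha=0.2):
                self.q = {b: 0.0 for b in behaviours}   # estimated value of each behaviour
                self.epsilon = epsilon                  # exploration rate
                self.alpha = alpha                      # learning rate

            def choose(self) -> str:
                if random.random() < self.epsilon:
                    return random.choice(list(self.q))      # explore
                return max(self.q, key=self.q.get)          # exploit best-known behaviour

            def update(self, behaviour: str, engagement: float, learning_gain: float):
                """Reward combines engagement and learning gain, both assumed in [0, 1]."""
                reward = 0.5 * engagement + 0.5 * learning_gain
                self.q[behaviour] += self.alpha * (reward - self.q[behaviour])

        selector = BehaviourSelector(["encourage", "hint", "quiz", "joke"])
        action = selector.choose()
        selector.update(action, engagement=0.8, learning_gain=0.6)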