
    Control Architecture for a Telepresence and Home-Care Assistance Robot

    The aging population is driving up the cost of hospital care. To keep these costs from becoming prohibitive, telepresence robots that assist with care and daily activities offer a way to help older adults maintain their autonomy at home. Current robots individually offer interesting capabilities, but it would be beneficial to combine their strengths. Such an integration is possible through a decision-making architecture that couples navigation, voice tracking, and information acquisition in order to assist the remote operator, or even substitute for them. In this project, the HBBA (Hybrid Behavior-Based Architecture) control architecture serves as the backbone unifying the required libraries, RTAB-Map (Real-Time Appearance-Based Mapping) and ODAS (Open embeddeD Audition System), to achieve this integration. RTAB-Map is a library for simultaneous localization and mapping with various sensor configurations while meeting online processing constraints. ODAS is a library for localizing, tracking, and separating sound sources in real-world environments. The objectives are to evaluate these capabilities in real environments by deploying the robotic platform in different homes, and to assess the potential of such an integration by carrying out an autonomous scenario of assistance with taking vital signs. The Beam+ robotic platform is used for this integration, augmented with an RGB-D camera, an eight-microphone array, an onboard computer, and additional batteries. The resulting implementation, named SAM, was evaluated in 10 homes to characterize navigation and conversation tracking. The navigation results suggest that the navigation capabilities work within certain constraints related to sensor placement and environmental conditions, requiring operator intervention to compensate. The voice-tracking modality works well in quiet environments, but improvements are needed in noisy settings. Consequently, carrying out a fully autonomous assistance scenario depends on the combined performance of these capabilities, which makes it difficult to envision removing the operator from the decision loop entirely. Integrating these modalities with HBBA proved feasible and conclusive, and opens the door to reusing the implementation on other robotic platforms that could compensate for the shortcomings observed with the Beam+ platform.
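    As a rough illustration of how such an integration might look at the middleware level, the following is a minimal sketch assuming a ROS-based setup in which ODAS publishes the dominant tracked sound-source direction on a hypothetical /odas/tracked_source topic in the robot base frame; the topic name and message layout are illustrative assumptions, not the actual SAM interfaces.

```python
#!/usr/bin/env python
# Minimal sketch: steer the robot base toward the currently tracked speaker.
# Assumption (not the actual SAM interface): ODAS publishes the dominant
# tracked source direction as a geometry_msgs/PointStamped on
# /odas/tracked_source, expressed in the robot base frame.
import math

import rospy
from geometry_msgs.msg import PointStamped, Twist


class VoiceFollower(object):
    def __init__(self):
        # Velocity commands go to the standard differential-drive topic.
        self.cmd_pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("/odas/tracked_source", PointStamped, self.on_source)

    def on_source(self, msg):
        # Bearing of the sound source relative to the robot's forward axis.
        bearing = math.atan2(msg.point.y, msg.point.x)
        cmd = Twist()
        # Simple proportional controller: rotate the base to face the speaker,
        # with the angular rate clamped to +/- 0.5 rad/s.
        cmd.angular.z = max(-0.5, min(0.5, 1.0 * bearing))
        self.cmd_pub.publish(cmd)


if __name__ == "__main__":
    rospy.init_node("voice_follower")
    VoiceFollower()
    rospy.spin()
```

    In a behavior-based architecture such as HBBA, a node like this would be one behavior among several, with the arbitration layer deciding when voice following takes precedence over, say, navigation toward a goal.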

    Sharing Stress With a Robot: What Would a Robot Say?

    With the prevalence of mental health problems today, designing human-robot interaction for mental health intervention is not only possible, but critical. The current experiment examined how three types of robot disclosure (emotional, technical, and by-proxy) affect robot perception and human disclosure behavior during a stress-sharing activity. Emotional robot disclosure resulted in the lowest robot perceived safety. Post-hoc analysis revealed that increased perceived stress predicted reduced human disclosure, user satisfaction, robot likability, and future robot use. Negative attitudes toward robots also predicted reduced intention for future robot use. This work informs the design of robot disclosure, as well as how individual attributes, such as perceived stress, can impact human-robot interaction in a mental health context.

    Human-Machine Communication: Complete Volume. Volume 1

    This is the complete volume of HMC Volume 1.

    Does Repetition Affect Acceptance? A Social Robot Adoption Model for Technologically-Savvy Users in the Caribbean

    There is little research on use and adoption factors for social robots in the Caribbean. In one pilot study, the Zenbo companion robot was used to evaluate potential social robot use in a Caribbean setting. An informal observation from that study was the existence of communication failure: participants frequently repeated commands to the robot. Based on this observation, we undertook this study to identify the factors that affect robot adoption among technologically-savvy Caribbean users (undergraduate Computer Science and Information Technology (IT) students) and to create a technology adoption model for this type of user. Our model shows that communication failure, manifested as repetition, has no effect on technology acceptance. Additionally, social attitudes towards robots, like the perception of competence and warmth, also have no effect on adoption. This social robot adoption model is the first of its kind for the Caribbean and helps contextualize factors that can affect social robots' adoption in the region.

    Developing an Affect-Aware Rear-Projected Robotic Agent

    Social (or sociable) robots are designed to interact with people in a natural and interpersonal manner. They are becoming an integrated part of our daily lives and have achieved positive outcomes in several applications such as education, health care, quality of life, and entertainment. Despite significant progress towards the development of realistic social robotic agents, a number of problems remain to be solved. First, current social robots either lack the ability to have deep social interaction with humans, or they are very expensive to build and maintain. Second, current social robots have yet to reach the full emotional and social capabilities necessary for rich and robust interaction with human beings. To address these problems, this dissertation presents the development of a low-cost, flexible, affect-aware rear-projected robotic agent (called ExpressionBot) that is designed to support verbal and non-verbal communication between the robot and humans, with the goal of closely modeling the dynamics of natural face-to-face communication. The developed robotic platform uses state-of-the-art character animation technologies to create an animated human face (aka avatar) that is capable of showing facial expressions, realistic eye movement, and accurate visual speech, and then projects this avatar onto a face-shaped translucent mask. The mask and the projector are then rigged onto a neck mechanism that can move like a human head. Since an animation is projected onto a mask, the robotic face is a highly flexible research tool, mechanically simple, and low-cost to design, build, and maintain compared with mechatronic and android faces. The results of our comprehensive Human-Robot Interaction (HRI) studies illustrate the benefits and value of the proposed rear-projected robotic platform over a virtual agent with the same animation displayed on a 2D computer screen. The results indicate that ExpressionBot is well accepted by users, with some advantages in expressing facial expressions more accurately and perceiving mutual eye-gaze contact. To improve the social capabilities of the robot and create an expressive and empathic (affect-aware) social agent capable of interpreting users' emotional facial expressions, we developed a new Deep Neural Network (DNN) architecture for Facial Expression Recognition (FER). The proposed DNN was initially trained on seven well-known publicly available databases, and performed significantly better than, or comparably to, traditional convolutional neural networks and other state-of-the-art methods in both accuracy and learning time. Since the performance of an automated FER system depends highly on its training data, and the eventual goal of the proposed robotic platform is to interact with users in an uncontrolled environment, a database of facial expressions in the wild (called AffectNet) was created by querying emotion-related keywords from different search engines. AffectNet contains more than 1M images with faces and 440,000 manually annotated images with facial expressions, valence, and arousal. Two DNNs were trained on AffectNet to classify the facial expression images and to predict the values of valence and arousal. Various evaluation metrics show that our deep neural network approaches trained on AffectNet perform better than conventional machine learning methods and available off-the-shelf FER systems.

    We then integrated this automated FER system into the spoken dialog of our robotic platform to extend and enrich the capabilities of ExpressionBot beyond spoken dialog and create an affect-aware robotic agent that can measure and infer users' affect and cognition. Three social/interaction aspects (task engagement, being empathic, and likability of the robot) were measured in an experiment with the affect-aware robotic agent. The results indicate that users rated our affect-aware agent as empathic and likable as a robot in which the user's affect is recognized by a human (WoZ). In summary, this dissertation presents the development and HRI studies of a perceptive, expressive, conversational, rear-projected, life-like robotic agent (aka ExpressionBot or Ryan) that models natural face-to-face communication between human and empathic agent. The results of our in-depth human-robot interaction studies show that this robotic agent can serve as a model for creating the next generation of empathic social robots.
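    To make the two-network setup concrete, here is a minimal sketch assuming a PyTorch-style model with a shared convolutional backbone, a softmax head over discrete expression categories, and a regression head for valence and arousal; the layer sizes, the input resolution, and the choice to share a backbone are illustrative assumptions, not the dissertation's actual architecture.

```python
# Minimal sketch of an expression classifier plus valence/arousal regressor.
# The architecture below is illustrative; it is not the dissertation's model.
import torch
import torch.nn as nn


class FERNet(nn.Module):
    def __init__(self, num_expressions=8):  # assumed number of categories
        super().__init__()
        # Small shared convolutional backbone over 96x96 grayscale face crops.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head 1: discrete expression classification (logits).
        self.expr_head = nn.Linear(128, num_expressions)
        # Head 2: continuous valence and arousal, squashed into [-1, 1].
        self.va_head = nn.Sequential(nn.Linear(128, 2), nn.Tanh())

    def forward(self, x):
        features = self.backbone(x)
        return self.expr_head(features), self.va_head(features)


# Example forward pass on a batch of four face crops.
model = FERNet()
logits, valence_arousal = model(torch.randn(4, 1, 96, 96))
print(logits.shape, valence_arousal.shape)  # (4, 8) and (4, 2)
```

    In an affect-aware dialog loop, the classifier output would drive the agent's empathic responses while the valence/arousal estimates provide a continuous affect signal between discrete expression events.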

    Human-Machine Communication: Complete Volume 5. Gender and Human-Machine Communication

    This is the complete volume of HMC Volume 5.

    Quantitative Framework For Social Cultural Interactions

    For an autonomous robot or software agent to participate in the social life of humans, it must have a way to perform a calculus of social behavior. Such a calculus must have explanatory power (it must provide a coherent theory for why the humans act the way they do) and predictive power (it must provide some plausible events from the predicted future actions of the humans). This dissertation describes a series of contributions that would allow agents observing or interacting with humans to perform a calculus of social behavior that takes into account cultural conventions and socially acceptable behavior models. We discuss the formal components of the model: culture-sanctioned social metrics (CSSMs), concrete beliefs (CBs), and action impact functions. Through a detailed case study of a crooked seller who relies on the manipulation of public perception, we show that the model explains how the exploitation of social conventions allows the seller to finalize transactions, despite the fact that the clients know that they are being cheated. In a separate study, we show how the crooked seller can find an optimal strategy with the use of reinforcement learning. We extend the CSSM model to capture the propagation of public perception across multiple social interactions. We model the evolution of the public perception both over a single interaction and during a series of interactions over an extended period of time. An important aspect of modeling the public perception is its propagation: how it is affected by the spatio-temporal context of the interaction, and how the short-term and long-term memory of humans affects the overall public perception. We validated the CSSM model through a user study in which participants cognizant of the modeled culture had to evaluate the impact on the social values. The scenarios used in the experiments modeled emotionally charged social situations in a cross-cultural setting and with the presence of a robot. The scenarios model conflicts of cross-cultural communication as well as ethical, social, and financial choices. This study allowed us to examine whether people sharing the same culture evaluate CSSMs in the same way (the inter-cultural uniformity conjecture). By presenting a wide range of possible metrics, the study also allowed us to determine whether any given metric can be considered a CSSM in a given culture or not.
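    As a loose illustration of what such a calculus could look like computationally, the sketch below assumes a CSSM is a scalar public-perception value that each observed action updates through an impact function, and that fades over time as observers' memory decays; the names, the additive update, and the exponential decay are illustrative assumptions, not the dissertation's formal model.

```python
# Illustrative sketch of culture-sanctioned social metric (CSSM) updates.
# The update rule and decay model are assumptions, not the formal CSSM model.
from dataclasses import dataclass


@dataclass
class CSSM:
    name: str                  # e.g. "dedication" as sanctioned by a culture
    value: float = 0.0         # current public-perception value of the metric
    half_life_s: float = 3600  # memory decay half-life, in seconds

    def decay(self, elapsed_s: float) -> None:
        # Public perception fades as memory of the interaction fades.
        self.value *= 0.5 ** (elapsed_s / self.half_life_s)

    def apply_action(self, impact: float) -> None:
        # An action impact function maps an observed action to a signed
        # increment on the metric; here it is folded into a single number.
        self.value += impact


# Example: a seller's perceived dedication rises after a showy gesture,
# then fades over the two hours before the next client interaction.
dedication = CSSM(name="dedication")
dedication.apply_action(+0.8)
dedication.decay(elapsed_s=7200)
print(round(dedication.value, 3))  # 0.2
```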

    Investigating the influence of situations and expectations on user behavior: empirical analyses in human-robot interaction

    Lohse M. Investigating the influence of situations and expectations on user behavior: empirical analyses in human-robot interaction. Bielefeld (Germany): Bielefeld University; 2010.

    Social sciences are becoming increasingly important for robotics research as work goes on to enable service robots to interact with inexperienced users. This endeavor can only be successful if the robots learn to interpret the users' behavior reliably and, in turn, provide feedback that enables the users to understand the robot. To achieve this goal, the thesis introduces an approach that describes the interaction situation as a dynamic construct with different levels of specificity. The situation concept is the starting point for a model which aims to explain the users' behavior. The second important component of the model is the users' expectations with respect to the robot. Both the situation and the expectations are shown to be the main determinants of the users' behavior. With this theoretical background in mind, the thesis examines interactions from a home tour scenario in which a human teaches a robot about rooms and the objects within them. To analyze the human expectations and behaviors in this situation, two novel methods have been developed. The first is a quantitative method for the analysis of the users' behavior repertoires (speech, gesture, eye gaze, body orientation, etc.); it focuses on the interaction level, which describes the interplay between the robot and the user. The second method also takes the system level into account, which includes the robot components and their interplay; it serves for a detailed task analysis and helps to identify problems that occur in the interaction. By applying these methods, the thesis contributes to the identification of underlying expectations that allow future behavior of the users to be predicted in particular situations. Knowledge about the users' behavior repertoires serves as a cue for the robot about the state of the interaction and the task the users aim to accomplish. It thus enables robot developers to adapt the interaction models of the components to the situation, actual user expectations, and behaviors. The work provides a deeper understanding of the role of expectations in human-robot interaction and contributes to the interaction and system design of interactive robots.
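    For a flavor of what a quantitative behavior-repertoire analysis might involve, the sketch below assumes annotated interaction logs in which each user action is a (timestamp, modality, situation) record, and simply tabulates relative modality frequencies per situation; the record format and the tabulation are illustrative assumptions, not the thesis's actual method.

```python
# Illustrative sketch: tabulate users' behavior repertoires per situation
# from annotated interaction logs. The log format is an assumption.
from collections import Counter, defaultdict

# Each record: (time_s, modality, situation) from a hypothetical annotation tool.
log = [
    (12.4, "speech", "object_teaching"),
    (13.1, "pointing_gesture", "object_teaching"),
    (13.5, "gaze_at_robot", "object_teaching"),
    (40.2, "speech", "room_teaching"),
    (41.0, "body_orientation_shift", "room_teaching"),
    (41.8, "speech", "room_teaching"),
]

# Count how often each modality occurs in each situation.
repertoire = defaultdict(Counter)
for _, modality, situation in log:
    repertoire[situation][modality] += 1

# Relative frequencies hint at which behaviors a robot should expect,
# and could serve as cues for the state of the interaction.
for situation, counts in repertoire.items():
    total = sum(counts.values())
    for modality, n in counts.most_common():
        print(f"{situation}: {modality} = {n / total:.2f}")
```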