591 research outputs found

    Modeling and Controlling Friendliness for An Interactive Museum Robot


    The Philosophical Case for Robot Friendship

    Friendship is an important part of the good life. While many roboticists are eager to create friend-like robots, many philosophers and ethicists are concerned. They argue that robots cannot really be our friends. Robots can only fake the emotional and behavioural cues we associate with friendship. Consequently, we should resist the drive to create robot friends. In this article, I argue that the philosophical critics are wrong. Using the classic virtue-ideal of friendship, I argue that robots can plausibly be considered our virtue friends - that to do so is philosophically reasonable. Furthermore, I argue that even if you do not think that robots can be our virtue friends, they can fulfil other important friendship roles, and can complement and enhance the virtue friendships between human beings.

    Effective Use of CPU and Battery, and Improved Friendliness, Based on a Consciousness Model and Face-Memory Function for Mobile Robots

    Master's thesis (Informatics), University of Tsukuba; conferred March 25, 2019 (No. 41292).

    The perception of a robot partner’s effort elicits a sense of commitment to human-robot interaction

    Previous research has shown that the perception that one’s partner is investing effort in a joint action can generate a sense of commitment, leading participants to persist longer despite increasing boredom. The current research extends this finding to human-robot interaction. We implemented a 2-player version of the classic snake game which became increasingly boring over the course of each round, and operationalized commitment in terms of how long participants persisted before pressing a ‘finish’ button to conclude each round. Participants were informed that they would be linked via internet with their partner, a humanoid robot. Our results reveal that participants persisted longer when they perceived what they believed to be cues of their robot partner’s effortful contribution to the joint action. This provides evidence that the perception of a robot partner’s effort can elicit a sense of commitment to human-robot interaction.
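The abstract's operationalization of commitment - time persisted before pressing the 'finish' button - can be sketched in a few lines. The event-log structure and condition names below are illustrative assumptions, not the study's actual data format:

```python
import statistics

def persistence_seconds(round_events):
    """How long a participant persisted in one round: time from round
    start until the 'finish' button press. `round_events` is a
    hypothetical log of the form {'start': t0, 'finish': t1}."""
    return round_events['finish'] - round_events['start']

def mean_persistence(rounds):
    """Average persistence across rounds within one condition
    (e.g. robot effort cues present vs. absent)."""
    return statistics.mean(persistence_seconds(r) for r in rounds)

# Illustrative comparison between two hypothetical conditions
effort_cues = [{'start': 0.0, 'finish': 48.0}, {'start': 0.0, 'finish': 52.0}]
no_cues = [{'start': 0.0, 'finish': 30.0}, {'start': 0.0, 'finish': 34.0}]
print(mean_persistence(effort_cues))  # 50.0
print(mean_persistence(no_cues))      # 32.0
```

Longer mean persistence in the effort-cue condition is the pattern the study reports as evidence of commitment.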

    Social Perception of Pedestrians and Virtual Agents Using Movement Features

    In many tasks such as navigation in a shared space, humans explicitly or implicitly estimate social information related to the emotions, dominance, and friendliness of other humans around them. This social perception is critical in predicting others’ motions or actions and deciding how to interact with them. Therefore, modeling social perception is an important problem for robotics, autonomous vehicle navigation, and VR and AR applications. In this thesis, we present novel, data-driven models for the social perception of pedestrians and virtual agents based on their movement cues, including gaits, gestures, gazing, and trajectories. We use deep learning techniques (e.g., LSTMs) along with biomechanics to compute the gait features and combine them with local motion models to compute the trajectory features. Furthermore, we compute the gesture and gaze representations using psychological characteristics. We describe novel mappings between these computed gaits, gestures, gazing, and trajectory features and the various components (emotions, dominance, friendliness, approachability, and deception) of social perception. Our resulting data-driven models can identify the dominance, deception, and emotion of pedestrians from videos with an accuracy of more than 80%. We also release new datasets to evaluate these methods. We apply our data-driven models to socially-aware robot navigation and the navigation of autonomous vehicles among pedestrians. Our method generates robot movement based on pedestrians’ dominance levels, resulting in higher rapport and comfort. We also apply our data-driven models to simulate virtual agents with desired emotions, dominance, and friendliness. We perform user studies and show that our data-driven models significantly increase the user’s sense of social presence in VR and AR environments compared to the baseline methods. (Doctor of Philosophy thesis)
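The abstract mentions computing trajectory features from pedestrian movement. As a minimal sketch of what such features might look like - the specific feature names and sampling scheme here are illustrative assumptions, not the thesis's actual pipeline:

```python
import math

def trajectory_features(points, dt=1.0):
    """Compute simple movement cues from a pedestrian trajectory,
    given as a list of (x, y) positions sampled every `dt` seconds.
    Feature names are illustrative, not the thesis's actual ones."""
    speeds = []
    headings = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        speeds.append(math.hypot(dx, dy) / dt)
        headings.append(math.atan2(dy, dx))
    # Total absolute change of heading: a crude proxy for how
    # erratic or evasive the path is.
    turn = sum(abs(b - a) for a, b in zip(headings, headings[1:]))
    return {
        'mean_speed': sum(speeds) / len(speeds),
        'max_speed': max(speeds),
        'total_turning': turn,
    }

# A short straight walk followed by a right-angle turn
feats = trajectory_features([(0, 0), (1, 0), (2, 0), (2, 1)])
print(feats)
```

Features of this kind would then be fed, alongside learned gait representations, into a classifier mapping movement to perceived emotion, dominance, or friendliness.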

    A user experience analysis for a mobile Mixed Reality application for cultural heritage

    Mixed Reality has emerged as a valuable tool for the promotion of cultural heritage. In this context, in particular, the metaphor of virtual portals allows the virtual visit of monuments that are inaccessible or no longer exist in their original form, integrating them into the real environment. This paper presents the development of a Mixed Reality mobile application that proposes a virtual reconstruction of the church of Sant’Elia in Ruggiano, in the southern province of Lecce (Italy). By placing the virtual portal in the same place where the entrance of the church was located, the user can cross this threshold to enter inside and make a virtual journey into the past. The user experience was evaluated by administering a questionnaire to 60 users who tried the application. From the data collected, four user experience factors were identified (interest, focus of attention, presence and usability), which were compared between young and old, male and female users, and between users who had already visited the church in person and all other users. In general, the scores reveal a total independence of the other three factors from usability and a very high level of interest.
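The group comparisons described above reduce to computing per-factor scores from questionnaire items and comparing them across user groups. A minimal sketch, assuming a 1-5 Likert scale and item keys that are purely illustrative (the paper's actual items are not given here):

```python
from statistics import mean

def factor_score(responses, items):
    """Mean score of one UX factor: average, over respondents, of the
    mean of the questionnaire items loading on that factor.
    Item keys and the 1-5 scale are illustrative assumptions."""
    return mean(mean(r[i] for i in items) for r in responses)

# Hypothetical responses from two groups (e.g. prior visitors vs. others)
visited = [{'q1': 5, 'q2': 4}, {'q1': 4, 'q2': 4}]
others = [{'q1': 5, 'q2': 5}, {'q1': 4, 'q2': 5}]
interest_items = ['q1', 'q2']
print(factor_score(visited, interest_items))  # 4.25
print(factor_score(others, interest_items))   # 4.75
```

A real analysis would add a significance test between groups; this sketch only shows the score aggregation step.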

    An Intelligent Robot and Augmented Reality Instruction System

    Human-Centered Robotics (HCR) is a research area that focuses on how robots can empower people to live safer, simpler, and more independent lives. In this dissertation, I present a combination of two technologies to deliver human-centric solutions to an important population. The first nascent area that I investigate is the creation of an Intelligent Robot Instructor (IRI) as a learning and instruction tool for human pupils. The second technology is the use of augmented reality (AR) to create an Augmented Reality Instruction (ARI) system to provide instruction via a wearable interface. To function in an intelligent and context-aware manner, both systems require the ability to reason about their perception of the environment and make appropriate decisions. In this work, I construct a novel formulation of several education methodologies, particularly those known as response prompting, as part of a cognitive framework to create a system for intelligent instruction, and compare these methodologies in the context of intelligent decision making using both technologies. The IRI system is demonstrated through experiments with a humanoid robot that uses object recognition and localization for perception and interacts with students through speech, gestures, and object interaction. The ARI system uses augmented reality, computer vision, and machine learning methods to create an intelligent, contextually aware instructional system. By using AR to teach prerequisite skills that lend themselves well to visual, augmented reality instruction prior to a robot instructor teaching skills that lend themselves to embodied interaction, I am able to demonstrate the potential of each system independently as well as in combination to facilitate students' learning.
    I identify people with intellectual and developmental disabilities (I/DD) as a particularly significant use case and show that IRI and ARI systems can help fulfill the compelling need to develop tools and strategies for people with I/DD. I present results that demonstrate both systems can be used independently by students with I/DD to quickly and easily acquire the skills required for performance of relevant vocational tasks. This is the first successful real-world application of response prompting for decision making in a robotic and augmented reality intelligent instruction system.

    Personalizing Human-Robot Dialogue Interactions using Face and Name Recognition

    Task-oriented dialogue systems are computer systems that aim to provide an interaction indistinguishable from ordinary human conversation while completing user-defined tasks. They achieve this by analyzing users’ intents and choosing appropriate responses. Recent studies show that personalizing conversations with these systems can positively affect their perception and long-term acceptance. Personalized social robots have been widely applied in different fields to provide assistance. In this thesis we work on the development of a scientific conference assistant. The goal of this assistant is to provide conference participants with conference information and to inform them about activities for their spare time during the conference. Moreover, to increase engagement with the robot, our team worked on personalizing the human-robot interaction by means of face and name recognition. To achieve this personalization, we first improved the name-recognition ability of the available physical robot; next, with the consent of the participants, their pictures were taken and used to memorize returning users. As acquiring consent for personal data storage is not an optimal solution, an alternative method for recognizing participants using QR codes on their badges was developed and compared to the pre-trained model in terms of speed. Lastly, the personal details of each participant, such as university and country of origin, were acquired prior to the conference or during the conversation and used in dialogues. The developed robot, called DAGFINN, was displayed at two conferences held this year in Stavanger, where the first installment did not include the personalization feature. Hence, we conclude this thesis by discussing the influence of personalization on dialogues with the robot and participants’ satisfaction with the developed social robot.
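Once a participant is recognized (by face, name, or badge QR code), their stored details can be woven into the dialogue. A minimal sketch of that personalization step - the record fields and greeting wording are illustrative assumptions, not DAGFINN's actual dialogue logic:

```python
def greet(participant, known):
    """Compose a greeting for a conference participant.
    `participant` is a hypothetical record of stored personal details;
    `known` indicates whether recognition (face/name/QR) succeeded."""
    if known:
        return (f"Welcome back, {participant['name']} "
                f"from {participant['university']}! "
                "How is the conference going for you?")
    # Fall back to an introduction when the participant is not recognized.
    return "Hello! I'm DAGFINN, the conference assistant. What's your name?"

alice = {'name': 'Alice', 'university': 'University of Stavanger'}
print(greet(alice, known=True))
print(greet(alice, known=False))
```

The same lookup-then-fill pattern extends to other stored details (e.g. country of origin) and to recommending spare-time activities.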