40 research outputs found

    Recognizing Engagement Behaviors in Human-Robot Interaction

    Based on an analysis of human-human interactions, we have developed an initial model of engagement for human-robot interaction, which includes the concept of connection events, consisting of directed gaze, mutual facial gaze, conversational adjacency pairs, and backchannels. We implemented the model in the open-source Robot Operating System and conducted a human-robot interaction experiment to evaluate it.
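
    The abstract names the four connection-event types but not how they are represented. As a rough illustration only, the sketch below (hypothetical names, not the authors' ROS module) models them as a small Python data structure together with a toy engagement statistic over a recent time window.

        from dataclasses import dataclass
        from enum import Enum, auto

        class ConnectionEventType(Enum):
            DIRECTED_GAZE = auto()       # one party looks at an object and the other follows
            MUTUAL_FACIAL_GAZE = auto()  # both parties look at each other's face
            ADJACENCY_PAIR = auto()      # paired utterances such as question/answer
            BACKCHANNEL = auto()         # brief nod or "uh-huh" acknowledging the speaker

        @dataclass
        class ConnectionEvent:
            event_type: ConnectionEventType
            initiator: str       # "human" or "robot"
            start_time: float    # seconds since the interaction began
            duration: float      # seconds
            succeeded: bool      # whether the partner responded within the expected window

        def recent_success_rate(events: list[ConnectionEvent], now: float, window: float) -> float:
            """Toy engagement statistic: fraction of recent connection events that succeeded."""
            recent = [e for e in events if now - e.start_time <= window]
            return sum(e.succeeded for e in recent) / len(recent) if recent else 0.0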

    Robots and autistic children: a review

    With the advancement of robotics and the growth of the scholarly literature, the range of uses of robots for autistic children has widened, and robots are a promising tool in treatments for individuals with Autism Spectrum Disorder (ASD): different forms of robot (humanoid, non-humanoid, animal-like, toy, and kit) can be employed effectively as support tools to augment learning skills and aid the rehabilitation of individuals with ASD. Robots have been used with children with ASD in several roles, namely modelling, teaching, and skills practice; testing, highlighting, and evaluating; providing feedback or encouragement; joint attention; eliciting social behaviours; emotion recognition and expression; imitation; vocalization; turn-taking; and diagnosis. In this paper, we review the use of robots in the therapy of individuals with ASD, taking into account the related literature recently published in journals and conferences. Articles on the use of robots for the rehabilitation and education of autistic children that reported experimental results on a number of participants were included. After searching digital libraries under these criteria and excluding unrelated and duplicated studies, 39 studies were found. The findings focus mainly on the social communication skills of autistic children and the extent to which robots mitigate their stereotyped behaviours. Deeper research is required in this area, covering all applications of robotics to autistic children, in order to design feasible, low-cost robots that provide high validity.

    Generating Engagement Behaviors in Human-Robot Interaction

    Based on a study of the engagement process between humans, I have developed models for four types of connection events involving gesture and speech: directed gaze, mutual facial gaze, adjacency pairs, and backchannels. I have developed and validated a reusable Robot Operating System (ROS) module that supports engagement between a human and a humanoid robot by generating appropriate connection events. The module implements policies for adding gaze and pointing gestures to referring phrases (including deictic and anaphoric references), performing end-of-turn gazes, responding to human-initiated connection events, and maintaining engagement. The module also provides an abstract interface for receiving information from a collaboration manager using the Behavior Markup Language (BML) and exchanges information with a previously developed engagement recognition module. This thesis also describes a Behavior Markup Language (BML) realizer that has been developed for use in robotic applications. Instead of the existing fixed-timing algorithms used with virtual agents, this realizer uses an event-driven architecture, based on Petri nets, to ensure each behavior is synchronized in the presence of unpredictable variability in robot motor systems. The implementation is robot-independent, open source, and uses the Robot Operating System (ROS).
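
    The thesis attributes its behavior synchronization to an event-driven, Petri-net-based realizer; the actual implementation is not reproduced here. The following is only a generic Petri-net sketch (illustrative place and transition names) of the underlying idea: a behavior transition fires once all of its input events have occurred, whenever the robot's motors and speech actually deliver them.

        class PetriNet:
            """Minimal marked Petri net: a transition fires when all its input places hold tokens."""

            def __init__(self):
                self.tokens = {}       # place name -> token count
                self.transitions = {}  # transition name -> (input places, output places)

            def add_transition(self, name, inputs, outputs):
                self.transitions[name] = (inputs, outputs)
                for place in inputs + outputs:
                    self.tokens.setdefault(place, 0)

            def mark(self, place):
                """Called from an event source, e.g. a motor-feedback or speech callback."""
                self.tokens[place] += 1
                self._fire_enabled()

            def _fire_enabled(self):
                fired = True
                while fired:
                    fired = False
                    for name, (inputs, outputs) in self.transitions.items():
                        if all(self.tokens[p] > 0 for p in inputs):
                            for p in inputs:
                                self.tokens[p] -= 1
                            for p in outputs:
                                self.tokens[p] += 1
                            print(f"fired: {name}")
                            fired = True

        # Example: the gesture stroke starts only once the arm has reached its ready
        # pose AND speech has reached the synchronization word, in whatever order
        # those events happen to arrive.
        net = PetriNet()
        net.add_transition("start_stroke", ["arm_ready", "sync_word_spoken"], ["stroke_running"])
        net.mark("arm_ready")
        net.mark("sync_word_spoken")  # the transition fires only now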

    Development of a Social Robot as a Mediator for Intergenerational Gameplay & Development of a Canvas for the Conceptualisation of HRI Game Design

    Intergenerational interaction between grandparents and grandchildren benefits both generations. The use of a social robot to mediate this interaction is a relatively unexplored area of research. Human-Robot Interaction (HRI) research often uses the robot as the point of focus; this thesis puts the focus on the interaction between the generations, using a multi-stage study with a robot mediating the interaction in dyads of grandparents and grandchildren. The research questions guiding this thesis are: 1) How might a robot-mediated game be used to foster intergenerational gameplay? 2) What template can be created to conceptually describe HRI game systems? To answer the first question, the study design includes three stages: 1. Human Mediator Stage (exploratory); 2. Wizard-of-Oz (WoZ) Stage (where a researcher remotely controls the robot); 3. Fully/semi-autonomous Stage. A Tangram puzzle game was used to create an enjoyable, collaborative experience. Stage 1 of the study was conducted with four dyads of grandparents (52-74 years of age) and their grandchildren (7-9 years of age). The purpose of Stage 1 was to determine the following: 1. How do grandparent-grandchild dyads perceive their collaboration in the Tangram game? 2. What role do the dyads envision for a social robot in the game? Results showed that the dyads perceived high collaboration in the Tangram game and saw the role of the robot as helping them by providing clues during gameplay. The research team felt the game, in conjunction with the proposed setup, worked well for supporting collaboration and decided to use the same game with a similar setup for the next two stages. Although the design and development of the next stage were ready, the COVID-19 pandemic led to the suspension of in-person research. The second part of this thesis research focused on creating the Human-Robot Interaction Game Canvas (HRIGC), a novel way to conceptually model HRI game systems. A literature search for systematic ways to capture information, to assist in the design of the multi-stage study, yielded no appropriate tool and prompted the creation of the HRIGC. The goal of the HRIGC is to help researchers think about, identify, and explore various aspects of designing an HRI game-based system. During the development process, the HRIGC was put through three case studies and two test runs: 1) Test run 1 with three researchers in HRI game design; 2) Test run 2 with four Human-Computer Interaction (HCI) researchers from different backgrounds. The case studies and test runs showed the HRIGC to be a promising tool for articulating the key aspects of HRI game design in an intuitive manner. Formal validation of the canvas is necessary to confirm its usefulness.

    Incorporating a humanoid robot to motivate the geometric figures learning

    Technology has been introduced into educational environments to facilitate learning and engage students' interest. Robotics can be an interesting alternative for exploring theoretical concepts covered in class. In this paper, a computational system capable of detecting objects was incorporated into the NAO robot so that it can interact with students, recognizing geometric shapes with overlap. The system consists of two neural network models and was evaluated through a sequence of didactic activities presented to 5th-year students, aiming to encourage them to perform the tasks. The robot operates autonomously, recognizing and counting the different objects in the image. The results show that the children felt very motivated and engaged in fulfilling the tasks. São Paulo State Research Foundation (FAPESP); Brazilian National Research Council (CNPq).
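
    The paper states only that two neural network models are combined; one plausible reading is a detector-plus-classifier pipeline. The sketch below is purely illustrative (the functions detect_regions and classify_shape are assumptions, not the paper's code) and shows how such a pipeline could report per-class counts for the robot to announce.

        from collections import Counter

        def count_shapes(image, detect_regions, classify_shape):
            """Two-stage counting: propose candidate regions, then label each one."""
            counts = Counter()
            for box in detect_regions(image):       # first network: candidate shape regions
                label = classify_shape(image, box)  # second network: shape class (circle, square, ...)
                counts[label] += 1
            return counts                           # e.g. Counter({'triangle': 2, 'square': 1})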

    An Intelligent Robot and Augmented Reality Instruction System

    Human-Centered Robotics (HCR) is a research area that focuses on how robots can empower people to live safer, simpler, and more independent lives. In this dissertation, I present a combination of two technologies to deliver human-centric solutions to an important population. The first nascent area that I investigate is the creation of an Intelligent Robot Instructor (IRI) as a learning and instruction tool for human pupils. The second technology is the use of augmented reality (AR) to create an Augmented Reality Instruction (ARI) system to provide instruction via a wearable interface. To function in an intelligent and context-aware manner, both systems require the ability to reason about their perception of the environment and make appropriate decisions. In this work, I construct a novel formulation of several education methodologies, particularly those known as response prompting, as part of a cognitive framework to create a system for intelligent instruction, and compare these methodologies in the context of intelligent decision making using both technologies. The IRI system is demonstrated through experiments with a humanoid robot that uses object recognition and localization for perception and interacts with students through speech, gestures, and object interaction. The ARI system uses augmented reality, computer vision, and machine learning methods to create an intelligent, contextually aware instructional system. By using AR to teach prerequisite skills that lend themselves well to visual, augmented reality instruction prior to a robot instructor teaching skills that lend themselves to embodied interaction, I am able to demonstrate the potential of each system independently as well as in combination to facilitate students' learning. I identify people with intellectual and developmental disabilities (I/DD) as a particularly significant use case and show that IRI and ARI systems can help fulfill the compelling need to develop tools and strategies for people with I/DD. I present results that demonstrate both systems can be used independently by students with I/DD to quickly and easily acquire the skills required for performance of relevant vocational tasks. This is the first successful real-world application of response-prompting for decision making in a robotic and augmented reality intelligent instruction system.
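
    Response prompting covers a family of instructional strategies in which prompts escalate until the learner responds correctly. As a hedged illustration only (the prompt levels, parameters, and timeout are assumptions; the dissertation's actual formulation and comparison of methodologies are not reproduced here), a least-to-most prompting loop might look like this:

        # One response-prompting strategy (least-to-most prompting), sketched generically.
        PROMPT_LEVELS = ["verbal hint", "gesture toward target", "model the step", "full demonstration"]

        def instruct_step(deliver_prompt, observe_response, timeout=5.0):
            """Escalate through the prompt hierarchy until the learner completes the step."""
            for level in PROMPT_LEVELS:
                deliver_prompt(level)                       # robot speech/gesture or AR overlay
                if observe_response(timeout) == "correct":  # perception module's judgement
                    return level                            # least intrusive prompt that worked
            return None                                     # step not completed; defer to a human instructor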

    Building Embodied Conversational Agents:Observations on human nonverbal behaviour as a resource for the development of artificial characters

    "Wow this is so cool!" This is what I most probably yelled, back in the 90s, when my first computer program on our MSX computer turned out to do exactly what I wanted it to do. The program contained the following instruction: COLOR 10(1.1) After hitting enter, it would change the screen color from light blue to dark yellow. A few years after that experience, Microsoft Windows was introduced. Windows came with an intuitive graphical user interface that was designed to allow all people, so also those who would not consider themselves to be experienced computer addicts, to interact with the computer. This was a major step forward in human-computer interaction, as from that point forward no complex programming skills were required anymore to perform such actions as adapting the screen color. Changing the background was just a matter of pointing the mouse to the desired color on a color palette. "Wow this is so cool!". This is what I shouted, again, 20 years later. This time my new smartphone successfully skipped to the next song on Spotify because I literally told my smartphone, with my voice, to do so. Being able to operate your smartphone with natural language through voice-control can be extremely handy, for instance when listening to music while showering. Again, the option to handle a computer with voice instructions turned out to be a significant optimization in human-computer interaction. From now on, computers could be instructed without the use of a screen, mouse or keyboard, and instead could operate successfully simply by telling the machine what to do. In other words, I have personally witnessed how, within only a few decades, the way people interact with computers has changed drastically, starting as a rather technical and abstract enterprise to becoming something that was both natural and intuitive, and did not require any advanced computer background. Accordingly, while computers used to be machines that could only be operated by technically-oriented individuals, they had gradually changed into devices that are part of many people’s household, just as much as a television, a vacuum cleaner or a microwave oven. The introduction of voice control is a significant feature of the newer generation of interfaces in the sense that these have become more "antropomorphic" and try to mimic the way people interact in daily life, where indeed the voice is a universally used device that humans exploit in their exchanges with others. The question then arises whether it would be possible to go even one step further, where people, like in science-fiction movies, interact with avatars or humanoid robots, whereby users can have a proper conversation with a computer-simulated human that is indistinguishable from a real human. An interaction with a human-like representation of a computer that behaves, talks and reacts like a real person would imply that the computer is able to not only produce and understand messages transmitted auditorily through the voice, but also could rely on the perception and generation of different forms of body language, such as facial expressions, gestures or body posture. At the time of writing, developments of this next step in human-computer interaction are in full swing, but the type of such interactions is still rather constrained when compared to the way humans have their exchanges with other humans. It is interesting to reflect on how such future humanmachine interactions may look like. 
When we consider other products that have been created in history, it is sometimes striking to see that some of them have been inspired by things that can be observed in our environment, yet at the same time do not have to be exact copies of those phenomena. For instance, an airplane has wings just as birds do, yet the wings of an airplane do not make the typical movements a bird produces to fly. Moreover, an airplane has wheels, whereas a bird has legs. At the same time, the airplane has made it possible for humans to cover long distances in a fast and smooth manner in a way that was unthinkable before it was invented. The example of the airplane shows how new technologies can have "unnatural" properties, but can nonetheless be very beneficial and impactful for human beings. This dissertation centers on the practical question of how virtual humans can be programmed to act more human-like. The four studies presented in this dissertation all share the same underlying question of how parts of human behavior can be captured so that computers can use them to become more human-like. Each study differs in method, perspective and specific questions, but all aim to gain insights and directions that would help further push the development of human-like computer behavior and investigate (the simulation of) human conversational behavior. The rest of this introductory chapter gives a general overview of virtual humans (also known as embodied conversational agents), their potential uses and the engineering challenges, followed by an overview of the four studies.
