
    Facial Expression Simulator Control System using OpenGL Graphical Simulator

    Verbal communication uses language or voice, whereas non-verbal communication relies on gestures, one of which is showing facial expressions. We propose a control system based on human facial expressions: the recognized expression is translated to a device simulator built with the OpenGL graphics software, providing an indication tool that makes it easy to analyze a person's emotional character through a computer. In implementing the face, we found that the mechanism of the humanoid robot head for non-verbal interaction has 8 DOF (degrees of freedom), formed by combinations of the servo motors driving the eyebrows, eyes, eyelids, and mouth, which together display the facial expressions of anger, disgust, happiness, surprise, sadness, and fear
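    The expression-to-actuator mapping described in this abstract can be sketched as a lookup from emotion label to 8 servo targets. This is a minimal illustration only: the joint names and angle values below are invented placeholders, not calibration data from the paper.

    ```python
    # Hypothetical mapping from facial expression to 8-DOF servo targets
    # (left/right eyebrow, left/right eyelid, left/right eye, upper/lower
    # mouth). All angles (degrees) are illustrative, not from the paper.
    EXPRESSIONS = {
        "happiness": {"brow_l": 10, "brow_r": 10, "lid_l": 80, "lid_r": 80,
                      "eye_l": 0, "eye_r": 0, "mouth_u": 30, "mouth_d": -30},
        "anger":     {"brow_l": -25, "brow_r": -25, "lid_l": 60, "lid_r": 60,
                      "eye_l": 0, "eye_r": 0, "mouth_u": -10, "mouth_d": 10},
    }

    def servo_commands(expression):
        """Return the per-servo target angles for a named expression."""
        try:
            return EXPRESSIONS[expression]
        except KeyError:
            raise ValueError(f"unknown expression: {expression}")
    ```

    In a real controller, each returned angle would be sent to the corresponding servo driver and mirrored in the OpenGL face model.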

    Speech-Gesture Mapping and Engagement Evaluation in Human Robot Interaction

    A robot needs contextual awareness, effective speech production, and complementary non-verbal gestures for successful communication in society. In this paper, we present our end-to-end system for enhancing the effectiveness of non-verbal gestures. To achieve this, we identified gestures used prominently in performances by TED speakers, mapped them to their corresponding speech context, and modulated speech based on the attention of the listener. The proposed method used the Convolutional Pose Machine [4] to detect human gestures. Dominant gestures of TED speakers were used to learn the gesture-to-speech mapping, and their speeches were used to train the model. We also evaluated the robot's engagement with people by conducting a social survey. The robot monitored the effectiveness of its performance and self-improvised its speech pattern based on the attention level of the audience, which was calculated using visual feedback from the camera. The effectiveness of the interaction, as well as the decisions made during improvisation, was further evaluated based on head-pose detection and an interaction survey.
    Comment: 8 pages, 9 figures, under review in IRC 201
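    The attention-driven feedback loop described above (head pose in, speech modulation out) can be sketched as follows. The scoring rule and the 0.8 slowdown factor are assumptions for illustration; the paper's actual model is learned from data.

    ```python
    import math

    def attention_score(head_yaws):
        """Estimate audience attention from per-listener head yaw angles
        (radians), where yaw 0 means facing the robot. Averages the cosine
        of each yaw so a turned-away head contributes little. Illustrative
        only; not the paper's trained attention model."""
        if not head_yaws:
            return 0.0
        facing = [math.cos(yaw) for yaw in head_yaws]  # 1.0 = facing robot
        return max(0.0, sum(facing) / len(facing))

    def modulate_speech(rate, attention, low=0.4):
        """Slow the speech rate (a stand-in for the system's richer
        improvisation) when audience attention falls below a threshold."""
        return rate * 0.8 if attention < low else rate
    ```

    A real pipeline would feed per-frame head-pose estimates from the camera into `attention_score` and apply `modulate_speech` between utterances.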

    Robot-Mediated Interviews with Children : What do potential users think?

    Luke Wood, Hagen Lehmann, Kerstin Dautenhahn, Ben Robins, Austen Rayner, and Dag Syrdal, ‘Robot-Mediated Interviews with Children: What do potential users think?’, paper presented at the 50th Annual Convention of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour, 1 April 2014 – 4 April 2014, London, UK.
    When police officers conduct interviews with children, some of the disclosures can be quite shocking. This can make it difficult for an officer to maintain their composure without subtly conveying their shock to the child, which can in turn impede the information acquisition process. Using a robotic interviewer could eliminate this problem, because the behaviours and expressions of the robot can be consciously controlled. To date, research into the potential of Robot-Mediated Interviews has focused on establishing whether, and how well, children respond to robots in an interview scenario. The results of these studies indicate that children will talk to a robot in an interview scenario much as they talk to a human interviewer. However, to test whether this approach would work in a real-world setting, it is important to establish what the experts (e.g. specialist child interviewers) would require from the system. To determine the users' needs, we conducted a user panel with a group of potential real-world users to gather their views of our current system and find out what they would require for the system to be useful to them. The user group consisted of specialist child protection police officers based in the UK. The findings from this panel suggest that a Robot-Mediated Interviewing system would need to be more flexible than our current system in order to respond to unpredictable situations and paths of investigation. This paper gives an insight into what real-world users would need from a Robot-Mediated Interviewing system

    Persuasiveness of social robot ‘Nao’ based on gaze and proximity

    Social robots have widely entered retail and public spaces. They are being used across a wide range of scenarios to influence decision making, disseminate information, and act as signage, under the umbrella of persuasive robots or persuasive technology. While there have been several studies in this area, the effect of non-verbal behaviour on persuasive ability is largely unexplored. In this research, we therefore examine whether two key non-verbal attributes, namely proximity and gaze, can elicit persuasion, compliance, and specific personality appeals. To this end, we conducted a 2 (eye gaze) x 2 (proximity) between-subjects experiment in which participants viewed a video-based scenario of the Nao robot. Our initial analysis did not reveal any significant effects of the non-verbal attributes. However, perceived compliance and persuasion were significantly correlated with knowledge, responsiveness, and trustworthiness. In conclusion, we discuss how the design of a robot could make it more convincing, as marketing and brand promotion companies could use robots extensively to enhance their advertising operations
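    The correlation analysis reported in this abstract can be illustrated with a plain Pearson coefficient over two rating scales. The function below is a generic sketch; the study's actual statistical procedure and data are not given in the abstract.

    ```python
    import math
    import statistics

    def pearson(x, y):
        """Pearson correlation coefficient between two equal-length lists
        of ratings (e.g. perceived persuasion vs. trustworthiness scores)."""
        mx, my = statistics.fmean(x), statistics.fmean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)
    ```

    Applied to per-participant questionnaire scores, a coefficient near 1 would reflect the kind of positive association the study reports between persuasion and trustworthiness.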

    Humanoid Robot handling Hand-Signs Recognition

    Recent advancements in human-robot interaction have led to tremendous improvements in humanoid robots, yet they still lack social acceptance among people. Although verbal communication is the primary means of human-robot interaction, non-verbal communication, proven to be an integral part of human interaction, is not widely used in humanoid robots. This thesis aims to achieve human-robot interaction via non-verbal communication, especially hand-signs. It presents a prototype system that simulates hand-sign recognition on the NAO humanoid robot, and an online questionnaire is then used to examine people's opinions on the use of non-verbal communication to interact with a humanoid robot. The positive results of the study indicate people's willingness to use non-verbal communication to communicate with humanoid robots, encouraging robot designers to use non-verbal communication to enhance human-robot interaction
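    A hand-sign recognizer of the kind this thesis prototypes can be reduced, at its simplest, to a lookup over detected finger states. The sign vocabulary and the finger-flag encoding below are made-up placeholders, not the thesis's actual gesture set or recognition method.

    ```python
    def classify_hand_sign(finger_states):
        """Classify a hand sign from a 5-tuple of extended-finger flags
        (thumb, index, middle, ring, pinky), e.g. as produced by an
        upstream hand-landmark detector. Vocabulary is illustrative."""
        SIGNS = {
            (0, 1, 1, 0, 0): "victory",
            (1, 0, 0, 0, 0): "thumbs_up",
            (1, 1, 1, 1, 1): "open_palm",
            (0, 0, 0, 0, 0): "fist",
        }
        return SIGNS.get(tuple(finger_states), "unknown")
    ```

    On a robot like NAO, the recognized label would then trigger a spoken or gestural response, closing the non-verbal interaction loop.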

    A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis both for recent and for future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking conclusion

    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent, such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents