    Non-invasive, non-contact based affective state identification

    This paper discusses a study on detecting the affective states of human subjects from their body's electromagnetic (EM) waves. In particular, the affective states under investigation are happy, nervous, and sad, which play important roles in Human-Robot Interaction (HRI) applications. A structured experimental setup was designed to invoke the desired affective states: the states were induced by exposing the subject to a specific set of audiovisual stimulations, during which EM waves were captured from ten different regions of the subject's body using a handheld device called Resonant Field Imaging (RFI™). Nine subjects were randomly chosen, and the collected data were preprocessed and used to train a Bayesian Network (BN) that maps the EM waves to the corresponding affective states. Preliminary results demonstrate the ability of the BN to predict human affective state with 80.6% precision and 90% accuracy.
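    A minimal sketch of the classification step described above, using scikit-learn's GaussianNB as a stand-in for the paper's Bayesian Network. The feature layout (one reading per body region, ten regions) and the three-state label set follow the abstract, but the data, sample counts, and classifier choice are illustrative assumptions; the paper's actual preprocessing and BN structure are not reproduced here.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Placeholder data: each sample holds EM-wave readings from ten body
# regions; labels are the three affective states from the study.
# Real recordings and preprocessing are not public, so random values
# stand in for them here.
rng = np.random.default_rng(0)
n_samples, n_regions = 90, 10
X = rng.normal(size=(n_samples, n_regions))            # illustrative EM features
y = rng.choice(["happy", "nervous", "sad"], size=n_samples)

clf = GaussianNB()  # naive Bayes as a simple stand-in for the paper's BN
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.1%}")
```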

    Semiotics and Human-Robot Interaction

    Keywords: semi-autonomous robot, human-robot interaction, semiotics.
    Abstract: This paper describes a robot control architecture based on a human-robot interaction model obtained directly from semiotics concepts. The architecture is composed of a set of objects defined after a semiotic sign model. Simulation experiments with unicycle robots illustrate the interactions within a team of robots equipped with skills similar to those used in human-robot interaction.
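    To make "objects defined after a semiotic sign model" concrete, here is a hypothetical rendering of the classic Peircean sign triad as a control-architecture object. The class, field, and function names below are assumptions for illustration, not the paper's actual design.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sign object following the Peircean triad
# (representamen / referent / interpretant); names are illustrative.
@dataclass
class Sign:
    representamen: str                   # the signal exchanged (e.g., a gesture token)
    referent: str                        # the object or situation the signal stands for
    interpretant: Callable[[str], str]   # how the receiving robot reads the signal

def interpret_stop(signal: str) -> str:
    """Maps a 'STOP' signal to a motion command for a unicycle robot."""
    return "set_velocity(0, 0)" if signal == "STOP" else "continue"

stop_sign = Sign("STOP", "halt-request from teammate", interpret_stop)
print(stop_sign.interpretant(stop_sign.representamen))  # -> set_velocity(0, 0)
```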

    Use of Vocal Prosody to Express Emotions in Robotic Speech

    Vocal prosody (pitch, timing, loudness, etc.) and its use to convey emotions are essential components of speech communication between humans. The objective of this dissertation research was to determine the efficacy of using varying vocal prosody in robotic speech to convey emotion. Two pilot studies and two experiments were performed to address the shortcomings of previous HRI research in this area. The pilot studies were used to determine a set of vocal prosody modification values for a female voice model using the MARY speech synthesizer to convey the emotions anger, fear, happiness, and sadness. Experiment 1 validated that participants perceived these emotions, along with a neutral vocal prosody, at rates significantly higher than chance. Four of the vocal prosodies (anger, fear, neutral, and sadness) were recognized at rates approaching the recognition rate (60%) of emotions in person-to-person speech. During Experiment 2, the robot led participants through a creativity test while making statements using one of the validated emotional vocal prosodies. The ratings of the robot's positive qualities and the creativity scores of the participant group that heard non-negative vocal prosodies (happiness, neutral) did not differ significantly from those of the participant group that heard the negative vocal prosodies (anger, fear, sadness). Therefore, Experiment 2 failed to show that the use of emotional vocal prosody in a robot's speech influenced the participants' appraisal of the robot or their performance on this specific task. At this time, robot designers and programmers should not expect that vocal prosody alone will have a significant impact on the acceptability or quality of human-robot interactions. Further research is required to show that multimodal expressions of emotion by robots (vocal prosody along with facial expressions, body language, or linguistic content) will be effective at improving human-robot interactions.
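    A sketch of how emotion-specific prosody settings of this kind could be applied, assuming a MARY TTS (MaryTTS) server running locally with its standard /process endpoint and SSML input support. The numeric pitch/rate/volume values below are illustrative placeholders, not the validated modification values derived in the dissertation's pilot studies, and individual voices may not honor every prosody attribute.

```python
import requests  # assumes a local MaryTTS server on its default port

# Illustrative per-emotion prosody settings; the dissertation's validated
# values are NOT reproduced here.
PROSODY = {
    "anger":     {"pitch": "-10%", "rate": "+15%", "volume": "loud"},
    "fear":      {"pitch": "+20%", "rate": "+20%", "volume": "medium"},
    "happiness": {"pitch": "+15%", "rate": "+5%",  "volume": "loud"},
    "sadness":   {"pitch": "-15%", "rate": "-20%", "volume": "soft"},
    "neutral":   {"pitch": "+0%",  "rate": "+0%",  "volume": "medium"},
}

def speak(text: str, emotion: str, out_path: str = "out.wav") -> None:
    """Wraps text in SSML prosody markup and requests synthesized audio."""
    p = PROSODY[emotion]
    ssml = (
        '<?xml version="1.0"?>'
        '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" '
        'xml:lang="en-US">'
        f'<prosody pitch="{p["pitch"]}" rate="{p["rate"]}" '
        f'volume="{p["volume"]}">{text}</prosody>'
        "</speak>"
    )
    resp = requests.post(
        "http://localhost:59125/process",  # default MaryTTS endpoint (assumed running)
        data={
            "INPUT_TEXT": ssml,
            "INPUT_TYPE": "SSML",
            "OUTPUT_TYPE": "AUDIO",
            "AUDIO": "WAVE_FILE",
            "LOCALE": "en_US",
        },
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)

speak("I am sure we can solve this together.", "happiness")
```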