1,999 research outputs found

    Humanoid Theory Grounding

    In this paper we consider the importance of a humanoid physical form for a proposed kind of robotics: theory grounding. Theory grounding involves grounding the theory skills and knowledge of an embodied artificially intelligent (AI) system by developing those skills and knowledge from the bottom up. Theory grounding can potentially occur in a variety of domains; the particular domain considered here is language. Language is taken to be another “problem space” in which a system can explore and discover solutions. We argue that because theory grounding requires robots to experience domain information, certain behavioral-form capacities, such as the abilities to smile socially, point, follow gaze, and generate manual gestures, are necessary for robots grounding a humanoid theory of language.

    Automatic Context-Driven Inference of Engagement in HMI: A Survey

    An integral part of seamless human-human communication is engagement: the process by which two or more participants establish, maintain, and end their perceived connection. To develop successful human-centered human-machine interaction applications, automatic engagement inference is therefore one of the tasks required to achieve engaging interactions between humans and machines and to make machines attuned to their users, enhancing user satisfaction and technology acceptance. Several factors contribute to engagement state inference, including the interaction context and the interactants' behaviours and identities. Indeed, engagement is a multi-faceted and multi-modal construct that requires high accuracy in the analysis and interpretation of contextual, verbal, and non-verbal cues. The development of an automated and intelligent system that accomplishes this task has thus proven challenging so far. This paper presents a comprehensive survey of previous work on engagement inference for human-machine interaction, covering interdisciplinary definitions, engagement components and factors, publicly available datasets, ground-truth assessment, and the most commonly used features and methods, serving as a guide for the development of future human-machine interaction interfaces with reliable context-aware engagement inference capability. An in-depth review across embodied and disembodied interaction modes, and an emphasis on the interaction context in which engagement perception modules are integrated, set the presented survey apart from existing surveys.

    Embodied & Situated Language Processing


    Trusting in Machines: How Mode of Interaction Affects Willingness to Share Personal Information with Machines

    Every day, people make decisions about whether to trust machines with their personal information, such as letting a phone track one’s location. How do people decide whether to trust a machine? In a field experiment, we tested how two modes of interaction, expression modality (whether the person is talking or typing to a machine) and response modality (whether the machine is talking or typing back), influence willingness to trust a machine. Based on research showing that expressing oneself verbally reduces self-control compared to nonverbal expression, we predicted that talking to a machine might make people more willing to share their personal information. Based on research on the link between anthropomorphism and trust, we further predicted that machines that talked (versus texted) would seem more human-like and be trusted more. Using a popular chatterbot phone application, we randomly assigned over 300 community members to either talk or type to the phone, which either talked or typed in return. We then measured how much participants anthropomorphized the machine and their willingness to share personal information (e.g., location, credit card information) with it. Results revealed that talking made people more willing to share their personal information than texting, and this effect was robust to participants’ self-reported comfort with technology, age, gender, and conversation characteristics. However, listening to the application’s voice did not affect anthropomorphism or trust compared to reading its text. We conclude by considering the theoretical and practical implications of this experiment for understanding how people trust machines.