1,601 research outputs found

    Communicative act development

    How do children learn to map linguistic forms onto their intended meanings? This chapter begins with an introduction to some theoretical and analytical tools used to study communicative acts. It then turns to communicative act development in spoken and signed language acquisition, covering both the early scaffolding and production of communicative acts (non-verbal and verbal) and their later links to linguistic development and Theory of Mind. The chapter wraps up by linking research on communicative act development to the acquisition of conversational skills, cross-linguistic and individual differences in communicative experience during development, and human evolution. Along the way, it also poses a few open questions for future research in this domain.

    The Interaction Engine: Cuteness selection and the evolution of the interactional base for language

    The deep structural diversity of languages suggests that our language capacities are not based on any single template but rather on an underlying ability and motivation for infants to acquire a culturally transmitted system. The hypothesis is that this ability has an interactional base that has discernible precursors in other primates. In this paper I explore a specific evolutionary route for the most puzzling aspect of this interactional base in humans, namely the development of an empathetic intentional stance. The route involves a generalization of mother-infant interaction patterns to all adults via a process (‘cuteness selection’) analogous to, but distinct from, R. A. Fisher’s runaway sexual selection. This provides a cornerstone for the carrying capacity for language.

    Look Who's Talking: Pre-Verbal Infants’ Perception of Face-to-Face and Back-to-Back Social Interactions

    Four-, 6-, and 11-month-old infants were presented with movies in which two adult actors conversed about everyday events, either facing each other or looking in opposite directions. Infants from 6 months of age made more gaze shifts between the actors, in accordance with the flow of conversation, when the actors were facing each other. A second experiment demonstrated that gaze following alone did not cause this difference. Instead, the results are consistent with a social-cognitive interpretation, suggesting that infants perceive the difference between face-to-face and back-to-back conversations and that they prefer to attend to a typical pattern of social interaction from 6 months of age.

    Towards automatic estimation of conversation floors within F-formations

    The detection of free-standing conversing groups has received significant attention in recent years. In the absence of a formal definition, most studies operationalize the notion of a conversation group through either a spatial or a temporal lens. Spatially, the most commonly used representation is the F-formation, defined by social scientists as the configuration in which people arrange themselves to sustain an interaction. However, the use of this representation is often accompanied by the simplifying assumption that a single conversation occurs within an F-formation. Temporally, various categories have been used to organize conversational units; these include, among others, turn, topic, and floor. Some of these concepts are hard to define objectively by themselves. The present work is an initial exploration into unifying these perspectives, posing the question: can we use the observation of simultaneous speaker turns to infer whether multiple conversation floors exist within an F-formation? We motivate a metric for the existence of distinct conversation floors based on simultaneous speaker turns, and provide an analysis using this metric to characterize conversations across F-formations of varying cardinality. We contribute two key findings: first, at the average human speaking turn duration of about two seconds, there is evidence for the existence of multiple floors within an F-formation; and second, an increase in the cardinality of an F-formation correlates with a decrease in the duration of simultaneous speaking turns.
    Comment: 8th International Conference on Affective Computing & Intelligent Interaction EMERGent Workshop, 7 pages, 4 figures, 2 tables
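    As a rough illustration of the kind of metric this abstract describes (not the paper's own implementation), the sketch below computes pairwise simultaneous-speech durations from speaker turn annotations and flags an F-formation as possibly containing multiple floors when overlap approaches the roughly two-second average turn length quoted above. All function names, data structures, and the threshold heuristic are assumptions made for illustration.

```python
# Hypothetical sketch: estimate whether an F-formation may contain more than
# one conversation floor from simultaneous speaker turns. Not the paper's code.
from itertools import combinations

Turn = tuple  # (speaker_id, start_sec, end_sec)

def overlap_seconds(a: Turn, b: Turn) -> float:
    """Duration (in seconds) for which two turns by different speakers overlap."""
    if a[0] == b[0]:
        return 0.0
    return max(0.0, min(a[2], b[2]) - max(a[1], b[1]))

def simultaneous_speech_stats(turns: list) -> dict:
    """Total and longest pairwise stretch of simultaneous speech."""
    overlaps = [overlap_seconds(a, b) for a, b in combinations(turns, 2)]
    return {"total": sum(overlaps), "longest": max(overlaps, default=0.0)}

def may_have_multiple_floors(turns: list, avg_turn_sec: float = 2.0) -> bool:
    """Heuristic (assumed, not from the paper): sustained simultaneous speech on
    the order of a full average turn (~2 s) suggests parallel floors rather than
    the brief overlap expected at ordinary turn transitions."""
    return simultaneous_speech_stats(turns)["longest"] >= avg_turn_sec

if __name__ == "__main__":
    # Four people in one F-formation; A/B and C/D appear to speak in parallel.
    turns = [("A", 0.0, 2.5), ("B", 2.4, 4.0),
             ("C", 0.5, 3.0), ("D", 3.1, 5.0)]
    print(simultaneous_speech_stats(turns))   # {'total': ..., 'longest': 2.0}
    print(may_have_multiple_floors(turns))    # True
```

    In practice one would compute such overlap statistics per F-formation detected from position and orientation data, rather than from hand-written turn lists as in this toy example.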

    The Look of Fear from the Eyes Varies with the Dynamic Sequence of Facial Actions

    Most research on the ability to interpret expressions from the eyes has utilized static information. This research investigates whether the dynamic sequence of facial actions in the eye region influences the judgments of perceivers. Dynamic fear expressions involving the eye region and eyebrows were created that systematically differed in the sequential occurrence of facial actions. Participants rated the intensity of sequential fear expressions, either in addition to a simultaneous, full-blown expression (Experiment 1) or in combination with different levels of eye gaze (Experiment 2). The results showed that the degree of attributed emotion and the appraisal ratings differed as a function of the sequence of facial expressions of fear, with direct gaze resulting in stronger subjective responses. The findings challenge current notions surrounding the study of static facial displays from the eyes and suggest that emotion perception is a dynamic process shaped by the time course of the facial actions of an expression. Possible implications for the field of affective computing and clinical research are discussed.

    Factive and nonfactive mental state attribution

    Factive mental states, such as knowing or being aware, can only link an agent to the truth; by contrast, nonfactive states, such as believing or thinking, can link an agent to either truths or falsehoods. Researchers of mental state attribution often draw a sharp line between the capacity to attribute accurate states of mind and the capacity to attribute inaccurate or “reality-incongruent” states of mind, such as false belief. This article argues that the contrast that really matters for mental state attribution does not divide accurate from inaccurate states, but factive from nonfactive ones.

    Spoken Language Interaction with Robots: Recommendations for Future Research

    With robotics rapidly advancing, more effective human–robot interaction is increasingly needed to realize the full potential of robots for society. While spoken language must be part of the solution, our ability to provide spoken language interaction capabilities is still very limited. In this article, based on the report of an interdisciplinary workshop convened by the National Science Foundation, we identify key scientific and engineering advances needed to enable effective spoken language interaction with robots. We make 25 recommendations, spanning eight general themes: putting human needs first, better modeling the social and interactive aspects of language, improving robustness, creating new methods for rapid adaptation, better integrating speech and language with other communication modalities, giving speech and language components access to rich representations of the robot’s current knowledge and state, making all components operate in real time, and improving research infrastructure and resources. Research and development that prioritizes these topics will, we believe, provide a solid foundation for the creation of speech-capable robots that are easy and effective for humans to work with.

    Socially aware conversational agents

