
    Classifying referential and non-referential it using gaze

    When processing a text, humans and machines must disambiguate between different uses of the pronoun it, including non-referential, nominal anaphoric, and clause anaphoric ones. In this paper, we use eye-tracking data to learn how humans perform this disambiguation, and we apply this knowledge to improve the automatic classification of it. We show that by using gaze data and a POS tagger we are able to significantly outperform a common baseline and distinguish between three categories of it with an accuracy comparable to that of linguistic-based approaches. In addition, the discriminatory power of specific gaze features informs the way humans process the pronoun, which, to the best of our knowledge, has not been explored using data from a natural reading task.
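    The approach the abstract describes (combining gaze features with POS context to pick one of three categories of it) can be sketched as a toy rule-based classifier. All feature names, thresholds, and rules below are illustrative assumptions, not taken from the paper:

    ```python
    # Hypothetical sketch: classify an occurrence of "it" into three categories
    # (non-referential, nominal anaphoric, clause anaphoric) from gaze features
    # plus the POS tag of the following word. Thresholds are invented for
    # illustration; the paper's actual features and model are not shown here.

    def classify_it(total_fixation_ms, regressions_in, next_pos):
        """Toy decision rules over assumed gaze and POS features."""
        # Non-referential "it" (e.g. "it rains") is assumed to attract little
        # attention and to often precede a verb.
        if total_fixation_ms < 150 and next_pos == "VERB":
            return "non-referential"
        # Clause-anaphoric "it" is assumed to trigger more regressions, as
        # readers look back for the antecedent clause.
        if regressions_in >= 2:
            return "clause-anaphoric"
        return "nominal-anaphoric"

    print(classify_it(120, 0, "VERB"))  # prints "non-referential"
    print(classify_it(300, 3, "NOUN"))  # prints "clause-anaphoric"
    ```

    A real system would replace these hand-written rules with a classifier trained on the eye-tracking corpus, but the feature-to-label mapping has the same shape.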

    Classifying types of gesture and inferring intent

    In order to infer intent from gesture, a rudimentary classification of gestures into five main classes is introduced. The classification is intended as a basis for incorporating the understanding of gesture into human-robot interaction (HRI). Some requirements for the operational classification of gesture by a robot interacting with humans are also suggested.

    Self-Supervised Vision-Based Detection of the Active Speaker as Support for Socially-Aware Language Acquisition

    This paper presents a self-supervised method for visual detection of the active speaker in a multi-person spoken interaction scenario. Active speaker detection is a fundamental prerequisite for any artificial cognitive system attempting to acquire language in social settings. The proposed method is intended to complement the acoustic detection of the active speaker, thus improving the system's robustness in noisy conditions. The method can detect an arbitrary number of possibly overlapping active speakers based exclusively on visual information about their faces. Furthermore, the method does not rely on external annotations, consistent with cognitive development. Instead, the method uses information from the auditory modality to support learning in the visual domain. This paper reports an extensive evaluation of the proposed method using a large multi-person face-to-face interaction dataset. The results show good performance in a speaker-dependent setting. However, in a speaker-independent setting the proposed method yields significantly lower performance. We believe that the proposed method represents an essential component of any artificial cognitive system or robotic platform engaging in social interactions. (10 pages; IEEE Transactions on Cognitive and Developmental Systems.)
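    The self-supervised setup described above, where the auditory modality supervises learning in the visual domain, can be sketched minimally: an acoustic voice-activity signal provides pseudo-labels for fitting a visual active-speaker classifier, with no human annotations. The toy "model" (a single threshold on a 1-D visual feature) and all names are assumptions; the paper's actual architecture is not shown:

    ```python
    # Hypothetical sketch: acoustic pseudo-labels supervise a visual classifier.

    def acoustic_vad(energy):
        """Pseudo-labeler: treat high audio energy as 'speaking' (assumed rule)."""
        return 1 if energy > 0.5 else 0

    def train_visual_classifier(face_features, audio_energies):
        """Fit a threshold on a 1-D visual feature (e.g. lip motion),
        supervised only by acoustic pseudo-labels, never by annotations."""
        labels = [acoustic_vad(e) for e in audio_energies]
        # Pick the threshold that best reproduces the pseudo-labels.
        best_thr, best_acc = 0.0, -1.0
        for thr in sorted(face_features):
            preds = [1 if f >= thr else 0 for f in face_features]
            acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
            if acc > best_acc:
                best_thr, best_acc = thr, acc
        return best_thr

    # Toy data: the lip-motion feature correlates with audio energy.
    faces = [0.1, 0.2, 0.8, 0.9]
    audio = [0.1, 0.3, 0.7, 0.9]
    thr = train_visual_classifier(faces, audio)
    print(1 if 0.85 >= thr else 0)  # a new face crop classified as speaking
    ```

    Once trained, the visual classifier can run alone, which is what makes the combined system robust when the audio channel is noisy.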

    Communicative Robot Signals: Presenting a New Typology for Human-Robot Interaction

    © 2023 The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY), https://creativecommons.org/licenses/by/4.0/. We present a new typology for classifying signals from robots when they communicate with humans. For inspiration, we use ethology, the study of animal behaviour, and previous efforts from the literature as guides in defining the typology. The typology is based on communicative signals characterised by five properties: the origin where the signal comes from, the deliberateness of the signal, the signal's reference, the genuineness of the signal, and its clarity (i.e. how implicit or explicit it is). Using the accompanying worksheet, the typology is straightforward to apply to communicative signals from previous human-robot interactions, and it provides guidance for designers creating new robot behaviours.
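    The five properties the typology names map naturally onto a small data structure. The field names follow the abstract; the example values and the value sets hinted at in the comments are assumptions for illustration, not taken from the paper's worksheet:

    ```python
    # Hypothetical encoding of the five-property communicative-signal typology.
    from dataclasses import dataclass

    @dataclass
    class CommunicativeSignal:
        origin: str          # where the signal comes from, e.g. "head", "speaker"
        deliberateness: str  # e.g. "deliberate" or "incidental" (assumed values)
        reference: str       # what the signal refers to
        genuineness: str     # e.g. "genuine" or "simulated" (assumed values)
        clarity: str         # how implicit or explicit the signal is

    # Example: a robot turning its head toward the object it is about to grasp.
    gaze_cue = CommunicativeSignal(
        origin="head", deliberateness="deliberate",
        reference="target object", genuineness="simulated", clarity="implicit")
    print(gaze_cue.clarity)  # prints "implicit"
    ```

    Recording each observed robot signal as one such tuple is one way a designer could use the typology to compare behaviours across studies.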

    Putting the “Joy” in joint attention: affective-gestural synchrony by parents who point for their babies

    Despite a growing body of work examining the expression of infants’ positive emotion in joint attention contexts, few studies have examined the moment-by-moment dynamics of emotional signaling by adults interacting with babies in these contexts. We invited 73 parents of infants (three fathers) to our laboratory, comprising parent-infant dyads with babies at 6 (n = 15), 9 (n = 15), 12 (n = 15), 15 (n = 14), and 18 (n = 14) months of age. Parents were asked to sit in a chair centered on the long axis of a room and to point to distant dolls (2.5 m) when the dolls were animated, while holding their children in their laps. We found that parents displayed the highest levels of smiling at the same time that they pointed, thus demonstrating affective/referential synchrony in their infant-directed communication. There were no discernible differences in this pattern among parents with children of different ages. Thus, parents spontaneously encapsulated episodes of joint attention with positive emotion.

    Wolf-like or dog-like? A comparison of gazing behaviour across three dog breeds tested in their familiar environments

    Human-directed gazing, a keystone in dog–human communication, has been suggested to derive from both domestication and breed selection. The influence of genetic similarity to wolves and selective pressures on human-directed gazing is still under debate. Here, we used the ‘unsolvable task’ to compare Czechoslovakian Wolfdogs (CWDs, a close-to-wolf breed), German Shepherd Dogs (GSDs) and Labrador Retrievers (LRs). In the ‘solvable task’, all dogs learned to obtain the reward; however, differently from GSDs and LRs, CWDs rarely gazed at humans. In the ‘unsolvable task’, CWDs gazed significantly less towards humans compared to LRs but not to GSDs. Although all dogs were similarly motivated to explore the apparatus, CWDs and GSDs spent a larger amount of time in manipulating it compared to LRs. A clear difference emerged in gazing at the experimenter versus owner. CWDs gazed preferentially towards the experimenter (the unfamiliar subject manipulating the food), GSDs towards their owners, and LRs gazed at humans independently from their level of familiarity. In conclusion, it emerges that the artificial selection operated on CWDs produced a breed more similar to ancient breeds (more wolf-like due to a less-intense artificial selection) and not very human-oriented. The next step is to clarify GSDs’ behaviour and better understand the genetic role of this breed in shaping CWDs’ heterospecific behaviour.