11,063 research outputs found

    Directional adposition use in English, Swedish and Finnish

    Directional adpositions such as to the left of describe where a Figure is in relation to a Ground. English and Swedish directional adpositions refer to the location of a Figure in relation to a Ground, whether both are static or in motion. In contrast, the Finnish directional adpositions edellä (in front of) and jäljessä (behind) solely describe the location of a moving Figure in relation to a moving Ground (Nikanne, 2003). When using directional adpositions, a frame of reference must be assumed for interpreting their meaning. For example, the meaning of to the left of in English can be based on a relative (speaker- or listener-based) reference frame or an intrinsic (object-based) reference frame (Levinson, 1996). When a Figure and a Ground are both in motion, the Figure can be described as being behind or in front of the Ground even if neither has intrinsic features. As shown by Walker (in preparation), there are good reasons to assume that in the latter case a motion-based reference frame is involved. This means that if Finnish speakers use edellä (in front of) and jäljessä (behind) more frequently in situations where both the Figure and the Ground are in motion, a difference in reference frame use between Finnish on the one hand and English and Swedish on the other would be expected. We asked native English, Swedish and Finnish speakers to select adpositions from a language-specific list to describe the location of a Figure relative to a Ground when both were shown to be moving on a computer screen. We were interested in any differences between Finnish, English and Swedish speakers. All languages showed a predominant use of directional spatial adpositions referring to the lexical concepts TO THE LEFT OF, TO THE RIGHT OF, ABOVE and BELOW. There were no differences between the languages in directional adposition use or reference frame use, including reference frame use based on motion. We conclude that despite differences in the grammars of the languages involved, and potential differences in reference frame system use, the three languages investigated encode Figure location in relation to Ground location in a similar way when both are in motion.

    Levinson, S. C. (1996). Frames of reference and Molyneux's question: Crosslinguistic evidence. In P. Bloom, M. A. Peterson, L. Nadel & M. F. Garrett (Eds.), Language and Space (pp. 109-170). Cambridge, MA: MIT Press.
    Nikanne, U. (2003). How Finnish postpositions see the axis system. In E. van der Zee & J. Slack (Eds.), Representing direction in language and space. Oxford, UK: Oxford University Press.
    Walker, C. (in preparation). Motion encoding in language: the use of spatial locatives in a motion context. Unpublished doctoral dissertation, University of Lincoln, Lincoln, United Kingdom.

    Adult participation in children’s word searches: on the use of prompting, hinting, and supplying a model

    Although word searching in children is very common, very little is known about how adults support children in the turns following the child's search behaviours, an important topic given its social, educational and clinical implications. This study characterises in detail teachers' use of prompting, hinting and supplying a model. From a classroom dataset of 53 instances, several distinctive patterns emerged. A prompted completion sequence is initiated by a 'word retrieval elicitor' ('fishing') and is interpreted as a request to complete the phrase. Non-verbal prompting is accomplished through a combination of gaze and gesture, and also as a series of prompts. Hinting supplies a verbal clue, typically via a wh-question or by specifying the nature of the repairable. In contrast, the strategies that supply a linguistic model include both embedded and exposed corrections and offers of candidates. A sequential relationship was found between prompting, hinting and supplying a model, which has implications for how clinicians and teachers can foster self-repair.

    Speakers Raise their Hands and Head during Self-Repairs in Dyadic Conversations

    People often encounter difficulties in building shared understanding during everyday conversation. The most common symptom of these difficulties is self-repair: a speaker restarts, edits or amends their utterance mid-turn. Previous work has focused on the verbal signals of self-repair, i.e. speech disfluencies (filled pauses, truncated words and phrases, word substitutions or reformulations), and computational tools now exist that can automatically detect these verbal phenomena. However, face-to-face conversation also exploits rich non-verbal resources, and previous research suggests that self-repairs are associated with distinct hand movement patterns. This paper extends those results by exploring head and hand movements of both speakers and listeners using two motion parameters: height (vertical position) and 3D velocity. The results show that speech sequences containing self-repairs are distinguishable from fluent ones: speakers raise their hands and head more (and move more rapidly) during self-repairs. We obtain these results by analysing data from a corpus of 13 unscripted dialogues, and we discuss how these findings could support the creation of improved cognitive artificial systems for natural human-machine and human-robot interaction.
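    As a concrete illustration of the two motion parameters named in this abstract, the following is a minimal sketch in Python, assuming tracked 3D positions sampled at a fixed frame rate. The function name, the axis convention and the frame rate are illustrative assumptions, not details taken from the paper.

        import numpy as np

        def motion_parameters(positions, fps=30.0):
            # Hypothetical helper: `positions` is a T x 3 array of tracked
            # (x, y, z) coordinates for one marker, e.g. a hand or the head.
            positions = np.asarray(positions, dtype=float)
            # Height: the vertical component of each tracked position
            # (assumes y is the vertical axis).
            height = positions[:, 1]
            # 3D velocity: frame-to-frame Euclidean displacement scaled by
            # the frame rate, giving units of distance per second.
            displacement = np.linalg.norm(np.diff(positions, axis=0), axis=1)
            velocity = displacement * fps
            return height, velocity

    Mean height and mean velocity over a time window could then be compared between self-repair and fluent stretches of speech, which is the style of comparison the abstract reports.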

    On becoming a physicist of mind

    In 1976, the German Max Planck Society established a new research enterprise in psycholinguistics, which became the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands. I was fortunate enough to be invited to direct this institute. It enabled me, with my background in visual and auditory psychophysics and the theory of formal grammars and automata, to develop a long-term chronometric endeavor to dissect the process of speaking. It led, among other work, to my book Speaking (1989) and to my research team's article in Behavioral and Brain Sciences, "A Theory of Lexical Access in Speech Production" (1999). When I later became president of the Royal Netherlands Academy of Arts and Sciences, I helped initiate the Women for Science research project of the InterAcademy Council, a project chaired by my physicist sister at the National Institute of Standards and Technology. As an emeritus, I published a comprehensive History of Psycholinguistics (2013). As will become clear, many people inspired and joined me in these undertakings.

    The Use of Gesture in Self-Initiated Self-Repair Sequences by Persons with Non-Fluent Aphasia

    This study examines the relationship between types of gestures and instances of self-initiated self-repair (SISR) used by persons with non-fluent aphasia (NFA), a type of aphasia characterized by stilted speech or signing (Papathanasiou et al., 2013), in interactions with clinicians. Conversation repairs in this study are assessed using the framework of Conversation Analysis (CA), an approach for describing, analyzing and understanding social interaction (Sidnell, 2010). Previous linguistic studies have demonstrated a distinct preference for the use of gesture during a repair by persons with aphasia (Goodwin, 1995; Klippi, 2015; Wilkinson, 2013). Using the AphasiaBank corpus, this study draws more conclusive generalizations than previous studies about the types of gesture used in successful and unsuccessful SISR by persons with NFA. Results show no apparent connection between the overall frequency of gesture use by persons with NFA in any one phase of the repair mechanism compared to the other phases. Additionally, there is a slight tendency in this dataset for persons with NFA to have more successful repairs when they use gesture during the initiation and reparable portions of the repair mechanism.

    Experimenting with the Gaze of a Conversational Agent

    We have carried out a pilot experiment to investigate the effects of different eye gaze behaviors of a cartoon-like talking face on the quality of human-agent dialogues. We compared a version of the talking face that roughly implements some patterns of humanlike behavior, which we called the optimal version, with two other versions. In one of the other versions the shifts in gaze were kept minimal, and in the other version the shifts occurred randomly. The talking face has a number of restrictions. There is no speech recognition, so questions and replies have to be typed in by the users of the system. Despite this restriction, we found that participants who conversed with the optimal agent appreciated the agent more than participants who conversed with the other agents. Conversations with the optimal version also proceeded more efficiently: participants needed less time to complete their task.

    Children’s Multimodal Language Development

    Through constant exposure to adult input in dialogue, children's language gradually develops into rich linguistic constructions that contain multiple cross-modal elements subtly used together for coherent communicative functions. In this chapter, we retrace children's pathways into multimodal language acquisition in a scaffolding interactional environment. We begin with the first multimodal buds children produce, which contain both gestural and vocal elements, and with how adults' input, including reformulations and recasts, provides children with embedded model utterances they can internalize. We then show how these buds blossom into more complex constructions, focusing on the importance of creative non-standard forms. Children's productions finally bloom into full, intricate multimodal productions. In the last part, we focus on argument structure; Tense, Mood and Aspect; and the complexification of co-verbal gestures as they are coordinated with speech.

    Controlling the Gaze of Conversational Agents

    We report on a pilot experiment that investigated the effects of different eye gaze behaviours of a cartoon-like talking face on the quality of human-agent dialogues. We compared a version of the talking face that roughly implements some patterns of human-like behaviour with two other versions. In one of the other versions the shifts in gaze were kept minimal, and in the other version the shifts occurred randomly. The talking face has a number of restrictions. There is no speech recognition, so questions and replies have to be typed in by the users of the system. Despite this restriction, we found that participants who conversed with the agent that behaved according to the human-like patterns appreciated the agent more than participants who conversed with the other agents. Conversations with the optimal version also proceeded more efficiently: participants needed less time to complete their task.

    Computer-aided investigation of interaction mediated by an AR-enabled wearable interface

    Dierker, A. Computer-aided investigation of interaction mediated by an AR-enabled wearable interface. Bielefeld: Universitätsbibliothek Bielefeld; 2012.

    This thesis provides an approach to facilitating the analysis of nonverbal behaviour during human-human interaction, alleviating much of the work researchers do, from experiment control and data acquisition through tagging to the final analysis of the data. For this, software and hardware techniques such as sensor technology, machine learning, object tracking, data processing, visualisation and Augmented Reality are combined into an Augmented-Reality-enabled Interception Interface (ARbInI), a modular wearable interface for two users. The interface mediates the users' interaction, intercepting and influencing it. The ARbInI interface consists of two identical setups of sensors and displays, which are mutually coupled. By combining cameras and microphones with sensors, the system records rich multimodal interaction cues efficiently. The recorded data can be analysed online and offline for interaction features (e.g. head gestures in head movements, objects in joint attention, speech times) using integrated machine-learning approaches, and the classified features can be tagged in the data. For detailed analysis, the recorded multimodal data is transferred automatically into file bundles loadable in a standard annotation tool, where the data can be further tagged by hand. For statistical analyses of the complete multimodal corpus, a toolbox for a standard statistics program allows the corpus to be imported directly and automates the analysis of multimodal and complex relationships between arbitrary data types.

    When the optional multimodal Augmented Reality techniques integrated into ARbInI are used, the camera records exactly what the participant can see, nothing more and nothing less. This offers additional advantages during an experiment: (a) the experiment can be controlled through the auditory or visual displays, ensuring controlled experimental conditions; (b) the experiment can be disturbed, making it possible to investigate how problems in interaction are discovered and solved; and (c) the experiment can be enhanced by interactively incorporating the user's behaviour, making it possible to investigate how users cope with novel interaction channels.

    The thesis introduces criteria for the design of scenarios in which interaction analysis can benefit from the experimentation interface and presents a set of such scenarios. These scenarios are applied in several empirical studies, collecting multimodal corpora that particularly include head gestures. The capabilities of computer-aided interaction analysis for the investigation of speech, visual attention and head movements are illustrated on this empirical data. The effects of the head-mounted display (HMD) are evaluated thoroughly in two studies. The results show that HMD users need more head movements to achieve the same shift of gaze direction and perform fewer head gestures, with slower velocity and fewer repetitions, than non-HMD users. This suggests a reduced willingness to perform head movements when they are not necessary. Moreover, compensation strategies emerge, such as leaning backwards to enlarge the field of view, and increasing the number of utterances or changing the reference to objects to compensate for the absence of mutual eye contact.
    Two further studies investigate interaction while actively inducing misunderstandings; here, participants use compensation strategies such as multiple verification questions and arbitrary gaze movements. Additionally, an enhancement method that highlights the visual attention of the interaction partner is evaluated in a search task, yielding significantly shorter reaction times and fewer errors.
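    To make the kind of head-movement measure discussed in this abstract concrete, here is a minimal sketch in Python of how head-movement effort and velocity could be compared between HMD and non-HMD users from logged head orientation data. All names, the sampling rate and the threshold are illustrative assumptions; the thesis' actual toolchain is not specified in the abstract.

        import numpy as np

        def head_movement_stats(yaw_deg, fps=30.0, movement_threshold=20.0):
            # Hypothetical helper: `yaw_deg` is a 1-D array of head yaw
            # angles in degrees, sampled at `fps` frames per second.
            yaw = np.asarray(yaw_deg, dtype=float)
            # Angular velocity between consecutive frames, in degrees/second.
            angular_velocity = np.abs(np.diff(yaw)) * fps
            # Total head rotation expended on gaze shifts.
            total_rotation = np.abs(np.diff(yaw)).sum()
            # Frames in which the head is judged to be moving.
            moving_frames = int((angular_velocity > movement_threshold).sum())
            return {
                "total_rotation_deg": float(total_rotation),
                "mean_velocity_deg_per_s": float(angular_velocity.mean()),
                "moving_frames": moving_frames,
            }

    Comparing these statistics between HMD and non-HMD groups would reproduce, in spirit, the comparison the abstract reports (more head rotation per gaze shift and slower movements for HMD users).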