
    Ameliorating Patient-Caregiver Stigma in Early-Stage Parkinson's Disease using Robot co-Mediators

    Facial masking in early-stage Parkinson’s disease leads to a well-documented deterioration (stigmatization) in the patient-caregiver relationship. The research described in this paper is concerned with preserving dignity in that bond where it might otherwise be lost, through the use of a robot co-mediator capable of monitoring the human-human relationship for a lack of congruence in the perceived emotional states of the parties concerned. The paper describes the component architectural modules being used in support of this 5-year effort, including an ethical architecture developed earlier for the military and previous research on affective companion robots for Sony and Samsung that can express affective state through kinesics and proxemics.
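
    The paper itself does not give an implementation, but the core idea of monitoring for a lack of congruence in perceived emotional states can be sketched in a few lines. The sketch below assumes each party's affect is estimated as a (valence, arousal) pair; all names, thresholds, and values are hypothetical, not the paper's architecture.

```python
# Minimal sketch (assumptions only): flag a lack of congruence between two
# parties' perceived emotional states, each modeled as (valence, arousal) in [-1, 1].
from dataclasses import dataclass
import math

@dataclass
class AffectEstimate:
    valence: float  # displeasure (-1) .. pleasure (+1)
    arousal: float  # calm (-1) .. excited (+1)

def incongruence(patient: AffectEstimate, caregiver: AffectEstimate) -> float:
    """Euclidean distance between the two perceived affect states."""
    return math.hypot(patient.valence - caregiver.valence,
                      patient.arousal - caregiver.arousal)

def needs_mediation(patient: AffectEstimate, caregiver: AffectEstimate,
                    threshold: float = 0.8) -> bool:
    """A co-mediator might intervene when perceived states diverge strongly."""
    return incongruence(patient, caregiver) > threshold

# Example: facial masking makes the patient appear flat, which the caregiver
# reads as disengagement, producing a large perceived-state mismatch.
print(needs_mediation(AffectEstimate(0.6, 0.3), AffectEstimate(-0.4, -0.2)))  # True
```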

    The Processing of Emotional Sentences by Young and Older Adults: A Visual World Eye-movement Study

    Carminati MN, Knoeferle P. The Processing of Emotional Sentences by Young and Older Adults: A Visual World Eye-movement Study. Presented at Architectures and Mechanisms for Language Processing (AMLaP), Riva del Garda, Italy.

    Interactions in Virtual Worlds: Proceedings Twente Workshop on Language Technology 15


    Modelling the relationship between gesture motion and meaning

    There are many ways to say “Hello,” be it a wave, a nod, or a bow. We greet others not only with words, but also with our bodies. Embodied communication permeates our interactions. A fist bump, thumbs-up, or pat on the back can be even more meaningful than hearing “good job!” A friend crossing their arms with a scowl, turning away from you, or stiffening up can feel like a harsh rejection. Social communication is not exclusively linguistic, but a multi-sensory affair. It’s not that communication without these bodily cues is impossible, but it is impoverished. Embodiment is a fundamental human experience, and expressing ourselves through our bodies provides a powerful channel for a plethora of meta-social information. Integral to communication, expression, and social engagement is our use of conversational gesture. We use gestures to express extra-linguistic information, to emphasize our point, and to embody mental and linguistic metaphors that add depth and color to social interaction. Compared to human-human conversation, the gesture behaviour of virtual humans is limited, depending on the approach taken to automate the performances of these characters. The generation of nonverbal behaviour for virtual humans can be broadly classified into either: 1) data-driven approaches that learn a mapping from aspects of the verbal channel, such as prosody, to gestures; or 2) rule-based approaches that are often tailored by designers for specific applications. This thesis is an interdisciplinary exploration that bridges these two approaches and brings data-driven analyses to observational gesture research. By marrying a rich history of gesture research in behavioral psychology with data-driven techniques, this body of work brings rigorous computational methods to gesture classification, analysis, and generation. It addresses how researchers can exploit computational methods to make virtual humans gesture with the same richness, complexity, and apparent effortlessness as you and I. Throughout this work the central focus is on metaphoric gestures. These gestures are capable of conveying rich, nuanced, multi-dimensional meaning, and they raise several challenges in their generation, including establishing and interpreting a gesture’s communicative meaning and selecting a performance to convey it. As such, effectively utilizing these gestures remains an open challenge in virtual agent research. This thesis explores how metaphoric gestures are interpreted by an observer, how one can generate such rich gestures using a mapping between utterance meaning and gesture, and how one can use data-driven techniques to explore the mapping between utterance and metaphoric gesture. The thesis begins in Chapter 1 by outlining the interdisciplinary space of gesture research in psychology and gesture generation for virtual agents. It then presents several studies that address assumptions about the need for rich, metaphoric gestures and the risk of false implicature when gestural meaning is ignored in gesture generation. In Chapter 2, two studies on metaphoric gestures that embody multiple metaphors argue three critical points that inform the rest of the thesis: that people form rich inferences from metaphoric gestures, that these inferences are informed by cultural context, and, more importantly, that any approach to analyzing the relation between utterance and metaphoric gesture needs to take into account that multiple metaphors may be conveyed by a single gesture.
A third study, presented in Chapter 3, highlights the risk of false implicature and discusses it in the context of current subjective evaluations of the qualitative influence of gesture on viewers. Chapters 4 and 5 then present a data-driven analysis approach to recovering an interpretable, explicit mapping from utterance to metaphor. The approach, described in detail in Chapter 4, clusters gestural motion and relates those clusters to the semantic analysis of the associated utterances. Chapter 5 then demonstrates how this approach can serve both as a framework for data-driven techniques in the study of gesture and as the basis of a gesture generation approach for virtual humans. The framework used in the last two chapters ties together the main themes of this thesis: how we can use observational behavioral gesture research to inform data-driven analysis methods, how embodied metaphor relates to fine-grained gestural motion, and how to exploit this relationship to generate rich, communicatively nuanced gestures for virtual agents. While gestures show huge variation, the goal of this thesis is to start to characterize and codify that variation using modern data-driven techniques. The final chapter reflects on the many challenges and obstacles the field of gesture generation continues to face. The potential for virtual agents to have a broad impact on our daily lives increases with the growing pervasiveness of digital interfaces, technical breakthroughs, and collaborative interdisciplinary research efforts. The thesis concludes with an optimistic vision of applications for virtual agents with deep models of non-verbal social behaviour and their potential to encourage multi-disciplinary collaboration.
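
    As a rough illustration of the kind of pipeline Chapters 4 and 5 describe (clustering gestural motion and relating the clusters to the semantics of the associated utterances), the sketch below clusters a handful of hand-crafted motion features and tallies which utterance-level metaphor labels co-occur with each cluster. The features, labels, and data are illustrative assumptions, not the thesis's actual representation.

```python
# Minimal sketch (assumptions only): cluster per-gesture motion features, then
# inspect which utterance-level metaphor labels co-occur with each motion cluster.
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

# One row per gesture, e.g. [mean hand height, mean velocity, path curvature, extent].
motion_features = np.array([
    [0.90, 0.20, 0.10, 0.30],   # high, slow, straight
    [0.85, 0.25, 0.15, 0.35],
    [0.20, 0.70, 0.80, 0.60],   # low, fast, curved
    [0.25, 0.65, 0.75, 0.55],
])
# Metaphor annotation of the utterance each gesture accompanied (hypothetical labels).
utterance_metaphors = ["MORE-IS-UP", "MORE-IS-UP", "CYCLE", "CONTAINER"]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(motion_features)

# Relate motion clusters to utterance semantics: which metaphors dominate each cluster?
for cluster_id in range(kmeans.n_clusters):
    labels = [m for m, c in zip(utterance_metaphors, kmeans.labels_) if c == cluster_id]
    print(cluster_id, Counter(labels).most_common())
```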

    Annotations of maps in collaborative work at a distance

    This thesis investigates how map annotations can be used to sustain remote collaboration. Maps condense the interplay of space and communication, resolving linguistic references by linking conversational content to the actual places to which it refers. This is a mechanism people are accustomed to: when we are face-to-face, we can point to things around us. At a distance, however, we need to recreate a context that can help disambiguate what we mean. A map can help recreate this context, but other technological solutions are required to allow deictic gestures over a shared map when collaborators are not co-located. This mechanism is here termed Explicit Referencing. Several systems that allow sharing map annotations are reviewed critically, and a taxonomy is then proposed to compare their features. Two field experiments were conducted to investigate the production of collaborative annotations of maps with mobile devices, looking for the reasons why people might want to produce these notes and how they might do so. Both studies led to very disappointing results. The reasons for this failure are attributed to the lack of a critical mass of users (social network), the lack of useful content, and limited social awareness. More importantly, the studies identified a compelling effect of the way messages were organized in the tested application, which caused participants to refrain from engaging in content-driven explorations and synchronous discussions. This last qualitative observation was refined in a controlled experiment in which remote participants had to solve a problem collaboratively, using chat tools that differed in the way a user could relate an utterance to a shared map. Results indicated that team performance is improved by Explicit Referencing mechanisms. However, when this is implemented in a way that is detrimental to the linearity of the conversation, resulting in the visual dispersion or scattering of messages, its use has negative consequences for collaborative work at a distance. Additionally, an analysis of the eye movements of the participants over the map helped to ascertain the interplay of deixis and gaze in collaboration. A primary relation was found between a pair's recurrence of eye movements and their task performance. Finally, this thesis presents an algorithm that detects misunderstandings in collaborative work at a distance. It analyses the movements of collaborators' eyes over the shared map, their utterances containing references to this workspace, and the availability of "remote" deictic gestures. The algorithm associates the distance between the gaze of the emitter and the gaze of the receiver of a message with the probability that the recipient did not understand the message.
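
    The misunderstanding-detection idea at the end of the abstract can be illustrated with a small sketch: compute the divergence between the emitter's and the recipient's gaze on the shared map around the moment a map reference is uttered, and map that divergence to a probability of misunderstanding. The time window, distance measure, and logistic form below are assumptions, not the thesis's actual algorithm.

```python
# Minimal sketch (assumptions only): relate emitter/receiver gaze divergence on a
# shared map to a probability that the recipient misunderstood the reference.
import math

def gaze_distance(emitter_fixations, receiver_fixations):
    """Mean distance (in map pixels) between time-aligned fixation points."""
    dists = [math.dist(e, r) for e, r in zip(emitter_fixations, receiver_fixations)]
    return sum(dists) / len(dists)

def misunderstanding_probability(distance_px, scale=150.0):
    """Map gaze divergence to a probability with a logistic curve (assumed form)."""
    return 1.0 / (1.0 + math.exp(-(distance_px - scale) / (scale / 4)))

# Fixations (x, y) sampled around the moment a map reference was uttered.
emitter = [(120, 340), (125, 338), (130, 345)]
receiver = [(420, 90), (415, 95), (410, 100)]

d = gaze_distance(emitter, receiver)
print(f"gaze divergence: {d:.0f}px, p(misunderstanding) = {misunderstanding_probability(d):.2f}")
```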

    The Role of Socially-Mediated Alignment in the Development of Second Language Grammar and Vocabulary: Comparing Face-to-Face and Synchronous Mobile-Mediated Communication

    Decades of research have shown that speakers mutually adapt to each other’s linguistic behaviors at different levels of language during dialogue. Recent second language (L2) research has suggested that alignment occurring while L2 learners carry out collaborative activities may lead to L2 development, highlighting the benefits of using alignment activities for L2 learning. However, despite the notion that speakers linguistically align in interactions happening in socially situated contexts, little is known about the role of social factors in the magnitude and learning outcomes of alignment occurring in L2 interaction. The purpose of the study was to examine the pedagogical benefits of alignment activities for the development of L2 grammar and vocabulary during peer interaction across two different interactional contexts: Face-to-Face (FTF) and synchronous mobile-mediated communication (SMMC; mobile text-chat). The target vocabulary items included 32 words, and the target structure was a stranded-preposition construction embedded in an English relative clause. Furthermore, this study investigated whether social factors (i.e., L2 learners’ perceptions of their interlocutor’s proficiency, the comprehensibility of the interlocutor’s language production, and task experience with the interlocutor) and cognitive factors (i.e., individual differences in language aptitude, cognitive style, and proficiency) would modulate alignment effects. Ninety-eight Korean university students were assigned to either the FTF or the SMMC group. They completed two alignment activities in pairs, three measurement tests (pre-, post-, and delayed post-test), various cognitive ability tests, and perception questionnaires over four weeks. Results indicated that alignment occurred at the structural and lexical levels in both FTF and SMMC modes, but also that structural alignment was facilitated significantly more in the SMMC mode than in FTF. However, there was no significant modality effect on the degree of lexical alignment. Findings also demonstrated the beneficial role of alignment activities in L2 grammar and vocabulary learning, irrespective of modality. Furthermore, results suggested that language proficiency and explicit language aptitude were significantly associated with learning driven by structural alignment. Learners’ perceptions did not show a significant impact on the degree of alignment or on learning outcomes. Implications for the benefits of interactive alignment activities for L2 development and the effects of modality, social factors, and cognitive factors are discussed.
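
    For readers unfamiliar with how alignment is quantified, one minimal (and purely hypothetical) way to score lexical alignment is the proportion of a speaker's content words that repeat words from the partner's immediately preceding turn; the study's actual measures are more elaborate. The tokenization and stop-word list below are assumptions.

```python
# Minimal sketch (assumptions only): lexical alignment as the share of the target
# turn's content words that also appeared in the partner's preceding (prime) turn.
STOP_WORDS = {"the", "a", "an", "to", "of", "in", "is", "that", "i", "you"}

def content_words(turn: str) -> set[str]:
    return {w.strip(".,!?").lower() for w in turn.split()} - STOP_WORDS

def lexical_alignment(prime_turn: str, target_turn: str) -> float:
    prime, target = content_words(prime_turn), content_words(target_turn)
    return len(prime & target) / len(target) if target else 0.0

# Partner's turn (prime) followed by the learner's turn (target).
print(lexical_alignment("This is the box which the ball is hidden in",
                        "Yes, the ball is hidden in the box"))  # 0.75
```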

    The significance of silence. Long gaps attenuate the preference for ‘yes’ responses in conversation.

    In conversation, negative responses to invitations, requests, offers and the like more often occur with a delay – conversation analysts speak of them as dispreferred. Here we examine the contrasting cognitive load that ‘yes’ and ‘no’ responses impose when given either relatively fast (300 ms) or delayed (1000 ms). Participants heard mini-dialogues, with turns extracted from a spoken corpus, while having their EEG recorded. We find that a fast ‘no’ evokes an N400 effect relative to a fast ‘yes’; however, this contrast is not present for delayed responses. This shows that an immediate response is expected to be positive – but this expectation disappears as the response time lengthens, because in ordinary conversation the probability of a ‘no’ has by then increased. Additionally, however, ‘no’ responses elicit a late frontal positivity both when they are fast and when they are delayed. Thus, regardless of the latency of the response, a ‘no’ response is associated with a late positivity, since a negative response is always dispreferred and may require an account. Together these results show that negative responses to social actions exact a higher cognitive load, especially when they are least expected, as an immediate response.
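
    The core measurement behind such findings is a mean-amplitude contrast in a fixed time window. The sketch below shows, on simulated single-channel data, how one might compare fast ‘no’ and fast ‘yes’ trials in an assumed 300-500 ms N400 window; the sampling rate, window, and data are illustrative assumptions, not the paper's pipeline.

```python
# Minimal sketch (assumptions only): mean ERP amplitude in an N400 window,
# contrasted between fast 'yes' and fast 'no' trials on simulated data.
import numpy as np

SFREQ = 500                      # samples per second (assumed)
N400_WINDOW = (0.300, 0.500)     # seconds relative to response-word onset (assumed)

def mean_amplitude(epochs: np.ndarray, window=N400_WINDOW, sfreq=SFREQ) -> float:
    """epochs: (n_trials, n_samples) single-channel voltages, time-locked at sample 0."""
    start, stop = (int(t * sfreq) for t in window)
    return float(epochs[:, start:stop].mean())

rng = np.random.default_rng(0)
fast_yes = rng.normal(0.0, 1.0, size=(40, 450))   # simulated centro-parietal channel
fast_no = rng.normal(-1.5, 1.0, size=(40, 450))   # more negative ~ N400-like effect

# A more negative mean amplitude for 'no' than for 'yes' in this window would be
# read as an N400 effect for the unexpected (dispreferred) fast response.
print(mean_amplitude(fast_no) - mean_amplitude(fast_yes))
```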

    Applied and Computational Linguistics

    This volume surveys the current state of applied and computational linguistics and analyzes linguistic theories of the 20th and early 21st centuries from the perspective of distinguishing different aspects of language for formalized description in electronic linguistic resources. It offers a critical overview of such topical problems of applied (computational) linguistics as the compilation of computer lexicons and electronic text corpora, automatic natural language processing, automatic speech synthesis and recognition, machine translation, and the creation of intelligent robots capable of perceiving information in natural language. Intended for students and postgraduates in the humanities and for the academic staff of higher educational institutions of Ukraine.

    Machine Learning

    Machine learning can be defined in various ways, but it broadly refers to the scientific domain concerned with the design and development of theoretical and implementation tools for building systems that exhibit some human-like intelligent behavior. More specifically, machine learning addresses the ability of such systems to improve automatically through experience.
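
    A minimal way to see "improvement through experience" is to watch held-out accuracy grow as a model is trained on more data. The sketch below does this with an off-the-shelf dataset and classifier; the specific dataset and model are illustrative choices, not tied to this text.

```python
# Minimal sketch (assumptions only): a classifier's held-out accuracy as a
# function of how much "experience" (training data) it has seen.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for n in (50, 200, 800, len(X_train)):          # growing amounts of experience
    model = LogisticRegression(max_iter=2000).fit(X_train[:n], y_train[:n])
    print(f"trained on {n:4d} examples -> test accuracy {model.score(X_test, y_test):.2f}")
```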