
    Multimodal Grounding for Language Processing

    This survey discusses how recent developments in multimodal processing facilitate conceptual grounding of language. We categorize the information flow in multimodal processing with respect to cognitive models of human information processing and analyze different methods for combining multimodal representations. Based on this methodological inventory, we discuss the benefits of multimodal grounding for a variety of language processing tasks and the challenges that arise. We particularly focus on multimodal grounding of verbs, which play a crucial role in the compositional power of language.
    Comment: The paper has been published in the Proceedings of the 27th International Conference on Computational Linguistics. Please refer to that version for citations: https://www.aclweb.org/anthology/papers/C/C18/C18-1197

    A Data-driven Approach to the Semantics of Iconicity in American Sign Language and English

    A growing body of research shows that both signed and spoken languages display regular patterns of iconicity in their vocabularies. We compared iconicity in the lexicons of American Sign Language (ASL) and English by combining previously collected ratings of ASL signs (Caselli, Sevcikova Sehyr, Cohen-Goldberg, & Emmorey, 2017) and English words (Winter, Perlman, Perry, & Lupyan, 2017) with data-driven semantic vectors derived from English. Our analyses show that models of spoken-language lexical semantics drawn from large text corpora can be useful for predicting the iconicity of signs as well as words. Compared to English, ASL has a greater number of regions of semantic space with concentrations of highly iconic vocabulary. There was an overall negative relationship between semantic density and the iconicity of both English words and ASL signs. This negative relationship disappeared for highly iconic signs, suggesting that iconic forms may be more easily discriminable in ASL than in English. Our findings contribute to an increasingly detailed picture of how iconicity is distributed across different languages.
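The core method here, predicting per-item iconicity ratings from distributional semantic vectors, can be sketched as a simple regression. This is a minimal illustration of the general approach, not the paper's actual pipeline: the vectors and ratings below are tiny synthetic stand-ins (real work would use, e.g., 300-dimensional vectors trained on large English corpora and the collected human ratings).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "semantic vectors" for 50 words and synthetic iconicity ratings that
# partly depend on those vectors (assumption for illustration only).
X = rng.normal(size=(50, 8))
true_w = rng.normal(size=8)
y = X @ true_w + 0.1 * rng.normal(size=50)

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w = ridge_fit(X, y)
pred = X @ w
# Correlation between predicted and actual ratings measures how well
# corpus-derived semantics predicts iconicity for this toy data.
r = np.corrcoef(pred, y)[0, 1]
print(f"correlation between predicted and actual ratings: {r:.2f}")
```

In practice one would cross-validate and compare the fit separately for ASL signs and English words, which is where the density and discriminability contrasts described above would emerge.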

    Are abstract concepts like dinosaur feathers? Objectification as a conceptual tool: evidence from language and gesture of English and Polish native speakers

    Studies based on the Contemporary Theory of Metaphor (Lakoff & Johnson, 1980, 1999) usually identify conceptual metaphors by analysing linguistic expressions and creating a post hoc interpretation of the findings. This method has been questioned for a variety of reasons, including its circularity (Müller, 2008), lack of falsifiability (Vervaeke & Kennedy, 1996, 2004), and lack of predictive power (Ritchie, 2003). It has been argued that CTM requires additional constraints to improve its applicability for empirical research (Gibbs, 2011; Ritchie, 2003). This paper sets out to propose additional methodological structure for CTM, a theory of conceptual metaphor in which much of abstract thought is generated by metaphorical mapping from embodied experience (Ruiz de Mendoza Ibáñez & Pérez Hernández, 2011). Introducing Objectification Theory, defined by Szwedek (2002, 2007, 2011), ameliorates a number of methodological issues in CTM. First, the embodiment claim of CTM in its current form cannot be empirically proven incorrect (Vervaeke & Kennedy, 2004), as any mapping within it is possible (although only some actually happen). Objectification introduces pre-metaphorical structure of the kind suggested by Glucksberg (2001), constraining source and target domain selection and predicting which mappings are more likely to happen. Second, while many claim that metaphors trace back to a literal concept based on embodied physical experience (Gibbs, Costa Lima, & Francozo, 2004), it is unclear what criteria are used to define "physical". Metaphorical domains are often described using the terms "abstract" and "concrete"; Objectification proposes objective criteria for deciding whether a concept is experientially grounded.
    Finally, Objectification provides grounds for introducing a hierarchical framework for metaphor typology, preventing the post hoc addition of metaphor types if and when suitable for the explanation of a phenomenon, thus increasing the consistency of the CTM framework, both internally and with other cognitive science disciplines. This thesis focuses on providing evidence for Objectification Theory and identifying its applications in metaphor and gesture research.

    Are abstract concepts like dinosaur feathers?

    The evolution of the human nervous system has enabled us to perform extraordinarily complex activities such as mathematical calculations, economic analyses, or even writing this book. Even so, we are still not certain how and why humans acquired the capacity for abstract thought. One theory suggests that abstract and concrete thinking rest on the same mechanism: experience. According to this theory, known as embodied cognition, we understand the world through physical experience. When we describe an argument as "shaky" or a view as "baseless", we draw on the experiences we gained playing with blocks as children. In this book I take up the question posed by psychologist Daniel Casasanto: "are abstract concepts like dinosaur feathers?" What evolutionary processes led to our ability to describe even highly abstract issues in terms of concrete phenomena? Presenting the results of research on the speech and gesture of sighted, partially sighted, and blind people, I try to show that the foundations of our understanding of many abstract concepts can be sought in gesture.

    Assessing the educational potential and language content of touchscreen apps for preschool children

    Touchscreen apps have the potential to teach children important early skills, including oral language. However, there is little empirical data assessing the educational potential of children's apps on the app market, or how apps link to theories of cognitive development to support learning. We compared popular children's apps with a learning goal (N=18) and without (N=26) using systematic evaluation tools to assess educational potential and the app features that may support learning. We also transcribed all utterances in the apps that included language, with a learning goal (N=18) and without (N=12), in order to compare a number of psycholinguistic measures relating to the accessibility of the language. Apps with a learning goal had higher educational potential, more opportunities for feedback, a higher proportion of ostensive feedback, and age-appropriate language to support learning and language development. Thus, we argue that selecting children's apps based on the presence of a learning goal is a good first step in selecting an educational app for preschool-age children. Nevertheless, app developers could do more to promote exploratory app use, adjust content to a child's performance, and make use of social interactions with on-screen characters to enhance educational potential. Children's apps could also make better use of feedback, ensuring that it is specific, meaningful, and constructive, to better facilitate learning.
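Psycholinguistic accessibility measures of the kind computed over transcribed app utterances can be illustrated with two simple examples: mean length of utterance (MLU, in words) and type-token ratio (TTR). This is a hedged sketch; the example utterances are invented and the study's actual measures may differ.

```python
def mlu(utterances):
    """Mean length of utterance in words: a rough index of syntactic complexity."""
    lengths = [len(u.split()) for u in utterances]
    return sum(lengths) / len(lengths)

def ttr(utterances):
    """Type-token ratio: distinct words / total words (lowercased),
    a rough index of lexical diversity."""
    tokens = [w.lower() for u in utterances for w in u.split()]
    return len(set(tokens)) / len(tokens)

# Invented example transcript from a hypothetical children's app.
app_utterances = [
    "Tap the red ball",
    "Great job",
    "Can you find the blue ball",
]
print(f"MLU: {mlu(app_utterances):.2f}, TTR: {ttr(app_utterances):.2f}")
```

Comparing such scores between apps with and without a learning goal is one way the "age-appropriate language" finding above could be operationalized.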

    A faster path between meaning and form? Iconicity facilitates sign recognition and production in British Sign Language

    A standard view of language processing holds that lexical forms are arbitrary, and that non-arbitrary relationships between meaning and form, such as onomatopoeias, are unusual cases with little relevance to language processing in general. Here we capitalize on the greater availability of iconic lexical forms in a signed language (British Sign Language, BSL) to test how iconic relationships between meaning and form affect lexical processing. In three experiments, we found that iconicity in BSL facilitated picture-sign matching, phonological decision, and picture naming. In comprehension the effect of iconicity did not interact with other factors, but in production it was observed only for later-learned signs. These findings suggest that iconicity serves to activate conceptual features related to perception and action during lexical processing. We suggest that the same should be true for iconicity in spoken languages (e.g., onomatopoeias), and discuss the implications this has for general theories of lexical processing.

    Semantic Bimodal Presentation Differentially Slows Working Memory Retrieval

    Although evidence has shown that working memory (WM) can be differentially affected by the multisensory congruency of different visual and auditory stimuli, it remains unclear whether multisensory congruency for concrete versus abstract words affects subsequent WM retrieval. By manipulating the attentional focus toward different matching conditions of visual and auditory word characteristics in a 2-back paradigm, the present study revealed that in the characteristically incongruent condition under auditory retrieval, responses to abstract words were faster than those to concrete words, indicating that auditory abstract words are not affected by visual representation, while auditory concrete words are. Conversely, for concrete words under visual retrieval, WM retrieval was faster in the characteristically incongruent condition than in the characteristically congruent condition, indicating that the visual representation formed by auditory concrete words may interfere with WM retrieval of visual concrete words. These findings demonstrate that concrete words in multisensory conditions may be encoded together with additional visual representations, which can inadvertently slow WM retrieval, whereas abstract words seem to suppress such interference better, showing better WM performance than concrete words in the multisensory condition.
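The 2-back paradigm used here can be sketched in a few lines: on each trial the participant judges whether the current item matches the one presented two trials earlier. This is a minimal illustration of the task structure only; the word stream is invented and the study's actual stimuli and congruency manipulations are not modeled.

```python
def two_back_targets(stream):
    """For each position, report whether the item matches the one 2 back.
    Positions 0 and 1 can never be targets."""
    return [i >= 2 and stream[i] == stream[i - 2] for i in range(len(stream))]

# Invented mix of concrete ("dog") and abstract ("idea", "truth") words.
stream = ["dog", "idea", "dog", "truth", "truth"]
print(two_back_targets(stream))  # [False, False, True, False, False]
```

Response times on these match/mismatch judgments, split by word concreteness and audiovisual congruency, are the dependent measures the study compares.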

    Discoverable Free Space Gesture Sets for Walk-Up-and-Use Interactions

    Abstract: Advances in technology are fueling a movement toward ubiquity for beyond-the-desktop systems. Novel interaction modalities, such as free-space or full-body gestures, are becoming more common, as demonstrated by the rise of systems such as the Microsoft Kinect. However, much of the interaction design research for such systems is still focused on desktop and touch interactions. Current thinking on free-space gestures is limited in capability and imagination, and most gesture studies have not attempted to identify gestures appropriate for public walk-up-and-use applications. A walk-up-and-use display must be discoverable, such that first-time users can use the system without any training; flexible; and not fatiguing, especially in the case of longer-term interactions. One mechanism for defining gesture sets for walk-up-and-use interactions is a participatory design method called gesture elicitation. This method has been used to identify several user-generated gesture sets and has shown that user-generated sets are preferred by users over those defined by system designers. However, for these studies to be successfully implemented in walk-up-and-use applications, there is a need to understand which components of these gestures are semantically meaningful (i.e., do users distinguish between using their left and right hand, or are those semantically the same thing?). Thus, defining a standardized gesture vocabulary for coding, characterizing, and evaluating gestures is critical. This dissertation presents three gesture elicitation studies for walk-up-and-use displays that employ a novel gesture elicitation methodology, alongside a novel coding scheme for gesture elicitation data that focuses on the features most important to users' mental models. Generalizable design principles, based on the three studies, are then derived and presented (e.g., changes in speed are meaningful for scroll actions in walk-up-and-use displays but not for paging or selection).
    The major contributions of this work are: (1) an elicitation methodology that aids users in overcoming biases from existing interaction modalities; (2) a better understanding of the gestural features that matter, i.e., those that capture the intent of the gestures; and (3) generalizable design principles for walk-up-and-use public displays. Doctoral Dissertation, Computer Science, 201
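A feature-based gesture coding scheme of the kind described, plus a simple agreement score over the features judged semantically meaningful, can be sketched as follows. The feature names (handedness, motion_path, speed_change) and the agreement metric are illustrative assumptions, not the dissertation's actual vocabulary or analysis.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class GestureCode:
    referent: str      # the command the gesture was elicited for
    handedness: str    # "left" / "right" / "both"; may not be semantic
    motion_path: str   # e.g. "swipe-left", "circle", "push"
    speed_change: bool # meaningful for scroll per the design principles above

def agreement(codes, features):
    """Fraction of participants sharing the modal value of the given
    features; non-meaningful features are simply excluded from `features`."""
    keyed = [tuple(getattr(c, f) for f in features) for c in codes]
    modal_count = Counter(keyed).most_common(1)[0][1]
    return modal_count / len(codes)

# Three invented participant codings for a "scroll" referent.
codes = [
    GestureCode("scroll", "right", "swipe-left", True),
    GestureCode("scroll", "left", "swipe-left", True),
    GestureCode("scroll", "right", "push", False),
]
# If handedness is judged non-semantic, drop it before scoring agreement.
print(agreement(codes, ["motion_path", "speed_change"]))
```

Separating meaningful from incidental features before computing agreement is exactly why a standardized coding vocabulary matters: scoring on raw gestures would treat left- and right-handed variants of the same gesture as disagreements.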

    The influence of hand gestures on reading comprehension

    Hand gestures used in conjunction with speech can provide more concrete and accurate information than speech alone (Wang, Bernas, & Eberhard, 2004). The purpose of this study was to explore the effectiveness of hand gestures on reading comprehension. To examine this hypothesis, the researcher designed an eight-week study that incorporated the use of hand gestures into reading lessons and collected data. Eleven second-grade students participated in reading lessons that included vocabulary development, a reading strategy focus and practice, and reading of a weekly story selection. Data derived from pre- and post-reading/comprehension assessments, weekly comprehension tests, and Theme Skills tests showed that the participants' reading comprehension increased through the use of hand gestures during reading instruction of new vocabulary words and reading strategies.