
    Symbolic Representations versus Embodiment: A Test Using Semantic Neighbours and Iconicity

    According to the symbolic representation account, word meaning can be sufficiently captured by lexical co-occurrence models (Markman & Dietrich, 2000). In contrast, the embodied cognition account maintains that words are understood via simulated perceptual experiences (Barsalou, 1999). The Symbol Interdependency Hypothesis reconciles these different approaches by proposing that we use symbolic representations most of the time and embodied representations when deeper processing is required (Louwerse, 2007). To test this hypothesis, a series of experiments manipulated symbolic and embodied factors in shallow and deep processing tasks. Concreteness was also manipulated because it is thought to interact with depth of processing. Overall, results support the Symbol Interdependency Hypothesis. Reaction times were shorter for shallow processing tasks, close semantic neighbours, and iconic word pairs. Moreover, only the embodied factor, and not the symbolic factor, played a role in the deep processing task.

    Processing Concrete and Abstract Relationships in Word Pairs

    Malhi (2015) found a reverse concreteness, or abstractness, effect for word pairs in an iconicity judgment task. Per Vigliocco et al.'s (2009) theory of embodied abstract semantics, Malhi and Buchanan (2017) hypothesized that participants were taking a visualization approach (time-costly) towards the concrete word pairs and an emotional valence approach (time-efficient) towards the abstract word pairs. It was also hypothesized that the abstractness effect emerged not from considering single words in isolation but rather from considering the relationship between them. The goal of the present study was to test these hypotheses and to further investigate this reverse concreteness, or abstractness, effect. Results generally provided support for these hypotheses. An event-related potential (ERP) experiment revealed a dissociation between behavioural abstractness and neural concreteness. The results are interpreted using a proposed theory of flexible abstractness and concreteness effects (FACE).

    The Abstract Language: Symbolic Cognition and Its Relationship to Embodiment

    Embodied theories presume that concepts are modality specific, while symbolic theories suggest that all modalities for a given concept are integrated. Symbolic and embodied theories do fairly well at explaining and describing concrete concepts. Specifically, embodied theories seem well suited to describing the actual content of a concept, while symbolic theories provide insight into how concepts operate. Conversely, neither symbolic nor embodied theories have been fully sufficient when attempting to describe and explain abstract concepts. Several pluralistic accounts have been put forth to describe how the semantic/lexical system interacts with the conceptual system. In this respect, they attempt to “embody” abstract concepts to the same extent as concrete concepts. Nevertheless, a concise and comprehensive theory explaining how we learn and understand abstract concepts to the extent that we learn and understand concrete concepts remains elusive. One goal of the present review paper is to consider whether abstract concepts can be defined by a unified theory or whether subsets of abstract concepts will be defined by separate theories. Of particular focus will be Symbolic Interdependency Theory (SIT). It will be argued that SIT is suitable for grounding abstract concepts, as this theory holds that symbols bootstrap meaning from other symbols, highlighting the importance of abstract-to-abstract mappings in the same way that concrete-to-abstract mappings are created. Research will be considered to help outline a cohesive strategy for describing and understanding abstract concepts. Finally, as research has demonstrated efficiencies in concrete concept processing, analogous efficiencies will be explored for developing an understanding of abstract concepts. Such efforts could have both theoretical and practical implications for bolstering our knowledge of concept learning.

    Grounding semantic cognition using computational modelling and network analysis

    The overarching objective of this thesis is to further the field of grounded semantics using a range of computational and empirical studies. Over the past thirty years, there have been many algorithmic advances in the modelling of semantic cognition. A commonality across these cognitive models is a reliance on hand-engineered “toy models”. Despite incorporating newer techniques (e.g. long short-term memory), the model inputs remain unchanged. We argue that the inputs to these traditional semantic models bear little resemblance to real human experiences. In this dissertation, we ground our neural network models by training them with real-world visual scenes using naturalistic photographs. Our approach is an alternative to both hand-coded features and embodied raw sensorimotor signals. We conceptually replicate the mutually reinforcing nature of hybrid (feature-based and grounded) representations using silhouettes of concrete concepts as model inputs. We next gradually develop a novel grounded cognitive semantic representation, which we call scene2vec, starting with object co-occurrences and then adding emotions and language-based tags. Limitations of our scene-based representation are identified for more abstract concepts (e.g. freedom). We further present a large-scale human semantics study, which reveals that small-world semantic network topologies are context-dependent and that scenes are the most dominant cognitive dimension. This finding leads us to conclude that there is no meaning without context. Lastly, scene2vec shows promising human-like context-sensitive stereotypes (e.g. gender role bias), and we explore how such stereotypes are reduced by targeted debiasing. In conclusion, this thesis provides support for a novel computational viewpoint on investigating meaning: scene-based grounded semantics. Future research scaling scene-based semantic models to human levels through virtual grounding has the potential to unearth new insights into the human mind and concurrently lead to advancements in artificial general intelligence by enabling robots, embodied or otherwise, to acquire and represent meaning directly from the environment.
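    The object co-occurrence starting point this abstract describes can be sketched as follows. This is a minimal illustration, not the thesis's actual scene2vec pipeline: the scene lists, object names, and the PPMI weighting step are all assumptions made for the example. Each scene is treated as a bag of objects, co-occurrence counts are collected, and each object's vector is its row of the positive pointwise mutual information (PPMI) matrix.

    ```python
    from itertools import combinations
    import numpy as np

    def cooccurrence_ppmi(scenes):
        """Object vectors = rows of a PPMI-weighted scene co-occurrence matrix."""
        vocab = sorted({obj for scene in scenes for obj in scene})
        idx = {obj: i for i, obj in enumerate(vocab)}
        counts = np.zeros((len(vocab), len(vocab)))
        for scene in scenes:
            # Every unordered pair of distinct objects in a scene co-occurs once.
            for a, b in combinations(set(scene), 2):
                counts[idx[a], idx[b]] += 1
                counts[idx[b], idx[a]] += 1
        total = counts.sum()
        marg = counts.sum(axis=1, keepdims=True)
        with np.errstate(divide="ignore", invalid="ignore"):
            pmi = np.log(counts * total / (marg * marg.T))
        # Keep only positive, finite PMI values (PPMI).
        ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)
        return vocab, ppmi

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    # Toy scenes (invented for illustration): two loose "contexts".
    scenes = [
        ["dog", "leash", "park"],
        ["dog", "ball", "park"],
        ["laptop", "desk", "coffee"],
        ["laptop", "coffee", "notebook"],
    ]
    vocab, vecs = cooccurrence_ppmi(scenes)
    i = {w: k for k, w in enumerate(vocab)}
    # Objects that share scenes end up closer than objects that never co-occur.
    ```

    The thesis's full representation goes further (adding emotions and language-based tags), but the same row-vector idea carries through: context of occurrence, here a scene, determines meaning.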

    Linking somatic and symbolic representation in semantic memory: the dynamic multilevel reactivation framework

    Biological plausibility is an essential constraint for any viable model of semantic memory. Yet we have only the most rudimentary understanding of how the human brain conducts the abstract symbolic transformations that underlie word and object meaning. Neuroscience has evolved a sophisticated arsenal of techniques for elucidating the architecture of conceptual representation. Nevertheless, theoretical convergence remains elusive. Here we describe several contrastive approaches to the organization of semantic knowledge, and in turn we offer our own perspective on two recurring questions in semantic memory research: (1) to what extent are conceptual representations mediated by sensorimotor knowledge (i.e., to what degree is semantic memory embodied)? (2) how might an embodied semantic system represent abstract concepts such as modularity, symbol, or proposition? To address these questions, we review the merits of sensorimotor (i.e., embodied) and amodal (i.e., disembodied) semantic theories and address the neurobiological constraints underlying each. We conclude that the shortcomings of both perspectives in their extreme forms necessitate a hybrid middle ground. We accordingly propose the Dynamic Multilevel Reactivation Framework, an integrative model predicated upon flexible interplay between sensorimotor and amodal symbolic representations mediated by multiple cortical hubs. We discuss applications of the Dynamic Multilevel Reactivation Framework to abstract and concrete concept representation and describe how a multidimensional conceptual topography based on emotion, sensation, and magnitude can successfully frame a semantic space containing meanings for both abstract and concrete words. The consideration of 'abstract conceptual features' does not diminish the role of logical and/or executive processing in activating, manipulating, and using information stored in conceptual representations. Rather, it proposes that the materials upon which these processes operate necessarily combine pure sensorimotor information and higher-order cognitive dimensions involved in symbolic representation.

    Three symbol ungrounding problems: Abstract concepts and the future of embodied cognition

    A great deal of research has focused on the question of whether or not concepts are embodied as a rule. Supporters of embodiment have pointed to studies that implicate affective and sensorimotor systems in cognitive tasks, while critics of embodiment have offered nonembodied explanations of these results and pointed to studies that implicate amodal systems. Abstract concepts have tended to be viewed as an important test case in this polemical debate. This essay argues that we need to move beyond a pretheoretical notion of abstraction. Against the background of current research and theory, abstract concepts do not pose a single, unified problem for embodied cognition but, instead, three distinct problems: the problem of generalization, the problem of flexibility, and the problem of disembodiment. Identifying these problems provides a conceptual framework for critically evaluating, and perhaps improving upon, recent theoretical proposals.

    Augmenting Conceptualization by Visual Knowledge Organization


    Towards the Grounding of Abstract Words: A Neural Network Model for Cognitive Robots

    In this paper, a model based on Artificial Neural Networks (ANNs) extends the symbol grounding mechanism to abstract words for cognitive robots. The aim of this work is to obtain a semantic representation of abstract concepts through grounding in sensorimotor experiences for a humanoid robotic platform. Simulation experiments have been developed in a software environment for the iCub robot. Words that express general actions with a sensorimotor component are first taught to the simulated robot. During the training stage, the robot first learns to perform a set of basic action primitives through the mechanism of direct grounding. Subsequently, the grounding of action primitives, acquired via direct sensorimotor experience, is transferred to higher-order words via linguistic descriptions. The idea is that by combining words grounded in sensorimotor experience the simulated robot can acquire more abstract concepts. The experiments aim to teach the robot the meaning of abstract words by making it experience sensorimotor actions. The iCub humanoid robot will be used for testing experiments on a real robotic architecture.
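    The grounding-transfer mechanism this abstract describes can be sketched in a few lines. This is a hedged illustration, not the paper's actual ANN architecture: the action primitives, their sensorimotor feature values, the higher-order word definitions, and mean-pooling as the composition rule are all invented for the example. Directly grounded words carry feature vectors; higher-order words inherit a representation composed from the grounded words in their linguistic description.

    ```python
    import numpy as np

    # Direct grounding: each action primitive maps to a sensorimotor feature
    # vector (e.g. limb displacement, grip force, gaze shift) -- values invented.
    primitives = {
        "push":  np.array([0.9, 0.1, 0.0]),
        "pull":  np.array([0.8, 0.2, 0.1]),
        "grasp": np.array([0.2, 0.9, 0.1]),
        "look":  np.array([0.0, 0.0, 1.0]),
    }

    # Linguistic descriptions: higher-order words defined via grounded words
    # (hypothetical entries, standing in for the robot's taught descriptions).
    definitions = {
        "use":    ["grasp", "push"],
        "accept": ["look", "grasp", "pull"],
    }

    def transfer_grounding(word, lexicon, definitions):
        """Recursively compose a vector for `word` from grounded constituents."""
        if word in lexicon:
            return lexicon[word]
        parts = [transfer_grounding(w, lexicon, definitions)
                 for w in definitions[word]]
        # Composition rule chosen for illustration: average the constituents.
        return np.mean(parts, axis=0)

    use_vec = transfer_grounding("use", primitives, definitions)
    ```

    Because the function recurses, a word defined via other higher-order words still bottoms out in sensorimotor primitives, which is the core of the transfer idea: abstract meaning is inherited, not directly sensed.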

    Classification systems offer a microcosm of issues in conceptual processing: A commentary on Kemmerer (2016)

    This is a commentary on Kemmerer (2016), Categories of Object Concepts Across Languages and Brains: The Relevance of Nominal Classification Systems to Cognitive Neuroscience, DOI: 10.1080/23273798.2016.1198819