
    Pacifier overuse and conceptual relations of abstract and emotional concepts

    This study explores the impact of the extensive use of an oral device since infancy (the pacifier) on the acquisition of concrete, abstract, and emotional concepts. While recent evidence showed a negative relation between pacifier use and children’s emotional competence (Niedenthal et al., 2012), the possible interaction between pacifier use and the processing of emotional and abstract language has not been investigated. According to recent theories, while all concepts are grounded in sensorimotor experience, abstract concepts activate linguistic and social information more than concrete ones. Specifically, the Words As Social Tools (WAT) proposal predicts that simulating the meaning of abstract concepts leads to an activation of the mouth (Borghi and Binkofski, 2014; Borghi and Zarcone, 2016). Since the pacifier affects facial mimicry by forcing the mouth muscles into a static position, we hypothesize that it interferes more with the acquisition/consolidation of abstract emotional and abstract not-emotional concepts, which are mainly conveyed during social and linguistic interactions, than with that of concrete concepts. Fifty-nine first-grade children, with histories of different frequencies of pacifier use, provided oral definitions of the meaning of abstract not-emotional, abstract emotional, and concrete words. A main effect of concept type emerged, with higher accuracy in defining concrete and abstract emotional concepts than abstract not-emotional concepts, independently of pacifier use. Accuracy in definitions was not influenced by pacifier use, but correspondence and hierarchical clustering analyses suggest that pacifier use differently modulates the conceptual relations elicited by abstract emotional and abstract not-emotional concepts.
While the majority of the children produced a similar pattern of conceptual relations, analyses of the few children (6) who overused the pacifier (for more than 3 years) showed that they tended to distinguish less clearly between concrete and abstract emotional concepts, and between concrete and abstract not-emotional concepts, than children who did not use it (5) or used it only briefly (17). As to the conceptual relations they produced, children who overused the pacifier tended to refer less to their own experience and to social and emotional situations, to use more exemplifications and functional relations, and to produce fewer free associations.
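
The grouping analysis described above can be illustrated with a toy version of hierarchical clustering. This is a minimal sketch only: the words, relation profiles, and single-linkage choice below are hypothetical stand-ins, not the study's actual data or method details.

```python
# Toy hierarchical (single-linkage) clustering of words by the relation
# types their definitions elicit. All profiles are hypothetical.

def hamming(a, b):
    """Number of positions where two binary profiles differ."""
    return sum(x != y for x, y in zip(a, b))

def single_linkage(items, vectors, n_clusters):
    """Agglomerate items until n_clusters remain, merging the pair of
    clusters whose closest members are most similar."""
    clusters = [[i] for i in range(len(items))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(hamming(vectors[a], vectors[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]
    return [[items[k] for k in c] for c in clusters]

# Hypothetical relation profiles: [experience, social/emotional,
# exemplification, functional relation, free association]
words = ["dog", "cup", "joy", "fear", "freedom"]
profiles = [
    [1, 0, 1, 1, 0],  # concrete
    [1, 0, 1, 1, 0],  # concrete
    [1, 1, 0, 0, 1],  # abstract emotional
    [1, 1, 0, 0, 1],  # abstract emotional
    [0, 1, 1, 0, 1],  # abstract not-emotional
]
print(single_linkage(words, profiles, 2))
```

On these invented profiles the concrete words cluster apart from the abstract ones, which is the kind of separation (or its blurring in the overuse group) the clustering analysis is meant to reveal.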

    Just below the surface: developing knowledge management systems using the paradigm of the noetic prism

    In this paper we examine how the principles embodied in the paradigm of the noetic prism can illuminate the construction of knowledge management systems. We draw on the formalism of the prism to examine three successful tools: frames, spreadsheets, and databases. We show how their power, and also their shortcomings, arise from their domain representation, and how any organisational system based on integrating these tools and converting between them is inevitably lossy. We suggest how a late-binding, hybrid knowledge-based management system (KBMS) could be designed that draws on the lessons learnt from these tools, by maintaining noetica at an atomic level and storing the combinatory processes necessary to create higher-level structure as the need arises. We outline the “just-below-the-surface” systems design and describe its implementation in an enterprise-wide knowledge-based system that has all of the conventional office automation features.
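
The late-binding idea can be sketched in code. This is a speculative illustration, not the paper's design: all names (`atoms`, `as_frame`, `as_table`, `as_cell`) and the data are hypothetical. The point is only that one atomic store plus on-demand combinators can serve frame-, database-, and spreadsheet-like views without lossy conversion between tool formats.

```python
# Hypothetical atomic store: each "noeticon" is an (entity, attribute)
# pair mapped to a value. Higher-level structures are bound late,
# computed from the atoms only when a view is requested.

atoms = {
    ("order-1", "customer"): "Acme",
    ("order-1", "total"): 120,
    ("order-2", "customer"): "Acme",
    ("order-2", "total"): 80,
}

def as_frame(entity):
    """Late-bind atoms into a frame (slot/filler) view of one entity."""
    return {attr: val for (ent, attr), val in atoms.items() if ent == entity}

def as_table():
    """Late-bind the same atoms into a database-like row view."""
    entities = sorted({ent for ent, _ in atoms})
    return [dict(as_frame(e), id=e) for e in entities]

def as_cell(entity, attr, formula=None):
    """Spreadsheet-like view: a cell is either a stored atom or a
    formula recomputed on demand from the entity's frame."""
    if formula is not None:
        return formula(as_frame(entity))
    return atoms[(entity, attr)]

print(as_frame("order-1"))
print(as_cell("order-1", None, formula=lambda f: f["total"] * 1.2))
```

Because every view is derived from the same atoms at request time, there is no conversion step between tools in which information could be lost.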

    Grounding semantic cognition using computational modelling and network analysis

    The overarching objective of this thesis is to further the field of grounded semantics through a range of computational and empirical studies. Over the past thirty years there have been many algorithmic advances in the modelling of semantic cognition. A commonality across these cognitive models is a reliance on hand-engineered “toy models”. Despite incorporating newer techniques (e.g. long short-term memory), the model inputs remain unchanged. We argue that the inputs to these traditional semantic models bear little resemblance to real human experiences. In this dissertation, we ground our neural network models by training them on real-world visual scenes from naturalistic photographs. Our approach is an alternative to both hand-coded features and embodied raw sensorimotor signals. We conceptually replicate the mutually reinforcing nature of hybrid (feature-based and grounded) representations using silhouettes of concrete concepts as model inputs. We then gradually develop a novel grounded cognitive semantic representation, which we call scene2vec, starting with object co-occurrences and then adding emotions and language-based tags. Limitations of our scene-based representation are identified for more abstract concepts (e.g. freedom). We further present a large-scale human semantics study, which reveals that small-world semantic network topologies are context-dependent and that scenes are the most dominant cognitive dimension. This finding leads us to conclude that there is no meaning without context. Lastly, scene2vec shows promising human-like context-sensitive stereotypes (e.g. gender-role bias), and we explore how such stereotypes are reduced by targeted debiasing. In conclusion, this thesis provides support for a novel computational viewpoint on investigating meaning: scene-based grounded semantics. Future research scaling scene-based semantic models to human levels through virtual grounding has the potential to unearth new insights into the human mind and concurrently lead to advancements in artificial general intelligence by enabling robots, embodied or otherwise, to acquire and represent meaning directly from the environment.
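
The object co-occurrence core that scene2vec starts from can be sketched in a few lines. This is a toy illustration under stated assumptions: the scene annotations and function names below are hypothetical, and the real model adds emotions and language-based tags on top of co-occurrence.

```python
# Toy scene-based representation: each concept is the vector of counts
# of objects it co-occurs with across (hypothetical) annotated scenes,
# so concepts from similar scenes get similar vectors.
from collections import Counter
from math import sqrt

scenes = [  # hypothetical object annotations of photographs
    {"dog", "ball", "grass", "tree"},
    {"dog", "leash", "grass", "person"},
    {"cat", "sofa", "lamp", "person"},
    {"cat", "sofa", "window"},
]

def scene_vector(concept):
    """Co-occurrence counts of `concept` with every other object."""
    counts = Counter()
    for scene in scenes:
        if concept in scene:
            counts.update(o for o in scene if o != concept)
    return counts

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

# "dog" and "cat" appear in different scenes here, so their vectors
# overlap only through the shared object "person".
print(cosine(scene_vector("dog"), scene_vector("cat")))
```

A representation like this also makes the thesis's limitation concrete: an abstract concept such as "freedom" rarely appears as a labelled object in a scene, so it receives little or no co-occurrence signal.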

    User Preferences of Spatio-Temporal Referencing Approaches For Immersive 3D Radar Charts

    The use of head-mounted display technologies for virtual reality experiences is inherently single-user-centred, allowing for the visual immersion of the user in the computer-generated environment. This isolates them from their physical surroundings, effectively preventing external visual information cues, such as another user pointing to and referring to an artifact. However, such input is important and desired in collaborative scenarios when exploring and analyzing data in virtual environments together with a peer. In this article, we investigate different designs for making spatio-temporal references, i.e., visually highlighting virtual data artifacts, within the context of Collaborative Immersive Analytics. The ability to make references to data is foundational for collaboration, affecting aspects such as awareness, attention, and common ground. Based on three design options, we implemented a variety of approaches for making spatial and temporal references in an immersive virtual reality environment that featured abstract visualization of spatio-temporal data as 3D Radar Charts. We conducted a user study (n=12) to empirically evaluate aspects such as aesthetic appeal, legibility, and general user preference. The results indicate a unanimous preference for the location approach as a spatial reference, while revealing trends towards mixed temporal reference approaches depending on the task configuration: pointer for elementary references, and outline for synoptic references. Based on immersive data visualization complexity as well as task reference configuration, we argue that it can be beneficial to explore multiple reference approaches as collaborative information cues, as opposed to following a uniform user interface design.

    Grounded Semantic Composition for Visual Scenes

    We present a visually grounded language understanding model based on a study of how people verbally describe objects in scenes. The emphasis of the model is on the combination of individual word meanings to produce meanings for complex referring expressions. The model has been implemented, and it is able to understand a broad range of spatial referring expressions. We describe our implementation of word-level visually grounded semantics and their embedding in a compositional parsing framework. The implemented system selects the correct referents in response to natural language expressions for a large percentage of test cases. In an analysis of the system's successes and failures we reveal how visual context influences the semantics of utterances and propose future extensions to the model that take such context into account.
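
The core idea of composing grounded word meanings into referring expressions can be sketched as follows. This is a minimal illustration, not the paper's implementation: the scene, lexicon, and `refer` function are hypothetical, and the real model grounds words in visual features rather than symbolic attributes.

```python
# Each word denotes a filter from candidate scene objects to a subset;
# a referring expression is resolved by composing the filters.

objects = [  # hypothetical scene description: name, colour, x-position
    {"name": "ball", "colour": "red", "x": 1},
    {"name": "ball", "colour": "blue", "x": 5},
    {"name": "box", "colour": "red", "x": 3},
]

lexicon = {
    "ball": lambda objs: [o for o in objs if o["name"] == "ball"],
    "box":  lambda objs: [o for o in objs if o["name"] == "box"],
    "red":  lambda objs: [o for o in objs if o["colour"] == "red"],
    "blue": lambda objs: [o for o in objs if o["colour"] == "blue"],
    # Spatial term grounded relative to the remaining candidates.
    "leftmost": lambda objs: sorted(objs, key=lambda o: o["x"])[:1],
}

def refer(expression):
    """Resolve a referring expression by applying word filters from the
    head noun outwards (i.e. in reverse English word order)."""
    candidates = objects
    for word in reversed(expression.split()):
        candidates = lexicon[word](candidates)
    return candidates

print(refer("leftmost red ball"))
```

Applying "ball", then "red", then "leftmost" narrows three objects down to a single referent, which is the compositional behaviour the abstract describes for complex spatial referring expressions.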

    Hybrid Reasoning and the Future of Iconic Representations

    We give a brief overview of the main characteristics of diagrammatic reasoning, analyze a case of human reasoning in a mastermind game, and explain why hybrid representation systems (HRS) are particularly attractive and promising for Artificial General Intelligence and Computer Science in general.

    Modelling Learning to Count in Humanoid Robots

    In reference to IEEE copyrighted material which is used with permission in this thesis, the IEEE does not endorse any of Plymouth University's products or services. Internal or personal use of this material is permitted. If interested in reprinting/republishing IEEE copyrighted material for advertising or promotional purposes or for creating new collective works for resale or redistribution, please go to http://www.ieee.org/publications_standards/publications/rights/rights_link.html to learn how to obtain a License from RightsLink.

    This thesis concerns the formulation of novel developmental robotics models of embodied phenomena in number learning. Learning to count is believed to be of paramount importance for the acquisition of the remarkable fluency with which humans are able to manipulate numbers, and other abstract concepts derived from them, later in life. The ever-increasing amount of evidence for the embodied nature of human mathematical thinking suggests that investigating numerical cognition with robotic cognitive models has high potential to contribute to a better understanding of the mechanisms involved. This thesis focuses on two groups of embodied effects tightly linked with learning to count. The first phenomenon considered is the contribution of counting gestures to the counting accuracy of young children during their acquisition of the skill. The second phenomenon, which arises over a longer time scale, is the human tendency to internally associate numbers with space, which results, among others, in the widely studied SNARC effect. The PhD research contributes to knowledge in the subject by formulating novel neuro-robotic cognitive models of these phenomena and by employing them in two series of simulation experiments.
In the context of counting gestures, the simulations provide evidence for the importance of learning the number words prior to learning to count, for the usefulness of the proprioceptive information connected with gestures in improving counting accuracy, and for the significance of the spatial correspondence between the indicative acts and the objects being enumerated. In the context of the model of spatial-numerical associations, the simulations demonstrate for the first time that these associations may arise as a consequence of the consistent spatial biases present when children are learning to count. Finally, based on the experience gathered throughout both modelling experiments, specific guidelines concerning future efforts in the application of robotic modelling to mathematical cognition are formulated.

This research has been supported by the EU project RobotDoC (235065) from the FP7 Marie Curie Actions ITN.
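
The claim that spatial-numerical associations can emerge from a consistent counting bias can be illustrated with a toy simulation. All parameters below (bias strength, number of objects, trial count) are hypothetical and much simpler than the thesis's neuro-robotic models; the sketch only shows the mechanism in miniature.

```python
# If a learner habitually counts objects left to right, each count word
# becomes statistically associated with a spatial position: small
# numbers with the left, large numbers with the right - the kind of
# mapping behind the SNARC effect.
import random

def learned_positions(n_objects, trials, left_to_right_bias=0.9, seed=0):
    """Average spatial position (0 = leftmost) at which each count word
    is uttered across many simulated counting episodes."""
    rng = random.Random(seed)
    totals = [0.0] * n_objects
    for _ in range(trials):
        order = list(range(n_objects))   # object positions, left to right
        if rng.random() > left_to_right_bias:
            order.reverse()              # occasional right-to-left count
        for count_word, position in enumerate(order):
            totals[count_word] += position
    return [t / trials for t in totals]

positions = learned_positions(n_objects=5, trials=1000)
# With a strong left-to-right bias the averages increase with the count
# word, i.e. small numbers end up "on the left".
print(positions)
```

Removing the bias (setting it to 0.5) flattens the mapping, which mirrors the model's prediction that the association depends on the consistency of the spatial bias during learning.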