
    Semantic memory

    The Encyclopedia of Human Behavior, Second Edition is a comprehensive three-volume reference source on human action and reaction, and on the thoughts, feelings, and physiological functions behind those actions.

    Introduction: The Third International Conference on Epigenetic Robotics

    This introduction summarizes the paper and poster contributions to the Third International Workshop on Epigenetic Robotics. The focus of this workshop is the cross-disciplinary interaction of developmental psychology and robotics: the general goal in this area is to create robotic models of the psychological development of various behaviors. The term "epigenetic" is used in much the same sense as "developmental", and while we could call our topic "developmental robotics", that label can be seen as having a broader interdisciplinary emphasis. Our focus in this workshop is on the interaction of developmental psychology and robotics, and we use the phrase "epigenetic robotics" to capture this focus.

    Stimulus-independent neural coding of event semantics: Evidence from cross-sentence fMRI decoding

    Multivariate neuroimaging studies indicate that the brain represents word and object concepts in a format that readily generalises across stimuli. Here we investigated whether this was true for neural representations of simple events described using sentences. Participants viewed sentences describing four events in different ways. Multivariate classifiers were trained to discriminate the four events using a subset of sentences, allowing us to test generalisation to novel sentences. We found that neural patterns in a left-lateralised network of frontal, temporal and parietal regions discriminated events in a way that generalised successfully over changes in the syntactic and lexical properties of the sentences used to describe them. In contrast, decoding in visual areas was sentence-specific and failed to generalise to novel sentences. In the reverse analysis, we tested for decoding of syntactic and lexical structure, independent of the event being described. Regions displaying this coding were limited and largely fell outside the canonical semantic network. Our results indicate that a distributed neural network represents the meaning of event sentences in a way that is robust to changes in their structure and form. They suggest that the semantic system disregards the surface properties of stimuli in order to represent their underlying conceptual significance.
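
    The core analysis here is cross-condition decoding: train a classifier on patterns evoked by some sentences, then test it on patterns evoked by novel sentences describing the same events. A minimal sketch of that logic in scikit-learn on simulated data (the trial counts, voxel count, and classifier choice are illustrative assumptions, not the study's actual pipeline):

    ```python
    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.svm import LinearSVC

    # Hypothetical data: 4 events x 5 sentence frames x 4 repetitions = 80 trials,
    # each trial a pattern over 500 voxels (stand-in for real beta estimates).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((80, 500))                  # trial-by-voxel patterns
    y = np.repeat(np.arange(4), 20)                     # event identity per trial
    sentence = np.tile(np.repeat(np.arange(5), 4), 4)   # sentence frame per trial

    # Leave-one-sentence-out: train on four sentence frames, test on the
    # held-out frame, so above-chance accuracy implies generalisation across
    # lexical and syntactic form rather than stimulus-specific coding.
    scores = cross_val_score(LinearSVC(), X, y,
                             groups=sentence, cv=LeaveOneGroupOut())
    print(scores.mean())  # chance level = 0.25
    ```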

    The neuro-cognitive representation of word meaning resolved in space and time.

    One of the core human abilities is that of interpreting symbols. Prompted with a perceptual stimulus devoid of any intrinsic meaning, such as a written word, our brain can access a complex multidimensional representation, called a semantic representation, which corresponds to its meaning. Notwithstanding decades of neuropsychological and neuroimaging work on the cognitive and neural substrates of semantic representations, many questions remain unanswered. The research in this dissertation attempts to unravel one of them: are the neural substrates of different components of concrete word meaning dissociated?

    In the first part, I review the different theoretical positions and empirical findings on the cognitive and neural correlates of semantic representations. I highlight how recent methodological advances, namely the introduction of multivariate methods for the analysis of distributed patterns of brain activity, broaden the set of hypotheses that can be empirically tested. In particular, they allow the exploration of the representational geometries of different brain areas, which is instrumental to understanding where and when the various dimensions of the semantic space are activated in the brain. Crucially, I propose an operational distinction between motor-perceptual dimensions (i.e., those attributes of the objects referred to by words that are perceived through the senses) and conceptual ones (i.e., information that is built via a complex integration of multiple perceptual features).

    In the second part, I present the results of the studies I conducted to investigate the automaticity of retrieval, topographical organization, and temporal dynamics of motor-perceptual and conceptual dimensions of word meaning. First, I show how the representational spaces retrieved with different behavioral and corpus-based methods (i.e., Semantic Distance Judgment, Semantic Feature Listing, WordNet) appear to be highly correlated and overall consistent within and across subjects. Second, I present the results of four priming experiments suggesting that perceptual dimensions of word meaning (such as implied real-world size and sound) are recovered in an automatic but task-dependent way during reading. Third, using a functional magnetic resonance imaging experiment, I show a representational shift along the ventral visual path: from perceptual features, preferentially encoded in primary visual areas, to conceptual ones, preferentially encoded in mid and anterior temporal areas. This result indicates that complementary dimensions of the semantic space are encoded in a distributed yet partially dissociated way across the cortex. Fourth, by means of a study conducted with magnetoencephalography, I present evidence of early (around 200 ms after stimulus onset) simultaneous access to both motor-perceptual and conceptual dimensions of the semantic space through different aspects of the signal: inter-trial phase coherence appears to be key for the encoding of perceptual dimensions, while spectral power changes appear to support the encoding of conceptual ones. These observations suggest that the neural substrates of different components of symbol meaning can be dissociated in terms of localization and of the feature of the signal encoding them, while sharing a similar temporal evolution.
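
    The magnetoencephalography result hinges on two signal features: inter-trial phase coherence (ITC) and spectral power. A minimal sketch of how both are commonly computed from band-pass-filtered epochs via the analytic signal (the single-sensor framing and input array are simplifying assumptions, not the dissertation's actual pipeline):

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def itc_and_power(epochs):
        """Inter-trial phase coherence and mean power for one sensor.

        epochs: (n_trials, n_times) band-pass-filtered MEG data
        (hypothetical input; each frequency band is analysed separately).
        """
        analytic = hilbert(epochs, axis=-1)
        phases = analytic / np.abs(analytic)           # unit phase vectors
        itc = np.abs(phases.mean(axis=0))              # phase consistency across trials
        power = (np.abs(analytic) ** 2).mean(axis=0)   # trial-averaged power
        return itc, power
    ```

    ITC near 1 at a given time point means trials share the same phase regardless of amplitude, whereas power tracks amplitude regardless of phase, which is what lets the two measures carry dissociable information.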

    Multimodal MRI characterization of visual word recognition: an integrative view

    The ventral occipito-temporal (vOT) association cortex contributes significantly to recognizing different types of visual patterns. It is widely accepted that a subset of this circuitry, including the visual word form area (VWFA), becomes trained to rapidly identify word forms. An important open question concerns the computational role of this circuitry: to what extent is it part of a bottom-up hierarchical processing stream for visual word recognition, and/or is it involved in processing top-down signals from higher-level language regions? This doctoral dissertation aims to characterize the vOT reading circuitry using behavioral, functional, structural, and quantitative MRI indexes, and to link its computations to two other important regions within the language network: the posterior parietal cortex (pPC) and the inferior frontal gyrus (IFG). Results revealed that two distinct word-responsive areas can be segregated in the vOT: one responsible for visual feature extraction, connected to the intraparietal sulcus via the vertical occipital fasciculus, and a second responsible for semantic processing, connected to the angular gyrus via the posterior arcuate fasciculus and to the IFG via the anterior arcuate fasciculus. Importantly, reading behavior was predicted by functional activation in regions identified along the vOT, pPC, and IFG, as well as by structural properties of the white-matter fiber tracts linking them. The present work constitutes a critical step toward a highly detailed characterization of the early stages of reading at the individual-subject level, and toward establishing a baseline model and parameter range that might serve to clarify functional and structural differences between typical, poor, and atypical readers.
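
    The brain-behavior prediction described here is, in essence, a cross-validated regression from MRI-derived indexes to reading scores. A minimal sketch on simulated data (the subject count, the six predictors, and plain linear regression are illustrative assumptions, not the dissertation's actual model):

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    # Hypothetical per-subject predictors: functional activation in vOT, pPC,
    # and IFG, plus diffusion indexes (e.g., FA) of the tracts linking them.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((40, 6))                   # 40 subjects x 6 MRI indexes
    reading = X @ rng.standard_normal(6) + rng.standard_normal(40)  # simulated scores

    # Cross-validated R^2 of the brain-to-behavior mapping.
    print(cross_val_score(LinearRegression(), X, reading, cv=5, scoring="r2").mean())
    ```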

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al., 2003, Vision Research 43, 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
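
    The spoke-shift manipulation is a simple geometric transformation: each rectangle moves radially along the line joining it to fixation. A minimal sketch, assuming positions are given in degrees of visual angle (the function name and array layout are hypothetical, not taken from the study's materials):

    ```python
    import numpy as np

    def shift_along_spokes(positions, fixation, shift_deg=1.0):
        """Move each rectangle radially along its spoke from fixation.

        positions: (n, 2) x/y coordinates in degrees of visual angle.
        Use shift_deg=-1.0 for inward shifts (the study used +/-1 degree).
        """
        spokes = positions - fixation
        unit = spokes / np.linalg.norm(spokes, axis=1, keepdims=True)
        return positions + shift_deg * unit
    ```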

    An integrated theory of language production and comprehension

    Currently, production and comprehension are regarded as quite distinct in accounts of language processing. In rejecting this dichotomy, we instead assert that producing and understanding are interwoven, and that this interweaving is what enables people to predict themselves and each other. We start by noting that production and comprehension are forms of action and action perception. We then consider the evidence for interweaving in action, action perception, and joint action, and explain such evidence in terms of prediction. Specifically, we assume that actors construct forward models of their actions before they execute those actions, and that perceivers of others' actions covertly imitate those actions, then construct forward models of them. We use these accounts of action, action perception, and joint action to develop accounts of production, comprehension, and interactive language. Importantly, they incorporate well-defined levels of linguistic representation (such as semantics, syntax, and phonology). We show (a) how speakers and comprehenders use covert imitation and forward modeling to make predictions at these levels of representation, (b) how they interweave production and comprehension processes, and (c) how they use these predictions to monitor upcoming utterances. We show how these accounts explain a range of behavioral and neuroscientific data on language processing, and we discuss some of the implications of our proposal.
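
    The proposal's central machinery is a forward model paired with a comparator that monitors prediction error at each linguistic level. A toy illustration of that control-theoretic idea (the types and the identity forward model are stand-ins for exposition, not the authors' computational implementation):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Representation:
        semantics: str
        syntax: str
        phonology: str

    def forward_model(plan: Representation) -> Representation:
        # Predicted percept of the planned (or covertly imitated) utterance.
        # A real forward model is learned; an identity mapping stands in here.
        return plan

    def monitor(predicted: Representation, actual: Representation) -> dict:
        # Comparator: a mismatch at any linguistic level is a prediction
        # error that can drive correction of the upcoming utterance.
        return {level: getattr(predicted, level) != getattr(actual, level)
                for level in ("semantics", "syntax", "phonology")}
    ```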

    Converging evidence for functional and structural segregation within the left ventral occipitotemporal cortex in reading

    The ventral occipitotemporal cortex (vOTC) is crucial for recognizing visual patterns, and previous evidence suggests that there may be different subregions within the vOTC involved in the rapid identification of word forms. Here, we characterize vOTC reading circuitry using a multimodal approach combining functional, structural, and quantitative MRI and behavioral data. Two main word-responsive vOTC areas emerged: a posterior area involved in visual feature extraction, structurally connected to the intraparietal sulcus via the vertical occipital fasciculus; and an anterior area involved in integrating information with other regions of the language network, structurally connected to the angular gyrus via the posterior arcuate fasciculus. Furthermore, functional activation in these vOTC regions predicted reading behavior outside of the scanner. Differences in the microarchitectonic properties of gray-matter cells in these segregated areas were also observed, in line with earlier cytoarchitectonic evidence. These findings advance our understanding of the vOTC circuitry by linking functional responses to anatomical structure, revealing the pathways of distinct reading-related processes.

    This work was supported by European Molecular Biology Organization (EMBO, Short-Term Fellowship 158-2015) and Marie Sklodowska-Curie (H2020-MSCA-IF-2017-795807-ReCiModel) grants (to G.L.-U.); Spanish Ministry of Economy and Competitiveness (MINECO, PSI2015-67353-R, SEV-2015-0490) and European Research Council (ERC, ERC-2011-ADG-295362) grants (to M.C.); and MINECO (RYC-2014-15440, PSI2012-32093, SEV-2015-0490) and Departamento de Desarrollo Económico y Competitividad, Gobierno Vasco (PI2016-12) grants (to P.M.P.-A.).

    Event Structure In Vision And Language

    Our visual experience is surprisingly rich: we do not only see low-level properties such as colors or contours; we also see events, or what is happening. Within linguistics, the examination of how we talk about events suggests that relatively abstract elements exist in the mind which pertain to the relational structure of events, including general thematic roles (e.g., Agent), Causation, Motion, and Transfer. For example, "Alex gave Jesse flowers" and "Jesse gave Alex flowers" both refer to an event of transfer, with the directionality of the transfer having different social consequences. The goal of the present research is to examine the extent to which abstract event information of this sort (event structure) is generated in visual perceptual processing. Do we perceive this information, just as we do more 'traditional' visual properties like color and shape? In the first study (Chapter 2), I used a novel behavioral paradigm to show that event roles – who is acting on whom – are rapidly and automatically extracted from visual scenes, even when participants are engaged in an orthogonal task, such as color or gender identification. In the second study (Chapter 3), I provided functional magnetic resonance imaging (fMRI) evidence for commonality in content between neural representations elicited by static snapshots of actions and by full, dynamic action sequences. These two studies suggest that relatively abstract representations of events are spontaneously extracted from sparse visual information. In the final study (Chapter 4), I return to language, the initial inspiration for my investigations of events in vision. Here I test the hypothesis that the human brain represents verbs in part via their associated event structures. Using a model of verbs based on event-structure semantic features (e.g., Cause, Motion, Transfer), it was possible to successfully predict fMRI responses in language-selective brain regions as people engaged in real-time comprehension of naturalistic speech. Taken together, my research reveals that in both perception and language, the mind rapidly constructs a representation of the world that includes events with relational structure.
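
    The verb analysis in Chapter 4 is an encoding model: regress voxel responses onto event-structure features and evaluate predictions on held-out data. A minimal sketch on simulated data (the feature count, ridge penalty, and time-point framing are illustrative assumptions, not the study's actual pipeline):

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_predict

    # Hypothetical design matrix: each time point's verb coded on binary
    # event-structure features (e.g., Cause, Motion, Transfer); Y holds the
    # simulated voxel time courses the model tries to predict.
    rng = np.random.default_rng(2)
    F = rng.integers(0, 2, (300, 3)).astype(float)     # 300 time points x 3 features
    Y = F @ rng.standard_normal((3, 100)) + rng.standard_normal((300, 100))

    pred = cross_val_predict(Ridge(alpha=10.0), F, Y, cv=5)
    # Encoding accuracy: correlation of predicted vs. observed response per voxel.
    acc = [np.corrcoef(pred[:, v], Y[:, v])[0, 1] for v in range(Y.shape[1])]
    print(np.mean(acc))
    ```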