    Independent coding of absolute duration and distance magnitudes in the prefrontal cortex

    The estimation of space and time can interfere with each other, and neuroimaging studies have shown overlapping activation in the parietal and prefrontal cortical areas. We used duration and distance discrimination tasks to determine whether space and time share resources in prefrontal cortex (PF) neurons. Monkeys were required to report which of two sequentially presented stimuli, a red circle or a blue square, was longer (duration task) or farther (distance task). In a previous study, we showed that relative duration and distance are coded by different populations of neurons and that the only common representation is related to goal coding. Here, we examined the coding of absolute duration and distance. Our results support a model of independent coding of absolute duration and distance metrics by demonstrating that not only relative magnitude but also absolute magnitude is independently coded in the PF.

    Neural representations of food: Disentangling the unprocessed and processed dimension

    Food is fuel for life. Our feeding behaviors are guided by both homeostatic and hedonic (reward-based) mechanisms. By simply inspecting visually presented food stimuli, our brain extracts information such as edibility or caloric content, as described by meta-analytic results. However, whether this ability extends to the discrimination between unprocessed and processed foods is to date unknown. The aim of the present thesis is therefore to understand whether this dimension, which has been hypothesized to have a central role in human evolution (the cooking hypothesis), has a brain signature and how it affects food preferences and choices. These aspects are introduced in Chapter 1 of my thesis, while in the following chapters (Chapters 2-4) I report original studies in which I used different techniques. In Study 1, explicit and implicit evaluations of foods were investigated using explicit ratings and the Implicit Association Test (IAT), in order to explore whether evaluations differed by food type (unprocessed vs. processed) (Chapter 2). The results of Study 1 showed that, at both the explicit and the implicit level, normal-weight participants held different evaluations of the stimuli depending on food type. Participants' hunger level, BMI, and gender modulated their evaluations, but only at the explicit level; interestingly, a strong influence of participants' dietary habits was found at the implicit level. Using electroencephalography (EEG), in Study 2 I investigated whether the difference between unprocessed and processed foods has a detectable neural signature and whether the brain rapidly discriminates between these food types as an adaptive behavior (Chapter 3). The spatio-temporal dynamics of this distinction in normal-weight individuals showed that differences in amplitude emerged as early as 130 ms post-stimulus onset, a time window in which other within-category discriminations of food stimuli (i.e., caloric content), as well as of other biologically relevant stimuli such as faces or animals, have been observed. This study is the first to show distinct brain responses to unprocessed and processed foods in a simple food vs. non-food categorization task. In Study 3, I used functional magnetic resonance imaging (fMRI) to disentangle the responses to different foods in brain regions that respond more strongly to foods than to non-edible objects (Chapter 4). The results show how different brain regions responded to unprocessed and processed foods while normal-weight individuals performed a simple one-back task. In the final chapter, I discuss the main findings of my studies in light of the extant literature, with particular emphasis on the processed-unprocessed dimension (Chapter 5).

    The Effect of Learning on the Function of Monkey Extrastriate Visual Cortex

    One of the most remarkable capabilities of the adult brain is its ability to learn and continuously adapt to an ever-changing environment. While many studies have documented how learning improves the perception and identification of visual stimuli, relatively little is known about how it modifies the underlying neural mechanisms. We trained monkeys to identify natural images that were degraded by interpolation with visual noise. We found that learning led to an improvement in monkeys' ability to identify these indeterminate visual stimuli. We link this behavioral improvement to a learning-dependent increase in the amount of information communicated by V4 neurons. This increase was mediated by a specific enhancement in neural activity. Our results reveal a mechanism by which learning increases the amount of information that V4 neurons are able to extract from the visual environment. This suggests that V4 plays a key role in resolving indeterminate visual inputs by coordinated interaction between bottom-up and top-down processing streams.
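
    To make the key quantity concrete, here is a minimal sketch of one standard way to measure the "information communicated" by a neuron: the mutual information between stimulus identity and binned spike counts, compared before and after training. The simulated data, bin count, and estimator are illustrative assumptions, not the study's actual analysis (Python):

    import numpy as np

    def spike_count_information(counts, stimuli, n_bins=5):
        """Mutual information (bits) between stimulus identity and binned spike counts."""
        edges = np.histogram_bin_edges(counts, bins=n_bins)
        binned = np.clip(np.digitize(counts, edges[1:-1]), 0, n_bins - 1)
        mi = 0.0
        for b in range(n_bins):
            pb = np.mean(binned == b)
            if pb == 0:
                continue
            for s in np.unique(stimuli):
                ps = np.mean(stimuli == s)
                pj = np.mean((binned == b) & (stimuli == s))
                if pj > 0:
                    mi += pj * np.log2(pj / (pb * ps))
        return mi

    rng = np.random.default_rng(0)
    stimuli = rng.integers(0, 4, size=500)         # four hypothetical image identities
    pre = rng.poisson(5.0 + 1.0 * stimuli)         # weak stimulus dependence before learning
    post = rng.poisson(5.0 + 3.0 * stimuli)        # stronger dependence after learning
    print(spike_count_information(pre, stimuli))   # lower value
    print(spike_count_information(post, stimuli))  # higher value

    On simulated responses like these, the post-learning counts carry more bits about the stimulus, which is the sense in which learning can increase the information a neuron conveys.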

    The reentry hypothesis: The putative interaction of the frontal eye field, ventrolateral prefrontal cortex, and areas V4, IT for attention and eye movement

    Attention is known to play a key role in perception, including action selection, object recognition and memory. Despite findings revealing competitive interactions among cell populations, attention remains difficult to explain. The central purpose of this paper is to link up a large number of findings in a single computational approach. Our simulation results suggest that attention can be well explained on a network level involving many areas of the brain. We argue that attention is an emergent phenomenon that arises from reentry and competitive interactions. We hypothesize that guided visual search requires the use of an object-specific template in prefrontal cortex to sensitize V4 and IT cells whose preferred stimuli match the target template. This induces a feature-specific bias and provides guidance for eye movements. Prior to an eye movement, a spatially organized reentry from oculomotor centers, specifically the movement cells of the frontal eye field, occurs and modulates the gain of V4 and IT cells. The processes involved are elucidated by quantitatively comparing the time course of simulated neural activity with experimental data. Using visual search tasks as an example, we provide clear and empirically testable predictions for the participation of IT, V4 and the frontal eye field in attention. Finally, we explain a possible physiological mechanism that can lead to non-flat search slopes as the result of a slow, parallel discrimination process.
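
    As a toy illustration of the two reentrant signals the model posits, the sketch below applies a feature-specific bias (from a prefrontal target template) and a location-specific gain boost (from FEF movement cells, just before the saccade) multiplicatively to a map of simulated V4 responses. All sizes, gains, and tunings are assumptions for exposition, not the authors' simulation (Python):

    import numpy as np

    rng = np.random.default_rng(1)
    n_locations, n_features = 8, 4
    v4 = rng.random((n_locations, n_features))  # bottom-up V4 responses

    template = np.zeros(n_features)
    template[2] = 1.0                           # prefrontal template for the target feature

    feature_bias = 1.0 + 0.5 * template         # feature-specific sensitization...
    biased = v4 * feature_bias                  # ...applied at every location (guides search)

    saccade_goal = 5                            # location selected by FEF movement cells
    spatial_gain = np.ones(n_locations)
    spatial_gain[saccade_goal] = 1.8            # gain boost prior to the eye movement
    modulated = biased * spatial_gain[:, None]

    # The location whose template-matched response wins the competition:
    print(int(np.argmax(modulated[:, 2])))

    The point of the sketch is only the order of operations: a feature bias that guides search across the whole visual field, followed by a spatially organized reentry signal tied to the upcoming eye movement.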

    Visual scanning patterns and executive function in relation to facial emotion recognition in aging

    OBJECTIVE: The ability to perceive facial emotion varies with age. Relative to younger adults (YA), older adults (OA) are less accurate at identifying fear, anger, and sadness, and more accurate at identifying disgust. Because different emotions are conveyed by different parts of the face, changes in visual scanning patterns may account for age-related variability. We investigated the relation between scanning patterns and recognition of facial emotions. Additionally, as frontal-lobe changes with age may affect scanning patterns and emotion recognition, we examined correlations between scanning parameters and performance on executive function tests. METHODS: We recorded eye movements from 16 OA (mean age 68.9) and 16 YA (mean age 19.2) while they categorized facial expressions and non-face control images (landscapes), and administered standard tests of executive function. RESULTS: OA were less accurate than YA at identifying fear (p < .05, r = .44) and more accurate at identifying disgust (p < .05, r = .39). OA fixated less than YA on the top half of the face for disgust, fearful, happy, neutral, and sad faces (p values < .05, r values ≥ .38), whereas there was no group difference for landscapes. For OA, executive function was correlated with recognition of sad expressions and with scanning patterns for fearful, sad, and surprised expressions. CONCLUSION: We report significant age-related differences in visual scanning that are specific to faces. The observed relation between scanning patterns and executive function supports the hypothesis that frontal-lobe changes with age may underlie some changes in emotion recognition.

    Interaction of numerosity and time in prefrontal and parietal cortex

    It has been proposed that numerical and temporal information are processed by partially overlapping magnitude systems. Interactions across different magnitude domains could occur both at the level of perception and decision-making. However, their neural correlates have been elusive. Here, using functional magnetic resonance imaging in humans, we show that the right intraparietal cortex (IPC) and inferior frontal gyrus (IFG) are jointly activated by duration and numerosity discrimination tasks, with a congruency effect in the right IFG. To determine whether the IPC and the IFG are involved in response conflict (or facilitation) or in the modulation of the subjective passage of time by numerical information, we examined their functional roles using transcranial magnetic stimulation (TMS) and two different numerosity-time interaction tasks: duration discrimination and time reproduction. Our results show that TMS of the right IFG impairs categorical duration discrimination, whereas TMS of the right IPC modulates the degree of influence of numerosity on time perception and impairs precise time estimation. These results indicate that the right IFG is specifically involved at the categorical decision stage, whereas the spillover of numerosity information into the perception of time occurs within the IPC. Together, our findings suggest a two-stage model of numerosity-time interactions whereby the interaction at the perceptual level occurs within the parietal region and the interaction at categorical decisions takes place in the prefrontal cortex.

    Recognizing Speech in a Novel Accent: The Motor Theory of Speech Perception Reframed

    The motor theory of speech perception holds that we perceive the speech of another in terms of a motor representation of that speech. However, when we have learned to recognize a foreign accent, it seems plausible that recognition of a word rarely involves reconstruction of the speaker's, rather than the listener's, speech gestures. To better assess the motor theory and this observation, we proceed in three stages. Part 1 places the motor theory of speech perception in a larger framework based on our earlier models of the adaptive formation of mirror neurons for grasping, and on viewing extensions of that mirror system as part of a larger system for neuro-linguistic processing, augmented by the present consideration of recognizing speech in a novel accent. Part 2 then offers a novel computational model of how a listener comes to understand the speech of someone speaking the listener's native language with a foreign accent. The core tenet of the model is that the listener uses hypotheses about the word the speaker is currently uttering to update probabilities linking the sound produced by the speaker to phonemes in the native language repertoire of the listener. This, on average, improves the recognition of later words. The model is neutral regarding the nature of the representations it uses (motor vs. auditory). It serves as a reference point for the discussion in Part 3, which proposes a dual-stream neuro-linguistic architecture to revisit claims for and against the motor theory of speech perception and the relevance of mirror neurons, and extracts some implications for the reframing of the motor theory.
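
    The core tenet lends itself to a compact sketch: maintain a matrix of probabilities linking accented sounds to native phonemes, and nudge it toward whatever phoneme sequence the current word hypothesis implies. Variable names, the convex update rule, and the sizes below are assumptions for illustration; the paper's actual formulation may differ (Python):

    import numpy as np

    N_SOUNDS, N_PHONEMES = 50, 40

    # P[s, p]: probability that accented sound s corresponds to native phoneme p,
    # initialized to a uniform (maximally uncertain) mapping.
    P = np.full((N_SOUNDS, N_PHONEMES), 1.0 / N_PHONEMES)

    def update(P, sounds, hypothesized_phonemes, lr=0.1):
        """Nudge each heard sound toward the phoneme the word hypothesis implies."""
        for s, p in zip(sounds, hypothesized_phonemes):
            target = np.zeros(P.shape[1])
            target[p] = 1.0
            P[s] = (1 - lr) * P[s] + lr * target  # convex update keeps each row normalized
        return P

    def recognize(P, sounds):
        """Most probable native phoneme for each accented sound."""
        return [int(np.argmax(P[s])) for s in sounds]

    # After settling on a hypothesis about the word just heard, update the mapping;
    # on average this improves recognition of later words, as the model claims.
    P = update(P, sounds=[3, 17, 9], hypothesized_phonemes=[5, 12, 5])
    print(recognize(P, [3, 17, 9]))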

    Task-Specific Codes for Face Recognition: How they Shape the Neural Representation of Features for Detection and Individuation

    The variety of ways in which faces are categorized makes face recognition challenging for both synthetic and biological vision systems. Here we focus on two face processing tasks, detection and individuation, and explore whether differences in task demands lead to differences both in the features most effective for automatic recognition and in the featural codes recruited by neural processing. Our study appeals to a computational framework characterizing the features representing object categories as sets of overlapping image fragments. Within this framework, we assess the extent to which task-relevant information differs across image fragments. Based on objective differences we find among task-specific representations, we test the sensitivity of the human visual system to these different face descriptions independently of one another. Both behavior and functional magnetic resonance imaging reveal effects elicited by objective task-specific levels of information. Behaviorally, recognition performance with image fragments improves with increasing task-specific information carried by different face fragments. Neurally, this sensitivity to the two tasks manifests as differential localization of neural responses across the ventral visual pathway. Fragments diagnostic for detection evoke larger neural responses than non-diagnostic ones in the right posterior fusiform gyrus and bilaterally in the inferior occipital gyrus. In contrast, fragments diagnostic for individuation evoke larger responses than non-diagnostic ones in the anterior inferior temporal gyrus. Finally, for individuation only, pattern analysis reveals sensitivity to task-specific information within the right "fusiform face area". Our results demonstrate: 1) information diagnostic for face detection and individuation is roughly separable; 2) the human visual system is independently sensitive to both types of information; and 3) neural responses differ according to the type of task-relevant information considered. More generally, these findings provide evidence for the computational utility and the neural validity of fragment-based visual representation and recognition.
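
    The framework's key computation can be sketched briefly: score each candidate fragment by the mutual information between its presence in an image and the task label, separately for detection (face vs. non-face) and individuation (identity). The detection matrix and labels below are simulated placeholders, not the study's stimuli (Python):

    import numpy as np

    def fragment_mi(present, labels):
        """Mutual information (bits) between fragment presence and task labels."""
        present = np.asarray(present, dtype=bool)
        labels = np.asarray(labels)
        mi = 0.0
        for p in (True, False):
            pp = np.mean(present == p)
            if pp == 0:
                continue
            for c in np.unique(labels):
                pc = np.mean(labels == c)
                pj = np.mean((present == p) & (labels == c))
                if pj > 0:
                    mi += pj * np.log2(pj / (pp * pc))
        return mi

    rng = np.random.default_rng(2)
    present = rng.random((400, 50)) < 0.3  # stand-in: fragment f detected in image i
    faces = rng.integers(0, 2, size=400)   # detection labels: face vs. non-face
    identity = rng.integers(0, 10, size=400)  # individuation labels: ten identities

    detect_scores = [fragment_mi(present[:, f], faces) for f in range(50)]
    ident_scores = [fragment_mi(present[:, f], identity) for f in range(50)]

    # Fragments ranked high for one task need not rank high for the other,
    # which is the sense in which the two codes can be "roughly separable".
    print(int(np.argmax(detect_scores)), int(np.argmax(ident_scores)))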

    Temporal isolation of neural processes underlying face preference decisions

    Decisions about whether we like someone are often made so rapidly from first impressions that it is difficult to examine the engagement of neural structures at specific points in time. Here, we used a temporally extended decision-making paradigm to examine brain activation with functional MRI (fMRI) at sequential stages of the decision-making process. Activity in reward-related brain structures, the nucleus accumbens (NAC) and orbitofrontal cortex (OFC), was found to occur at temporally dissociable phases while subjects decided which of two unfamiliar faces they preferred. Increases in activation in the OFC occurred late in the trial, consistent with a role for this area in computing the decision of which face to choose. Signal increases in the NAC occurred early in the trial, consistent with a role for this area in initial preference formation. Moreover, early signal increases in the NAC also occurred while subjects performed a control task (judging face roundness) when these data were analyzed on the basis of which of those faces were subsequently chosen as preferred in a later task. The findings support a model in which rapid, automatic engagement of the NAC conveys a preference signal to the OFC, which in turn is used to guide choice.

    Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics

    This position paper proposes that the study of embodied cognitive agents, such as humanoid robots, can advance our understanding of the cognitive development of complex sensorimotor, linguistic, and social learning skills. This in turn will benefit the design of cognitive robots capable of learning to handle and manipulate objects and tools autonomously, to cooperate and communicate with other robots and humans, and to adapt their abilities to changing internal, environmental, and social conditions. Four key areas of research challenges are discussed, specifically for the issues related to the understanding of: 1) how agents learn and represent compositional actions; 2) how agents learn and represent compositional lexica; 3) the dynamics of social interaction and learning; and 4) how compositional action and language representations are integrated to bootstrap the cognitive system. The review of specific issues and progress in these areas is then translated into a practical roadmap based on a series of milestones. These milestones provide a possible set of cognitive robotics goals and test scenarios, thus acting as a research roadmap for future work on cognitive developmental robotics.