
    Perception of categories: from coding efficiency to reaction times

    Reaction times in perceptual tasks are the subject of many experimental and theoretical studies. With the neural decision-making process as their main focus, most of these works concern discrete (typically binary) choice tasks, implying the identification of the stimulus as an exemplar of a category. Here we address issues specific to the perception of categories (e.g. vowels, familiar faces, ...), making a clear distinction between identifying a category (an element of a discrete set) and estimating a continuous parameter (such as a direction). We exhibit a link between optimal Bayesian decoding and coding efficiency, the latter being measured by the mutual information between the discrete category set and the neural activity. We characterize the properties of the best estimator of the likelihood of the category when this estimator takes its inputs from a large population of stimulus-specific coding cells. Adopting the diffusion-to-bound approach to model the decision process, we analytically relate the bias and variance of the diffusion process underlying decision making to macroscopic quantities that are behaviorally measurable. A major consequence is the existence of a quantitative link between reaction times and discrimination accuracy. The resulting analytical expression for mean reaction times during an identification task accounts for empirical facts, both qualitatively (e.g. more time is needed to identify a category from a stimulus at the category boundary than from a stimulus lying well within a category) and quantitatively (when applied to published experimental data on phoneme identification tasks).
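
    As a rough illustration of the diffusion-to-bound account summarized above, the following sketch simulates a two-boundary drift-diffusion process in which stimuli near the category boundary provide a weaker drift than stimuli well within a category. All parameter values are hypothetical and not taken from the paper; the analytic mean decision time used for comparison is the standard expression for a symmetric two-bound diffusion, not the paper's own formula.

```python
import numpy as np

# Minimal sketch of a diffusion-to-bound (drift-diffusion) decision process.
# All parameter values are hypothetical, not taken from the paper.

def simulate_ddm(drift, noise=1.0, bound=1.0, dt=2e-3, max_t=5.0, n_trials=500, seed=0):
    """Simulate a two-boundary diffusion; return mean reaction time and accuracy."""
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound and t < max_t:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        correct.append(x >= bound)   # upper bound taken as the correct category
    return np.mean(rts), np.mean(correct)

# Stimuli near the category boundary yield a weaker drift (less evidence per unit
# time), hence longer mean reaction times and lower accuracy -- the qualitative
# link between reaction times and discrimination accuracy discussed above.
for label, drift in [("within category (strong drift)", 2.0),
                     ("near boundary (weak drift)    ", 0.4)]:
    rt, acc = simulate_ddm(drift)
    # Standard analytic mean decision time for a symmetric two-bound diffusion:
    # E[T] = (a / mu) * tanh(a * mu / sigma^2), here with a = 1, sigma = 1.
    analytic = (1.0 / drift) * np.tanh(drift)
    print(f"{label}: mean RT ~ {rt:.2f}s (analytic {analytic:.2f}s), accuracy {acc:.2f}")
```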

    Cognitive effort modulates connectivity between dorsal anterior cingulate cortex and task-relevant cortical areas

    Investment of cognitive effort is required in everyday life and has received ample attention in recent neurocognitive frameworks. The neural mechanism of effort investment is thought to be structured hierarchically, with the dorsal anterior cingulate cortex (dACC) at the highest level, recruiting task-specific upstream areas. In the current fMRI study, we tested whether dACC is generally active when effort demand is high across tasks with different stimuli, and whether connectivity between dACC and task-specific areas increases depending on the task requirements and the effort level at hand. For that purpose, a perceptual detection task was administered that required male and female human participants to detect either a face or a house in a noisy image. Effort demand was manipulated by adding little (low effort) or much (high effort) noise to the images. Results showed a network of dACC, anterior insula (AI), and intraparietal sulcus (IPS) to be more active when effort demand was high, independent of the performed task (face or house detection). Importantly, effort demand modulated functional connectivity between dACC and face-responsive or house-responsive perceptual areas, depending on the task at hand. This shows that dACC, AI, and IPS constitute a general effort-responsive network and suggests that the neural implementation of cognitive effort involves dACC-initiated sensitization of task-relevant areas.
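
    Effort-by-task modulation of connectivity of this kind is often quantified with a psychophysiological-interaction (PPI) style regressor; the sketch below illustrates the idea on synthetic data. The abstract does not specify the authors' actual pipeline, so the seed and target time series, regressors, and model here are illustrative assumptions only.

```python
import numpy as np

# Minimal sketch of a PPI-style test for effort-modulated connectivity.
# Synthetic data; not the paper's preprocessing, HRF modeling, or GLM software.

rng = np.random.default_rng(1)
n_vols = 300
effort = np.repeat([0, 1], n_vols // 2).astype(float)      # 0 = low, 1 = high effort
seed_ts = rng.standard_normal(n_vols)                       # dACC seed time series
# Target (e.g., face-responsive) region: coupling with the seed is stronger
# under high effort in this simulated example.
target_ts = (0.2 + 0.6 * effort) * seed_ts + 0.5 * rng.standard_normal(n_vols)

# Interaction regressor = demeaned seed time series x centered effort condition.
ppi = (seed_ts - seed_ts.mean()) * (effort - effort.mean())
X = np.column_stack([np.ones(n_vols), seed_ts, effort, ppi])
beta, *_ = np.linalg.lstsq(X, target_ts, rcond=None)
print(f"effort x seed interaction beta = {beta[3]:.2f}")    # > 0: coupling rises with effort
```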

    Evidence accumulation and the moment of recognition: dissociating perceptual recognition processes using fMRI

    Decision making can be conceptualized as the culmination of an integrative process in which evidence supporting different response options accumulates gradually over time. We used functional magnetic resonance imaging to investigate brain activity leading up to and during decisions about perceptual object identity. Pictures were revealed gradually and subjects signaled the time of recognition (TR) with a button press. We examined the time course of TR-dependent activity to determine how brain regions tracked the timing of recognition. In several occipital regions, activity increased primarily as stimulus information increased, suggesting a role in lower-level sensory processing. In inferior temporal, frontal, and parietal regions, a gradual buildup in activity peaking at TR suggested that these regions participated in the accumulation of evidence supporting object identity. In medial frontal cortex, anterior insula/frontal operculum, and thalamus, activity remained near baseline until TR, suggesting a relation to the moment of recognition or the decision itself. The findings dissociate neural processes that function in concert during perceptual recognition decisions.
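
    The three response profiles dissociated above can be summarized as idealized time courses aligned to the time of recognition (TR). The sketch below generates such hypothetical profiles; the shapes and timings are assumptions for illustration, not quantities estimated from the fMRI data.

```python
import numpy as np

# Idealized time-course profiles aligned to the time of recognition (TR).
# Illustrative assumptions only; not fitted to the study's data.

t = np.linspace(0.0, 12.0, 121)   # seconds from picture onset
tr = 8.0                          # hypothetical time of recognition

# 1) Sensory regions: activity follows the amount of revealed stimulus information.
sensory = t / t.max()

# 2) Accumulation regions: gradual buildup that peaks at TR, then falls off.
accumulation = np.where(t <= tr, t / tr, np.maximum(0.0, 1 - (t - tr) / 2.0))

# 3) "Moment of recognition" regions: near baseline until TR, then a brief transient.
transient = np.where(t < tr, 0.0, np.exp(-(t - tr)))

for name, y in [("sensory", sensory), ("accumulation", accumulation), ("transient", transient)]:
    print(f"{name:12s} peaks at t = {t[np.argmax(y)]:.1f}s (TR = {tr}s)")
```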

    Dynamics of trimming the content of face representations for categorization in the brain

    To understand visual cognition, it is imperative to determine when, how, and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event-related potential, related to stimulus encoding, and the parietal P300, involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion, and we report two main findings: (1) over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions to dynamically converge onto the centro-parietal region; (2) concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g. the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g. the wide-open eyes in 'fear'; the detailed mouth in 'happy'). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming a thorough encoding of features over the N170 to leave only the detailed information important for perceptual decisions over the P300.
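
    The following is a minimal sketch of a generic classification-image (reverse-correlation) computation of the kind referred to above, run on synthetic data. The diagnostic region, the simulated observer, and the simple correct-minus-incorrect average are illustrative assumptions and do not reproduce the authors' behavioral/EEG pipeline.

```python
import numpy as np

# Generic classification-image sketch on synthetic data: random sampling masks,
# a simulated observer, and a correct-minus-incorrect average of the masks.

rng = np.random.default_rng(2)
n_trials, size = 5000, 16                          # 16x16 toy "face" images
diagnostic = np.zeros((size, size))
diagnostic[4:7, 3:7] = 1.0                         # hypothetical diagnostic region (e.g., an eye)

masks = rng.random((n_trials, size, size)) > 0.5   # random pixel sampling per trial
# Simulated observer: responds correctly when enough diagnostic pixels are visible.
visible = (masks * diagnostic).sum(axis=(1, 2))
correct = visible > np.median(visible)

# Classification image: mean sampling mask on correct minus incorrect trials.
ci = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
peak = np.unravel_index(np.argmax(ci), ci.shape)
print("classification-image peak falls in the diagnostic region:", diagnostic[peak] == 1.0)
```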

    Memory accumulation mechanisms in human cortex are independent of motor intentions

    Previous studies on perceptual decision-making have often emphasized a tight link between decisions and motor intentions. Human decisions, however, also depend on memories or experiences that are not closely tied to specific motor responses. Recent neuroimaging findings have suggested that, during episodic retrieval, parietal activity reflects the accumulation of evidence for memory decisions. It is currently unknown, however, whether these evidence accumulation signals are functionally linked to signals for motor intentions coded in frontoparietal regions, and whether activity in the putative memory accumulator tracks the amount of evidence only for previous experience, as reflected in "old" reports, or for both old and new decisions, as reflected in the accuracy of memory judgments. Here, human participants used saccadic eye movements and hand-pointing movements to report recognition judgments on pictures defined by different degrees of evidence for old or new decisions. A set of cortical regions, including the middle intraparietal sulcus, showed a monotonic variation of the fMRI BOLD signal that scaled with perceived memory strength (older > newer), compatible with an asymmetrical memory accumulator. Another set, including the hippocampus and the angular gyrus, showed a nonmonotonic response profile tracking memory accuracy (higher > lower evidence), compatible with a symmetrical accumulator. In contrast, eye and hand effector-specific regions in frontoparietal cortex tracked motor intentions but were not modulated by the amount of evidence for the effector outcome. We conclude that item recognition decisions are supported by a combination of symmetrical and asymmetrical accumulation signals largely segregated from motor intentions.
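
    The contrast between an asymmetrical accumulator (monotonic in perceived oldness) and a symmetrical accumulator (tracking the strength of evidence for either decision) can be made concrete with a toy sketch; the memory-strength values below are hypothetical and the signals are in arbitrary units.

```python
import numpy as np

# Toy illustration of the two predicted response profiles contrasted above.
# Hypothetical memory-strength values; arbitrary signal units.

strength = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # -1 = strong "new", +1 = strong "old"

asymmetric = strength            # monotonic: scales with perceived oldness (older > newer)
symmetric = np.abs(strength)     # nonmonotonic: scales with evidence for either decision

for s, a, m in zip(strength, asymmetric, symmetric):
    print(f"strength {s:+.1f}  asymmetric signal {a:+.1f}  symmetric signal {m:.1f}")
```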

    Language bias in visually driven decisions: Computational neurophysiological mechanisms


    How active perception and attractor dynamics shape perceptual categorization: A computational model

    We propose a computational model of perceptual categorization that fuses elements of grounded and sensorimotor theories of cognition with dynamic models of decision-making. We assume that category information consists in anticipated patterns of agent–environment interactions that can be elicited through overt or covert (simulated) eye movements, object manipulation, etc. This information is first encoded when category information is acquired and is then re-enacted during perceptual categorization. Perceptual categorization consists in a dynamic competition between attractors that encode the sensorimotor patterns typical of each category; action-prediction success counts as "evidence" for a given category and contributes to falling into the corresponding attractor. The evidence accumulation process is guided by an active perception loop, and the active exploration of objects (e.g., visual exploration) aims at eliciting the expected sensorimotor patterns that count as evidence for the object category. We present a computational model incorporating these elements and describing action prediction, active perception, and attractor dynamics as key elements of perceptual categorization. We test the model in three simulated perceptual categorization tasks, and we discuss its relevance for grounded and sensorimotor theories of cognition.
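
    A minimal sketch, in the spirit of the model described above, of an attractor competition in which action-prediction success during simulated fixations supplies the evidence. The toy environment, parameters, and update rule are assumptions for illustration and are not the authors' implementation.

```python
import numpy as np

# Attractor competition driven by action-prediction success (illustrative only).

rng = np.random.default_rng(3)
true_category = 0                       # hidden category of the inspected object
p_success = np.array([0.8, 0.4])        # prob. that each category's sensorimotor
                                        # prediction matches the next observation
x = np.zeros(2)                         # activation of the two category attractors
leak, inhibit, gain, threshold = 0.05, 0.3, 0.5, 3.0

for fixation in range(1, 501):
    # Active perception: each simulated eye movement yields one observation;
    # prediction success counts as evidence for the corresponding category.
    evidence = (rng.random(2) < p_success).astype(float)
    x = np.maximum(x + gain * evidence - leak * x - inhibit * x[::-1], 0.0)
    if x.max() >= threshold:            # one attractor wins the competition
        break

print(f"categorized as {int(np.argmax(x))} (true category {true_category}) after {fixation} fixations")
```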