    A Neural Representation of Prior Information during Perceptual Inference

    Perceptual inference is biased by foreknowledge about what is probable or possible. How prior expectations are neurally represented during visual perception, however, remains unknown. We used functional magnetic resonance imaging to measure brain activity in humans judging simple visual stimuli. Perceptual decisions were either biased in favor of a single alternative (A/∼A decisions) or taken without bias toward either choice (A/B decisions). Extrastriate and anterior temporal lobe regions were more active during A/∼A than A/B decisions, suggesting multiple representations of prior expectations within the visual hierarchy. Forward connectivity was increased when expected and observed perception diverged (“prediction error” signals), whereas prior expectations fed backward from higher to lower regions. Finally, the coincidence between expected and observed perception activated orbital prefrontal regions, perhaps reflecting the reinforcement of prior expectations. These data support computational and quantitative models proposing that a visual percept emerges from converging bottom-up and top-down signals.
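
    The “converging bottom-up and top-down signals” account referenced here is usually formalized as predictive coding. The sketch below is a generic, minimal version of that idea, not the model fitted in the paper; the learning rate, noise level, and trial count are invented for illustration.

```python
import numpy as np

# Minimal predictive-coding sketch: a higher-level belief about a scalar
# stimulus feature sends a top-down prediction; the bottom-up input is
# compared against it, and the mismatch ("prediction error") updates the
# belief. Illustrative values only, not taken from the paper.

rng = np.random.default_rng(0)
true_feature = 1.0          # feature value actually presented
belief = 0.0                # prior expectation (higher-level representation)
learning_rate = 0.1

for trial in range(50):
    bottom_up = true_feature + rng.normal(scale=0.5)  # noisy sensory input
    prediction_error = bottom_up - belief             # forward error signal
    belief += learning_rate * prediction_error        # top-down belief update

print(f"belief after 50 trials: {belief:.2f} (true value {true_feature})")
```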

    Building Bridges between Perceptual and Economic Decision-Making: Neural and Computational Mechanisms

    Investigation into the neural and computational bases of decision-making has proceeded in two parallel but distinct streams. Perceptual decision-making (PDM) is concerned with how observers detect, discriminate, and categorize noisy sensory information. Economic decision-making (EDM) explores how options are selected on the basis of their reinforcement history. Traditionally, the sub-fields of PDM and EDM have employed different paradigms, proposed different mechanistic models, explored different brain regions, and disagreed about whether decisions approach optimality. Nevertheless, we argue that there is a common framework for understanding decisions made in both tasks, under which an agent has to combine sensory information (what is the stimulus) with value information (what is it worth). We review computational models of the decision process typically used in PDM, based around the idea that decisions involve a serial integration of evidence, and assess their applicability to decisions between goods and gambles. Subsequently, we consider the contribution of three key brain regions – the parietal cortex, the basal ganglia, and the orbitofrontal cortex (OFC) – to PDM and EDM, with a focus on the mechanisms by which sensory and reward information are integrated during choice. We find that although the parietal cortex is often implicated in the integration of sensory evidence, there is also evidence for its role in encoding the expected value of a decision. Similarly, although much research has emphasized the role of the striatum and OFC in value-guided choices, they may play an important role in the categorization of perceptual information. In conclusion, we consider how findings from the two fields might be brought together, in order to move toward a general framework for understanding decision-making in humans and other primates.
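
    The serial evidence-integration models reviewed here are typically variants of the drift-diffusion model. A minimal sketch of that idea follows, assuming invented parameter values rather than anything fitted in the review.

```python
import numpy as np

# Minimal drift-diffusion sketch: noisy momentary evidence is summed
# serially until the running total hits an upper or lower bound, yielding
# a choice and a decision time. All parameter values are illustrative.

def diffusion_trial(drift=0.1, noise=1.0, bound=10.0, rng=None):
    rng = rng or np.random.default_rng()
    evidence, t = 0.0, 0
    while abs(evidence) < bound:
        evidence += drift + rng.normal(scale=noise)  # one evidence sample
        t += 1
    return ("A" if evidence > 0 else "B"), t

rng = np.random.default_rng(1)
trials = [diffusion_trial(rng=rng) for _ in range(1000)]
print(f"P(choose A) = {np.mean([c == 'A' for c, _ in trials]):.2f}")
print(f"mean decision time = {np.mean([t for _, t in trials]):.0f} samples")
```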

    Economic Value Biases Uncertain Perceptual Choices in the Parietal and Prefrontal Cortices

    An observer detecting a noisy sensory signal is biased by the costs and benefits associated with its presence or absence. When these costs and benefits are asymmetric, sensory and economic information must be integrated to inform the final choice. However, it remains unknown how this information is combined at the neural or computational levels. To address this question, we asked healthy human observers to judge the presence or absence of a noisy sensory signal under economic conditions that favored yes responses (liberal blocks), no responses (conservative blocks), or neither response (neutral blocks). Economic information biased fast choices more than slow choices, suggesting that value and sensory information are integrated early in the decision epoch. More formal simulation analyses using an Ornstein–Uhlenbeck process demonstrated that the influence of economic information was best captured by shifting the origin of evidence accumulation toward the more valuable bound. We then used the computational model to generate trial-by-trial estimates of decision-related evidence that were based on combined sensory and economic information (the decision variable or DV), and regressed these against fMRI activity recorded whilst participants performed the task. Extrastriate visual regions responded to the level of sensory input (momentary evidence), but fMRI signals in the parietal and prefrontal cortices responded to the decision variable. These findings support recent single-neuron data suggesting that economic information biases decision-related signals in higher cortical regions.
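
    The abstract names the winning model explicitly: an Ornstein–Uhlenbeck accumulator whose starting point is shifted toward the more valuable bound. A minimal simulation of that scheme is sketched below; the drift, leak, noise, and bound values are invented, not the paper's fitted parameters. Because the leak pulls the accumulator back toward zero, the starting-point offset decays over time, which is one way to produce the reported pattern of value biasing fast choices more than slow ones.

```python
import numpy as np

# Ornstein-Uhlenbeck accumulator with its starting point shifted toward
# the more valuable ("yes") bound. The leak makes the offset decay over
# time, so the value bias is strongest for fast choices. Values invented.

def ou_trial(start, drift=0.0, leak=-0.05, noise=1.0, bound=10.0, rng=None):
    rng = rng or np.random.default_rng()
    evidence, t = start, 0
    while abs(evidence) < bound:
        evidence += drift + leak * evidence + rng.normal(scale=noise)
        t += 1
    return evidence > 0, t  # (chose "yes", decision time)

rng = np.random.default_rng(2)
for label, start in [("neutral", 0.0), ("liberal", 3.0)]:
    trials = [ou_trial(start, rng=rng) for _ in range(2000)]
    yes = np.array([c for c, _ in trials])
    rts = np.array([t for _, t in trials])
    fast = rts <= np.median(rts)
    print(f"{label}: P(yes|fast) = {yes[fast].mean():.2f}, "
          f"P(yes|slow) = {yes[~fast].mean():.2f}")
```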

    If deep learning is the answer, then what is the question?

    Neuroscience research is undergoing a minor revolution. Recent advances in machine learning and artificial intelligence (AI) research have opened up new ways of thinking about neural computation. Many researchers are excited by the possibility that deep neural networks may offer theories of perception, cognition and action for biological brains. This perspective has the potential to radically reshape our approach to understanding neural systems, because the computations performed by deep networks are learned from experience, not endowed by the researcher. If so, how can neuroscientists use deep networks to model and understand biological brains? What is the outlook for neuroscientists who seek to characterise computations or neural codes, or who wish to understand perception, attention, memory, and executive functions? In this Perspective, our goal is to offer a roadmap for systems neuroscience research in the age of deep learning. We discuss the conceptual and methodological challenges of comparing behaviour, learning dynamics, and neural representation in artificial and biological systems. We highlight new research questions that have emerged for neuroscience as a direct consequence of recent advances in machine learning.

    How do (perceptual) distracters distract?

    When a target stimulus occurs in the presence of distracters, decisions are less accurate. But how exactly do distracters affect choices? Here, we explored this question using measurement of human behaviour, psychophysical reverse correlation and computational modelling. We contrasted two models: one in which targets and distracters had independent influence on choices (independent model) and one in which distracters modulated choices in a way that depended on their similarity to the target (interaction model). Across three experiments, participants were asked to make fine orientation judgments about the tilt of a target grating presented adjacent to an irrelevant distracter. We found strong evidence for the interaction model, in that decisions were more sensitive when target and distracter were consistent relative to when they were inconsistent. This consistency bias occurred in the frame of reference of the decision, that is, it operated on decision values rather than on sensory signals, and surprisingly, it was independent of spatial attention. A normalization framework, where target features are normalized by the expectation and variability of the local context, successfully captures the observed pattern of results.
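
    The normalization framework mentioned in the final sentence can be sketched schematically as z-scoring the target feature against the mean and variability of a local feature pool. This is a generic illustration, not the paper's fitted model; the three-item feature pool and the prior expectation of zero tilt are assumptions made here for concreteness.

```python
import numpy as np

# Schematic of the normalization idea: the evidence read out from the
# target is referenced to the mean, and scaled by the variability, of the
# local context (here, a pool containing the target, the distracter, and
# a prior expectation of zero tilt). Generic z-scoring sketch only.

def normalized_evidence(target_tilt, distracter_tilt, prior_tilt=0.0):
    pool = np.array([target_tilt, distracter_tilt, prior_tilt])
    return (target_tilt - pool.mean()) / (pool.std() + 1e-6)

# Compare the normalized read-out under consistent vs. inconsistent
# distracters (same vs. opposite tilt sign relative to the target):
print(normalized_evidence(target_tilt=5.0, distracter_tilt=4.0))   # consistent
print(normalized_evidence(target_tilt=5.0, distracter_tilt=-4.0))  # inconsistent
```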

    Orthogonal representations for robust context-dependent task performance in brains and neural networks

    How do neural populations code for multiple, potentially conflicting tasks? Here we used computational simulations involving neural networks to define “lazy” and “rich” coding solutions to this context-dependent decision-making problem, which trade off learning speed for robustness. During lazy learning the input dimensionality is expanded by random projections to the network hidden layer, whereas in rich learning hidden units acquire structured representations that privilege relevant over irrelevant features. For context-dependent decision-making, one rich solution is to project task representations onto low-dimensional and orthogonal manifolds. Using behavioral testing and neuroimaging in humans and analysis of neural signals from macaque prefrontal cortex, we report evidence for neural coding patterns in biological brains whose dimensionality and neural geometry are consistent with the rich learning regime.
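
    A minimal sketch of the two regimes as described here: lazy coding expands the input through fixed random projections, while a rich solution places each task on its own orthogonal hidden axis. The layer sizes, the tanh nonlinearity, and the use of the participation ratio as a dimensionality measure are illustrative assumptions, not the paper's exact analysis.

```python
import numpy as np

# Contrast "lazy" and "rich" coding of a two-feature input. Lazy: fixed
# random expansion into a 100-unit hidden layer. Rich: each task feature
# is mapped onto its own orthogonal hidden axis. Sizes are illustrative.

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))        # 2 input features (one per context)

# Lazy coding: random projection plus nonlinearity
W_random = rng.normal(size=(2, 100))
h_lazy = np.tanh(x @ W_random)

# Rich coding: hand-built orthogonal task axes in the same hidden space
axes = np.zeros((2, 100))
axes[0, 0] = 1.0                     # task-1 feature -> hidden axis 1
axes[1, 1] = 1.0                     # task-2 feature -> hidden axis 2
h_rich = x @ axes

def participation_ratio(h):
    # standard measure of the effective dimensionality of a representation
    eig = np.linalg.eigvalsh(np.cov(h.T))
    return eig.sum() ** 2 / (eig ** 2).sum()

print(f"lazy: {participation_ratio(h_lazy):.1f} effective dims")
print(f"rich: {participation_ratio(h_rich):.1f} effective dims")
```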

    Neural knowledge assembly in humans and neural networks

    Human understanding of the world can change rapidly when new information comes to light, such as when a plot twist occurs in a work of fiction. This flexible "knowledge assembly" requires few-shot reorganization of neural codes for relations among objects and events. However, existing computational theories are largely silent about how this could occur. Here, participants learned a transitive ordering among novel objects within two distinct contexts before exposure to new knowledge that revealed how they were linked. Blood-oxygen-level-dependent (BOLD) signals in dorsal frontoparietal cortical areas revealed that objects were rapidly and dramatically rearranged on the neural manifold after minimal exposure to linking information. We then adapted online stochastic gradient descent to permit similar rapid knowledge assembly in a neural network model.
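
    The abstract does not detail how online stochastic gradient descent was adapted, so the sketch below shows only the generic online-SGD update it builds on: parameters are adjusted after every single example, which is what makes rapid, few-shot reorganization possible in principle. The linear-regression task is purely illustrative.

```python
import numpy as np

# Generic online stochastic gradient descent: the model is updated after
# each individual example rather than after a full pass through the data.
# Textbook update only, not the paper's specific adaptation.

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])   # ground-truth weights to recover
w = np.zeros(3)
lr = 0.05

for step in range(500):
    x = rng.normal(size=3)            # one example at a time
    y = true_w @ x                    # target output for this example
    error = w @ x - y                 # prediction error on this example
    w -= lr * error * x               # immediate gradient step

print(np.round(w, 2))                 # approaches true_w
```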

    Are task representations gated in macaque prefrontal cortex?

    A recent paper (Flesch et al., 2022) describes behavioural and neural data suggesting that task representations are gated in the prefrontal cortex in both humans and macaques. This short note proposes an alternative explanation for the reported results from the macaque data.