7 research outputs found

    Human visual and parietal cortex encode visual choices independent of motor plans

    No full text
    Perceptual decision-making entails the transformation of graded sensory signals into categorical judgments. Often, there is a direct mapping between these judgments and specific motor responses. However, when stimulus-response mappings are fixed, neural activity underlying decision-making cannot be separated from neural activity reflecting motor planning. Several human neuroimaging studies have reported changes in brain activity associated with perceptual decisions. Nevertheless, to date it has remained unknown where and how specific choices are encoded in the human brain when motor planning is decoupled from the decision process. We addressed this question by having subjects judge the direction of motion of dynamic random dot patterns at various levels of motion strength while measuring their brain activity with fMRI. We used multivariate decoding analyses to search the whole brain for patterns of brain activity encoding subjects' choices. To decouple the decision process from motor planning, subjects were informed about the required motor response only after stimulus presentation. Patterns of fMRI signals in early visual and inferior parietal cortex predicted subjects' perceptual choices irrespective of motor planning. This was true across several levels of motion strength and even in the absence of any coherent stimulus motion. We also found that the cortical distribution of choice-selective brain signals depended on stimulus strength: while visual cortex carried most choice-selective information for strong motion, information in parietal cortex decreased with increasing motion coherence. These results demonstrate that human visual and inferior parietal cortex carry information about the visual decision in a more abstract format than can be explained by simple motor intentions. Both brain regions may be differentially involved in perceptual decision-making in the face of strong and weak sensory evidence.
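
    The multivariate decoding described here can be approximated with a cross-validated linear classifier applied to trial-wise voxel patterns. The sketch below is illustrative only: it uses simulated data rather than fMRI recordings, a simple 5-fold split rather than the study's whole-brain searchlight procedure, and all variable names (n_trials, n_voxels) are assumptions.

```python
# Hedged sketch: decoding binary perceptual choices from voxel patterns with a
# cross-validated linear classifier. Data are simulated stand-ins for fMRI.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 500

# Simulated single-trial activation patterns (trials x voxels) and choices
choices = rng.integers(0, 2, size=n_trials)            # 0 = "left", 1 = "right"
signal = np.outer(choices - 0.5, rng.normal(size=n_voxels)) * 0.3
patterns = signal + rng.normal(size=(n_trials, n_voxels))

# Linear decoder, evaluated with stratified 5-fold cross-validation
decoder = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
accuracy = cross_val_score(decoder, patterns, choices, cv=cv)

print(f"Mean decoding accuracy: {accuracy.mean():.2f} (chance = 0.50)")
```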

    From photos to sketches - how humans and deep neural networks process objects across different levels of visual abstraction

    No full text
    Line drawings convey meaning with just a few strokes. Despite strong simplifications, humans can recognize objects depicted in such abstracted images without effort. To what degree do deep convolutional neural networks (CNNs) mirror this human ability to generalize to abstracted object images? While CNNs trained on natural images have been shown to exhibit poor classification performance on drawings, other work has demonstrated highly similar latent representations in the networks for abstracted and natural images. Here, we address these seemingly conflicting findings by analyzing the activation patterns of a CNN trained on natural images across a set of photographs, drawings, and sketches of the same objects and comparing them to human behavior. We find a highly similar representational structure across levels of visual abstraction in early and intermediate layers of the network. This similarity, however, does not translate to later stages in the network, resulting in low classification performance for drawings and sketches. We identify texture bias in CNNs as a factor contributing to the dissimilar representational structure in late layers and the poor performance on drawings. Finally, by fine-tuning late network layers with object drawings, we show that performance can be largely restored, demonstrating the general utility of features learned on natural images in early and intermediate layers for the recognition of drawings. In conclusion, generalization to abstracted images, such as drawings, seems to be an emergent property of CNNs trained on natural images, which is, however, suppressed by domain-related biases that arise during later processing stages in the network.
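
    One common way to compare representational structure across depiction styles is to compute a representational dissimilarity matrix (RDM) per layer and correlate RDMs between conditions. The sketch below illustrates this idea under assumptions not taken from the paper: VGG-16 as the network, arbitrary layer cut-offs for "early" and "late" stages, and random tensors standing in for preprocessed photographs and drawings of matched objects.

```python
# Hedged sketch: comparing an ImageNet-trained CNN's representational structure
# for photos vs. drawings via representational dissimilarity matrices (RDMs).
import torch
import torchvision.models as models
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

def layer_rdm(images, layer_idx):
    """Correlation-distance RDM of activations at one stage of the network."""
    with torch.no_grad():
        feats = model.features[:layer_idx](images)      # truncated forward pass
    flat = feats.flatten(start_dim=1).numpy()           # objects x features
    return pdist(flat, metric="correlation")            # condensed RDM

n_objects = 20
photos   = torch.randn(n_objects, 3, 224, 224)          # placeholder inputs
drawings = torch.randn(n_objects, 3, 224, 224)

for layer_idx, name in [(10, "early"), (24, "late")]:
    rho, _ = spearmanr(layer_rdm(photos, layer_idx), layer_rdm(drawings, layer_idx))
    print(f"{name} layers: RDM correlation (photo vs. drawing) = {rho:.2f}")
```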

    The relationship between perceptual decision variables and confidence in the human brain

    No full text
    Perceptual confidence refers to the degree to which we believe in the accuracy of our percepts. Signal detection theory suggests that perceptual confidence is computed from an internal "decision variable," which reflects the amount of available information in favor of one or another perceptual interpretation of the sensory input. The neural processes underlying these computations have, however, remained elusive. Here, we used fMRI and multivariate decoding techniques to identify regions of the human brain that encode this decision variable and confidence during a visual motion discrimination task. We used observers' binary perceptual choices and confidence ratings to reconstruct the internal decision variable that governed the subjects' behavior. A number of areas in prefrontal and posterior parietal association cortex encoded this decision variable, and activity in the ventral striatum reflected the degree of perceptual confidence. Using a multivariate connectivity analysis, we demonstrate that patterns of brain activity in the right ventrolateral prefrontal cortex reflecting the decision variable were linked to brain signals in the ventral striatum reflecting confidence. Our results suggest that the representation of perceptual confidence in the ventral striatum is derived from a transformation of the continuous decision variable encoded in the cerebral cortex.
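
    A minimal sketch of how a graded decision variable might be reconstructed from binary choices and ordinal confidence ratings under a Gaussian signal detection model is shown below. The binning scheme, the simulated data, and the specific numerical steps are assumptions for illustration, not the authors' exact procedure.

```python
# Hedged sketch: reconstructing a graded decision variable (DV) from binary
# choices and 1-4 confidence ratings. Choice and confidence are combined into
# ordered response bins; criteria are placed at standard-normal quantiles of the
# observed bin frequencies, and each trial gets the mean DV within its bin.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_trials = 1000
dv_true = rng.normal(size=n_trials)                    # latent decision variable

# Simulated behavior: binary choice and confidence derived from the latent DV
choice = (dv_true > 0).astype(int)
confidence = np.clip(np.ceil(np.abs(dv_true) / 0.8), 1, 4).astype(int)

# Eight ordered bins: high-confidence "left" (1) ... high-confidence "right" (8)
ordinal = np.where(choice == 1, 4 + confidence, 5 - confidence)

# Criteria at normal quantiles of the cumulative response frequencies
freq = np.bincount(ordinal, minlength=9)[1:] / n_trials
criteria = norm.ppf(np.cumsum(freq)[:-1])
edges = np.concatenate([[-np.inf], criteria, [np.inf]])

# Reconstructed DV: mean of a standard normal truncated to each bin
bin_means = (norm.pdf(edges[:-1]) - norm.pdf(edges[1:])) / \
            (norm.cdf(edges[1:]) - norm.cdf(edges[:-1]))
dv_hat = bin_means[ordinal - 1]

print("Correlation between true and reconstructed DV:",
      round(np.corrcoef(dv_true, dv_hat)[0, 1], 2))
```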

    Synthesizing preferred stimuli for individual voxels in the human visual system

    No full text
    Investigating the function of the various subregions of the visual system is a major goal in neuroscience. One approach is to specify the types of stimuli to which they respond most strongly; however, given the variety of the visual world, it is impossible to present all possible stimuli in vivo. We follow an alternative approach to reach this goal. We trained a convolutional neural network-based model of the occipitotemporal cortex to match the behaviour of an individual brain reacting to visual input, using a large-scale functional MRI dataset (Seeliger & Sommers 2019). This model allowed us to predict, in silico, voxel responses in several areas defined functionally (such as FFA, LOC, PPA) and from an anatomical atlas (such as PHC, VO). To identify the preferred stimuli for voxels in these areas, we developed an interpretability technique for convolutional neural networks that uses a generative adversarial network (GAN) and gradient ascent to synthesize naturalistic preferred images. As expected, voxels in areas V1-V3 yielded small receptive fields with a preference for gratings. Higher-order areas showed mixed preferences: for instance, while confirming face-selectivity in FFA and place-selectivity in PPA, both regions additionally showed preference for other visual features, such as oval shapes and vertical lines in FFA, or horizontal lines and high spatial frequency in PPA. The GAN latent vectors for the investigated subregions were highly distinct, as shown in classification tasks, underscoring the validity of the results. This approach opens an avenue towards precision functional mapping of selectivity at the level of individual voxels across the whole visual system, and may reveal previously unknown functional selectivity.
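
    The core mechanism described, gradient ascent on a generator's latent code to maximize a modeled voxel's predicted response, can be sketched as below. Both the generator and the voxel encoding model here are untrained stand-ins; in the study these would be a pretrained GAN and a CNN fit to the fMRI data, so all architectures and hyperparameters in this snippet are assumptions.

```python
# Hedged sketch: synthesizing a "preferred" image for one modeled voxel via
# gradient ascent in the latent space of an image generator.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in generator: latent vector -> 3x64x64 image
generator = nn.Sequential(
    nn.Linear(128, 3 * 64 * 64), nn.Tanh(), nn.Unflatten(1, (3, 64, 64))
)

# Stand-in voxel encoding model: image -> predicted response of one voxel
voxel_model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1)
)
for p in list(generator.parameters()) + list(voxel_model.parameters()):
    p.requires_grad_(False)                     # only the latent code is optimized

z = torch.randn(1, 128, requires_grad=True)
optimizer = torch.optim.Adam([z], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    image = generator(z)
    response = voxel_model(image).squeeze()
    loss = -response + 1e-3 * z.norm()          # maximize response, keep z plausible
    loss.backward()
    optimizer.step()

preferred_image = generator(z).detach()
print("Predicted response to synthesized image:", float(voxel_model(preferred_image)))
```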