
    The neurocognition of syntactic processing


    Task-Specific Codes for Face Recognition: How they Shape the Neural Representation of Features for Detection and Individuation

    The variety of ways in which faces are categorized makes face recognition challenging for both synthetic and biological vision systems. Here we focus on two face processing tasks, detection and individuation, and explore whether differences in task demands lead to differences both in the features most effective for automatic recognition and in the featural codes recruited by neural processing.

    Our study appeals to a computational framework characterizing the features representing object categories as sets of overlapping image fragments. Within this framework, we assess the extent to which task-relevant information differs across image fragments. Based on objective differences we find among task-specific representations, we test the sensitivity of the human visual system to these different face descriptions independently of one another. Both behavior and functional magnetic resonance imaging reveal effects elicited by objective task-specific levels of information. Behaviorally, recognition performance with image fragments improves with increasing task-specific information carried by different face fragments. Neurally, this sensitivity to the two tasks manifests as differential localization of neural responses across the ventral visual pathway. Fragments diagnostic for detection evoke larger neural responses than non-diagnostic ones in the right posterior fusiform gyrus and bilaterally in the inferior occipital gyrus. In contrast, fragments diagnostic for individuation evoke larger responses than non-diagnostic ones in the anterior inferior temporal gyrus. Finally, for individuation only, pattern analysis reveals sensitivity to task-specific information within the right "fusiform face area". Our results demonstrate that: (1) information diagnostic for face detection and individuation is roughly separable; (2) the human visual system is independently sensitive to both types of information; and (3) neural responses differ according to the type of task-relevant information considered. More generally, these findings provide evidence for the computational utility and the neural validity of fragment-based visual representation and recognition.
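In fragment-based frameworks of this kind, a fragment's diagnosticity for a task is commonly scored as the mutual information between "fragment detected in the image" and the task label. The sketch below illustrates that scoring for binary labels; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def fragment_mutual_information(present, labels):
    """Mutual information (bits) between a binary fragment-detection
    variable and a binary task label (e.g. face vs. non-face for
    detection). present[i] = 1 if the fragment was found in image i;
    labels[i] is that image's class. Illustrative sketch only."""
    present = np.asarray(present)
    labels = np.asarray(labels)
    mi = 0.0
    for f in (0, 1):
        for c in (0, 1):
            p_fc = np.mean((present == f) & (labels == c))  # joint prob.
            p_f = np.mean(present == f)                     # marginals
            p_c = np.mean(labels == c)
            if p_fc > 0:
                mi += p_fc * np.log2(p_fc / (p_f * p_c))
    return mi
```

A fragment whose presence perfectly predicts the label scores 1 bit; one that is independent of the label scores 0, so ranking fragments by this score separates task-diagnostic from non-diagnostic fragments.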

    Distributed Activity Patterns for Objects and Their Features: Decoding Perceptual and Conceptual Object Processing in Information Networks of the Human Brain

    How are object features and knowledge-fragments represented and bound together in the human brain? Distributed patterns of activity within brain regions can encode distinctions between perceptual and cognitive phenomena with impressive specificity. The research reported here investigated how the information within regions' multi-voxel patterns is combined in object-concept networks.

    Chapter 2 investigated how memory-driven activity patterns for an object's specific shape, color, and identity become active at different stages of the visual hierarchy. Brain activity patterns were recorded with functional magnetic resonance imaging (fMRI) as participants searched for specific fruits or vegetables within visual noise. During time-points in which participants were searching for an object, but viewing pure noise, the targeted object's identity could be decoded in the left anterior temporal lobe (ATL). In contrast, top-down generated patterns for the object's specific shape and color were decoded in early visual regions. The emergence of object-identity information in the left ATL was predicted by concurrent shape and color information in their respective featural regions. These findings are consistent with theories proposing that feature-fragments in sensory cortices converge to higher-level identity representations in convergence zones.

    Chapter 3 investigated whether brain regions share fluctuations in multi-voxel information across time. A new analysis method was first developed, to measure dynamic changes in distributed pattern information. This method, termed informational connectivity (IC), was then applied to data collected as participants viewed different types of man-made objects. IC identified connectivity between object-processing regions that was not apparent from existing functional connectivity measures, which track fluctuating univariate signals.

    Collectively, this work suggests that networks of regions support perceptual and conceptual object processing through the convergence and synchrony of distributed pattern information.
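The core idea of informational connectivity can be sketched in a few lines: compute a per-timepoint multivariate discriminability value in each region (here, correlation of the pattern at time t with the correct condition's prototype minus the best correlation with any other prototype), then correlate those timecourses across regions. This is a simplified stand-in for the authors' pipeline; all names below are illustrative.

```python
import numpy as np

def discriminability_timecourse(data, prototypes, labels):
    """Per-timepoint multivariate discriminability for one region.
    data: (time, voxels) array; prototypes: dict condition -> mean
    pattern; labels: condition shown at each timepoint.
    Score = corr with own prototype minus best rival correlation."""
    out = np.empty(len(labels))
    for t, lab in enumerate(labels):
        r = {c: np.corrcoef(data[t], p)[0, 1] for c, p in prototypes.items()}
        out[t] = r[lab] - max(v for c, v in r.items() if c != lab)
    return out

def informational_connectivity(tc_a, tc_b):
    """IC between two regions = Pearson correlation of their
    discriminability timecourses (not of their raw mean signals)."""
    return np.corrcoef(tc_a, tc_b)[0, 1]
```

Because IC correlates pattern-level discriminability rather than univariate mean activity, two regions can show high IC while their averaged signals are uncorrelated, which is why it can reveal connectivity that standard functional connectivity misses.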

    What does semantic tiling of the cortex tell us about semantics?

    Recent use of voxel-wise modeling in cognitive neuroscience suggests that semantic maps tile the cortex. Although this impressive research establishes distributed cortical areas active during the conceptual processing that underlies semantics, it tells us little about the nature of this processing. While mapping concepts between Marr's computational and implementation levels to support neural encoding and decoding, this approach ignores Marr's algorithmic level, central for understanding the mechanisms that implement cognition, in general, and conceptual processing, in particular. Following decades of research in cognitive science and neuroscience, what do we know so far about the representation and processing mechanisms that implement conceptual abilities? Most basically, much is known about the mechanisms associated with: (1) feature and frame representations, (2) grounded, abstract, and linguistic representations, (3) knowledge-based inference, (4) concept composition, and (5) conceptual flexibility. Rather than explaining these fundamental representation and processing mechanisms, semantic tiles simply provide a trace of their activity over a relatively short time period within a specific learning context. Establishing the mechanisms that implement conceptual processing in the brain will require more than mapping it to cortical (and sub-cortical) activity, with process models from cognitive science likely to play central roles in specifying the intervening mechanisms. More generally, neuroscience will not achieve its basic goals until it establishes algorithmic-level mechanisms that contribute essential explanations of how the brain works, going beyond simply establishing the brain areas that respond to various task conditions.

    Fragment-Based Learning of Visual Object Categories in Non-Human Primates

    When we perceive a visual object, we implicitly or explicitly associate it with an object category we know. Recent research has shown that the visual system can use local, informative image fragments of a given object, rather than the whole object, to classify it into a familiar category. We have previously reported, using human psychophysical studies, that when subjects learn new object categories using whole objects, they incidentally learn informative fragments, even when not required to do so. However, the neuronal mechanisms by which we acquire and use informative fragments, as well as category knowledge itself, have remained unclear. Here we describe the methods by which we adapted the relevant human psychophysical methods to awake, behaving monkeys and replicated key previous psychophysical results. This establishes awake, behaving monkeys as a useful system for future neurophysiological studies, not only of informative fragments in particular, but also of object categorization and category learning in general.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) that Gestalt grouping is not used as a strategy in these tasks, and (ii) that it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Model Cortical Association Fields Account for the Time Course and Dependence on Target Complexity of Human Contour Perception

    Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans was measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20–200 ms), followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30–91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes at least ms of cortical processing time. Our results provide evidence that cortical association fields between orientation-selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas.
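Fitting an exponential time constant to 2AFC accuracy data of the kind described above can be sketched as follows. This assumes a simple saturating form, accuracy rising from chance (0.5) toward a fixed asymptote, fitted by least-squares grid search; the functional form, fixed asymptote, and names are our simplifying assumptions, not the paper's exact model.

```python
import numpy as np

def saturating_accuracy(t, tau, asymptote=0.9):
    """2AFC accuracy vs. presentation time t (ms): rises from chance
    (0.5) toward `asymptote` with exponential time constant `tau`."""
    return 0.5 + (asymptote - 0.5) * (1.0 - np.exp(-t / tau))

def fit_time_constant(t, acc, taus=np.linspace(5.0, 200.0, 1000)):
    """Least-squares grid search for the time constant tau (ms)."""
    errs = [np.sum((saturating_accuracy(t, tau) - acc) ** 2) for tau in taus]
    return taus[int(np.argmin(errs))]
```

With presentation times in the 20–200 ms range, a fitted tau of a few tens of milliseconds summarizes how quickly detection accuracy saturates, which is the quantity the abstract reports varying with amoeba complexity.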

    Detection of Fir Trees (Abies sibirica) Damaged by the Bark Beetle in Unmanned Aerial Vehicle Images with Deep Learning

    We are very grateful to the reviewers for their valuable comments that helped to improve the paper. We appreciate the support of the vice-director of the "Stolby" State Nature Reserve, Anastasia Knorre. We also thank two Ph.D. students, Egor Trukhanov and Anton Perunov, from Siberian Federal University for their help in data acquisition (aerial photography from UAV) on two research plots in 2016 and in raw imagery processing.

    Invasion of the Polygraphus proximus Blandford bark beetle causes catastrophic damage to forests with firs (Abies sibirica Ledeb) in Russia, especially in Central Siberia. Determining the tree damage stage from the shape, texture and colour of tree crowns in unmanned aerial vehicle (UAV) images could help to assess forest health in a faster and cheaper way. However, this task is challenging, since (i) fir trees at different damage stages coexist and overlap in the canopy, and (ii) the distribution of fir trees in nature is irregular, so distinguishing between different crowns is hard even for the human eye. Motivated by the latest advances in computer vision and machine learning, this work proposes a two-stage solution: in the first stage, we built a detection strategy that finds the regions of the input UAV image that are most likely to contain a crown; in the second stage, we developed a new convolutional neural network (CNN) architecture that predicts the fir tree damage stage in each candidate region. Our experiments show that the proposed approach achieves satisfactory results on UAV Red, Green, Blue (RGB) images of forest areas in the state nature reserve "Stolby" (Krasnoyarsk, Russia).

    A.S. was supported by the grant of the Russian Science Foundation No. 16-11-00007. S.T. was supported by the Ramón y Cajal Programme (No. RYC-2015-18136). S.T. and F.H. received funding from the Spanish Ministry of Science and Technology under the project TIN2017-89517-P. D.A.-S. received support from project ECOPOTENTIAL, which received funding from the European Union Horizon 2020 Research and Innovation Programme under grant agreement No. 641762, from the European LIFE Project ADAPTAMED LIFE14 CCA/ES/000612, and from project 80NSSC18K0446 of NASA's Group on Earth Observations Work Programme 2016. A.R. was supported by the grant of the Russian Science Foundation No. 18-74-10048. Y.M. was supported by the grant of the Russian Foundation for Basic Research No. 18-47-242002, the Government of Krasnoyarsk Territory, and the Krasnoyarsk Regional Fund of Science.
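The two-stage structure described above (candidate-region proposal, then per-region damage-stage classification) can be sketched as below. The proposal heuristic here is a toy greenness score standing in for the paper's learned detector, and `classify_damage_stage` accepts any callable as the stage-2 CNN; both are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def propose_crown_regions(image, win=32, stride=16, thresh=0.5):
    """Stage 1 (toy stand-in for the learned detector): slide a window
    over an RGB image and keep regions whose mean green-dominance score
    exceeds a threshold, as candidate tree-crown locations.
    Returns a list of (row, col, window_size) tuples."""
    h, w, _ = image.shape
    regions = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch = image[y:y + win, x:x + win].astype(float)
            # Green channel mean minus average of red and blue means.
            score = patch[..., 1].mean() - 0.5 * (patch[..., 0].mean()
                                                  + patch[..., 2].mean())
            if score > thresh:
                regions.append((y, x, win))
    return regions

def classify_damage_stage(patch, model):
    """Stage 2: apply the damage-stage classifier (in the paper, a CNN;
    here any callable patch -> stage label) to one candidate region."""
    return model(patch)
```

Separating the stages this way lets the expensive classifier run only on plausible crown regions rather than on every window of the full UAV image.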