11 research outputs found

    Visual selective behavior can be triggered by a feed-forward process

    The ventral visual pathway implements object recognition and categorization in a hierarchy of processing areas with neuronal selectivities of increasing complexity. The presence of massive feedback connections within this hierarchy raises the possibility that normal visual processing relies on the use of computational loops. It is not known, however, whether object recognition can be performed at all without such loops (i.e., in a purely feed-forward mode). By analyzing the time course of reaction times in a masked natural scene categorization paradigm, we show that the human visual system can generate selective motor responses based on a single feed-forward pass. We confirm these results using a more constrained letter discrimination task, in which the rapid succession of a target and mask is actually perceived as a distractor. We show that a masked stimulus presented for only 26 msec (and often not consciously perceived) can fully determine the earliest selective motor responses: the neural representations of the stimulus and mask are thus kept separated during a short period corresponding to the feed-forward "sweep." Therefore, feedback loops do not appear to be "mandatory" for visual processing. Rather, we found that such loops allow the masked stimulus to reverberate in the visual system and affect behavior for nearly 150 msec after the feed-forward sweep.

    The power of the feed-forward sweep

    Vision is fast and efficient. A novel natural scene can be categorized (e.g., does it contain an animal, a vehicle?) by human observers in less than 150 ms, and with minimal attentional resources. This ability still holds under strong backward masking conditions. In fact, with a stimulus onset asynchrony of about 30 ms (the time between the scene and mask onsets), the first 30 ms of selective behavioral responses are essentially unaffected by the presence of the mask, suggesting that this type of "ultra-rapid" processing can rely on a sequence of swift feed-forward stages, in which the mask information never "catches up" with the scene information. Simulations show that the feed-forward propagation of the first wave of spikes generated at stimulus onset may indeed suffice for crude recognition or categorization. Scene awareness, however, may take significantly more time to develop, and probably requires feed-back processes. The main implication of these results for theories of masking is that pattern or metacontrast (backward) masking does not appear to bar the progression of visual information at a low level. These ideas bear interesting similarities to existing conceptualizations of priming and masking, such as Direct Parameter Specification or the Rapid Chase theory.
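    The idea that the first wave of spikes can carry a crude categorization can be sketched with a toy latency-code model (my own illustration, not the simulations cited in the abstract): more strongly driven inputs fire earlier, each neuron emits at most one spike, and a downstream unit weights inputs by the rank of their first spike, so a selective response is available after a single feed-forward pass.

```python
# Toy latency-coding sketch (illustrative assumptions, not the cited model):
# the most strongly driven inputs fire first, and a downstream unit weights
# each input by the RANK of its first spike, so a selective decision exists
# after one feed-forward pass, one spike per neuron, before feedback can act.

def first_spike_order(activations):
    """Indices of inputs sorted by firing time (strongest fires first)."""
    return sorted(range(len(activations)), key=lambda i: -activations[i])

def rank_order_response(activations, weights, decay=0.8):
    """Downstream drive: earlier spikes contribute more (geometric rank decay)."""
    order = first_spike_order(activations)
    return sum(weights[i] * decay ** rank for rank, i in enumerate(order))

# A unit tuned to the "target" pattern responds more strongly to the target
# than to a "distractor" pattern, from spike order alone.
target = [0.9, 0.7, 0.1, 0.05]
distractor = [0.1, 0.05, 0.9, 0.7]
print(rank_order_response(target, target) > rank_order_response(distractor, target))  # → True
```

    The rank decay is the design choice doing the work here: because later spikes are discounted, the response is dominated by the earliest (most activated) inputs, which is what makes a one-spike-per-neuron code usable.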

    Spiking Dynamics during Perceptual Grouping in the Laminar Circuits of Visual Cortex

    Grouping of collinear boundary contours is a fundamental process in visual perception. Illusory contour completion vividly illustrates how stable perceptual boundaries interpolate between pairs of contour inducers but do not extrapolate from a single inducer. Neural models have simulated how perceptual grouping occurs in laminar visual cortical circuits. These models predicted the existence of grouping cells that obey a bipole property, whereby grouping can occur inwardly between pairs or larger numbers of similarly oriented, co-axial inducers, but not outwardly from individual inducers. These models have not, however, incorporated spiking dynamics. Perceptual grouping is a challenge for spiking cells because its properties of collinear facilitation and analog sensitivity to inducer configurations occur despite irregularities in spike timing across all the interacting cells. Other models have demonstrated spiking dynamics in laminar neocortical circuits, but not how perceptual grouping occurs. The current model begins to unify these two modeling streams by implementing a laminar cortical network of spiking cells whose intracellular temporal dynamics interact with recurrent intercellular spiking interactions to quantitatively simulate data from neurophysiological experiments on perceptual grouping, the structure of non-classical visual receptive fields, and gamma oscillations.
    Funding: CELEST, an NSF Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR001109-03-0001); Defense Advanced Research Projects Agency (HR001-09-C-0011).
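    The bipole property described above reduces to a simple gating rule, sketched here as a minimal stand-alone illustration (my simplification, not the paper's spiking model): a grouping cell responds only when both of its flanks receive collinear, like-oriented inducer input, so contours interpolate inward between inducers but never extrapolate outward from a single one.

```python
# Minimal bipole sketch (an illustration, not the paper's laminar model):
# a grouping cell fires only when BOTH flanking receptive-field branches
# receive suprathreshold, collinear input (a soft AND), which permits
# inward interpolation between inducers but blocks outward extrapolation.

def bipole_output(left_input: float, right_input: float, threshold: float = 0.5) -> float:
    """Respond only if both flanks exceed threshold; output tracks the
    weaker flank, giving analog sensitivity to the inducer configuration."""
    if left_input > threshold and right_input > threshold:
        return min(left_input, right_input)
    return 0.0

print(bipole_output(0.9, 0.8))  # two collinear inducers -> grouping (0.8)
print(bipole_output(0.9, 0.0))  # single inducer -> no outward extrapolation (0.0)
```

    Taking the minimum of the two flanks is one simple way to capture the abstract's "analog sensitivity to inducer configurations": the grouping response is limited by the weaker inducer.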

    Vision: a model to study cognition

    Our senses (vision, audition, touch, taste, and smell) constantly receive a large amount of information. This information is processed and used to guide our actions. The cognitive sciences study mental abilities through different disciplines, e.g., linguistics, neuropsychology, neuroscience, or modelling. Each discipline considers mental phenomena and their physical substrate, the nervous system, as a tool for processing information in order to guide behavior adaptively (Collins, Andler, & Tallon-Baudry, 2018). Cognitive functions are a collection of processing systems serving different goals, whose interactions are key to the complexity of cognition. Studying cognition often implies operationalizing each of these functions separately. For example, memory allows us to store and reuse information, and attention allows us to select the information relevant to the task at hand and to facilitate its processing. To characterize the processes of a specific cognitive function, it is thus necessary to provide the subject under study (here we concentrate on human and non-human primates) with information to be processed, through different sensory modalities. In this essay, we concentrate on vision as a unique model for studying cognition across different fields of cognitive science, from cognitive psychology to neuroscience, also touching briefly on modelling and neuropsychology. Our objective is neither to give an exhaustive description of the visual system nor to compare vision in detail with other sensory modalities, but to argue that the accumulated evidence on the visual system, as well as its characteristic perceptual, algorithmic, and physiological organization, makes it a particularly rich model for studying cognitive functions. After a brief presentation of some properties of vision, we illustrate our argument by focusing on a specific cognitive function, attention, and in particular its study in cognitive psychology and neuroscience. We discuss how our knowledge of vision has allowed us to understand the behavioral and neuronal mechanisms underlying attentional selection and facilitation of information. We conclude that sensory systems can be used as models to study cognition in different fields of cognitive science.

    La vision : un modĂšle d'Ă©tude de la cognition

    Our senses (vision, audition, touch, taste, and smell) constantly receive a large amount of information. This information is processed and used to guide our actions. The cognitive sciences study mental abilities through different disciplines, e.g., linguistics, neuropsychology, or modeling. Each discipline considers mental phenomena and their physical substrate, the nervous system, as a tool for processing information in order to guide behavior adaptively (Collins, Andler, & Tallon-Baudry, 2018). Cognitive functions are a collection of processing systems serving different goals. For example, memory allows us to store and reuse information, and attention allows us to select the information relevant to the task at hand and to facilitate its processing. To characterize the processes of a specific cognitive function, it is thus necessary to provide the subject under study (here we concentrate on human and non-human primates) with information to be processed, through different sensory modalities. In this opinion piece, we concentrate on vision as a unique model for studying cognition across different fields of cognitive science, from cognitive psychology to neuroscience, also touching briefly on modeling and neuropsychology. Our objective is neither to give an exhaustive description of the visual system nor to compare vision in detail with other sensory modalities, but to argue that the accumulated evidence on the visual system, as well as its characteristic perceptual, algorithmic, and physiological organization, makes it a particularly rich model for studying cognitive functions. After a brief presentation of some properties of vision, we illustrate our argument by focusing on a specific cognitive function, attention, and in particular its study in cognitive psychology and neuroscience. We discuss how our knowledge of vision has allowed us to understand the behavioral and neural mechanisms underlying attentional selection and facilitation of information. We conclude that sensory systems can be used as models to study cognition in different fields of cognitive science.

    Feed-forward contour integration in primary visual cortex based on asynchronous spike propagation. Neurocomputing

    Most current models of visual contour integration involve iterative lateral or feed-back interactions among neurons in V1 and V2. However, some forms of visual processing are too fast for such time-consuming loops. We propose a model that avoids iterative computation by exploiting the fact that real neurons in the retina or LGN fire asynchronously, with the most activated firing first. Thus, early-firing V1 neurons can influence the processing of their neighbors, which are still integrating information from the LGN. By limiting the number of spikes to one per neuron, we show that contour integration can be obtained in a purely feed-forward manner.
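    The one-spike asynchronous scheme can be sketched in a few lines (a simplified illustration of the idea, under my own assumptions, not the paper's network): units are visited once in decreasing order of activation, standing in for spike arrival time, and each spike adds lateral facilitation to collinear neighbors that have not yet fired, so aligned elements cross threshold in a single sweep with no iterative loop.

```python
# Single-sweep sketch (illustrative, not the published model) of asynchronous
# one-spike propagation: stronger activation = earlier firing; each spike
# facilitates collinear neighbors that are still integrating, so a chain of
# aligned elements crosses threshold without any lateral/feedback iteration.

def one_pass_contour(activations, neighbors, boost=0.3, threshold=1.0):
    """Visit units once, strongest first; a unit fires if its initial drive
    plus facilitation received from earlier spikes reaches threshold."""
    gain = [0.0] * len(activations)
    fired = []
    for i in sorted(range(len(activations)), key=lambda i: -activations[i]):
        if activations[i] + gain[i] >= threshold:
            fired.append(i)
            for j in neighbors.get(i, []):  # facilitate collinear neighbors
                gain[j] += boost
    return sorted(fired)

# Three collinear elements (0-1-2) and one isolated element (3).
acts = [1.1, 0.8, 0.8, 0.8]          # only unit 0 exceeds threshold on its own
nbrs = {0: [1], 1: [0, 2], 2: [1]}   # collinear-chain facilitation links
print(one_pass_contour(acts, nbrs))  # → [0, 1, 2]
```

    The isolated unit 3 has the same drive as units 1 and 2 but receives no facilitation, so it never fires: the contour is integrated while the lone element is suppressed, all in one pass.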

    Perceptual Learning, Long-Range Horizontal Connections And Top-Down Influences In Primary Visual Cortex

    The earliest cortical stage of visual processing, the primary visual cortex, has long been seen as a static preprocessor that finds local edges and their orientation, like a linear filter bank, and passes this information on to downstream visual areas. This view has been challenged in recent years by the discovery of contextual influences, that is, interactions between the responses of neurons that encode non-overlapping adjacent areas of visual space, and of their anatomical substrate, long-range horizontal connections. These contextual interactions have been shown in awake, behaving primates to be modulated by the task the animals are performing. A first set of electrophysiological experiments showed, with the help of information theory, that when an animal performed one of two tasks on the same visual display, the contextual modulations of the task-relevant parts of the display contained more information about the stimulus position than when the same elements were task-irrelevant. A second set of experiments on contour integration was analyzed with ROC analysis to show that an ideal observer could predict the presence of an embedded contour from the spike count of a single neuron on a single trial as well as the animal's behavioral performance. A final set of experiments showed that prior to learning the same contour integration task, the responses did not contain any information about the stimulus position; that the information in the response increased in parallel with the animal's performance during learning; and that the enhanced response after learning disappeared during anesthesia, but was only weakened when the animal performed an irrelevant task in a different part of visual space. Last, a neural network is presented that allows gating of long-range horizontal connections by top-down feedback. The stability and dynamic behavior of the network were established with phase-plane analysis.
    Large-scale simulations were performed to confirm the stability and to show enhanced contour integration of realistic stimuli as a function of feedback gain. The model quantitatively fits the electrophysiological experiments on contour integration.
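    The single-neuron ideal-observer analysis mentioned above can be made concrete with a short sketch (the spike counts below are hypothetical, not the study's data): the area under the ROC curve equals the probability that a randomly chosen contour-present trial yields more spikes than a randomly chosen contour-absent trial, which is the Mann-Whitney formulation.

```python
# ROC ideal-observer sketch on hypothetical spike counts (not the study's
# data): AUC = P(count on a contour-present trial > count on a contour-absent
# trial), with ties counting one half -- the Mann-Whitney formulation of the
# area under the ROC curve.

def roc_auc(present_counts, absent_counts):
    """Area under the ROC curve via pairwise trial comparisons."""
    wins = 0.0
    for p in present_counts:
        for a in absent_counts:
            if p > a:
                wins += 1.0
            elif p == a:
                wins += 0.5
    return wins / (len(present_counts) * len(absent_counts))

present = [22, 19, 25, 18, 21, 24, 20, 23]  # spikes, contour embedded
absent = [14, 16, 12, 19, 15, 13, 17, 11]   # spikes, no contour
print(roc_auc(present, absent))  # → 0.9765625
```

    An AUC of 0.5 means the neuron carries no information about contour presence; values approaching 1.0 mean a single trial's spike count suffices for near-perfect prediction, which is the comparison the study makes against the animal's behavioral performance.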