9 research outputs found

    Vision: a model to study cognition

    Get PDF
    Our senses – vision, audition, touch, taste and smell – constantly receive a large amount of information. This information is processed and used to guide our actions. The cognitive sciences study these mental abilities through different disciplines, e.g. linguistics, neuropsychology, neuroscience or modelling. Each discipline considers mental phenomena and their physical substrate, the nervous system, as a tool that processes information in order to guide behavior adaptively (Collins, Andler, & Tallon-Baudry, 2018). Cognitive functions are a collection of processing systems serving different goals, and their interactions are key to the complexity of cognition. Studying cognition therefore often implies operationalizing each of these functions separately. For example, memory allows us to store and reuse information, and attention allows us to select the information relevant to the task at hand and to facilitate its processing. To characterize the processes of a specific cognitive function, it is thus necessary to provide the studied subject – here we concentrate on human and non-human primates – with information to be processed, through different sensory modalities. In this essay, we concentrate on vision as a unique model for studying cognition across different fields of cognitive science, from cognitive psychology to neuroscience, also briefly mentioning modeling and neuropsychology. Our objective is neither to give an exhaustive description of the visual system nor to compare vision in detail with other sensory modalities, but to argue that the accumulated evidence on the visual system, together with its characteristic perceptual, algorithmic and physiological organization, makes it a particularly rich model for studying cognitive functions. After a brief presentation of some properties of vision, we will illustrate our argument by focusing on a specific cognitive function, attention, and in particular its study in cognitive psychology and neuroscience. We will discuss how our knowledge of vision has allowed us to understand the behavioral and neuronal mechanisms underlying attentional selection and the facilitation of information processing. We will conclude that sensory systems can be used as models to study cognition in different fields of cognitive science.

    A Feedback Model of Attention Explains the Diverse Effects of Attention on Neural Firing Rates and Receptive Field Structure.

    No full text
    Visual attention has many effects on neural responses, producing complex changes in firing rates, as well as modifying the structure and size of receptive fields, both in topological and feature space. Several existing models of attention suggest that these effects arise from selective modulation of neural inputs. However, anatomical and physiological observations suggest that attentional modulation targets higher levels of the visual system (such as V4 or MT) rather than input areas (such as V1). Here we propose a simple mechanism that explains how a top-down attentional modulation, falling on higher visual areas, can produce the observed effects of attention on neural responses. Our model requires only the existence of modulatory feedback connections between areas, and short-range lateral inhibition within each area. Feedback connections redistribute the top-down modulation to lower areas, which in turn alters the inputs of other higher-area cells, including those that did not receive the initial modulation. This produces firing rate modulations and receptive field shifts. Simultaneously, short-range lateral inhibition between neighboring cells produces competitive effects that are automatically scaled to receptive field size in any given area. Our model reproduces the observed attentional effects on response rates (response gain, input gain, biased competition automatically scaled to receptive field size) and receptive field structure (shifts and resizing of receptive fields both spatially and in complex feature space), without modifying model parameters. Our model also makes the novel prediction that attentional effects on response curves should shift from response gain to contrast gain as the spatial focus of attention drifts away from the studied cell.
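    A minimal numerical sketch may help make the described mechanism concrete. The script below is not the authors' implementation: the layer sizes, overlapping feedforward/feedback wiring, inhibition strength, gain value and number of settling iterations are all illustrative assumptions. It only shows how a multiplicative gain applied to a single higher-area unit, combined with modulatory feedback and short-range lateral inhibition, ends up changing lower-area responses and, through them, the inputs of higher-area units that never received the modulation directly.

import numpy as np

rng = np.random.default_rng(0)

n_low, n_high = 36, 8                     # hypothetical layer sizes (assumption)
stimulus = rng.random(n_low)              # feedforward drive to the lower area

# Feedforward weights: each higher-area cell pools an overlapping patch of the
# lower area, so neighbouring higher-area cells share part of their input.
W_ff = np.zeros((n_high, n_low))
for i in range(n_high):
    W_ff[i, i * 4:i * 4 + 8] = 1.0
W_fb = W_ff.T                             # feedback mirrors the feedforward wiring

def lateral_inhibition(r, strength=0.3):
    # Short-range divisive inhibition from the two immediate neighbours.
    neighbours = np.roll(r, 1) + np.roll(r, -1)
    return r / (1.0 + strength * neighbours)

def run(attn_gain, n_iter=20):
    # attn_gain: multiplicative top-down modulation on higher-area cells only.
    r_low = stimulus.copy()
    for _ in range(n_iter):               # let the feedforward/feedback loop settle
        r_high = lateral_inhibition(attn_gain * (W_ff @ r_low))
        # Modulatory (multiplicative) feedback re-weights the lower area.
        r_low = lateral_inhibition(stimulus * (1.0 + 0.5 * (W_fb @ r_high)))
    return r_low, r_high

baseline_low, baseline_high = run(np.ones(n_high))
gain = np.ones(n_high)
gain[3] = 2.0                             # "attend" via one higher-area cell only
attended_low, attended_high = run(gain)

print("lower-area change :", np.round(attended_low - baseline_low, 3))
print("higher-area change:", np.round(attended_high - baseline_high, 3))

    With these toy settings, the printed differences are non-zero both for lower-area cells lying under the attended cell's feedback and for higher-area cells that received no direct gain, which is the qualitative effect described in the abstract.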

    The Attentional Suppressive Surround: Eccentricity, Location-Based and Feature-Based Effects and Interactions

    Get PDF
    The Selective Tuning model of visual attention (Tsotsos, 1990) has proposed that the focus of attention is surrounded by an inhibitory zone, eliciting a center-surround attentional distribution. This attentional suppressive surround inhibits irrelevant information which is located close to attended information in physical space (e.g., Cutzu and Tsotsos, 2003; Hopf et al., 2010) or in feature space (e.g., Tombu and Tsotsos, 2008; Störmer and Alvarez, 2014; Bartsch et al., 2017). In Experiment 1, we investigate the interaction between location-based and feature-based surround suppression and hypothesize that attentional surround suppression is maximized when spatially adjacent stimuli are also represented closely within a feature map. Our results demonstrate that perceptual discrimination is worst when two similar orientations are presented in proximity to each other, suggesting the interplay of the two surround suppression mechanisms. The Selective Tuning model also predicts that the size of the attentional suppressive surround is determined by the receptive field size of the neuron which optimally processes the attended information. The receptive field size of the processing neurons is tightly associated with stimulus size and eccentricity. Therefore, Experiment 2 tested the hypothesis that the size of the attentional suppressive surround becomes larger as stimulus size and eccentricity increase, corresponding to an increase in the neuron's receptive field size. We show that stimulus eccentricity but not stimulus size modulates the size of the attentional suppressive surround. These results are consistent for both low- and high-level features (e.g., orientation and human faces). Overall, the present study supports the existence of the attentional suppressive surround and reveals new properties of this selection mechanism.
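    To make the center-surround idea concrete, the sketch below computes a hypothetical attentional gain profile as a difference of Gaussians over spatial distance, weighted by orientation similarity, with the spatial surround widening at larger eccentricity. This is only an illustrative toy under assumed parameter values, not the Selective Tuning model itself nor the profile measured in these experiments.

import numpy as np

def attentional_gain(d_space, d_feature, eccentricity,
                     w_center=1.0, w_surround=0.6,
                     sigma_center=1.0, surround_scale=3.0,
                     sigma_feature=20.0, ecc_slope=0.2):
    # Gain applied to a probe at spatial distance d_space (deg) and orientation
    # distance d_feature (deg) from the attended stimulus. All parameters are
    # made-up illustrative values.
    # The spatial surround grows with eccentricity, mimicking larger receptive fields.
    sigma_c = sigma_center * (1.0 + ecc_slope * eccentricity)
    sigma_s = surround_scale * sigma_c
    spatial = (w_center * np.exp(-d_space**2 / (2 * sigma_c**2))
               - w_surround * np.exp(-d_space**2 / (2 * sigma_s**2)))
    # Feature similarity deepens the modulation for probes with nearby orientations.
    feature = np.exp(-d_feature**2 / (2 * sigma_feature**2))
    return 1.0 + spatial * feature

# Probe at the attended orientation, at increasing spatial distances, 5 deg eccentricity.
for d in (0.0, 2.0, 4.0, 8.0, 16.0):
    print(f"d_space={d:>4} deg:", round(float(attentional_gain(d, 0.0, 5.0)), 3))

    With these made-up parameters, a probe at the attended location and orientation is enhanced, nearby similar probes fall into a suppressive zone, and distant probes return toward a neutral gain of 1; increasing the eccentricity argument widens the suppressive zone, which is the qualitative pattern the abstract reports.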

    La vision : un modèle d'étude de la cognition

    Get PDF
    Our senses – vision, audition, touch, taste and smell – constantly receive a large amount of information. This information is processed and used to guide our actions. The cognitive sciences study these mental abilities through different disciplines, e.g. linguistics, neuropsychology or modeling. Each discipline considers mental phenomena and their physical substrate, the nervous system, as a tool that processes information in order to guide behavior adaptively (Collins, Andler, & Tallon-Baudry, 2018). Cognitive functions are a collection of processing systems serving different goals. For example, memory allows us to store and reuse information, and attention allows us to select the information relevant to the task at hand and to facilitate its processing. To characterize the processes of a specific cognitive function, it is thus necessary to provide the studied subject – here we concentrate on human and non-human primates – with information to be processed, through different sensory modalities. In this opinion piece, we concentrate on vision as a unique model for studying cognition across different fields of cognitive science, from cognitive psychology to neuroscience, also briefly mentioning modeling and neuropsychology. Our objective is neither to give an exhaustive description of the visual system nor to compare vision in detail with other sensory modalities, but to argue that the accumulated evidence on the visual system, together with its characteristic perceptual, algorithmic and physiological organization, makes it a particularly rich model for studying cognitive functions. After a brief presentation of some properties of vision, we will illustrate our argument by focusing on a specific cognitive function, attention, and in particular its study in cognitive psychology and neuroscience. We will discuss how our knowledge of vision has allowed us to understand the behavioral and neural mechanisms underlying attentional selection and the facilitation of information processing. We will finally conclude that sensory systems can be used as models to study cognition in different fields of cognitive science.

    Experimental Evidence for Top-Down Attentional Selection in the Selective Tuning Model of Visual Attention

    Get PDF
    To overcome limited processing capacity, our visual system facilitates information that relates to the task at hand while inhibiting irrelevant information via selective attention. Among various attention models and theories, the Selective Tuning model of visual attention (ST) is a computational model of visual processing that is based on biological mechanisms. This model emphasizes the role of top-down feedback processing in visual perception and has predicted its unique consequences, such as an attentional surround suppression in which the attentional focus is accompanied by an inhibitory surround. Previous studies have experimentally validated ST's predictions, indicating that the components of ST do reflect actual visual processing in the brain. Nevertheless, many aspects of ST still need to be elaborated, and several predictions and assumptions remain untested. The series of works in this dissertation investigates different aspects of the top-down feedback processing in visual perception that ST has proposed, in order to corroborate this model and to broaden our understanding of visual attention. The first study examined whether top-down feedback processing is necessary for attention-demanding, fine-grained visual localization (Chapter 2). The subsequent two studies focused on the properties of different types of attentional surround suppression, the end result of top-down feedback processing. The second study suggested an interplay between location-based and feature-based surround suppression and tested the potential factors that could manipulate the spatial extent of the location-based suppressive surround (Chapter 3). The last study demonstrated feature-based surround suppression in motion processing and its neurophysiological mechanism (Chapter 4). Collectively, this work reinforces the functional significance of top-down, attention-mediated feedback for visual processing and supports the validity of ST.