Dynamic information processing states revealed through neurocognitive models of object semantics.
Recognising objects relies on highly dynamic, interactive brain networks to process multiple aspects of object information. Fully understanding how different forms of object information are represented and processed in the brain requires a neurocognitive account of visual object recognition that combines a detailed cognitive model of semantic knowledge with a neurobiological model of visual object processing. Here we ask how specific cognitive factors are instantiated in our mental processes and how they dynamically evolve over time. We suggest that coarse semantic information, based on generic shared semantic knowledge, is rapidly extracted from visual inputs and is sufficient to drive rapid category decisions. Subsequent recurrent neural activity between the anterior temporal lobe and posterior fusiform supports the formation of object-specific semantic representations - a conjunctive process primarily driven by the perirhinal cortex. These object-specific representations require the integration of shared and distinguishing object properties and support the unique recognition of objects. We conclude that a valuable way of understanding the cognitive activity of the brain is through testing the relationship between specific cognitive measures and dynamic neural activity. This kind of approach allows us to move towards uncovering the information processing states of the brain and how they evolve over time. This is the final version. It was first published by Taylor and Francis at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4337742/
From Perception to Conception: How Meaningful Objects Are Processed over Time
To recognize visual objects, our sensory perceptions are transformed through dynamic neural interactions into meaningful representations of the world, but exactly how visual inputs invoke object meaning remains unclear. To address this issue, we apply a regression approach to magnetoencephalography data, modeling perceptual and conceptual variables. Key conceptual measures were derived from semantic feature-based models claiming that shared features (e.g., has eyes) provide broad category information, while distinctive features (e.g., has a hump) are additionally required for more specific object identification. Our results show initial perceptual effects in visual cortex that are rapidly followed by semantic feature effects throughout ventral temporal cortex within the first 120 ms. Moreover, these early semantic effects reflect shared semantic feature information supporting coarse category-type distinctions. Post-200 ms, we observed effects along the extent of ventral temporal cortex for both shared and distinctive features, which together allow for conceptual differentiation and object identification. By relating spatiotemporal neural activity to statistical feature-based measures of semantic knowledge, we demonstrate that qualitatively different kinds of perceptual and semantic information are extracted from visual objects over time, with rapid activation of shared object features followed by concomitant activation of distinctive features that together enable meaningful visual object recognition.
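The regression logic described here can be sketched in a few lines: fit an ordinary least-squares model independently at every timepoint, with perceptual and semantic-feature measures as per-trial predictors, yielding one beta time course per predictor. This is a minimal illustration under assumed, simulated data; the variable names and dimensions are hypothetical, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_times = 200, 50

# Per-trial predictors (illustrative): a perceptual measure plus
# shared-feature and distinctive-feature semantic measures.
X = np.column_stack([
    np.ones(n_trials),          # intercept
    rng.normal(size=n_trials),  # perceptual predictor
    rng.normal(size=n_trials),  # shared-feature predictor
    rng.normal(size=n_trials),  # distinctive-feature predictor
])

# Simulated single-sensor MEG activity: one time course per trial.
Y = rng.normal(size=(n_trials, n_times))

# One OLS fit per timepoint, solved jointly by least squares;
# each row of `betas` is a predictor's effect over time.
betas, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(betas.shape)  # (4, 50): predictors x timepoints
```

Examining when each beta time course departs from zero is what lets perceptual and semantic effects be ordered in time, as in the abstract's 120 ms versus post-200 ms contrast.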
The Timing of Visual Object Categorization
An object can be categorized at different levels of abstraction: as natural or man-made, animal or plant, bird or dog, or as a Northern Cardinal or Pyrrhuloxia. There has been growing interest in understanding how quickly categorizations at different levels are made and how the timing of those perceptual decisions changes with experience. We specifically contrast two perspectives on the timing of object categorization at different levels of abstraction. By one account, the relative timing reflects a sequence of visual processing stages tied to particular levels of object categorization: Fast categorizations are fast because they precede other categorizations within the visual processing hierarchy. By another account, the relative timing reflects when perceptual features are available over time and the quality of perceptual evidence used to drive a perceptual decision process: Fast simply means fast, it does not mean first. Understanding the short-term and long-term temporal dynamics of object categorizations is key to developing computational models of visual object recognition. We briefly review a number of models of object categorization and outline how they explain the timing of visual object categorization at different levels of abstraction.
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al., 2003, Vision Research 43, 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) further weight is given to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
Visuospatial coding as ubiquitous scaffolding for human cognition
For more than 100 years we have known that the visual field is mapped onto the surface of visual cortex, imposing an inherently spatial reference frame on visual information processing. Recent studies highlight visuospatial coding not only throughout visual cortex, but also in brain areas not typically considered visual. Such widespread access to visuospatial coding raises important questions about its role in wider cognitive functioning. Here, we synthesise these recent developments and propose that visuospatial coding scaffolds human cognition by providing a reference frame through which neural computations interface with environmental statistics and task demands via perception–action loops.
A Glance at an Early Stage of Visual Processing
The aim of this thesis is to investigate the dynamics of the cognitive processing involved in rapid object recognition in natural scenes.
In order to obtain the fastest behavioral responses, we used a saccadic choice task in which subjects had to initiate saccades as fast as possible toward the image containing the target among two images displayed simultaneously on the screen. This protocol first revealed differences in processing times between categories, with an advantage for the detection of human faces. Indeed, when human faces were used as the target, the first selective saccades appeared as early as 100 ms after image onset! We were thus interested in the mechanisms allowing such fast detection and showed that a low-level attribute might be used to detect and locate faces in the visual field. In order to understand the nature of the early representation used, we designed two other studies which showed that the fastest saccades were not influenced by contextual information, and were based on relatively coarse information. Finally, I present a simple decision model, based on a latency difference between neuronal populations, which accounts for our experimental results. These results, taken in the perspective of what is known about the neural basis of object recognition, show that the saccadic choice task, by giving access to an early temporal window, will be a valuable tool for future studies on rapid object recognition.
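The decision model mentioned above can be sketched as a simple race: two neuronal populations accumulate evidence toward a saccade threshold, and the face-selective population starts earlier (a shorter neural latency), so it tends to win the race. This is a minimal sketch of that idea; all parameters (onsets, rate, threshold, motor delay) are illustrative assumptions, not the thesis's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)

def saccade_latency(onset_ms, rate, threshold=1.0, noise=0.02,
                    motor_delay_ms=20, dt_ms=1.0, t_max_ms=400):
    """Time (ms) at which noisy linear accumulation, starting at
    `onset_ms`, first crosses `threshold`, plus a fixed motor delay."""
    t, x = onset_ms, 0.0
    while t < t_max_ms:
        x += rate * dt_ms + rng.normal(0.0, noise)
        if x >= threshold:
            return t + motor_delay_ms
        t += dt_ms
    return t_max_ms  # no crossing within the trial window

# Face population starts ~20 ms earlier than the competing population;
# accumulation rates are identical, so latency alone drives the advantage.
face = [saccade_latency(onset_ms=60, rate=0.01) for _ in range(500)]
other = [saccade_latency(onset_ms=80, rate=0.01) for _ in range(500)]
print(np.mean(face) < np.mean(other))  # faces are faster on average
```

With identical rates, the earlier onset alone reproduces a fixed mean latency advantage for faces, matching the qualitative pattern of the saccadic choice results.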