
    Review: Object vision in a structured world

    In natural vision, objects appear at typical locations, both with respect to visual space (e.g., an airplane in the upper part of a scene) and other objects (e.g., a lamp above a table). Recent studies have shown that object vision is strongly adapted to such positional regularities. In this review we synthesize these developments, highlighting that adaptations to positional regularities facilitate object detection and recognition, and sharpen the representations of objects in visual cortex. These effects are pervasive across various types of high-level content. We posit that adaptations to real-world structure collectively support optimal usage of limited cortical processing resources. Taking positional regularities into account will thus be essential for understanding efficient object vision in the real world.

    Coursera IYSI_Assignment 7.1 Open Science


    Contextual and spatial associations between objects interactively modulate visual processing

    Much of what we know about object recognition arises from the study of isolated objects. In the real world, however, we commonly encounter groups of contextually associated objects (e.g., teacup and saucer), often in stereotypical spatial configurations (e.g., teacup above saucer). Here we used electroencephalography to test whether identity-based associations between objects (e.g., teacup–saucer vs. teacup–stapler) are encoded jointly with their typical relative positioning (e.g., teacup above saucer vs. below saucer). Observers viewed a 2.5-Hz image stream of contextually associated object pairs intermixed with nonassociated pairs as every fourth image. The differential response to nonassociated pairs (measurable at 0.625 Hz in 28/37 participants) served as an index of contextual integration, reflecting the association of object identities in each pair. Over right occipitotemporal sites, this signal was larger for typically positioned object streams, indicating that spatial configuration facilitated the extraction of the objects’ contextual association. This high-level influence of spatial configuration on object identity integration arose ∼320 ms post-stimulus onset, with lower-level perceptual grouping (shared with inverted displays) present at ∼130 ms. These results demonstrate that contextual and spatial associations between objects interactively influence object processing. We interpret these findings as reflecting the high-level perceptual grouping of objects that frequently co-occur in highly stereotyped relative positions.
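    The frequency-tagging logic of this paradigm (a 2.5-Hz image stream in which the critical pair is every fourth image, so the differential response is tagged at 2.5 / 4 = 0.625 Hz) can be sketched with simulated data. The sampling rate, recording length, and amplitudes below are illustrative assumptions, not the study's EEG parameters.

    ```python
    import numpy as np

    # Illustrative frequency-tagging sketch: a strong response at the base
    # stimulation rate (2.5 Hz) and a smaller differential response at the
    # oddball rate (2.5 / 4 = 0.625 Hz), buried in noise.
    fs = 250.0                      # sampling rate in Hz (assumed)
    t = np.arange(0, 40, 1 / fs)    # 40 s of simulated recording

    base_f = 2.5                    # base stimulation frequency (Hz)
    odd_f = base_f / 4              # oddball frequency: 0.625 Hz

    rng = np.random.default_rng(0)
    signal = (1.0 * np.sin(2 * np.pi * base_f * t)
              + 0.3 * np.sin(2 * np.pi * odd_f * t)
              + 0.5 * rng.standard_normal(t.size))

    # Amplitude spectrum; 40 s gives 0.025 Hz resolution, so both
    # frequencies fall exactly on FFT bins and appear as narrow peaks.
    amps = np.abs(np.fft.rfft(signal)) * 2 / t.size
    freqs = np.fft.rfftfreq(t.size, 1 / fs)

    for f in (odd_f, base_f):
        idx = np.argmin(np.abs(freqs - f))
        print(f"{f:.3f} Hz amplitude: {amps[idx]:.2f}")
    ```

    Because the oddball period is an exact multiple of the base period, its response separates cleanly from the base-rate response in the spectrum, which is what makes the 0.625 Hz signal an objective index of pair integration.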

    The modulatory effects of attention and spatial location on masked face-processing: insights from the reach-to-touch paradigm

    Thesis by publication. On title page: Department of Cognitive Science, ARC Centre of Excellence in Cognition and its Disorders, Faculty of Human Sciences, Macquarie University, Sydney, Australia. Includes bibliographical references. In masked priming paradigms, targets are responded to faster and more accurately when preceded by subliminal primes from the same category than by primes from a different category. Intriguingly, whereas the congruence priming effects elicited by word and number stimuli depend on the allocation of attention, masked faces produce priming regardless of how well attention is focused. The research presented in this thesis exploits this unique property to examine the temporal dynamics of nonconscious information processing, and the factors which modulate this hidden cognitive process. Using congruence priming effects for masked faces as an index of nonconscious perception, I present four empirical studies that examine how processing below our level of conscious awareness is affected by manipulations of spatial and temporal attention. In Study 1, I show that the allocation of both spatial and temporal attention facilitates nonconscious processing at less than 350 ms of stimulus-processing time. These results suggest that attention modulates nonconscious information processing in a graded fashion that mirrors its influence on the perception of consciously presented stimuli. Study 2 investigates the differential benefit of attention between the vertical hemifields, and documents the breakthrough finding that face processing is supported better in the upper hemifield than the lower hemifield. Study 3 explores whether this upper-hemifield advantage generalises to recognition of a nonface object (human hands). Study 4 investigates and dispels the possibility that the pattern of vertical asymmetry effects for face perception relates to an upward bias in participants' visuospatial attention. The final chapter of this thesis summarises the findings from these four studies and discusses their implications within a broader research context. Mode of access: World Wide Web. 1 online resource (xiv, 285, [25] pages), graphs (some colour).

    Category-selective human brain processes elicited in fast periodic visual stimulation streams are immune to temporal predictability

    Recording direct neural activity when periodically inserting exemplars of a particular category in a rapid visual stream of other objects offers an objective and efficient way to quantify perceptual categorization and characterize its spatiotemporal dynamics. However, since periodicity entails predictability, perceptual categorization processes identified within this framework may be partly generated or modulated by temporal expectations. Here we present a stringent test of the hypothesis that temporal predictability generates or modulates category-selective neural processes as measured in a rapid periodic visual stimulation stream. In Experiment 1, we compare neurophysiological responses to periodic and nonperiodic (i.e., unpredictable) variable face stimuli in a fast (12 Hz) visual stream of nonface objects. In Experiment 2, we assess potential responses to rare (10%) omissions of periodic face events (i.e., violations of periodicity) in the same fast visual stream. Overall, our observations indicate that category (face)-selective processes elicited in a fast periodic stream of visual objects are immune to temporal predictability. These observations do not support a predictive coding interpretation of category-change detection in the human brain and have important implications for understanding automatic human perceptual categorization in a rapidly changing (i.e., dynamic) visual scene.

    Pointing it out: processing faces in the absence of attention

    Masked priming is a phenomenon in which subliminal stimuli modulate responses to subsequent visible targets. In congruence priming paradigms, subjects typically respond faster to congruent targets (i.e., of the same category as the preceding prime) than to incongruent targets. Such effects are generally only observed when the prime stimulus is attended. This is not the case for faces, which produce priming effects both when attended and unattended. But is face processing truly invulnerable to attentional modulation, or simply more robust to it than other stimuli? We hypothesised that congruence priming should be evident earlier when the face is attended, and tested this possibility using a reaching paradigm that indexes priming at a stage in which stimulus processing is still ongoing. Using this sensitive measure, we find converging evidence that the visual system is able to process masked faces in the absence of attention, and speculate on the nature of attentional effects on this processing.

    The upper-hemifield advantage for masked face processing: Not just an attentional bias.

    Recent evidence suggests that face processing may be more robust in the upper visual field (UVF) than in the lower visual field (LVF). We asked whether this UVF advantage is due to an upward bias in participants' visuospatial attention. Participants classified the sex of a UVF or LVF target face that was preceded by a congruent or incongruent masked prime face. We manipulated spatial attention within subjects by varying the predictability of target location across sessions (UVF:LVF ratio of 50:50 on Day 1 and 20:80 on Day 2). When target location was unpredictable, priming emerged earlier in the UVF (~165 ms) than the LVF (~195 ms). This UVF advantage was reversed when targets were more likely to be presented in the LVF: priming then arose earlier for LVF targets (~53 ms) than UVF targets (~165 ms). Critically, however, UVF primes were processed to the same degree regardless of whether spatial attention was diffuse (Day 1) or deployed elsewhere (Day 2). We conclude that, while voluntarily directed spatial attention is sufficient to modulate the processing of masked faces in the LVF, it is not sufficient to explain the UVF advantage for masked face processing.

    Critical information thresholds underlying generic and familiar face categorisation at the same face encounter

    Seeing a face in the real world provokes a host of automatic categorisations related to sex, emotion, identity, and more. Such individual facets of human face recognition have been extensively examined using overt categorisation judgements, yet their relative informational dependencies during the same face encounter are comparatively unknown. Here we used EEG to assess how increasing access to sensory input governs two ecologically relevant brain functions elicited by seeing a face: distinguishing faces from nonfaces, and recognising people we know. Observers viewed a large set of natural images that progressively increased in either image duration (Experiment 1) or spatial frequency content (Experiment 2). We show that in the absence of an explicit categorisation task, the human brain requires less sensory input to categorise a stimulus as a face than it does to recognise whether that face is familiar. Moreover, whereas sensory thresholds for distinguishing faces from nonfaces were remarkably consistent across observers, there was high inter-individual variability in the lower informational bound for familiar face recognition, underscoring the neurofunctional distinction between these categorisation functions. By (i) indexing a form of face recognition that goes beyond simple low-level differences between categories, and (ii) tapping multiple recognition functions elicited by the same face encounters, the information minima we report bear high relevance to real-world face encounters, where the same stimulus is categorised along multiple dimensions at once. Thus, our finding of lower informational requirements for generic vs. familiar face recognition constitutes some of the strongest evidence to date for the intuitive notion that sensory input demands should be lower for recognising face category than face identity.