Motion adaptation and attention: A critical review and meta-analysis
The motion aftereffect (MAE) provides a behavioural probe into the mechanisms underlying motion perception, and has been used to study the effects of attention on motion processing. Visual attention can enhance detection and discrimination of selected visual signals. However, the relationship between attention and motion processing remains contentious: not all studies find that attention increases MAEs. Our meta-analysis reveals several factors that explain superficially discrepant findings. Across studies (37 independent samples, 76 effects), motion adaptation was significantly and substantially enhanced by attention (Cohen's d = 1.12, p < .0001). The effect more than doubled when adapting to translating (vs. expanding or rotating) motion. Other factors affecting the attention-MAE relationship included stimulus size, eccentricity and speed. By considering these behavioural analyses alongside neurophysiological work, we conclude that feature-based (rather than spatial or object-based) attention is the biggest driver of sensory adaptation. Comparisons between naïve and non-naïve observers, different response paradigms, and assessment of 'file-drawer effects' indicate that neither response bias nor publication bias is likely to have significantly inflated the estimated effect of attention.
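For illustration, the standardized effect size reported above (Cohen's d) is the difference between two group means scaled by the pooled standard deviation. A minimal sketch of the calculation (the function name and data are mine, not the meta-analysis's):

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference between two samples, pooled-SD form."""
    na, nb = len(group_a), len(group_b)
    ma, mb = statistics.mean(group_a), statistics.mean(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    # Pool the two sample variances, weighting by degrees of freedom.
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (ma - mb) / pooled_sd
```

A meta-analytic estimate like the d = 1.12 above would then aggregate such per-study values, weighting each by its precision.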
A Model of Local Adaptation
The visual system constantly adapts to different luminance levels when viewing natural scenes. The state of visual adaptation is the key parameter in many visual models. While the time-course of such adaptation is well understood, little is known about the spatial pooling that drives the adaptation signal. In this work we propose a new empirical model of local adaptation that predicts how the adaptation signal is integrated in the retina. The model is based on psychophysical measurements on a high dynamic range (HDR) display. We employ a novel approach to model discovery, in which the experimental stimuli are optimized to find the most predictive model. The model can be used to predict the steady state of adaptation, as well as conservative estimates of the visibility (detection) thresholds in complex images. We demonstrate the utility of the model in several applications, such as perceptual error bounds for physically based rendering, determining the backlight resolution for HDR displays, measuring the maximum visible dynamic range in natural scenes, simulation of afterimages, and gaze-dependent tone mapping.
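As a toy illustration of what "spatial pooling of the adaptation signal" means: a local adaptation value can be formed as a weighted average of log-luminance around a point. The Gaussian weighting below is purely an assumption for illustration; the paper's actual pooling kernel is discovered from psychophysical data.

```python
import math

def pooled_log_luminance(log_lum, sigma=2.0):
    """Gaussian-weighted pooling of a 1-D log-luminance profile around its
    centre sample -- an illustrative adaptation signal, not the fitted model."""
    centre = (len(log_lum) - 1) / 2
    weights = [math.exp(-((i - centre) ** 2) / (2 * sigma ** 2))
               for i in range(len(log_lum))]
    total = sum(weights)
    return sum(l * w for l, w in zip(log_lum, weights)) / total
```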
Use of Local Image Information in Depth Edge Classification by Humans and Neural Networks
Humans can use local cues to distinguish image edges caused by a depth change from other types of edges (Vilankar et al., 2014). But which local cues? Here we use the SYNS database (Adams et al., 2016) to automatically label edges in images of natural scenes as depth or non-depth. We use this ground truth to identify the cues used by human observers and convolutional neural networks (CNNs) for edge classification. Eight observers viewed square image patches, each centered on an image edge, ranging in width from 0.6 to 2.4 degrees (8 to 32 pixels). Human judgments (depth/non-depth) were compared to responses of a CNN trained on the same task. Human performance improved with patch size (65%-74% correct) but remained well below CNN accuracy (82%-86% correct). Agreement between humans and the CNN was above chance but lower than human-human agreement. Decision Variable Correlation (Sebastian & Geisler, in press) was used to evaluate the relationships between depth responses and local edge cues. Humans seem to rely primarily on contrast cues, specifically luminance contrast and red-green contrast across the edge. The CNN also relies on luminance contrast, but unlike humans it seems to make use of mean luminance and red-green intensity as well. These local luminance and color features provide valid cues for depth edge discrimination in natural scenes.
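Quantifying "above-chance agreement" between two binary classifiers (e.g. a human and a CNN, both labelling patches depth/non-depth) requires correcting raw agreement for chance. The study's own measure is Decision Variable Correlation; as a simpler, standard stand-in, this sketch computes Cohen's kappa from two response vectors (0 = non-depth, 1 = depth):

```python
def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary raters."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    pa1 = sum(a) / n                             # rater A's 'depth' rate
    pb1 = sum(b) / n                             # rater B's 'depth' rate
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)       # agreement expected by chance
    return (po - pe) / (1 - pe)
```

Kappa is 0 when agreement is exactly at chance and 1 for perfect agreement; unlike DVC it operates on discrete responses rather than continuous decision variables.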
The Southampton-York Natural Scenes (SYNS) dataset: statistics of surface attitude
Recovering 3D scenes from 2D images is an under-constrained task; optimal estimation depends upon knowledge of the underlying scene statistics. Here we introduce the Southampton-York Natural Scenes dataset (SYNS: https://syns.soton.ac.uk), which provides comprehensive scene statistics useful for understanding biological vision and for improving machine vision systems. In order to capture the diversity of environments that humans encounter, scenes were surveyed at random locations within 25 indoor and outdoor categories. Each survey includes (i) spherical LiDAR range data, (ii) high-dynamic range spherical imagery, and (iii) a panorama of stereo image pairs. We envisage many uses for the dataset and present one example: an analysis of surface attitude statistics, conditioned on scene category and viewing elevation. Surface normals were estimated using a novel adaptive scale selection algorithm. Across categories, surface attitude below the horizon is dominated by the ground plane (0° tilt). Near the horizon, probability density is elevated at 90°/270° tilt due to vertical surfaces (trees, walls). Above the horizon, probability density is elevated near 0° slant due to overhead structure such as ceilings and leaf canopies. These structural regularities represent potentially useful prior assumptions for human and machine observers, and may predict human biases in perceived surface attitude.
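Surface attitude is conventionally described by slant (angle between the surface normal and the line of sight) and tilt (direction of the normal's projection in the image plane). A minimal sketch of the conversion from a unit normal; the viewer-centred axis convention here (z toward the viewer, x rightward, y upward) is my assumption, not necessarily the dataset's:

```python
import math

def slant_tilt(normal):
    """Slant and tilt (degrees) of a unit surface normal.

    Assumed frame: z points toward the viewer, x rightward, y upward.
    Slant is the angle from the line of sight; tilt is the direction of
    the normal's image-plane projection, measured from the x-axis.
    """
    nx, ny, nz = normal
    slant = math.degrees(math.acos(nz))
    tilt = math.degrees(math.atan2(ny, nx)) % 360.0
    return slant, tilt
```

Under this convention a ground plane seen below the horizon has a normal near (0, 1, 0), i.e. slant 90° and tilt 90°, consistent with the ground-plane regularity described above.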
Investigating Emotional Body Posture Recognition in Adolescents with Conduct Disorder Using Eye-Tracking Methods.
Funder: Kids Company. Adolescents with Conduct Disorder (CD) show deficits in recognizing facial expressions of emotion, but it is not known whether these difficulties extend to other social cues, such as emotional body postures. Moreover, in the absence of eye-tracking data, it is not known whether such deficits, if present, are due to a failure to attend to emotionally informative regions of the body. Male and female adolescents with CD and varying levels of callous-unemotional (CU) traits (n = 45) and age- and sex-matched typically-developing controls (n = 51) categorized static and dynamic emotional body postures. The emotion categorization task was paired with eye-tracking methods to investigate relationships between fixation behavior and recognition performance. Having CD was associated with impaired recognition of static and dynamic body postures and atypical fixation behavior. Furthermore, males were less likely to fixate emotionally-informative regions of the body than females. While we found no effects of CU traits on body posture recognition, the effects of CU traits on fixation behavior varied according to CD status and sex, with CD males with lower levels of CU traits showing the most atypical fixation behavior. Critically, atypical fixation behavior did not explain the body posture recognition deficits observed in CD. Our findings suggest that CD-related impairments in recognition of body postures of emotion are not due to attentional issues. Training programmes designed to ameliorate the emotion recognition difficulties associated with CD may need to incorporate a body posture component.
The Monocular Depth Estimation Challenge
This paper summarizes the results of the first Monocular Depth Estimation
Challenge (MDEC) organized at WACV2023. This challenge evaluated the progress
of self-supervised monocular depth estimation on the challenging SYNS-Patches
dataset. The challenge was organized on CodaLab and received submissions from 4
valid teams. Participants were provided a devkit containing updated reference
implementations for 16 State-of-the-Art algorithms and 4 novel techniques. The
threshold for acceptance for novel techniques was to outperform every one of
the 16 SotA baselines. All participants outperformed the baseline in
traditional metrics such as MAE or AbsRel. However, pointcloud reconstruction
metrics were challenging to improve upon. We found predictions were
characterized by interpolation artefacts at object boundaries and errors in
relative object positioning. We hope this challenge is a valuable contribution
to the community and encourage authors to participate in future editions. Comment: WACV-Workshops 2023.
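The "traditional metrics" named above, MAE (mean absolute error) and AbsRel (absolute relative error), are standard depth-estimation benchmarks. A minimal sketch over paired predicted and ground-truth depth values (this helper is mine, not the challenge devkit):

```python
def depth_metrics(pred, gt):
    """MAE and AbsRel over flattened predicted/ground-truth depths.

    Assumes both lists are aligned, valid (gt > 0), in the same units.
    """
    n = len(pred)
    mae = sum(abs(p - g) for p, g in zip(pred, gt)) / n
    absrel = sum(abs(p - g) / g for p, g in zip(pred, gt)) / n
    return mae, absrel
```

Point-cloud reconstruction metrics, by contrast, compare back-projected 3D points, which is why they expose the boundary-interpolation and relative-positioning errors that pixel-wise metrics miss.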
Yearbook of the Archive of the German Youth Movement (Jahrbuch des Archivs der deutschen Jugendbewegung). Fifth Volume, 1973
Rolf Gardiner; Alt-Wandervogel; From Youth League to Life League (Vom Jugendbund zum Lebensbund); Youth Movement and Labour Service; Wyneken and Spitteler; Karl Brügmann, Archive of the Youth Music Movement (Archiv der Jugendmusikbewegung); Archive of the Bavarian Scout Association (Archiv des Bayrischen Pfadfinderbundes)
The efficiency of visual transparency
This thesis examines the phenomenon of visual transparency in a novel application of the efficiency approach. Transparency provides a useful stimulus to probe the visual mechanisms that underlie the visual surface representation, introduced in Chapter One. Previous research has found that there is a cost in processing visual transparency defined purely by motion or stereo cues. This has been interpreted in terms of visual mechanisms constraining the recovery of transparency. However, the cost for transparency may instead reflect the increased complexity of the stimuli. To address this issue I computed the efficiency for motion- and stereo-defined transparency tasks by comparing human performance with that of the ideal observer. The efficiency approach has two key advantages over traditional psychophysical measures: 1) it provides a performance measure normalised relative to the available information, and 2) it is an absolute measure and can be compared directly across diverse tasks. I provide a review of the efficiency approach in Chapter Two. In Chapter Three, I present a study of the efficiency for speed discrimination of transparent random dot stimuli and comparable non-transparent random dot stimuli, as a function of the speed ratio and the dot density of the stimuli. In Chapter Four, I present a study of the efficiency for depth discrimination of transparent and non-transparent random dot stereograms, across a range of disparity ratios and dot densities. In Chapter Five, I present an extension of the efficiency approach to the motor domain, for the smooth pursuit of high-density transparent and non-transparent random-dot stimuli. Finally, in Chapter Six I provide physiologically plausible accounts of the findings.
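Efficiency in this ideal-observer framework is conventionally the squared ratio of human to ideal sensitivity, where sensitivity (d') can be derived from hit and false-alarm rates. A minimal sketch of both quantities (function names are my own):

```python
from statistics import NormalDist

def dprime(hit_rate, fa_rate):
    """Sensitivity index d': z(hit rate) minus z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def efficiency(dprime_human, dprime_ideal):
    """Statistical efficiency: squared ratio of human to ideal sensitivity,
    i.e. the fraction of the available information the human uses."""
    return (dprime_human / dprime_ideal) ** 2
```

Because efficiency is normalised by the ideal observer's performance on the same stimuli, a lower efficiency for transparent than non-transparent stimuli would indicate a genuine processing cost rather than mere stimulus complexity.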