Models for discriminating image blur from loss of contrast
Observers can discriminate between blurry and low-contrast images (Morgan, 2017). Wang and Simoncelli (2004) demonstrated that a code for blur is inherent to the phase relationships between localized pattern detectors of different scale. To test whether human observers actually use local phase coherence when discriminating between image blur and loss of contrast, we compared phase-scrambled chessboards with unscrambled chessboards. Although both stimuli had identical amplitude spectra, local phase coherence was disrupted by phase-scrambling. Human observers were required to concurrently detect and identify (as contrast or blur) image manipulations in the 2×2 forced-choice paradigm (Nachmias & Weber, 1975; Watson & Robson, 1981) traditionally considered to be a litmus test for "labelled lines" (i.e. detection mechanisms that can be distinguished on the basis of their preferred stimuli). Phase scrambling reduced some observers' ability to discriminate between blur and a reduction in contrast. However, none of our observers produced data consistent with Watson & Robson's most stringent test for labelled lines, regardless of whether phases were scrambled. Models of performance fit significantly better when a) the blur detector also responded to contrast modulations, b) the contrast detector also responded to blur modulations, or c) noise in the two detectors was anticorrelated.
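The phase-scrambling manipulation described above can be sketched numerically: randomize an image's Fourier phases while leaving its amplitude spectrum untouched. The 8×8 chessboard, random seed, and noise-derived phase source below are illustrative assumptions, not the authors' actual stimuli.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8x8 chessboard stimulus (values 0 or 1).
board = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)

# Separate the amplitude spectrum from the phase spectrum.
spectrum = np.fft.fft2(board)
amplitude = np.abs(spectrum)

# Replace the phases with random ones while keeping the amplitudes.
# Taking the phases from the FFT of real-valued white noise preserves
# the conjugate symmetry needed for a real-valued scrambled image.
random_phase = np.angle(np.fft.fft2(rng.standard_normal(board.shape)))
scrambled = np.real(np.fft.ifft2(amplitude * np.exp(1j * random_phase)))

# The scrambled image has the same amplitude spectrum as the original,
# but its local phase coherence has been destroyed.
print(np.allclose(np.abs(np.fft.fft2(scrambled)), amplitude))
```

Because only the phases differ, any behavioural difference between the two stimuli can be attributed to phase coherence rather than to the power spectrum.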
A visual search asymmetry for novelty in the visual field based on sensory adaptation
The ability to detect sudden changes in the environment is important for survival. However, studies of "change blindness" have shown that image differences are hard to detect when a time delay or a mask is imposed between the different images. Yet when sensory adaptation is permitted by accurate fixation, we find that change detection is not only possible but asymmetrical: a single changed target amongst 15 unchanging distractors is much easier to detect than a target defined by its lack of change. Although adaptation may selectively reduce the apparent contrast of unchanged objects, the asymmetry in "change salience" cannot be attributed to any such reduction, because genuine reductions in target contrast increase, rather than decrease, target detectability. Analogous results preclude attribution to apparent differences between a) target onset and distractor onset and b) their temporal frequencies (both flickered at 7.5 Hz, minimizing afterimages). Our results demonstrate a hitherto underappreciated (or unappreciated) advantage conferred by low-level sensory adaptation: it automatically elevates the salience of previously absent objects.
Gain control of saccadic eye movements is probabilistic
Saccades are rapid eye movements that orient the visual axis toward objects of interest to allow their processing by the central, high-acuity retina. Our ability to collect visual information efficiently relies on saccadic accuracy, which is limited by a combination of uncertainty in the location of the target and motor noise. It has been observed that saccades have a systematic tendency to fall short of their intended targets, and it has been suggested that this bias originates from a cost function that overly penalizes hypermetric errors. Here we tested this hypothesis by systematically manipulating the positional uncertainty of saccadic targets. We found that increasing uncertainty produced not only a larger spread of the saccadic endpoints but also more hypometric errors and a systematic bias toward the average of target locations in a given block, revealing that prior knowledge was integrated into saccadic planning. Moreover, by examining how variability and bias co-varied across conditions, we estimated the asymmetry of the cost function and found that it was related to individual differences in the additional time needed to program secondary saccades for correcting hypermetric errors, relative to hypometric ones. Taken together, these findings reveal that the saccadic system uses a probabilistic-Bayesian control strategy to compensate for uncertainty in a statistically principled way and to minimize the expected cost of saccadic errors.
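The reported bias toward the block average is what a standard Gaussian prior-likelihood combination predicts: as measurement noise grows, the posterior mean shifts from the observed target toward the prior. The sketch below uses invented numbers (eccentricities in degrees, variances) purely to illustrate that qualitative prediction; it is not the authors' fitted model.

```python
def posterior_mean(prior_mean, prior_var, obs, obs_var):
    """Posterior mean of a Gaussian prior combined with a Gaussian
    likelihood: a precision-weighted average of prior and observation."""
    w = prior_var / (prior_var + obs_var)  # weight given to the observation
    return prior_mean + w * (obs - prior_mean)

block_average = 8.0   # deg: average target location in the block (the prior)
target = 12.0         # deg: actual target location on this trial

# Low positional uncertainty: the planned endpoint is near the target.
low_noise = posterior_mean(block_average, prior_var=4.0,
                           obs=target, obs_var=0.5)

# High positional uncertainty: the endpoint is pulled toward the
# block average, reproducing the reported central bias.
high_noise = posterior_mean(block_average, prior_var=4.0,
                            obs=target, obs_var=8.0)

print(low_noise, high_noise)  # high_noise lies closer to the block average
```

The asymmetric cost function described in the abstract would additionally shift both endpoints in the hypometric direction; the sketch covers only the uncertainty-driven bias.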
Organic Geochemistry of a Hydrocarbon-rich Calcarenite from the Chicxulub Scientific Drilling Program
The organic geochemistry of hydrocarbon-rich core material recovered by the CSDP is examined to establish whether hydrocarbons are associated with the migration and emplacement of organic matter by post-impact hydrothermal activity.
Paradoxes in Fair Computer-Aided Decision Making
Computer-aided decision making--where a human decision-maker is aided by a computational classifier in making a decision--is becoming increasingly prevalent. For instance, judges in at least nine states make use of algorithmic tools meant to determine "recidivism risk scores" for criminal defendants in sentencing, parole, or bail decisions. A subject of much recent debate is whether such algorithmic tools are "fair" in the sense that they do not discriminate against certain groups (e.g., races) of people.

Our main result shows that for "non-trivial" computer-aided decision making, either the classifier must be discriminatory, or a rational decision-maker using the output of the classifier is forced to be discriminatory. We further provide a complete characterization of situations where fair computer-aided decision making is possible.
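The flavour of this tension can be seen in a toy calculation (the numbers below are invented for illustration and are not from the paper): a single group-blind classifier, combined rationally via Bayes' rule with groups' differing base rates, yields different decisions for identical classifier outputs.

```python
def posterior(base_rate, tpr, fpr):
    """P(positive outcome | classifier says positive), by Bayes' rule,
    for a group with the given base rate of the outcome."""
    num = tpr * base_rate
    return num / (num + fpr * (1.0 - base_rate))

# One classifier, identical error rates for both groups (group-blind).
TPR, FPR = 0.8, 0.2

p_a = posterior(base_rate=0.5, tpr=TPR, fpr=FPR)  # hypothetical group A
p_b = posterior(base_rate=0.2, tpr=TPR, fpr=FPR)  # hypothetical group B

# A rational decision-maker acting on the posterior with one threshold
# ends up treating the same classifier output differently by group.
THRESHOLD = 0.6
decision_a = p_a >= THRESHOLD
decision_b = p_b >= THRESHOLD
print(p_a, p_b, decision_a, decision_b)
```

Here the classifier itself treats both groups identically, yet the posterior-maximizing decision differs across groups for the same "positive" output, which is the shape of the dilemma the abstract describes.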
Calculation efficiencies for mean numerosity
Relative numerosity is traditionally studied using texture pairs. Observers must decide which member of each pair has the greater total number of texture elements. Our textures were segregated into non-overlapping "sectors" containing between 0 and 4 elements, and our observers were asked to select the texture containing the greater average number of texture elements (i.e. per sector). If observers were more sensitive to total numerosity than average numerosity, their performances (quantified by the just-noticeable Weber fraction) should have been better when the two textures occupied the same number of sectors than when they occupied unequal numbers of sectors. However, we recorded Weber fractions between 11 and 13% for all observers in all conditions. These performances were comparable with those of an otherwise-ideal observer whose decisions were based on between 3 and 5 sectors in each texture. We conjecture that traditional numerosity discriminations are based on similarly small numbers of element clusters.
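The logic of the "otherwise-ideal observer based on a few sectors" can be illustrated with a small Monte Carlo simulation. Everything below (Binomial sector counts, the baseline element probability, the trial counts) is an assumption chosen for illustration, not the authors' stimulus statistics; the point is only that discrimination of the mean improves with the number of sampled sectors.

```python
import random

random.seed(0)

def sector(p):
    """Number of elements (0-4) in one sector: Binomial(4, p) draw."""
    return sum(random.random() < p for _ in range(4))

def percent_correct(delta, k, trials=20000):
    """2AFC: pick the texture with the larger sample mean over k sectors.
    Texture B's per-element probability is raised by delta; ties are
    resolved by guessing."""
    correct = 0
    for _ in range(trials):
        a = sum(sector(0.5) for _ in range(k)) / k
        b = sum(sector(0.5 + delta) for _ in range(k)) / k
        if b > a:
            correct += 1
        elif b == a:
            correct += random.random() < 0.5
    return correct / trials

# An observer sampling few sectors discriminates the mean worse than
# one sampling many, for the same physical difference between textures.
pc_small = percent_correct(0.05, k=3)
pc_large = percent_correct(0.05, k=12)
print(pc_small, pc_large)
```

Inverting this relationship (finding the delta that yields threshold performance for each k) is how a limited-sampling observer's predicted Weber fraction can be compared with the 11-13% measured psychophysically.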
Orientation-defined boundaries are detected with low efficiency
When compared with other summary statistics (mean size, size variance, orientation variance), visual estimates of average orientation are inefficient. Observers act as if they use information from no more than two or three items. We hypothesised that observers would attain greater sampling efficiency when their task was texture segmentation, a task that does not require an explicit representation of mean orientation. We tested this hypothesis using a texture-segmentation task. Two arrays of 32 wavelets each were presented; one left and one right of fixation. Orientations in the target array were sampled from wrapped normal distributions having two different means with the same variance. One distribution defined orientations above the horizontal meridian, the other defined orientations below the meridian. All orientations in the other array were defined by a single wrapped normal distribution having the same variance as each of the distributions in the target array. Contrary to our hypothesis, results indicate that observers effectively ignored all but one item from the top and bottom of each array. In fact, we found no change in the threshold difference between the target's two means when all but one item from the top and bottom of each array were removed. We are forced to conclude that the visual system does not compute the average of more than a few orientations, even for texture segmentation.
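For readers unfamiliar with the stimulus statistics: an ideal averager of such arrays would compute the circular mean of axial (180-degree-periodic) orientations, which becomes more precise as more items are pooled. The sketch below shows that computation; the 45-degree mean, 10-degree spread, and 32-item pool are illustrative values, not the experimental parameters.

```python
import cmath
import math
import random

random.seed(0)

def sample_orientation(mu_deg, sigma_deg):
    """One wavelet orientation (degrees, 180-degree period) drawn from
    a wrapped normal: sample a normal, then wrap into [0, 180)."""
    return random.gauss(mu_deg, sigma_deg) % 180.0

def mean_orientation(samples_deg):
    """Circular mean of axial data: double each angle, average the
    corresponding unit vectors, then halve the resultant's angle."""
    z = sum(cmath.exp(1j * math.radians(2.0 * s)) for s in samples_deg)
    return (math.degrees(cmath.phase(z)) / 2.0) % 180.0

samples = [sample_orientation(45.0, 10.0) for _ in range(32)]
print(mean_orientation(samples))  # pooling 32 items lands near 45 deg
```

An observer who truly pooled all 32 orientations would estimate the mean with a standard error near sigma / sqrt(32); the abstract's finding is that human thresholds instead match pooling of only one or two items per texture region.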
Pre-cues’ elevation of sensitivity is not only pre-attentive, but largely monocular
Visual sensitivity can be heightened in the vicinity of an appropriate pre-cue. Experiments with multiple, non-informative pre-cues suggest that this facilitation should not be attributed to focal attention. The number of simultaneously appearing pre-cues seems to be irrelevant; contrast thresholds are lowest for targets that appear in a pre-cued position. Here, we report that pre-cues become less effective when they and the target are delivered to different eyes. We conclude that the mechanism responsible for heightened sensitivity has largely monocular input.