22 research outputs found

    Biased Average Position Estimates in Line and Bar Graphs: Underestimation, Overestimation, and Perceptual Pull

    In visual depictions of data, position (i.e., the vertical height of a line or a bar) is believed to be the most precise way to encode information compared to other encodings (e.g., hue). Not only are other encodings less precise than position, but they can also be prone to systematic biases (e.g., color category boundaries can distort perceived differences between hues). By comparison, position's high level of precision may seem to protect it from such biases. Yet across three empirical studies, we show that while position may be a precise form of data encoding, it can also produce systematic biases in how values are visually encoded, at least for reports of average position across a short delay. In displays with a single line or a single set of bars, reports of average positions were significantly biased, such that line positions were underestimated and bar positions were overestimated. In displays with multiple data series (i.e., multiple lines and/or sets of bars), this systematic bias persisted. We also observed an effect of "perceptual pull", where the average position estimate for each series was 'pulled' toward the other. These findings suggest that, although position may still be the most precise form of visual data encoding, it can also be systematically biased.

    The human visual system and CNNs can both support robust online translation tolerance following extreme displacements

    Visual translation tolerance refers to our capacity to recognize objects over a wide range of different retinal locations. Although translation is perhaps the simplest spatial transform that the visual system needs to cope with, the extent to which the human visual system can identify objects at previously unseen locations is unclear, with some studies reporting near complete invariance over 10 degrees and others reporting zero invariance at 4 degrees of visual angle. Similarly, there is confusion regarding the extent of translation tolerance in computational models of vision, as well as the degree of match between human and model performance. Here, we report a series of eye-tracking studies (total N = 70) demonstrating that novel objects trained at one retinal location can be recognized at high accuracy rates following translations up to 18 degrees. We also show that standard deep convolutional neural networks (DCNNs) support our findings when pretrained to classify another set of stimuli across a range of locations, or when a global average pooling (GAP) layer is added to produce larger receptive fields. Our findings provide a strong constraint for theories of human vision and help explain inconsistent findings previously reported with convolutional neural networks (CNNs).
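    Why a GAP layer confers translation tolerance can be illustrated with a minimal NumPy sketch (the function name `global_average_pool` and the toy feature maps are illustrative, not the authors' model code): averaging each channel over its spatial grid discards position, so a feature pattern produces the same pooled descriptor wherever it lands.

    ```python
    import numpy as np

    def global_average_pool(feature_maps):
        """Collapse each channel's spatial grid to its mean activation.

        feature_maps: array of shape (channels, height, width).
        Returns a vector of shape (channels,).
        """
        return feature_maps.mean(axis=(1, 2))

    # A toy single-channel feature map with one active 2x2 "blob".
    fmap = np.zeros((1, 8, 8))
    fmap[0, 1:3, 1:3] = 1.0

    # The same blob translated to the opposite corner of the grid.
    shifted = np.zeros((1, 8, 8))
    shifted[0, 5:7, 5:7] = 1.0

    # Pooled descriptors are identical despite the displacement:
    # both equal 4 active units / 64 cells = 0.0625.
    print(global_average_pool(fmap), global_average_pool(shifted))
    ```

    In a real DCNN the pooling is applied to learned feature maps rather than raw pixels, so the network remains sensitive to *what* is present while becoming far less sensitive to *where* it appears.
    
    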

    The influence of visual flow and perceptual load on locomotion speed

    Visual flow is used to perceive and regulate movement speed during locomotion. We assessed the extent to which variation in flow from the ground plane, arising from static visual textures, influences locomotion speed under conditions of concurrent perceptual load. In two experiments, participants walked over a 12-m projected walkway that consisted of stripes oriented orthogonal to the walking direction. In the critical conditions, the frequency of the stripes increased or decreased. We observed small but consistent effects on walking speed, such that participants walked more slowly when the frequency increased than when it decreased. This basic effect suggests that participants interpreted the change in visual flow in these conditions as at least partly due to a change in their own movement speed, and counteracted such a change by speeding up or slowing down. Critically, these effects were magnified under conditions of low perceptual load and a locus of attention near the ground plane. Our findings suggest that the contribution of vision to the control of ongoing locomotion is relatively fluid and dependent on ongoing perceptual (and perhaps more generally cognitive) task demands.

    Experiment 3


    Control over fixation duration by foveal and peripheral sensory evidence

    We identify the relative weighting of foveal and peripheral sensory evidence in the control of fixation duration.

    Experiment 3


    Information foraging for perceptual decisions

    We tested an information foraging framework to characterise the mechanisms that drive active (visual) sampling behaviour in decision problems that involve multiple sources of information. Experiments 1-3 involved participants making an absolute judgement about the direction of motion of a single random dot motion pattern. In Experiment 4, participants made a relative comparison between two motion patterns that could only be sampled sequentially. Our results show that: (i) information about the noisy motion signal grows to an asymptotic level that depends on the quality of the information source; (ii) the limited growth is due to unequal weighting of the incoming sensory evidence, with early samples being weighted more heavily; (iii) little information is lost once a new source of information is being sampled; (iv) the point at which the observer switches from one source to another is governed by on-line monitoring of his or her degree of (un)certainty about the sampled source. These findings demonstrate that the sampling strategy in perceptual decision-making is under some direct control by ongoing cognitive processing. More specifically, participants are able to track a measure of (un)certainty and use this information to guide their sampling behaviour.
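    Findings (i) and (ii) — asymptotic information growth driven by heavier weighting of early samples — can be sketched numerically. The `decay` and `quality` parameters below are illustrative assumptions, not values fitted in the study: geometrically decaying weights make accumulated information saturate at `quality / (1 - decay)`.

    ```python
    import numpy as np

    def accumulated_information(n_samples, decay=0.8, quality=1.0):
        """Cumulative evidence when each successive sample is down-weighted.

        Sample t contributes quality * decay**t, so early samples carry
        more weight and the running total approaches the asymptote
        quality / (1 - decay) instead of growing without bound.
        """
        weights = quality * decay ** np.arange(n_samples)
        return weights.cumsum()

    info = accumulated_information(30)
    # Growth is monotonic but slows toward the asymptote
    # quality / (1 - decay) = 1.0 / 0.2 = 5.0.
    ```

    A higher-quality source (larger `quality`) raises the asymptote, which matches the abstract's claim that the asymptotic level depends on source quality.
    
    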

    Experiment 1


    Prioritisation of Attention Allocation in a Static Multiple Target Search Task: Experiment 3

    Overall, a few previous studies have assessed the interaction between reward and prevalence in the allocation of attention (Navalpakkam, Koch, & Perona, 2009; Won & Leber, 2016; Clark & Gilchrist, 2018). The aim of the proposed series of experiments is to test the interaction of these two effects and investigate whether increasing the reward value assigned to a low-prevalence target can increase its detectability. Moreover, in the proposed series of experiments, the effects of prioritisation on object-based, rather than spatial-based, attention will be investigated in a static Multiple Target Search (MTS) task with real-life stimuli. As a first step towards achieving this aim, we needed to replicate the basic target prevalence effect and reward effect in the current modified MTS paradigm with more realistic objects. After that, the interaction of the prevalence and reward effects can be explored, looking at whether the prevalence effect can be controlled, or even reversed, through the manipulation of reward. The first experiment of this series (https://osf.io/8x647) found evidence in favour of the prevalence effect in the current modified MTS task, indicated by the quicker and more accurate detection of high- versus low-prevalence targets. The second experiment of this series (https://osf.io/gnjbx/) found evidence supporting the effect of reward on target detection (Kiss, Driver, & Eimer, 2009; Krebs, Boehler, & Woldorff, 2010), indicated by the quicker and more accurate detection of highly versus lowly rewarded targets. The present experiment aims to combine both prevalence and reward effects in order to explore whether the detection efficiency of low-prevalence targets can be improved through reward. Therefore, as the prevalence of targets decreases, the reward that participants receive upon quick and accurate detection will increase. The question is whether reward can be used to control, and even reverse, the prevalence effect.