Biased Average Position Estimates in Line and Bar Graphs: Underestimation, Overestimation, and Perceptual Pull
In visual depictions of data, position (i.e., the vertical height of a line
or a bar) is believed to be the most precise way to encode information compared
to other encodings (e.g., hue). Not only are other encodings less precise than
position, but they can also be prone to systematic biases (e.g., color category
boundaries can distort perceived differences between hues). Position's high
precision might therefore seem to protect it from such biases. Yet across
three empirical studies, we show that while position may be a
precise form of data encoding, it can also produce systematic biases in how
values are visually encoded, at least for reports of average position across a
short delay. In displays with a single line or a single set of bars, reports of
average positions were significantly biased, such that line positions were
underestimated and bar positions were overestimated. In displays with multiple
data series (i.e., multiple lines and/or sets of bars), this systematic bias
still persisted. We also observed an effect of "perceptual pull", where the
average position estimate for each series was 'pulled' toward the other. These
findings suggest that, although position may still be the most precise form of
visual data encoding, it can also be systematically biased.
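One way to make the reported pattern concrete is a purely descriptive toy model: each remembered average is shifted by a sign that depends on the mark type (lines underestimated, bars overestimated) and pulled toward the other series' mean. The sketch below is illustrative only; the parameters `bias` and `w_pull` are hypothetical, not values estimated in the studies.

```python
# Illustrative (hypothetical) descriptive model of the reported biases:
# remembered average = true average + encoding bias + pull toward other series.
def remembered_average(true_mean, other_mean, mark="line",
                       bias=0.05, w_pull=0.1):
    """Toy model; `bias` and `w_pull` are made-up parameters, not fitted values."""
    # Lines were underestimated, bars overestimated in the studies.
    encoding_bias = -bias if mark == "line" else +bias
    # "Perceptual pull": each estimate shifts toward the other series' mean.
    return true_mean + encoding_bias + w_pull * (other_mean - true_mean)

# Example: a line series at 0.40 shown alongside a bar series at 0.70
print(remembered_average(0.40, 0.70, mark="line"))  # underestimated, pulled up
print(remembered_average(0.70, 0.40, mark="bar"))   # overestimated, pulled down
```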
The human visual system and CNNs can both support robust online translation tolerance following extreme displacements
Visual translation tolerance refers to our capacity to recognize objects over a wide range of different retinal locations. Although translation is perhaps the simplest spatial transform that the visual system needs to cope with, the extent to which the human visual system can identify objects at previously unseen locations is unclear, with some studies reporting near-complete invariance over 10 degrees and others reporting zero invariance at 4 degrees of visual angle. Similarly, there is confusion regarding the extent of translation tolerance in computational models of vision, as well as the degree of match between human and model performance. Here, we report a series of eye-tracking studies (total N = 70) demonstrating that novel objects trained at one retinal location can be recognized with high accuracy following translations of up to 18 degrees. We also show that standard deep convolutional neural networks (DCNNs) support our findings when pretrained to classify another set of stimuli across a range of locations, or when a global average pooling (GAP) layer is added to produce larger receptive fields. Our findings provide a strong constraint for theories of human vision and help explain inconsistent findings previously reported with CNNs.
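The global average pooling manipulation mentioned above is a standard architectural change. A minimal sketch of the idea in PyTorch follows; the framework choice and layer sizes are placeholder assumptions, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Minimal sketch: global average pooling (GAP) collapses each feature
    map to a single value, so the classifier no longer depends on *where*
    in the input a feature occurs -- one route to translation tolerance.
    Layer sizes are placeholders, not the paper's architecture."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.gap = nn.AdaptiveAvgPool2d(1)   # the GAP layer
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        x = self.features(x)
        x = self.gap(x).flatten(1)           # (N, 64, H, W) -> (N, 64)
        return self.classifier(x)

# A translated input yields (approximately) the same pooled features.
model = TinyCNN()
img = torch.randn(1, 3, 64, 64)
shifted = torch.roll(img, shifts=16, dims=-1)  # crude horizontal displacement
print(model(img).shape, model(shifted).shape)
```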
The influence of visual flow and perceptual load on locomotion speed
Visual flow is used to perceive and regulate movement speed during locomotion. We assessed the extent to which variation in flow from the ground plane, arising from static visual textures, influences locomotion speed under conditions of concurrent perceptual load. In two experiments, participants walked over a 12-m projected walkway consisting of stripes oriented orthogonal to the walking direction. In the critical conditions, the frequency of the stripes increased or decreased. We observed small but consistent effects on walking speed, such that participants walked more slowly when the stripe frequency increased than when it decreased. This basic effect suggests that participants interpreted the change in visual flow in these conditions as at least partly due to a change in their own movement speed, and counteracted that perceived change by speeding up or slowing down accordingly. Critically, these effects were magnified under conditions of low perceptual load and a locus of attention near the ground plane. Our findings suggest that the contribution of vision to the control of ongoing locomotion is relatively fluid and dependent on ongoing perceptual (and perhaps more generally cognitive) task demands.
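The logic of the manipulation can be made concrete with a worked example. If perceived edge rate (stripes passing per second) is the product of walking speed and stripe spatial frequency, then an increase in stripe frequency at constant speed mimics an increase in self-motion speed. The numbers below are illustrative assumptions, not the experimental parameters or analysis.

```python
# Edge rate = walking speed (m/s) x stripe spatial frequency (stripes/m).
# All numbers are illustrative assumptions, not the experimental parameters.
def edge_rate(speed_mps, stripes_per_m):
    """Stripes passing underfoot per second."""
    return speed_mps * stripes_per_m

v = 1.4                            # assumed comfortable walking speed, m/s
base = edge_rate(v, 2.0)           # baseline: 2.8 edges/s
faster_flow = edge_rate(v, 3.0)    # stripe frequency rises: 4.2 edges/s
# If the extra flow is misattributed to faster self-motion, restoring the
# baseline edge rate means slowing down -- the direction of the observed effect:
v_compensated = base / 3.0         # ~0.93 m/s
print(base, faster_flow, round(v_compensated, 2))
```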
Control over fixation duration by foveal and peripheral sensory evidence
We identify the relative weighting of foveal and peripheral sensory evidence in the control of fixation duration.
Information foraging for perceptual decisions
We tested an information foraging framework to characterise the mechanisms that drive active (visual) sampling behaviour in decision problems that involve multiple sources of information. Experiments 1-3 involved participants making an absolute judgement about the direction of motion of a single random dot motion pattern. In Experiment 4, participants made a relative comparison between two motion patterns that could only be sampled sequentially. Our results show that: (i) information about the noisy motion stimulus grows to an asymptotic level that depends on the quality of the information source; (ii) this limited growth is due to unequal weighting of the incoming sensory evidence, with early samples being weighted more heavily; (iii) little information is lost once sampling switches to a new source; and (iv) the point at which the observer switches from one source to another is governed by online monitoring of his or her degree of (un)certainty about the currently sampled source. These findings demonstrate that the sampling strategy in perceptual decision-making is under some direct control by ongoing cognitive processing. More specifically, participants are able to track a measure of (un)certainty and use this information to guide their sampling behaviour.
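The asymptotic growth described in (i)-(ii) follows naturally from a decaying weight on incoming samples. A minimal simulation of that idea is sketched below; the decay constant, signal strength, and noise level are illustrative assumptions, not values fitted in the paper.

```python
import random

def accumulate(n_samples, signal=1.0, noise=0.5, decay=0.9, seed=1):
    """Weighted evidence accumulation: sample t receives weight decay**t,
    so early samples count more and accumulated evidence saturates.
    All parameter values are illustrative, not estimates from the paper."""
    rng = random.Random(seed)
    evidence, trace = 0.0, []
    for t in range(n_samples):
        sample = signal + rng.gauss(0.0, noise)
        evidence += (decay ** t) * sample   # early samples weighted more heavily
        trace.append(evidence)
    return trace

trace = accumulate(50)
# Evidence grows quickly at first, then levels off toward an asymptote:
print([round(trace[t], 2) for t in (4, 19, 49)])
```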
Grounding computational cognitive models
Cognitive scientists and neuroscientists are increasingly deploying computational models to develop testable theories of psychological functions and make quantitative predictions about cognition, brain activity and behaviour. Computational models are used to explain target phenomena such as experimental effects and individual and/or population differences. They do so by relating these phenomena to the underlying components of the model that map onto distinct cognitive mechanisms. These components make up a "cognitive state space", where different positions correspond to different cognitive states that produce variation in behaviour. We examine the rationale and practice of such model-based inferences and argue that model-based explanations typically miss a key ingredient: they fail to explain why and how agents occupy specific positions in this space. A critical insight is that an agent's position in the state space is not fixed; the behaviour they produce is the result of a trajectory through it. Therefore, we discuss (i) the constraints that limit movement in the state space; (ii) the reasons for moving around at all (i.e., agents' objectives); and (iii) the information and cognitive mechanisms that guide these movements. We review existing research practices, from experimental design to the model-based analysis of data, and discuss how these practices can (and should) be improved to capture the agent's dynamic trajectory in the state space. In so doing, we stand to gain better and more complete explanations of the variation in cognition and behaviour over time, between different environmental conditions, and between different populations or individuals.
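To make the "cognitive state space" framing concrete: in a sequential-sampling model, an agent's position might be a pair of parameters such as drift rate and decision threshold, and practice or fatigue traces a trajectory through that space. The toy example below illustrates the framing only; the parameter names and update rule are hypothetical, not a model proposed in the paper.

```python
from dataclasses import dataclass

@dataclass
class CognitiveState:
    """A position in a toy two-dimensional cognitive state space."""
    drift: float       # evidence-accumulation rate (e.g., skill/attention)
    threshold: float   # response caution

def practice_step(s: CognitiveState, lr=0.05) -> CognitiveState:
    """Hypothetical constraint-respecting update: practice raises drift and
    relaxes caution, so behaviour reflects a trajectory, not a fixed point."""
    return CognitiveState(drift=s.drift + lr * (2.0 - s.drift),
                          threshold=max(0.5, s.threshold - lr * 0.2))

state = CognitiveState(drift=0.8, threshold=1.5)
trajectory = [state]
for _ in range(10):                 # ten blocks of practice
    trajectory.append(practice_step(trajectory[-1]))
print(trajectory[0], trajectory[-1])
```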