Constraining bridges between levels of analysis : a computational justification for locally Bayesian learning
Different levels of analysis provide different insights into behavior: computational-level analyses determine the problem an organism must solve and algorithmic-level analyses determine the mechanisms that drive behavior. However, many attempts to model behavior are pitched at a single level of analysis. Research into human and animal learning provides a prime example, with some researchers using computational-level models to understand the sensitivity organisms display to environmental statistics but other researchers using algorithmic-level models to understand organisms’ trial order effects, including effects of primacy and recency. Recently, attempts have been made to bridge these two levels of analysis. Locally Bayesian Learning (LBL) creates a bridge by taking a view inspired by evolutionary psychology: Our minds are composed of modules that are each individually Bayesian but communicate with restricted messages. A different inspiration comes from computer science and statistics: Our brains are implementing the algorithms developed for approximating complex probability distributions. We show that these different inspirations for how to bridge levels of analysis are not necessarily in conflict by developing a computational justification for LBL. We demonstrate that a scheme that maximizes computational fidelity while using a restricted factorized representation produces the trial order effects that motivated the development of LBL. This scheme uses the same modular motivation as LBL, passing messages about the attended cues between modules, but does not use the rapid shifts of attention considered key for the LBL approximation. This work illustrates a new way of tying together psychological and computational constraints.
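The cost of a restricted factorized representation can be illustrated with a toy calculation. A known result is that the factorized distribution closest (in KL divergence from the true joint) to a joint posterior is the product of its marginals, and the fidelity lost equals the mutual information between the variables. The sketch below is not the paper's model; the joint posterior values over two binary "cue relevance" variables are purely illustrative.

```python
import numpy as np

# Hypothetical joint posterior over two binary cue-relevance variables
# (values chosen for illustration only).
joint = np.array([[0.05, 0.25],
                  [0.45, 0.25]])  # rows: w1 in {0,1}, cols: w2 in {0,1}

# Each module keeps only its own marginal.
m1 = joint.sum(axis=1)   # p(w1)
m2 = joint.sum(axis=0)   # p(w2)

# Restricted factorized representation: product of the marginals.
factorized = np.outer(m1, m2)

# KL divergence from the exact joint to the factorized approximation;
# this equals the mutual information between w1 and w2, i.e. the
# fidelity lost by forcing the modules to be independent.
kl = np.sum(joint * np.log(joint / factorized))
```

When the joint posterior has strong dependencies between cues, this lost information is what message passing between modules tries to recover.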
Role of goal-orientated attention and expectations in visual processing and perception
Visual processing is not fixed, but changes dynamically depending on the spatiotemporal context
of the presented stimulus, and the behavioural task being performed. In this thesis, I
describe theoretical and experimental work that was conducted to investigate how and why
visual perception and neural responses are altered by the behavioural and statistical context of
presented stimuli.
The process by which stimulus expectations are acquired and then shape our sensory experiences
is not well understood. To investigate this, I conducted a psychophysics experiment
where participants were asked to estimate the direction of motion of presented stimuli, with
some directions presented more frequently than others. I found that participants quickly developed
expectations for the most frequently presented directions and that this altered their
perception of new stimuli, inducing biases in the perceived motion direction as well as visual
hallucinations in the absence of a stimulus. These biases were well explained by a model
that accounted for their behaviour using a Bayesian strategy, combining a learned prior of the
stimulus statistics with their sensory evidence using Bayes’ rule.
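The prior-likelihood combination described above can be sketched numerically. This is a minimal illustration, not the thesis's fitted model: the grid, the prior peaks at ±32°, and all widths are assumed values chosen only to show how a learned prior biases the estimate.

```python
import numpy as np

# Hypothetical grid of motion directions (degrees).
directions = np.linspace(-90, 90, 181)

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Learned prior: two frequently presented directions at +/-32 deg
# (illustrative, not the experiment's actual statistics).
prior = gaussian(directions, -32, 10) + gaussian(directions, 32, 10)
prior /= prior.sum()

# Noisy sensory likelihood for a stimulus actually moving at 20 deg.
likelihood = gaussian(directions, 20, 15)

# Bayes' rule: posterior proportional to prior times likelihood.
posterior = prior * likelihood
posterior /= posterior.sum()

estimate = directions[np.argmax(posterior)]  # MAP estimate
# The estimate is pulled away from the true 20 deg toward the
# nearby prior peak at 32 deg, i.e. a perceptual bias.
```

With these assumed numbers the MAP estimate lands near 28°, biased toward the prior peak, mirroring the direction-estimation biases reported in the experiment.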
Altering the behavioural context of presented stimuli results in diverse changes to visual
neuron responses, including alterations in receptive field structure and firing rates. While these
changes are often thought to reflect optimization towards the behavioural task, what exactly is being optimized, and why different tasks produce such varied effects, remains unknown. To account
for the effects of a behavioural task on visual neuron responses, I extend previous Bayesian
models of visual processing, hypothesizing that the brain learns an internal model that predicts
how both the sensory input and the reward received for performing different actions are determined
by a common set of explanatory causes. Short-term changes in visual neural responses
would thus reflect optimization of this internal model to deal with changes in the sensory environment
(stimulus statistics) and in behavioural demands (reward statistics). This
framework is used to predict a range of experimentally observed effects of goal-orientated attention
on visual neuron responses.
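The proposed framework, in which sensory input and reward share a common set of explanatory causes, can be sketched with a toy discrete model. This is a hypothetical illustration, not the thesis's implementation: the two latent causes, the stimulus likelihoods, and the reward table are all assumed numbers.

```python
import numpy as np

# Illustrative generative model: a latent cause c produces both the
# sensory input s and the reward for each action a.
p_c = np.array([0.5, 0.5])              # prior over two latent causes
p_s_given_c = np.array([[0.8, 0.2],     # p(s | c); rows: cause, cols: stimulus
                        [0.3, 0.7]])
reward = np.array([[1.0, 0.0],          # expected reward; rows: cause, cols: action
                   [0.0, 1.0]])

s_obs = 0  # observed stimulus

# Infer the latent cause from the stimulus via Bayes' rule.
post_c = p_c * p_s_given_c[:, s_obs]
post_c /= post_c.sum()

# Expected reward for each action under the inferred cause,
# coupling perceptual inference to behavioural demands.
expected_reward = post_c @ reward
best_action = int(np.argmax(expected_reward))
```

In this sketch, changing either the stimulus statistics (`p_s_given_c`) or the reward statistics (`reward`) changes the inference and the chosen action, which is the sense in which the framework treats task effects on neural responses as optimization of one shared internal model.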
Together, these studies provide new insight into how and why sensory processing adapts in
response to changes in the environment. The experimental results support the idea of a very
plastic visual system, in which prior knowledge is rapidly acquired and used to shape perception.
The theoretical work extends previous Bayesian models of sensory processing to understand
how visual neural responses are altered by the behavioural context of presented stimuli.
Finally, these studies provide a unified description of ‘expectations’ and ‘goal-orientated attention’,
as corresponding to continuous adaptation of an internal generative model of the world
to account for newly received contextual information.