Predictions help guide learning. As we encounter objects in our environment, we make predictions about their value. When outcomes match our predictions, learning is not required. When outcomes are unexpected, however, we update our predictions to reflect our experience. Dopamine neurons are thought to facilitate this process by encoding reward prediction error, or the difference between actual and predicted reward. Despite decades of work on prediction errors and their role in learning, little is known about how they are calculated in the brain. To determine how dopamine neurons calculate prediction error, I combined optogenetic manipulations with extracellular recordings while mice engaged in classical conditioning.
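The verbal definition above corresponds to the conventional reinforcement-learning formalization of reward prediction error (this notation is standard Rescorla-Wagner / temporal-difference convention, added here for clarity, not taken from the text):

```latex
% Reward prediction error: the difference between actual and predicted reward.
\[
  \delta = r - \hat{V}, \qquad \hat{V} \leftarrow \hat{V} + \alpha\,\delta
\]
% r: actual reward; \hat{V}: predicted reward; \alpha: learning rate.
% When the outcome matches the prediction (\delta = 0), no update occurs;
% unexpected outcomes (\delta \neq 0) drive learning.
```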
In Chapter 1, I demonstrate that dopamine neurons perform subtraction, a computation that is ideal for reward learning but rarely observed in the brain. Furthermore, by carefully examining how individual dopamine neurons respond to various sizes of reward and levels of expectation, I reveal striking homogeneity from neuron to neuron. All dopamine neurons appear to follow the same function, just scaled up or down. This universal template ensures robust information coding, allowing each dopamine neuron to accurately calculate reward prediction error and broadcast this information to other brain areas vital for learning.
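The subtractive computation and per-neuron scaling described above can be sketched as follows; the function, variable names, and numerical values are illustrative assumptions, not measured quantities from the experiments:

```python
# Illustrative sketch (hypothetical values): a subtractive reward
# prediction error with a per-neuron gain, capturing the idea that
# every dopamine neuron follows one template function, scaled up or down.

def prediction_error(actual_reward, expected_reward, gain=1.0):
    """Subtractive prediction error: gain * (actual - expected)."""
    return gain * (actual_reward - expected_reward)

rewards = [0.0, 1.0, 2.0, 4.0]

# Expectation shifts the response curve down by a constant amount
# (subtraction) rather than compressing it (division).
unexpected = [prediction_error(r, expected_reward=0.0) for r in rewards]
expected = [prediction_error(r, expected_reward=2.0) for r in rewards]

# Two hypothetical neurons with different gains produce responses that
# are scalar multiples of the same underlying function.
neuron_a = [prediction_error(r, 2.0, gain=0.5) for r in rewards]
neuron_b = [prediction_error(r, 2.0, gain=2.0) for r in rewards]
```

Because each simulated neuron's response is a scaled copy of the same subtraction, any one of them carries the full prediction-error signal, which is the sense in which a universal template supports robust coding.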
In Chapter 2, I attempt to uncover the inputs that dopamine neurons use to calculate prediction errors. In particular, I test the hypothesis that a group of inhibitory neurons surrounding dopamine neurons in the midbrain provides information about expected reward. By selectively exciting and inhibiting these nearby neurons, I discover that they indeed play a causal role in computing prediction errors, inhibiting dopamine neurons when reward is expected. Together, my results help uncover the arithmetic and local circuitry underlying dopamine prediction errors.