4 research outputs found

    Book reports


    Change-based population coding

    One standard interpretation of networks of cortical neurons is that they form dynamical attractors. Computations such as stimulus estimation are performed by mapping inputs to points on the networks’ attractive manifolds; these points represent population codes for the stimulus values. However, this standard interpretation is hard to reconcile with the observation that the firing rates of such neurons change continually after a stimulus is presented. Furthermore, these population codes are robust to neither dynamical noise nor synaptic noise, and learning the corresponding weight matrices has never been demonstrated, which seriously limits the extent of their application. In this thesis, we address this problem in the context of an invariant discrimination task. We suggest an alternative view, in which computations performed over the course of the transient evolution of a recurrently connected network are read out by monitoring the change in a readily computed statistic of the network’s activity. Such changes can be inherently invariant to irrelevant dimensions of variability in the input, a critical capacity for many tasks. We illustrate these ideas using a well-studied visual hyperacuity task, in which the computation is required to be invariant to the overall retinal location of the input. We show that a class of networks based on a wide variety of recurrent interactions performs nearly as well as an ideal observer for the task and is robust to significant levels of noise. We also show that this way of performing computations is fast, accurate, readily learnable, and robust to various forms of noise.
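    The readout idea in this abstract can be illustrated with a minimal sketch: evolve a small recurrent network through its transient and monitor the change in a simple population statistic (here, mean activity) rather than waiting for an attractor state. The weight matrix, dynamics, and decision rule below are hypothetical toy choices, not the thesis's actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100  # number of units in the toy network

# Hypothetical random recurrent weights (illustrative only)
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))

def simulate(stim, steps=50, dt=0.1):
    """Evolve rates r under simple recurrent dynamics and record a
    readily computed population statistic (mean rate) at each step."""
    r = np.zeros(N)
    stats = []
    for _ in range(steps):
        r = r + dt * (-r + np.tanh(W @ r + stim))
        stats.append(r.mean())
    return np.array(stats)

# Two stimuli differing along a task-relevant dimension (toy example)
stim_a = rng.normal(0, 1, N)
stim_b = stim_a + 0.5 * rng.normal(0, 1, N)

# Change-based readout: the decision variable is the *change* in the
# statistic over the transient, not the network's endpoint
delta_a = simulate(stim_a)[-1] - simulate(stim_a)[0]
delta_b = simulate(stim_b)[-1] - simulate(stim_b)[0]
decision = bool(delta_a > delta_b)  # toy discrimination rule
```

    The point of the sketch is that the readout touches only a scalar summary of the population, so it can be monitored cheaply and continuously during the transient.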

    Position Variance, Recurrence and Perceptual Learning

    Stimulus arrays are inevitably presented at different positions on the retina in visual tasks, even those that nominally require fixation. In particular, this applies to many perceptual learning tasks. We show that perceptual inference or discrimination in the face of positional variance has a structurally different quality from inference about fixed-position stimuli, involving a particular, quadratic, non-linearity rather than a purely linear discrimination. We show the advantage that taking this non-linearity into account confers on discrimination, and suggest it as a role for recurrent connections in area V1 by demonstrating the superior discrimination performance of a recurrent network. We propose that learning the feedforward and recurrent neural connections for these tasks corresponds to the fast and slow components of learning observed in perceptual learning tasks.
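    Why a quadratic (rather than linear) computation helps under positional variance can be seen with a standard example: the power spectrum of a stimulus is a quadratic function of the input and is unchanged by circular shifts of its position. This is a textbook illustration of the general principle, not the paper's specific model.

```python
import numpy as np

def quadratic_feature(x):
    """Translation-invariant quadratic statistic: the power spectrum
    |FFT(x)|^2 depends quadratically on x and is identical for all
    circular shifts of the stimulus position."""
    return np.abs(np.fft.rfft(x)) ** 2

x = np.zeros(64)
x[10] = 1.0
x[13] = 1.0  # two bars separated by 3 samples

shifted = np.roll(x, 17)  # same stimulus at a different retinal position

# A purely linear readout w @ x changes with position; the quadratic
# statistic does not:
assert np.allclose(quadratic_feature(x), quadratic_feature(shifted))
```

    A linear discriminant applied to such quadratic features can therefore separate stimulus configurations (e.g. bar separations) regardless of where the array lands on the retina.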