High reward enhances perceptual learning
Studies of perceptual learning have revealed a great deal of plasticity in adult humans. In this study, we systematically investigated the effects and mechanisms of several forms (trial-by-trial, block, and session rewards) and levels (no, low, high, and subliminal) of monetary reward on the rate, magnitude, and generalizability of perceptual learning. We found that high monetary reward can greatly promote the rate and boost the magnitude of learning and enhance performance at untrained spatial frequencies and in the untrained eye, without changing the interocular, interlocation, and interdirection transfer indices. High reward per se made unique contributions to the enhanced learning through improved internal noise reduction. Furthermore, the effects of high reward on perceptual learning held across a range of perceptual tasks. The results may have major implications for understanding the nature of the learning rule in perceptual learning and for the use of reward to enhance perceptual learning in practical applications.
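As a rough illustration of how the outcome measures above could be quantified, here is a minimal sketch assuming an exponential learning-curve model; the threshold values, parameter names, and the transfer-index formula are hypothetical stand-ins, not the paper's actual data or analysis:

```python
# Sketch: fit an exponential learning curve to (made-up) contrast thresholds
# and compute learning rate, magnitude, and a transfer index.
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(t, a, b, tau):
    """Threshold decays from a+b toward the asymptote a with time constant tau."""
    return a + b * np.exp(-t / tau)

sessions = np.arange(1, 11)
# Hypothetical thresholds for high-reward vs. no-reward training groups.
thr_high = np.array([0.40, 0.31, 0.25, 0.21, 0.18, 0.16, 0.15, 0.14, 0.14, 0.13])
thr_none = np.array([0.40, 0.37, 0.34, 0.32, 0.30, 0.29, 0.28, 0.27, 0.27, 0.26])

for label, thr in [("high reward", thr_high), ("no reward", thr_none)]:
    (a, b, tau), _ = curve_fit(learning_curve, sessions, thr, p0=(0.1, 0.3, 3.0))
    magnitude = (thr[0] - a) / thr[0]          # fractional threshold reduction
    print(f"{label}: rate ~ 1/tau = {1 / tau:.2f}, magnitude = {magnitude:.2f}")

# Transfer index: learning at an untrained condition (e.g. an untrained spatial
# frequency) relative to learning at the trained condition. Values are invented.
pre_u, post_u = 0.42, 0.30     # untrained condition, before/after training
pre_t, post_t = 0.40, 0.14     # trained condition, before/after training
transfer_index = ((pre_u - post_u) / pre_u) / ((pre_t - post_t) / pre_t)
print(f"transfer index = {transfer_index:.2f}")
```

A steeper fitted rate (larger 1/tau) and larger magnitude in the high-reward condition would correspond to the faster, greater learning the abstract reports; a transfer index near 1 would indicate full generalization.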
Decoding neural responses to temporal cues for sound localization
The activity of sensory neural populations carries information about the environment, which may be extracted from neural activity using different strategies. In the auditory brainstem, a recent theory proposes that sound location in the horizontal plane is decoded from the relative summed activity of two populations, one in each hemisphere, whereas earlier theories hypothesized that location was decoded from the identity of the most active cells. We tested the performance of various decoders of neural responses in increasingly complex acoustical situations, including spectrum variations, noise, and sound diffraction. We demonstrate that there is insufficient information in the pooled activity of each hemisphere to estimate sound direction reliably enough to be consistent with behavior, whereas robust estimates can be obtained from neural activity by taking into account the heterogeneous tuning of cells. These estimates can still be obtained when only contralateral neural responses are used, consistent with unilateral lesion studies. DOI: http://dx.doi.org/10.7554/eLife.01312.001
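The contrast between the two decoding strategies can be sketched numerically. The following toy simulation assumes Gaussian tuning curves and Poisson spiking (the neuron counts, gains, and widths are invented, not the paper's fitted model); it pits a hemispheric summed-activity decoder against a maximum-likelihood decoder that exploits heterogeneous tuning:

```python
# Sketch: two-channel (summed-activity) vs. pattern (ML) decoding of azimuth.
import numpy as np

rng = np.random.default_rng(0)
n_cells = 100
prefs = rng.uniform(-90, 90, n_cells)    # heterogeneous preferred azimuths (deg)
widths = rng.uniform(20, 60, n_cells)    # heterogeneous tuning widths
gains = rng.uniform(10, 30, n_cells)     # peak spike counts per decoding window

def rates(theta):
    """Mean population response to a source at azimuth theta (degrees)."""
    return gains * np.exp(-0.5 * ((theta - prefs) / widths) ** 2) + 1.0  # +1 baseline

grid = np.linspace(-90, 90, 361)
rate_table = np.stack([rates(t) for t in grid])       # (n_grid, n_cells)

# Calibration curve for the two-channel decoder: right minus left summed activity.
right = prefs > 0
diff_curve = rate_table[:, right].sum(1) - rate_table[:, ~right].sum(1)

def decode_summed(spikes):
    d = spikes[right].sum() - spikes[~right].sum()
    return grid[np.argmin(np.abs(diff_curve - d))]    # invert the calibration curve

def decode_ml(spikes):
    # Poisson log-likelihood (up to constants) over candidate directions.
    loglik = spikes @ np.log(rate_table).T - rate_table.sum(1)
    return grid[np.argmax(loglik)]

true_theta = 30.0
err_sum, err_ml = [], []
for _ in range(200):
    spikes = rng.poisson(rates(true_theta))           # one decoding window
    err_sum.append(decode_summed(spikes) - true_theta)
    err_ml.append(decode_ml(spikes) - true_theta)
print("summed-activity RMSE:", np.sqrt(np.mean(np.square(err_sum))))
print("pattern (ML) RMSE:   ", np.sqrt(np.mean(np.square(err_ml))))
```

Note that this sketch fixes the sound level and spectrum; the paper's key finding is that the summed-activity decoder degrades badly once those vary, while decoders that use the cells' heterogeneous tuning remain robust.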
A computational theory of motor learning
In this paper we present a computational theory of human motor performance and learning. The theory is implemented as a running AI system called MAGGIE. Given a description of a desired movement as input, the system generates simulated motor behavior as output. The theory states that skills are encoded as motor schemas, which specify the positions and velocities of a limb at selected points in time. Moreover, there exist two natural representations for such knowledge: viewer-centered schemas describe visually perceived behavior, and joint-centered schemas are used to generate behavior. When the model acts upon these two representational formats, they exhibit quite different behavioral characteristics. MAGGIE performs the desired movement within a feedback-control paradigm, monitoring for errors and correcting them as they are detected. Learning involves improving the joint-centered schema over many practice trials, which reduces the need for monitoring. The model accounts for a number of well-documented motor phenomena, including the speed-accuracy trade-off and the gradual improvement in performance with practice. It also makes several testable predictions. We close with a discussion of the theory's strengths and weaknesses, along with directions for future research.
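The core loop the abstract describes, open-loop execution from a joint-centered schema, online error monitoring and correction, and trial-to-trial schema improvement, can be caricatured in a few lines. This is a toy sketch, not MAGGIE; the dynamics, gains, and learning rule are all hypothetical:

```python
# Sketch: feedback-controlled movement, with the schema improved across trials
# so that less online correction (monitoring) is needed.
target = 10.0          # desired end position (the viewer-centered goal)
steps = 20             # time points in the movement
schema_gain = 0.6      # joint-centered schema parameter; the correct value is 1.0
feedback_gain = 0.2    # strength of online error correction
lr = 0.5               # schema learning rate across trials

for trial in range(8):
    pos, total_correction = 0.0, 0.0
    for step in range(steps):
        pos += (target / steps) * schema_gain          # open-loop schema command
        desired = target * (step + 1) / steps          # where the limb should be
        correction = feedback_gain * (desired - pos)   # detect and repair error
        pos += correction
        total_correction += correction
    # Learning: fold the online corrections back into the schema.
    schema_gain += lr * total_correction / target
    print(f"trial {trial}: end error {target - pos:+.2f}, "
          f"online correction {total_correction:.2f}")
```

Across trials the printed online correction shrinks as the schema parameter approaches its correct value, mirroring the claim that practice reduces the need for monitoring.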
Optimizing the energy consumption of spiking neural networks for neuromorphic applications
In the last few years, spiking neural networks (SNNs) have been demonstrated to perform on par with regular convolutional neural networks (CNNs). Several works have proposed methods to convert a pre-trained CNN to a spiking CNN without a significant sacrifice of performance. We first demonstrate that quantization-aware training of CNNs leads to better accuracy in SNNs. One of the benefits of converting CNNs to spiking CNNs is to leverage the sparse computation of SNNs and consequently perform equivalent computation at lower energy consumption. Here we propose an efficient optimization strategy to train spiking networks at lower energy consumption while maintaining similar accuracy levels. We demonstrate results on the MNIST-DVS and CIFAR-10 datasets …
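One generic way to realize such an energy objective, though not necessarily the authors' strategy, is to add an activity penalty to the task loss so that training trades accuracy against spike counts. In this PyTorch sketch, the mean ReLU activation of a toy network stands in for the spike count of the converted SNN; `TinyNet`, `lam`, and the random batch are hypothetical:

```python
# Sketch: joint objective = task loss + weighted activity (energy) penalty.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        h = F.relu(self.fc1(x))
        return self.fc2(h), h        # expose hidden activity for the penalty

model = TinyNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
lam = 1e-3                           # weight of the energy term (hypothetical)

x = torch.rand(32, 784)              # stand-in batch; real data would go here
y = torch.randint(0, 10, (32,))

logits, h = model(x)
task_loss = F.cross_entropy(logits, y)
energy_loss = h.abs().mean()         # proxy for mean spike count per neuron
loss = task_loss + lam * energy_loss
opt.zero_grad()
loss.backward()
opt.step()
```

Raising `lam` pushes the network toward sparser activity, and hence fewer spikes after conversion, at some cost in accuracy.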