18 research outputs found
Purkinje Cell Conditioned Pause Parameters
Trial-by-trial analysis of the conditioned pause in the spontaneous firing of the Purkinje cell; data from the Hesslow lab.
Time Scale Invariance in Reinforcement Learning
Collaboration with Timothy Shahan on Time Scale Invariance and Delayed Reinforcement in Reinforcement Learning.
Informativeness, contingency and time scale invariance in associative learning
Experiments and theory on inhibitory conditioning in associative learning using the truly random control.
Information Theory, Memory, Prediction, and Timing in Associative Learning
Two information-theoretic principles, maximum entropy and minimum description length, dictate a computational model of associative learning that explains cue competition (assignment of credit) and response timing. The theory's primitives are two cue types (state cues and point cues) and two stochastic distributions. The preferred stochastic model gives the relative code lengths for an efficient encoding of the data already seen; it predicts the data not yet seen; and the associated hazard function roughly predicts the observed timing of anticipatory (conditioned) behavior. State cues use the exponential distribution to encode, predict, and time; point cues use a form of the Gaussian distribution that allows for event failure. An implementation of the refined minimum-description-length approach to stochastic model selection (Rissanen 1999) determines which stochastic model best compresses the data, and hence which is the best predictive model for a given protocol. The model brings into sharp focus the need to direct neurobiological inquiry toward the coding question in memory.
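The abstract describes selecting between an exponential (state-cue) and a Gaussian (point-cue) stochastic model by code length, with the winning model's hazard function predicting the timing of conditioned responding. The following is a minimal illustrative sketch, not the authors' implementation: it substitutes a crude two-part code length (negative log-likelihood plus a (k/2)·log n parameter cost) for Rissanen's refined MDL criterion, simplifies the point-cue model to a plain Gaussian without the event-failure component, and uses NumPy and SciPy; the function names and simulated intervals are hypothetical.

```python
# Sketch: MDL-style choice between an exponential and a Gaussian model of
# cue-to-reinforcement intervals, followed by the winning model's hazard
# function as a rough proxy for response timing. Two-part code lengths are
# used here as a simplified stand-in for refined MDL.
import numpy as np
from scipy import stats

def code_length_exponential(intervals):
    """Two-part code length (nats) under an exponential (state-cue) model."""
    n = len(intervals)
    rate = 1.0 / np.mean(intervals)                      # ML estimate
    nll = -np.sum(stats.expon.logpdf(intervals, scale=1.0 / rate))
    return nll + 0.5 * 1 * np.log(n)                     # 1 free parameter

def code_length_gaussian(intervals):
    """Two-part code length (nats) under a Gaussian (point-cue) model."""
    n = len(intervals)
    mu, sigma = np.mean(intervals), np.std(intervals)    # ML estimates
    nll = -np.sum(stats.norm.logpdf(intervals, loc=mu, scale=sigma))
    return nll + 0.5 * 2 * np.log(n)                     # 2 free parameters

def hazard(t, pdf, cdf):
    """Hazard function h(t) = f(t) / (1 - F(t))."""
    return pdf(t) / np.clip(1.0 - cdf(t), 1e-12, None)

# Simulated protocol: intervals clustered around 10 s (a fixed-delay,
# point-cue-like situation). Memoryless intervals would instead favour
# the exponential model.
rng = np.random.default_rng(0)
intervals = rng.normal(loc=10.0, scale=1.5, size=50)

L_exp = code_length_exponential(intervals)
L_gauss = code_length_gaussian(intervals)
print(f"exponential code length: {L_exp:.1f} nats")
print(f"Gaussian    code length: {L_gauss:.1f} nats")

t_probe = np.array([5.0, 9.0, 11.0])   # probe times (s) around the trained interval
if L_gauss < L_exp:
    mu, sigma = np.mean(intervals), np.std(intervals)
    h = hazard(t_probe, lambda x: stats.norm.pdf(x, mu, sigma),
                        lambda x: stats.norm.cdf(x, mu, sigma))
    print("Gaussian model preferred; hazard rises toward the trained interval:")
else:
    rate = 1.0 / np.mean(intervals)
    h = hazard(t_probe, lambda x: stats.expon.pdf(x, scale=1.0 / rate),
                        lambda x: stats.expon.cdf(x, scale=1.0 / rate))
    print("Exponential model preferred; hazard is constant (memoryless):")
print(dict(zip(t_probe.tolist(), np.round(h, 3).tolist())))
```

With intervals clustered around 10 s, the Gaussian model compresses the data better and its hazard rises sharply as the trained interval approaches, which corresponds to responding concentrated around that time; exponentially distributed intervals would instead yield a flat hazard and a constant readiness to respond.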