
    A probabilistic threshold model: Analyzing semantic categorization data with the Rasch model

    According to the Threshold Theory (Hampton, 1995, 2007), semantic categorization decisions come about through the placement of a threshold criterion along a dimension that represents items' similarity to the category representation. The adequacy of this theory is assessed by applying a formalization of the theory, known as the Rasch model (Rasch, 1960; Thissen & Steinberg, 1986), to categorization data for eight natural language categories and subjecting it to a formal test. In validating the model, special care is given to its ability to account for inter- and intra-individual differences in categorization and their relationship with item typicality. Extensions of the Rasch model that can be used to uncover the nature of category representations and the sources of categorization differences are discussed.
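    The Rasch model referred to here has a simple closed form. A minimal Python sketch under the threshold reading (the function name, sign convention, and toy numbers are illustrative assumptions, not taken from the paper):

```python
import math

def categorization_probability(item_position: float, threshold: float) -> float:
    """Rasch-style endorsement probability.

    item_position: the item's location on the latent dimension of
                   similarity to the category representation
    threshold:     the respondent's criterion on that same dimension

    The item tends to be endorsed as a category member when its position
    exceeds the threshold, with logistic noise around the criterion.
    """
    return 1.0 / (1.0 + math.exp(-(item_position - threshold)))

# Illustrative: a borderline item judged by a relatively strict categorizer.
print(categorization_probability(item_position=0.3, threshold=0.8))  # ~0.38
```

    Under this reading, inter-individual differences correspond to different thresholds, intra-individual differences to the logistic noise around the criterion, and item typicality to the item positions on the latent dimension.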

    Information Invariance and Quantum Probabilities

    We consider probabilistic theories in which the most elementary system, a two-dimensional system, contains one bit of information. The bit is assumed to be contained in any complete set of mutually complementary measurements. The requirement that the information be invariant under a continuous change of the set of mutually complementary measurements uniquely singles out a measure of information that is quadratic in the probabilities. The same scaling of the number of degrees of freedom with dimension as in quantum theory follows essentially from the assumption that the physical states of a higher-dimensional system are those, and only those, from which one can post-select physical states of two-dimensional systems. The requirement that no more than one bit of information (as quantified by the quadratic measure) is contained in all possible post-selected two-dimensional systems is equivalent to the positivity of the density operator in quantum theory.
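    To make the quadratic measure concrete, here is a small Python sketch for the qubit case (the normalization constant and names are my own conventions; the invariance argument fixes the measure only up to such choices):

```python
import numpy as np

def quadratic_information(probs, norm=2.0):
    """Quadratic information of one measurement: norm * sum_i (p_i - 1/n)^2.

    The quadratic form is what the invariance requirement singles out; the
    normalization is a convention, chosen here so that a qubit carries at
    most one bit in total.
    """
    probs = np.asarray(probs)
    n = probs.size
    return norm * np.sum((probs - 1.0 / n) ** 2)

# A qubit with Bloch vector r: outcome probabilities along axis k are
# (1 +/- r_k)/2.  Summing the quadratic information over the three mutually
# complementary (mutually unbiased) measurements gives |r|^2, which is
# invariant under rotations of the measurement triad and bounded by 1 bit.
r = np.array([0.6, 0.0, 0.8])            # a pure state: |r| = 1
total = sum(quadratic_information([(1 + rk) / 2, (1 - rk) / 2]) for rk in r)
print(total)                              # 1.0 -> exactly one bit
```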

    Generalization of color by chickens: experimental observations and a Bayesian model

    Sensory generalization influences animals' responses to novel stimuli. Because color forms a perceptual continuum, it is a good subject for studying generalization. Moreover, because different causes of variation in spectral signals, such as pigmentation, gloss, and illumination, have differing behavioral significance, it may be beneficial to have adaptable generalization. We report on generalization by poultry chicks following differential training to rewarded (T+) and unrewarded (T−) colors, in particular on the phenomenon of peak shift, which leads subjects to prefer stimuli displaced away from T−. The first three experiments test the effects of learning either a fine or a coarse discrimination. In experiments 1 and 2, peak shift occurs, but contrary to some predictions, the shift is smaller after the animal has learned a fine discrimination than after it has learned a coarse discrimination. Experiment 3 finds a similar effect for generalization on a color axis orthogonal to that separating T+ from T−. Experiment 4 shows that generalization is rapidly modified by experience. These results imply that the scale of a "perceptual ruler" is set by experience. We show that the observations are consistent with generalization following principles of Bayesian inference, which forms a powerful framework for understanding this type of behavior.
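    One way to see how Bayesian inference can produce this pattern is the illustrative sketch below (not the authors' exact model; the Gaussian likelihoods and the 0.95 criterion are assumptions). Preference for a stimulus x is taken to be the posterior probability that x goes with the rewarded color T+ rather than the unrewarded T−, with the likelihood width sigma playing the role of the experience-set "perceptual ruler":

```python
import numpy as np

def p_rewarded(x, t_plus, t_minus, sigma):
    """Posterior probability that stimulus x belongs with the rewarded
    color T+ rather than the unrewarded color T-, assuming Gaussian
    perceptual noise of width sigma and equal priors."""
    lp = np.exp(-0.5 * ((x - t_plus) / sigma) ** 2)
    lm = np.exp(-0.5 * ((x - t_minus) / sigma) ** 2)
    return lp / (lp + lm)

# Stimuli displaced beyond T+ (away from T-) are preferred over T+ itself,
# i.e. peak shift.  A smaller sigma (fine discrimination) saturates the
# preference gradient closer to T+, giving a smaller shift.
x = np.linspace(-1.0, 4.0, 1001)
for sigma in (0.5, 1.0):                     # fine vs. coarse training
    post = p_rewarded(x, t_plus=1.0, t_minus=0.0, sigma=sigma)
    shift_pt = x[np.argmax(post > 0.95)]     # where preference saturates
    print(f"sigma={sigma}: preference saturates near x={shift_pt:.2f}")
```

    The narrower likelihood, as after fine-discrimination training, saturates the gradient closer to T+, which is consistent with the smaller shift reported in experiments 1 and 2.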

    A Novel Predictive-Coding-Inspired Variational RNN Model for Online Prediction and Recognition

    This study introduces PV-RNN, a novel variational RNN inspired by predictive-coding ideas. The model learns to extract the probabilistic structures hidden in fluctuating temporal patterns by dynamically changing the stochasticity of its latent states. Its architecture attempts to address two major concerns of variational Bayes RNNs: how latent variables can learn meaningful representations, and how the inference model can transfer future observations to the latent variables. PV-RNN does both by introducing adaptive vectors mirroring the training data, whose values can then be adapted differently during evaluation. Moreover, prediction errors during backpropagation, rather than external inputs during the forward computation, are used to convey information about the external data to the network. For testing, we introduce error regression, a scheme inspired by predictive coding that leverages these mechanisms to predict unseen sequences. The model introduces a weighting parameter, the meta-prior, to balance the optimization pressure placed on the two terms of a lower bound on the marginal likelihood of the sequential data. We test the model on two datasets with probabilistic structures and show that with high values of the meta-prior the network develops deterministic chaos through which the data's randomness is imitated, while for low values the model behaves as a random process. The network performs best at intermediate values, where it captures the latent probabilistic structure with good generalization. Analyzing the meta-prior's impact on the network allows us to study precisely the theoretical value and practical benefits of incorporating stochastic dynamics in our model. We demonstrate better prediction performance on a robot imitation task with our model using error regression compared to a standard variational Bayes model lacking such a procedure.
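    The meta-prior enters the training objective as a simple weighting of the two terms of the lower bound. A minimal PyTorch-flavored sketch (names and shapes are assumptions; the actual PV-RNN applies the KL term per timestep and per layer of latent variables):

```python
import torch
import torch.nn.functional as F

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ), summed over dimensions."""
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0
    )

def weighted_lower_bound_loss(x, x_hat, q_stats, p_stats, meta_prior):
    """Negative lower bound with the meta-prior w weighting the KL term:
    loss = reconstruction + w * KL(posterior || prior).

    A large w pushes the approximate posterior onto the prior, so the
    dynamics drift toward the deterministic (chaotic) regime; a small w
    lets the latent noise dominate, approaching a random process.
    """
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = gaussian_kl(*q_stats, *p_stats)
    return recon + meta_prior * kl
```

    Error regression at test time reuses the same machinery: with the network weights fixed, the adaptive latent vectors are updated by backpropagating the prediction error on the observed part of a sequence.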