Bayesian Filtering with Multiple Internal Models: Toward a Theory of Social Intelligence
To exhibit social intelligence, animals have to recognize whom they are communicating with. One way to make this inference is to select among internal generative models of each conspecific who may be encountered. However, these models also have to be learned via some form of Bayesian belief updating. This induces an interesting problem: when receiving sensory input generated by a particular conspecific, how does an animal know which internal model to update? We consider a theoretical and neurobiologically plausible solution that enables inference and learning of the processes that generate sensory inputs (e.g., listening and understanding) and reproduction of those inputs (e.g., talking or singing) under multiple generative models. This is based on recent advances in theoretical neurobiology, namely active inference and post hoc (online) Bayesian model selection. In brief, this scheme fits sensory inputs under each generative model. Model parameters are then updated in proportion to the probability that each model could have generated the input (i.e., model evidence). The proposed scheme is demonstrated using a series of (real zebra finch) birdsongs, where the songs are produced by several different birds. The scheme is implemented using physiologically plausible models of birdsong production. We show that generalized Bayesian filtering, combined with model selection, leads to successful learning across generative models, each possessing different parameters. These results highlight the utility of having multiple internal models when making inferences in social environments with multiple sources of sensory information.
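The core idea of evidence-weighted learning across multiple internal models can be illustrated with a minimal sketch. This is a toy Gaussian example, not the paper's birdsong implementation, and every name and parameter value here is an illustrative assumption: each observation is scored under every model, the scores are converted to posterior model probabilities, and each model's parameter update is scaled by its probability.

```python
import math
import random

def gaussian_loglik(x, mu, sigma=1.0):
    # log-likelihood of observation x under one Gaussian "internal model"
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def softmax(logps):
    # convert per-model log-evidences into posterior model probabilities
    m = max(logps)
    exps = [math.exp(lp - m) for lp in logps]
    z = sum(exps)
    return [e / z for e in exps]

def learn(observations, n_models=2, lr=0.2, seed=0):
    # online, evidence-weighted updating of each model's mean parameter
    rng = random.Random(seed)
    mus = [rng.uniform(-1.0, 1.0) for _ in range(n_models)]
    for x in observations:
        probs = softmax([gaussian_loglik(x, mu) for mu in mus])
        for k in range(n_models):
            # each model's update is scaled by its posterior probability,
            # so the model most likely to have generated x learns the most
            mus[k] += lr * probs[k] * (x - mus[k])
    return mus

# two simulated "conspecifics" emitting signals around -3 and +3
rng = random.Random(1)
data = [rng.gauss(-3.0, 1.0) if rng.random() < 0.5 else rng.gauss(3.0, 1.0)
        for _ in range(500)]
mus = sorted(learn(data))
```

With the models competing for each observation, the two means separate and settle near the two sources, even though no observation is ever labeled with the identity of the bird that produced it.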
Assessing the detailed time course of perceptual sensitivity change in perceptual learning.
The learning curve in perceptual learning is typically sampled in blocks of trials, which can yield imprecise and possibly biased estimates, especially when learning is rapid. Recently, Zhao, Lesmes, and Lu (2017, 2019) developed a Bayesian adaptive quick Change Detection (qCD) method to accurately, precisely, and efficiently assess the time course of perceptual sensitivity change. In this study, we implemented and tested the qCD method for assessing the learning curve in a four-alternative forced-choice global motion direction identification task, in both simulations and a psychophysical experiment. The stimulus intensity in each trial was determined by the qCD, staircase, or random stimulus selection (RSS) method. Simulations showed that the accuracy (bias) and precision (standard deviation or confidence bounds) of the learning curves estimated by the qCD method were much better than those obtained with the staircase and RSS methods; this was true for both trial-by-trial and post hoc segment-by-segment qCD analyses. In the psychophysical experiment, the average half-widths of the 68.2% credible intervals of the thresholds estimated by the trial-by-trial and post hoc segment-by-segment qCD analyses were both quite small. Additionally, the overall estimates from the qCD and staircase methods matched extremely well in this task, in which the behavioral rate of learning is relatively slow. Our results suggest that the qCD method can precisely and accurately assess the trial-by-trial time course of perceptual learning.
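The trial-by-trial logic of a Bayesian adaptive procedure of this kind can be sketched in simplified form. This is not the published qCD implementation: it estimates a single static threshold rather than a full time course of sensitivity change, and the psychometric function, grid, and parameter values are illustrative assumptions. The sketch maintains a posterior over candidate thresholds, places each trial at the current posterior mean, and updates the posterior with each trial's outcome.

```python
import math
import random

def weibull_pcorrect(intensity, threshold, slope=3.0, guess=0.25, lapse=0.02):
    # 4AFC Weibull psychometric function: P(correct | intensity, threshold)
    p = 1.0 - math.exp(-((intensity / threshold) ** slope))
    return guess + (1.0 - guess - lapse) * p

def update_posterior(prior, grid, intensity, correct):
    # multiply the prior by the trial likelihood, then renormalize
    post = [pr * (weibull_pcorrect(intensity, th) if correct
                  else 1.0 - weibull_pcorrect(intensity, th))
            for th, pr in zip(grid, prior)]
    z = sum(post)
    return [p / z for p in post]

def run_adaptive(n_trials=300, true_threshold=0.3, seed=2):
    rng = random.Random(seed)
    grid = [0.05 + 0.01 * i for i in range(100)]   # candidate thresholds
    post = [1.0 / len(grid)] * len(grid)           # flat prior
    for _ in range(n_trials):
        # place the next trial at the current posterior-mean threshold
        est = sum(th * p for th, p in zip(grid, post))
        # simulate an observer whose true threshold is true_threshold
        correct = rng.random() < weibull_pcorrect(est, true_threshold)
        post = update_posterior(post, grid, est, correct)
    return sum(th * p for th, p in zip(grid, post))

est = run_adaptive()
```

Because every trial both refines the posterior and determines the next stimulus placement, informative intensities are selected automatically; the full qCD method extends this idea with an explicit model of how the threshold changes over trials.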
Investigating Evaluation Measures in Ant Colony Algorithms for Learning Decision Tree Classifiers
Ant-Tree-Miner is a decision tree induction algorithm based on the Ant Colony Optimization (ACO) meta-heuristic. Ant-Tree-Miner-M is a recently introduced extension of Ant-Tree-Miner that learns multi-tree classification models. A multi-tree model consists of multiple decision trees, one for each class value, where each class-based decision tree is responsible for discriminating between its class value and all other values present in the class domain (one vs. all). In this paper, we investigate the use of 10 different classification quality evaluation measures in Ant-Tree-Miner-M, which are used for both candidate model evaluation and model pruning. Our experimental results, using 40 popular benchmark datasets, identify several quality functions that substantially improve on the simple Accuracy quality function previously used in Ant-Tree-Miner-M.
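The one-vs-all multi-tree structure, and the role of a pluggable evaluation measure, can be illustrated with a deliberately tiny sketch. This is not Ant-Tree-Miner-M: it fits decision stumps by exhaustive search instead of ACO-constructed trees, and the data, names, and the two example quality functions are illustrative assumptions. One binary classifier is fit per class value, each selected to maximize a chosen quality measure on its one-vs-all relabeling of the data.

```python
def accuracy(pred, y):
    # fraction of correct binary predictions
    return sum(p == t for p, t in zip(pred, y)) / len(y)

def f_measure(pred, y):
    # harmonic mean of precision and recall on the positive class
    tp = sum(p and t for p, t in zip(pred, y))
    fp = sum(p and not t for p, t in zip(pred, y))
    fn = sum(not p and t for p, t in zip(pred, y))
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

def fit_multi_stump(X, y, quality):
    # one decision stump per class value (one vs. all), each chosen to
    # maximize the supplied quality measure -- the evaluation measure
    # whose choice the paper investigates
    model = {}
    for c in sorted(set(y)):
        y_bin = [label == c for label in y]
        best, best_q = None, -1.0
        for f in range(len(X[0])):
            for th in sorted({x[f] for x in X}):
                for s in (1, -1):  # s=1: predict x[f] <= th; s=-1: x[f] >= th
                    pred = [s * x[f] <= s * th for x in X]
                    q = quality(pred, y_bin)
                    if q > best_q:
                        best, best_q = (f, th, s), q
        model[c] = (*best, best_q)
    return model

def predict(model, x):
    # among the class-based stumps that fire, pick the one with the best
    # training quality; fall back to the best-quality stump overall
    fired = [(q, c) for c, (f, th, s, q) in model.items() if s * x[f] <= s * th]
    if fired:
        return max(fired)[1]
    return max(model.items(), key=lambda kv: kv[1][3])[0]

# toy data: three classes, mostly separable by simple thresholds
X = [[0.5, 7], [0.8, 3], [1.5, 5], [1.8, 2], [3.0, 6], [3.2, 4]]
y = ['a', 'a', 'b', 'b', 'c', 'c']
model = fit_multi_stump(X, y, accuracy)
```

Swapping `accuracy` for `f_measure` (or any other measure with the same signature) changes which per-class model is selected, which is the kind of effect the paper quantifies across its 10 measures and 40 datasets.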