Phase diagram of ToM evolution.
<p>Each pie chart depicts the evolutionarily stable state induced by a particular combination of the amount of learning τ (x-axis) and the proportion ω of cooperative interactions (y-axis).</p>
The Social Bayesian Brain: Does Mentalizing Make a Difference When We Learn?
<div><p>When it comes to interpreting others' behaviour, we almost irrepressibly engage in the attribution of mental states (beliefs, emotions…). Such "mentalizing" can become very sophisticated, eventually endowing us with highly adaptive skills such as convincing, teaching or deceiving. Here, sophistication can be captured in terms of the depth of our recursive beliefs, as in "I think that you think that I think…" In this work, we test whether such sophisticated recursive beliefs underpin learning in the context of social interaction. We asked participants to play repeated games against artificial (Bayesian) mentalizing agents that differ in their sophistication. Critically, we made people believe either that they were playing against each other, or that they were gambling as in a casino. Although both framings are equally deceptive, participants win against the artificial (sophisticated) mentalizing agents in the social framing of the task, and lose in the non-social framing. Moreover, we find that participants' choice sequences are best explained by sophisticated mentalizing Bayesian learning models only in the social framing. This study is the first demonstration of the added value of mentalizing for learning in the context of repeated social interactions. Importantly, our results show that we would not be able to decipher intentional behaviour without a priori attributing mental states to others.</p></div>
VBA: A Probabilistic Treatment of Nonlinear Models for Neurobiological and Behavioural Data
<div><p>This work is part of an ongoing effort toward a computational (quantitative and refutable) understanding of human neuro-cognitive processes. Many sophisticated models for behavioural and neurobiological data have flourished during the past decade. Most of these models are partly unspecified (i.e. they have unknown parameters) and nonlinear. This makes them difficult to reconcile with a formal statistical data analysis framework, which in turn compromises the reproducibility of model-based empirical studies. This work presents a software toolbox that provides generic, efficient and robust probabilistic solutions to the three problems of model-based analysis of empirical data: (i) data simulation, (ii) parameter estimation/model selection, and (iii) experimental design optimization.</p></div>
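The simulate-then-estimate workflow described in the abstract can be illustrated with a minimal, self-contained sketch. Everything here is an assumption for illustration: the saturating observation model `g`, the helper names `simulate` and `fit_grid`, and the crude grid-search estimator (a stand-in for the toolbox's variational-Bayes inversion, not its actual API).

```python
import math
import random

def g(a, b, x):
    # Hypothetical nonlinear observation model: y = a * (1 - exp(-b * x))
    return a * (1.0 - math.exp(-b * x))

def simulate(a, b, xs, sigma, seed=0):
    # Problem (i): simulate noisy data under known parameters
    rng = random.Random(seed)
    return [g(a, b, x) + rng.gauss(0.0, sigma) for x in xs]

def fit_grid(xs, ys, a_grid, b_grid):
    # Problem (ii): crude least-squares point estimation by grid search
    best = None
    for a in a_grid:
        for b in b_grid:
            sse = sum((y - g(a, b, x)) ** 2 for x, y in zip(xs, ys))
            if best is None or sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]

xs = [0.5 * i for i in range(1, 21)]
ys = simulate(2.0, 0.7, xs, sigma=0.05)          # ground truth: a=2.0, b=0.7
a_hat, b_hat = fit_grid(xs, ys,
                        [1.0 + 0.05 * i for i in range(41)],   # a in [1.0, 3.0]
                        [0.1 + 0.05 * i for i in range(31)])   # b in [0.1, 1.6]
```

Recovering the generating parameters from simulated data in this way is also how one would sanity-check a model before fitting it to empirical data.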
Comparison of deterministic and stochastic dynamical systems.
<p>This figure summarizes the VB comparison of deterministic (upper row) and stochastic (lower row) variants of a Lorenz dynamical system, given data simulated under the stochastic variant of the model. <b>Upper left</b>: fitted data (x-axis) is plotted against simulated data (y-axis), for the deterministic case. Perfect model fit would align all points on the red line. <b>Lower left</b>: same format, for the stochastic case. <b>Upper middle</b>: 95% posterior confidence intervals on hidden-states dynamics. Recall that for deterministic systems, uncertainty in the hidden states arises only from uncertainty about the evolution parameters. <b>Lower middle</b>: same format, stochastic system. <b>Upper right</b>: residuals' empirical autocorrelation (y-axis) as a function of temporal lag (x-axis), for the deterministic system. <b>Lower right</b>: same format, stochastic system.</p>
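The deterministic/stochastic distinction in the figure can be sketched in a few lines: the stochastic variant is the same Lorenz system with Gaussian perturbations injected into the hidden states at every integration step. The Euler scheme, parameter values, and noise level below are illustrative assumptions, not the settings used in the figure.

```python
import random

def lorenz_step(s, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One explicit Euler step of the classical Lorenz equations
    x, y, z = s
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def simulate(n, dt=0.01, state_noise=0.0, seed=1):
    # state_noise > 0 turns the deterministic system into a stochastic one
    rng = random.Random(seed)
    s = (1.0, 1.0, 1.0)
    trajectory = [s]
    for _ in range(n):
        s = lorenz_step(s, dt)
        if state_noise > 0.0:
            s = tuple(v + rng.gauss(0.0, state_noise) for v in s)
        trajectory.append(s)
    return trajectory

det = simulate(500)                    # deterministic hidden states
sto = simulate(500, state_noise=0.1)   # stochastic hidden states
```

When data are generated from the stochastic variant, the deterministic model cannot absorb the state noise, which is why its residuals show the temporal autocorrelation visible in the figure's right-hand panels.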
Improving Q-learning models with inversion diagnostics.
<p>This figure demonstrates the added value of Volterra decompositions when deriving learning models with changing learning rates. <b>Upper left</b>: simulated belief (blue/red: outcome probability for the first/second action, green/magenta: volatility of the outcome contingency for the first/second action) of the Bayesian volatile learner (y-axis) plotted against trials (x-axis). <b>Lower left</b>: estimated hidden states of the deterministic variant of the dynamic learning rate model (blue/green: first/second action value, red: learning rate). This model corresponds to the standard Q-learning model (the learning rate is constant over time). <b>Upper middle</b>: estimated hidden states of the stochastic variant of the dynamic learning rate model (same format). Note the wide posterior uncertainty around the learning rate estimates. <b>Lower middle</b>: Volterra decomposition of the stochastic learning rate (blue: agent's chosen action, green: winning action, red: winning action instability). <b>Upper right</b>: estimated hidden states of the augmented Q-learning model (same format as before). <b>Lower right</b>: Volterra decomposition of the augmented Q-learning model's learning rate (same format as before).</p>
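The baseline in the figure's lower-left panel is standard Q-learning with a constant learning rate, which can be sketched as below. The two-armed bandit setup, the reward probabilities, and the parameter values (`alpha`, `beta`) are assumptions chosen for illustration, not the task used in the paper.

```python
import math
import random

def q_learning(reward_p, n_trials=1000, alpha=0.1, beta=5.0, seed=0):
    """Two-armed bandit Q-learner: softmax choice, constant-rate delta rule."""
    rng = random.Random(seed)
    q = [0.0, 0.0]
    choices = []
    for _ in range(n_trials):
        # Softmax action selection with inverse temperature beta
        p1 = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))
        a = 1 if rng.random() < p1 else 0
        # Bernoulli reward, then delta-rule update with constant learning rate
        r = 1.0 if rng.random() < reward_p[a] else 0.0
        q[a] += alpha * (r - q[a])
        choices.append(a)
    return q, choices

q, choices = q_learning([0.2, 0.8])   # arm 1 pays off more often
```

The augmented model in the figure replaces the constant `alpha` with a learning rate that itself evolves over trials, which is exactly the hidden-state dynamics the Volterra decomposition helps interpret.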
Effect of the micro-time resolution.
<p>This figure summarizes the effect of relying on either a slow (upper row) or fast (lower row) micro-time resolution, when inverting nonlinear dynamical systems. <b>Left</b>: same format as <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003441#pcbi-1003441-g006" target="_blank">Figure 6</a>. <b>Upper middle</b>: estimated hidden-states dynamics at slow micro-time resolution (data samples are depicted using dots). <b>Lower middle</b>: same format, fast micro-time resolution. <b>Upper right</b>: parameters' posterior correlation matrix, at slow micro-time resolution. <b>Lower right</b>: same format, fast micro-time resolution.</p>
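Why micro-time resolution matters can be seen on a toy system with a known solution: Euler-integrating dx/dt = -x with a coarse versus a fine step and comparing against the exact answer exp(-t). This is an illustrative sketch of the numerical issue, not the toolbox's actual integration scheme.

```python
import math

def euler(dt, t_end=1.0, x0=1.0):
    # Explicit Euler integration of dx/dt = -x from 0 to t_end
    x, n = x0, int(round(t_end / dt))
    for _ in range(n):
        x += dt * (-x)
    return x

coarse = euler(0.5)      # slow micro-time resolution: 2 steps
fine = euler(0.001)      # fast micro-time resolution: 1000 steps
exact = math.exp(-1.0)

err_coarse = abs(coarse - exact)
err_fine = abs(fine - exact)
```

A too-coarse micro-time grid biases the predicted hidden-state trajectories, and that bias propagates into the parameter estimates and their posterior correlations, as the figure's right-hand panels show.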
- …