
    Glossary of mathematical symbols.

    Source reconstruction results.

    Behavioural modelling.

    (A) The HGF comprises an observer part, describing the beliefs inferred at 3 levels (low: predictions about target location/latency; middle: cue-target validity level; high: volatility of cue validity), and a response part, linking these beliefs to predicted responses. The full model assumes all 3 levels and a weighted influence of relevant (saturated blue/red) and irrelevant (unsaturated) predictions on participants' responses. Grey: model states; orange: model parameters. (B) Three alternative observer models (HGF3, HGF2, RW) and 2 alternative response models (task-general: weighted influence of relevant and irrelevant predictions; task-specific: exclusive influence of relevant predictions) were subjected to Bayesian model selection. The plot shows log-model evidence relative to the weakest model and indicates task-specific HGF2 as the winning model. (C) HGF-derived trial-by-trial time series (representative participant) of predictions about target location/latency (upper panels) and cue validity (middle panels), and PEs about target location/latency (|ε₂|; lower panels). (D) Mean correlations between HGF regressors. (E) Correlation between the prior variance of validity-level updates and mean accuracy across participants. Data pertaining to this figure are available on Figshare: https://figshare.com/s/2d2755bfdeea1cbb415f. HGF, Hierarchical Gaussian Filter; HGF2, 2-level HGF; HGF3, 3-level HGF; PE, prediction error; RW, Rescorla-Wagner.
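    The Rescorla-Wagner learner mentioned in panel (B) is the simplest of the compared observer models and is easy to state explicitly. The Python sketch below tracks an estimated cue validity with a fixed learning rate; in the HGF the effective learning rate is instead scaled trial by trial by higher-level (volatility) beliefs. Function and variable names (rescorla_wagner, alpha, v0) are illustrative, not taken from the paper.

        import numpy as np

        def rescorla_wagner(outcomes, alpha=0.1, v0=0.5):
            # Track an estimated cue validity v from binary outcomes
            # (1 = cue correctly predicted the target, 0 = invalid trial).
            v = np.empty(len(outcomes) + 1)
            v[0] = v0
            pe = np.empty(len(outcomes))
            for t, o in enumerate(outcomes):
                pe[t] = o - v[t]                 # prediction error on this trial
                v[t + 1] = v[t] + alpha * pe[t]  # fixed-learning-rate update
            return v, pe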

    The mountain car problem.

    This is a schematic representation of the mountain car problem. Left: the landscape or potential energy function that defines the motion of the car; this has a minimum at . The mountain car is shown at its uncontrolled stable position (transparent) and at the desired parking position at the top of the hill on the right . Right: forces experienced by the mountain car at different positions due to the slope of the hill (blue). Critically, at , the force is minus one and cannot be exceeded by the car's engine, because of the squashing function applied to action.
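    The point of the squashing function is easy to reproduce numerically. The sketch below uses the classic mountain-car update of Sutton and Barto rather than the exact landscape in the figure (an assumption on my part): the raw action is passed through tanh, so the engine contributes at most 0.001 per step while the slope-induced term reaches 0.0025.

        import numpy as np

        def mountain_car_step(x, v, a):
            # One Euler step of the classic mountain-car dynamics
            # (position x, velocity v, raw unbounded action a).
            engine = np.tanh(a)                                 # squashed action, |engine| <= 1
            v = v + 0.001 * engine - 0.0025 * np.cos(3.0 * x)   # bounded engine + slope-induced force
            x = x + v
            return x, v

    Because no admissible engine force can overcome the steepest part of the slope, the car cannot drive straight to the goal and has to swing back and forth to build momentum, which is exactly the control problem the figure sets up.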

    Effects of trial-by-trial predictions and PEs on TF responses.

    DCM.

    (A) Frequency-by-frequency maps of modulatory effects of contextual relevance on prediction processing. Effects were modelled in a network of 4 interconnected areas, corresponding to the 4 regions in which significant effects of relevance on prediction-related responses were identified (cf. Fig 3B). (B) The corresponding maps of modulatory effects of contextual relevance on PE processing, modelled in a network of 2 areas in which significant effects were identified (cf. Fig 3C). (C) Principal frequency modes estimated for prediction-related responses across the modelled areas. (D) Significant modulatory parameters corresponding to the effects of contextual relevance on prediction-related responses. Each bar represents a significant modulation (by contextual relevance) of the influence of a particular frequency mode in 1 region on another frequency mode in another region. (E) Modulatory spectra of the relevance-related effects of A1 activity (left panel, corresponding to Mode 1) on prediction-induced activity in all regions (right panel, corresponding to an average across frequency modes weighted by the respective modulatory parameters). (F-H) Same as (C-E) but for PE processing (H, left panel: an average of Modes 2 and 3). Data pertaining to this figure are available on Figshare: https://figshare.com/s/2d2755bfdeea1cbb415f. A1, primary auditory cortex; DCM, dynamic causal modelling; MTG, middle temporal gyrus; PE, prediction error; TPJ, temporoparietal junction; V1, calcarine cortex.
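    The "modulation of the influence of one frequency mode on another" in panels (D)-(H) has the familiar bilinear form used in DCM for induced responses: baseline coupling between mode amplitudes plus a term gated by the experimental factor. The Python sketch below writes that form down for a toy network; the matrix sizes, random values, and names (A, B, g, u) are illustrative and not the fitted model from the paper.

        import numpy as np

        def induced_response_flow(g, u, A, B):
            # Bilinear flow of stacked frequency-mode amplitudes g:
            # A = baseline coupling (within/between regions and modes),
            # B = its modulation by the experimental input u (e.g. relevance).
            return (A + u * B) @ g

        # Toy network: 2 regions x 3 frequency modes stacked into one state vector.
        rng = np.random.default_rng(0)
        n = 2 * 3
        A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # baseline coupling
        B = 0.05 * rng.standard_normal((n, n))               # modulation by relevance
        g = rng.standard_normal(n)
        dg_dt = induced_response_flow(g, u=1.0, A=A, B=B)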

    Optimised parameters of the winning HGF model.

    The effect of precision (dopamine) on behaviour.

    Inferred states (top row) and trajectories through state-space (bottom row) under different levels of conditional uncertainty or expected precision. As in previous figures, the inferred sensory states (position in blue and velocity in green) are shown with their 90% confidence intervals, and the trajectories are superimposed on nullclines. As the expected precision falls, the inferred dynamics become less accountable to prior expectations, which in turn become less potent in generating prediction errors and action. Note that uncertainty about the states (gray area) increases as precision falls and confidence is lost.
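    The mechanism the caption appeals to, precision weighting of prediction errors, can be written down in a couple of lines. The Python sketch below shows a single predictive-coding-style update in which lowering the sensory precision (the quantity associated with dopamine in the figure) directly weakens the pull of sensory prediction errors on the inferred state. The variable names and the fixed step size are illustrative, not the scheme used to generate the figure.

        def precision_weighted_update(mu, y, mu_prior, pi_sensory, pi_prior, k=0.1):
            # One gradient step on a Gaussian free energy: the estimate mu is
            # pulled toward the data y and toward the prior mu_prior, each in
            # proportion to its precision.
            pe_sensory = y - mu           # sensory prediction error
            pe_prior = mu_prior - mu      # prior prediction error
            return mu + k * (pi_sensory * pe_sensory + pi_prior * pe_prior)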

    An agent that thinks it is a Lorenz attractor.

    This figure illustrates the behaviour of an agent whose trajectories are drawn to a Lorenz attractor. However, this is no ordinary attractor; the trajectories are driven purely by action (displayed as a function of time in the right panels). Action tries to suppress prediction errors on motion through this three-dimensional state-space (blue lines in the left panels). These prediction errors are the difference between sensed motion and the motion expected under the agent's generative model (red arrows: evaluated at ), whose prior expectations are based on a Lorenz attractor. The ensuing behaviour can be regarded as a form of chaos control. Critically, this autonomous behaviour is very resistant to random forces on the agent. This can be seen by comparing the top row (with no perturbations) with the middle row, where the first state has been perturbed with a smooth exogenous force (broken line). Note that action counters this perturbation and the ensuing trajectories are essentially unaffected. The bottom row shows exactly the same simulation but with action turned off. Here, the environmental forces cause the agent to precess randomly around the fixed-point attractor at . These simulations used a log-precision of 16 on the random fluctuations.
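    To make the setup concrete, the sketch below pairs the standard Lorenz equations (the agent's prior expectation about its own motion) with a crude action rule that descends the prediction error between sensed and expected motion. It is a caricature of the scheme in the figure: the step sizes, the gain k, and the simplification that sensed motion equals the applied force are mine, not the paper's generalised-coordinates implementation.

        import numpy as np

        def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            # Lorenz flow: the motion the agent expects itself to perform.
            return np.array([sigma * (x[1] - x[0]),
                             x[0] * (rho - x[2]) - x[1],
                             x[0] * x[1] - beta * x[2]])

        def step(x, a, dt=0.01, k=4.0):
            # Sensed motion is taken to be the force the agent applies (a),
            # so action is adjusted to cancel the prediction error between
            # sensed and expected motion; the state then moves under action alone.
            pe = a - lorenz(x)        # prediction error on motion
            a = a - dt * k * pe       # action suppresses the prediction error
            x = x + dt * a            # trajectory is driven purely by action
            return x, a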

    Redundancy reduction.

    The sensory environment of an animal is highly correlated (redundant). The animal's job is to map such signals as efficiently as possible onto its neuronal representations, which are limited by their dynamic range. One way to solve this problem rests on de-correlating the input to provide a minimum-entropy description, followed by a gain controller. This form of sensory processing has been observed in the experiments by Laughlin [49], where the circuit maps the de-correlated signal via its cumulative probability distribution to a neuronal response, thereby avoiding saturation. Modified from [45].
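    Both stages of the scheme described above (de-correlation, then mapping through the cumulative distribution so the limited response range is used evenly) are simple to sketch. The Python below whitens a multichannel input and then replaces each channel by its empirical CDF; it is a toy illustration of the principle, not the fly-eye circuit or the exact analysis of [45].

        import numpy as np

        def redundancy_reduce(X):
            # X: (n_samples, n_channels) array of correlated sensory inputs.
            Xc = X - X.mean(axis=0)
            cov = np.cov(Xc, rowvar=False)
            evals, evecs = np.linalg.eigh(cov)
            W = evecs @ np.diag(1.0 / np.sqrt(evals + 1e-9)) @ evecs.T  # ZCA whitening
            Z = Xc @ W                                                  # de-correlated signal
            # Gain control: map each channel through its empirical cumulative
            # distribution, giving responses spread uniformly over (0, 1).
            ranks = Z.argsort(axis=0).argsort(axis=0)
            return (ranks + 0.5) / Z.shape[0]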