16 research outputs found

    Inferring Actions, Intentions, and Causal Relations in a Deep Neural Network

    From a young age, we can select actions to achieve desired goals, infer the goals of other agents, and learn causal relations in our environment through social interactions. Crucially, these abilities are productive and generative: we can impute desires to others that we have never held ourselves. These abilities are often captured by only partially overlapping models, each requiring substantial changes to fit combinations of abilities. Here, in an attempt to unify previous models, we present a neural network underpinned by the linearly solvable Markov Decision Process (LMDP) framework which permits a distributed representation of tasks. The network contains two pathways: one captures the desirability of states, and another encodes the passive dynamics of state transitions in the absence of control. Interactions between pathways are bound by a principle of rational action, enabling generative inference of actions, goals, and causal relations supported by gradient updates to parts of the network
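The linearly solvable MDP framework the network builds on has a compact core: with a state-cost vector q and passive dynamics P, the desirability function z = exp(-v) satisfies the linear fixed point z = exp(-q) ⊙ (P z), and the optimal controlled dynamics reweight the passive ones by z. A minimal toy solver (our illustration of the LMDP machinery, not the paper's two-pathway architecture) on a five-state chain with a goal at one end:

```python
import numpy as np

# Toy LMDP on a chain of 5 states; state 4 is the goal (zero cost).
n = 5
q = np.ones(n)          # state costs (low cost = desirable)
q[4] = 0.0              # goal state is free

# Passive dynamics: an uncontrolled random walk on the chain
P = np.zeros((n, n))
for s in range(n):
    for s2 in (max(s - 1, 0), min(s + 1, n - 1)):
        P[s, s2] += 0.5

# Desirability z = exp(-v) solves the linear fixed point z = exp(-q) * (P @ z);
# iterate it as a power method on the nonnegative matrix diag(exp(-q)) @ P
z = np.ones(n)
for _ in range(200):
    z = np.exp(-q) * (P @ z)
    z /= z.max()        # renormalise to avoid underflow

# Optimal controlled dynamics: u*(s'|s) proportional to P(s'|s) z(s')
U = P * z[None, :]
U /= U.sum(axis=1, keepdims=True)
print(z)                # desirability rises monotonically toward the goal
```

The linearity is what makes a distributed, gradient-trainable representation natural: both the cost term exp(-q) and the passive dynamics P enter the fixed point multiplicatively, mirroring the two pathways described above.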

    Orthogonal representations for robust context-dependent task performance in brains and neural networks

    How do neural populations code for multiple, potentially conflicting tasks? Here we used computational simulations involving neural networks to define “lazy” and “rich” coding solutions to this context-dependent decision-making problem, which trade off learning speed for robustness. During lazy learning the input dimensionality is expanded by random projections to the network hidden layer, whereas in rich learning hidden units acquire structured representations that privilege relevant over irrelevant features. For context-dependent decision-making, one rich solution is to project task representations onto low-dimensional and orthogonal manifolds. Using behavioral testing and neuroimaging in humans and analysis of neural signals from macaque prefrontal cortex, we report evidence for neural coding patterns in biological brains whose dimensionality and neural geometry are consistent with the rich learning regime
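The "lazy" regime described above can be illustrated in a few lines: a context-dependent task (report the stimulus sign in one context, the flipped sign in the other) has no linear solution in the raw input, but becomes linearly decodable after a fixed random nonlinear expansion, with learning confined to the readout. This is our toy demonstration of the idea, not the paper's simulations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Context-dependent task: label = stimulus sign in context +1, flipped in
# context -1. In the raw 2-D input this is XOR-like and not linearly separable.
stim = rng.choice([-1.0, 1.0], size=400)
ctx = rng.choice([-1.0, 1.0], size=400)
X = np.stack([stim, ctx], axis=1) + 0.1 * rng.standard_normal((400, 2))
y = stim * ctx                          # context-dependent label

# Baseline: a linear readout on the raw input fails (chance-level accuracy)
w0, *_ = np.linalg.lstsq(X, y, rcond=None)
acc_raw = np.mean(np.sign(X @ w0) == y)

# "Lazy" solution: expand dimensionality with a fixed random projection
# (an untrained hidden layer), then fit only the linear readout
W = rng.standard_normal((2, 200))
H = np.maximum(X @ W, 0.0)              # random ReLU features
w, *_ = np.linalg.lstsq(H, y, rcond=None)
acc_lazy = np.mean(np.sign(H @ w) == y)
print(acc_raw, acc_lazy)                # near chance vs near perfect
```

The "rich" alternative would instead train the hidden weights so that each context's task variables occupy a low-dimensional manifold, orthogonal across contexts, trading slower learning for the robustness the abstract describes.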

    Model sharing in the human medial temporal lobe

    Effective planning involves knowing where different actions take us. However, natural environments are rich and complex, leading to an exponential increase in memory demand as a plan grows in depth. One potential solution is to filter out features of the environment irrelevant to the task at hand. This enables a shared model of transition dynamics to be used for planning over a range of different input features. Here, we asked human participants (13 male, 16 female) to perform a sequential decision-making task, designed so that knowledge should be integrated independently of the input features (visual cues) present in one case but not in another. Participants efficiently switched between using a low (cue independent) and a high (cue specific) dimensional representation of state transitions. fMRI data identified the medial temporal lobe as a locus for learning state transitions. Within this region, multivariate patterns of BOLD responses as state associations changed (via trial-by-trial learning) were less correlated between trials with differing input features in the high compared to the low dimensional case, suggesting that these patterns switched between separable (specific to input features) and shared (invariant to input features) transition models. Finally, we show that transition models are updated more strongly following the receipt of positive compared to negative outcomes, a finding that challenges conventional theories of planning. Together, these findings propose a computational and neural account of how information relevant for planning can be shared and segmented in response to the vast array of contextual features we encounter in our world
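The shared-versus-specific trade-off has a simple computational form. With S states and C visual cues, a cue-specific learner maintains C transition models while a cue-invariant learner filters the cue out and maintains one, which both cuts memory C-fold and lets experience under one cue generalise to another. A minimal sketch (our formalisation for illustration, not the task's actual structure):

```python
import numpy as np

S, C = 6, 4                          # states, visual cues (illustrative sizes)
T_specific = np.zeros((C, S, S))     # one transition model per cue
T_shared = np.zeros((S, S))          # one cue-invariant model

def update(model, s, s_next, lr=0.5):
    # Delta-rule update of row s toward the observed transition s -> s_next
    target = np.zeros(S)
    target[s_next] = 1.0
    model[s] += lr * (target - model[s])

# Experience the transition s=0 -> s=1, observed under cue 2 only
update(T_specific[2], 0, 1)
update(T_shared, 0, 1)

# The shared model generalises to trials with a different cue (e.g. cue 0);
# the cue-specific model has learned nothing for that cue
print(T_shared[0, 1], T_specific[0, 0, 1])   # 0.5 vs 0.0
print(T_specific.size / T_shared.size)       # C-fold memory cost: 4.0
```

The behavioural result above corresponds to switching between these two data structures depending on whether the environment makes cue identity relevant to the transition dynamics.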

    Analysis code

    No full text

    Human value learning and representation reflects rational adaptation to task demands

    No full text
    Supplementary Material, Data and Code for Human value learning and representation reflects rational adaptation to task demands

    Where does value come from?

    No full text
    The computational framework of reinforcement learning (RL) has allowed us both to understand biological brains and to build successful artificial agents. However, in this article we highlight open challenges for RL as a model of animal behaviour in natural environments. We ask how the external reward function is designed for biological systems, and how we can account for the context sensitivity of valuation. We argue that rather than optimizing receipt of external reward signals, animals track current and desired internal states and seek to minimise the distance to goal across multiple value dimensions. Our framework can readily account for canonical phenomena observed in the fields of psychology, behavioural ecology, and economics, and recent findings from brain imaging studies of value-guided decision-making
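The proposal that value is the reduction in distance to a multidimensional internal setpoint, rather than an external scalar, can be written as r = d(h_t, h*) − d(h_{t+1}, h*). A minimal sketch of this drive-reduction reward (dimension names and numbers are illustrative, not from the article):

```python
import numpy as np

# Desired internal state across several value dimensions
# (illustrative names: energy, hydration, warmth)
setpoint = np.array([0.7, 0.5, 0.9])

def distance_to_goal(h, setpoint):
    return np.linalg.norm(setpoint - h)

def reward(h, h_next, setpoint):
    # Positive when an action moves the internal state toward the setpoint
    return distance_to_goal(h, setpoint) - distance_to_goal(h_next, setpoint)

h = np.array([0.2, 0.5, 0.9])                 # low energy; others at setpoint
h_after_eating = np.array([0.6, 0.5, 0.9])    # energy restored toward setpoint
h_after_drinking = np.array([0.2, 0.7, 0.9])  # hydration overshoots setpoint

r_eat = reward(h, h_after_eating, setpoint)
r_drink = reward(h, h_after_drinking, setpoint)
print(r_eat, r_drink)   # eating is rewarding; drinking past the setpoint is not
```

This captures the context sensitivity the article highlights: the same consumption event yields positive or negative value depending on the current internal state, with no externally designed reward function.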

    Data

    No full text
    Data & code

    Inferring Actions, Intentions, and Causal Relations in a Neural Network

    No full text

    The social side of gaming: how playing online computer games creates online and offline social support

    No full text
    Online gaming has gained millions of users around the globe, who have been shown to connect virtually, befriend one another, and accumulate online social capital. Today, as online gaming has become a major leisure activity, it seems worthwhile to ask about the factors underlying online social capital acquisition and whether online social capital increases offline social support. In the present study, we proposed that online game players’ physical and social proximity as well as their mutual familiarity influence bridging and bonding social capital. Physical proximity was predicted to positively influence bonding social capital online. Social proximity and familiarity were hypothesized to foster both online bridging and bonding social capital. Additionally, we hypothesized that both social capital dimensions are positively related to offline social support. The hypotheses were tested with regard to members of e-sports clans. In an online survey, participants (N = 811) were recruited via the online portal of the Electronic Sports League (ESL) in several countries. The data confirmed all hypotheses, with the path model exhibiting an excellent fit. The results complement existing research by showing that online gaming may result in strong social ties if gamers engage in online activities that continue beyond the game and extend these with offline activities