
    An integrative effort: Bridging motivational intensity theory and recent neurocomputational and neuronal models of effort and control allocation

    An increasing number of cognitive, neurobiological, and computational models have been proposed in the last decade, seeking to explain how humans allocate physical or cognitive effort. Most models share conceptual similarities with motivational intensity theory (MIT), an influential classic psychological theory of motivation. Yet, little effort has been made to integrate these models, which remain confined within the explanatory level for which they were developed, that is, psychological, computational, neurobiological, or neuronal. In this critical review, we derive novel analyses of three recent computational and neuronal models of effort allocation - the expected value of control (EVC) theory, the reinforcement meta-learner (RML) model, and the neuronal model of attentional effort - and establish a formal relationship between these models and MIT. Our analyses reveal striking similarities between the predictions made by these models, with a shared key tenet: a non-monotonic relationship between perceived task difficulty and effort mobilization, following a sawtooth or inverted-U shape. In addition, the models converge on the proposition that the dorsal anterior cingulate cortex (dACC) may be responsible for determining the allocation of effort and cognitive control. We conclude by discussing the distinct contributions and strengths of each theory toward understanding the neurocomputational processes of effort allocation. Finally, we highlight the need for a unified understanding of effort allocation by drawing novel connections between the different theoretical accounts of adaptive effort allocation described by the presented models.
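    The shared sawtooth prediction can be illustrated with a minimal sketch (an assumption-laden toy, not code from the paper): effort tracks perceived difficulty as long as the demand does not exceed the maximum effort the incentive justifies; past that point the agent disengages and effort collapses, yielding the sawtooth / inverted-U profile.

    ```python
    def predicted_effort(difficulty, justified_max):
        """Effort mobilized for a task of a given perceived difficulty.

        difficulty: required effort on some common scale.
        justified_max: maximum effort the incentive value justifies
        (a hypothetical parameter standing in for reward magnitude).
        """
        if difficulty <= justified_max:
            return difficulty   # effort rises with demand
        return 0.0              # disengagement: demand exceeds what the reward justifies

    # Sweep difficulty from 1 to 8 with a fixed justified maximum of 5:
    profile = [predicted_effort(d, justified_max=5.0) for d in range(1, 9)]
    # effort rises 1..5, then drops to 0 - the non-monotonic sawtooth shape
    ```

    Raising `justified_max` (a larger incentive) extends the rising limb before the drop, which is the MIT-style interaction between difficulty and reward the review highlights.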


    Multitasking capability versus learning efficiency in neural network architectures

    One of the most salient and well-recognized features of human goal-directed behavior is our limited ability to conduct multiple demanding tasks at once. Previous work has identified overlap between task processing pathways as a limiting factor for multitasking performance in neural architectures. This raises an important question: insofar as shared representation between tasks introduces the risk of cross-talk, and thereby limitations in multitasking, why would the brain prefer shared task representations over separate representations across tasks? We seek to answer this question by introducing formal considerations and neural network simulations in which we contrast the multitasking limitations that shared task representations incur with their benefits for task learning. Our results suggest that neural network architectures face a fundamental tradeoff between learning efficiency and multitasking performance in environments with shared structure between tasks.
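    The cross-talk problem can be sketched in a few lines of linear algebra (a toy illustration under assumed dimensions, not the paper's actual simulations): with separate hidden pathways, two concurrent tasks are processed independently, while a shared pathway superimposes the inputs so each task's readout sees a mixture.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x1, x2 = rng.normal(size=4), rng.normal(size=4)   # concurrent inputs for tasks 1 and 2

    # Separate pathways: each task has its own hidden weights, so outputs
    # are computed independently and do not interfere.
    W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
    out_separate = (W1 @ x1, W2 @ x2)

    # Shared pathway: one hidden layer serves both tasks. Presenting both
    # inputs at once superimposes them, so each task's output is corrupted
    # by the other task's input - the cross-talk that limits multitasking.
    W_shared = rng.normal(size=(3, 4))
    mixed = W_shared @ (x1 + x2)
    interference = np.linalg.norm(mixed - W_shared @ x1)  # distortion of task 1's output
    ```

    The flip side of the tradeoff is that the shared pathway has half the parameters and lets structure learned for one task transfer to the other, which is the learning-efficiency benefit the abstract contrasts with the multitasking cost.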