30 research outputs found

    Decision-Making: A Neuroeconomic Perspective

    This article introduces and discusses from a philosophical point of view the nascent field of neuroeconomics, which is the study of neural mechanisms involved in decision-making and their economic significance. Following a survey of the ways in which decision-making is usually construed in philosophy, economics and psychology, I review many important findings in neuroeconomics to show that they suggest a revised picture of decision-making and ourselves as choosing agents. Finally, I outline a neuroeconomic account of irrationality.

    The Fallacy of the Homuncular Fallacy

    A leading theoretical framework for naturalistic explanation of mind holds that we explain the mind by positing progressively "stupider" capacities ("homunculi") until the mind is "discharged" by means of capacities that are not intelligent at all. The so-called homuncular fallacy involves violating this procedure by positing the same capacities at subpersonal levels. I argue that the homuncular fallacy is not a fallacy, and that modern-day homunculi are idle posits. I propose an alternative view of what naturalism requires that reflects how the cognitive sciences are actually integrating mind and matter.

    Reinforcement-based Robotic Memory Controller

    Which way do I go? Neural activation in response to feedback and spatial processing in a virtual T-maze

    In two human event-related brain potential (ERP) experiments, we examined the feedback error-related negativity (fERN), an ERP component associated with reward processing by the midbrain dopamine system, and the N170, an ERP component thought to be generated by the medial temporal lobe (MTL), to investigate the contributions of these neural systems toward learning to find rewards in a "virtual T-maze" environment. We found that feedback indicating the absence versus presence of a reward differentially modulated fERN amplitude, but only when the outcome was not predicted by an earlier stimulus. By contrast, when a cue predicted the reward outcome, the predictive cue (and not the feedback) differentially modulated fERN amplitude. We further found that the spatial location of the feedback stimuli elicited a large N170 at electrode sites sensitive to right MTL activation and that the latency of this component was sensitive to the spatial location of the reward, occurring slightly earlier for rewards following a right versus left turn in the maze. Taken together, these results confirm a fundamental prediction of a dopamine theory of the fERN and suggest that the dopamine and MTL systems may interact in navigational learning tasks.
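
    The "fundamental prediction" at stake can be illustrated with a minimal temporal-difference (TD) sketch in Python: as an association is learned, the prediction error elicited by the reward feedback shrinks while a transient error appears at the earlier predictive cue. The two-event trial structure, learning rate, and discount factor below are assumptions made for illustration, not parameters of the experiment.

    # Minimal TD(0) sketch of the dopamine-theory claim referenced above: the
    # error at the reward feedback shrinks across trials, while a transient
    # error emerges at the earlier predictive cue. Two-state trial structure,
    # learning rate, and discount are illustrative assumptions.

    alpha, gamma = 0.2, 1.0              # learning rate, discount factor
    V = {"cue": 0.0, "feedback": 0.0}    # value estimates for the two trial events

    for trial in range(40):
        # At the cue: the error reflects how much (discounted) future value the
        # cue announces beyond what was already expected at that point.
        delta_cue = gamma * V["feedback"] - V["cue"]
        V["cue"] += alpha * delta_cue

        # At the feedback: a reward of 1 is delivered; this error shrinks as the
        # outcome becomes predicted by the preceding cue.
        delta_feedback = 1.0 - V["feedback"]
        V["feedback"] += alpha * delta_feedback

        if trial % 10 == 0:
            print(f"trial {trial:2d}: delta_cue={delta_cue:+.3f}  "
                  f"delta_feedback={delta_feedback:+.3f}")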

    From Implausible Artificial Neurons to Idealized Cognitive Models: Rebooting Philosophy of Artificial Intelligence

    There is a vast literature within philosophy of mind that focuses on artificial intelligence, but hardly mentions methodological questions. There is also a growing body of work in philosophy of science about modeling methodology that hardly mentions examples from cognitive science. Here these discussions are connected. Insights developed in the philosophy of science literature about the importance of idealization provide a way of understanding the neural implausibility of connectionist networks. Insights from neurocognitive science illuminate how relevant similarities between models and targets are picked out, how modeling inferences are justified, and the metaphysical status of models.

    A Model of Prefrontal Cortex Dopaminergic Modulation during the Delayed Alternation Task

    Working memory performance is modulated by the level of dopamine (DA) D1 receptor stimulation in the prefrontal cortex (PFC). This modulation is exerted at different time scales. Injection of D1 agonists/antagonists exerts a long-lasting influence (several minutes or hours) on PFC pyramidal neurons. In contrast, during performance of a cognitive task, the duration of the postsynaptic effect of phasic DA release is short-lasting. The functional relationship between these two time scales of DA modulation remains poorly understood. Here we propose a model that combines these two time scales of DA modulation in a prefrontal neural network. The model links the cellular and behavioral levels during performance of the delayed alternation task. The network, which represents the activity of deep-layer pyramidal neurons with intrinsic neuronal properties, exhibits two stable states of activity that can be switched on and off by excitatory inputs from long-distance cortical areas arriving in superficial layers. These stable states allow PFC neurons to maintain representations during the delay period. The role of increased DA receptor stimulation is to restrict inputs arriving on the prefrontal network. The model explains how the level of working memory performance follows an inverted U-shape as stimulation of DA D1 receptors increases. The model predicts that (1) D1 receptor agonists increase perseveration, (2) D1 antagonists increase distractibility, and (3) the duration of the postsynaptic effect of phasic DA release in the PFC is adjusted to the delay period of the task. These results show how the precise duration of the postsynaptic effect of phasic DA release influences behavioral performance during a simple cognitive task.
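
    A minimal Python sketch of the "input restriction" idea, using a single bistable rate unit rather than the authors' network (the equations and constants below are illustrative assumptions): a D1-dependent gain attenuates external input during a delay period, so a low D1 level lets a weak distractor erase the maintained state (distractibility), a high level blocks even the task-relevant switch cue (perseveration), and an intermediate level yields correct behavior, reproducing the inverted U.

    import math

    def run_delay(d1, pulse_amp, steps=300, dt=0.05):
        """Simulate one delay period of a single bistable rate unit.

        The unit starts in its high ("memory on") state; mid-delay, an input
        pulse of size pulse_amp arrives. The D1 level d1 attenuates external
        input (the "input restriction" idea). Returns the final firing rate.
        All constants here are illustrative, not fitted values.
        """
        w, theta, beta = 6.0, 3.0, 0.5        # recurrent weight, sigmoid threshold/slope
        gain = 1.0 / (1.0 + 4.0 * d1)         # higher D1 -> external inputs restricted
        r = 1.0                               # start in the high (memory) state
        for t in range(steps):
            pulse = pulse_amp if 100 <= t < 160 else 0.0   # brief input during the delay
            drive = w * r + gain * pulse
            r += dt * (-r + 1.0 / (1.0 + math.exp(-(drive - theta) / beta)))
        return r

    for d1 in (0.0, 0.5, 2.0):                # low / intermediate / high D1 stimulation
        lost_to_distractor = run_delay(d1, pulse_amp=-2.5) < 0.5   # weak irrelevant input
        switched_by_cue = run_delay(d1, pulse_amp=-6.0) < 0.5      # strong task-relevant cue
        print(f"D1={d1}: distractor erases memory: {lost_to_distractor}, "
              f"switch cue effective: {switched_by_cue}")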

    Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison

    A confusingly wide variety of temporally asymmetric learning rules exists, related to reinforcement learning and/or spike-timing-dependent plasticity; many of them look exceedingly similar while displaying strongly different behavior. These rules often find their use in control tasks, for example in robotics, where rigorous convergence and numerical stability are required. The goal of this article is to review and compare these rules in order to provide a better overview of their different properties. Two main classes are discussed, temporal difference (TD) rules and correlation-based (differential Hebbian) rules, together with some transition cases. In general we focus on neuronal implementations with changeable synaptic weights and a time-continuous representation of activity. In a machine learning (non-neuronal) context, a solid mathematical theory for TD learning has existed for several years; this can partly be transferred to a neuronal framework. Only recently has a more complete theory also emerged for differential Hebbian rules. In general, rules differ in their convergence conditions and numerical stability, which can lead to very undesirable behavior when one wants to apply them. For TD, convergence can be enforced with a certain output condition assuring that the δ-error drops on average to zero (output control). Correlation-based rules, on the other hand, converge when one input drops to zero (input control). Temporally asymmetric learning rules treat situations where incoming stimuli follow each other in time; thus, it is necessary to remember the first stimulus in order to relate it to the later-occurring second one. To this end, the two types of rules use different kinds of so-called eligibility traces, which again leads to different properties of TD and differential Hebbian learning, as discussed here. Thus, this paper, while also presenting several novel mathematical results, is mainly meant to provide a road map through the different neuronally emulated temporally asymmetric learning rules and their behavior, to provide some guidance for possible applications.
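
    To make the structural contrast concrete, the following Python sketch places a simplified, discretized member of each family side by side on a toy conditioning trial (early cue, later reward). The trial layout, filters, and all constants are illustrative choices of this summary, not taken from the article, which works with time-continuous neuronal activities.

    # Side-by-side sketch of the two rule classes compared above, in a crude
    # discrete-time form. The TD rule is driven by a delta error ("output
    # control"); the differential Hebbian rule correlates a trace of the early
    # input with the temporal derivative of the output ("input control").
    import numpy as np

    T = 30                       # steps per trial
    cs_on, us_on = 5, 15         # early stimulus (CS) and later reward/US
    trials = 300

    # --- TD(0) with a tapped delay-line ("serial compound") representation ---
    alpha, gamma = 0.1, 0.98
    w_td = np.zeros(T)           # one value weight per time step of the trial
    r = np.zeros(T); r[us_on] = 1.0
    for _ in range(trials):
        for t in range(T - 1):
            delta = r[t + 1] + gamma * w_td[t + 1] - w_td[t]   # delta error
            w_td[t] += alpha * delta                           # learning stops as delta -> 0

    # --- Differential Hebbian rule: dw/dt ~ u1(t) * dv/dt ------------------
    mu, decay = 0.01, 0.8
    u1 = np.zeros(T); u0 = np.zeros(T)
    for t in range(1, T):        # low-pass filtered versions of the two pulses
        u1[t] = decay * u1[t - 1] + (1 - decay) * (t == cs_on)
        u0[t] = decay * u0[t - 1] + (1 - decay) * (t == us_on)
    w_dh = 0.0
    for _ in range(trials):
        v_prev = 0.0
        for t in range(T):
            v = u0[t] + w_dh * u1[t]              # output: US plus weighted CS
            w_dh += mu * u1[t] * (v - v_prev)     # correlate CS trace with dv/dt
            v_prev = v

    print("TD value at CS onset:", round(w_td[cs_on], 3))   # ~ gamma**(us_on - cs_on - 1)
    print("Differential Hebbian CS weight:", round(w_dh, 4))

    The TD weights settle once the δ-error vanishes, whereas, in the article's terminology, the correlation-based weight keeps changing as long as the later input keeps arriving and stabilizes only when that input drops to zero.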

    A biologically inspired neuronal model of reward prediction error computation

    The neurocomputational model described here proposes that two dimensions involved in the computation of reward prediction errors, i.e., magnitude and time, could be computed separately and combined later, unlike in traditional reinforcement learning models. The model is built on biological evidence and is able to reproduce various aspects of classical conditioning, namely the progressive cancellation of the predicted reward, the predictive firing elicited by conditioned stimuli, and the delineation of early rewards, with firing for rewards delivered markedly sooner but not for early rewards that occur with a longer latency, in accordance with biological data.
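
    As a toy illustration of the separate magnitude/time idea (not the circuit-level model of the article), the following Python snippet computes a prediction error in which a learned magnitude estimate only cancels the delivered reward if delivery falls inside a learned temporal window; a reward arriving far too early therefore still produces an error, while a slightly early reward does not. The function, window width, and numbers are assumptions for illustration.

    def prediction_error(reward_time, reward_size, pred_time, pred_size, tol=3):
        """Error at reward delivery: the separately stored magnitude prediction
        cancels the reward only if delivery falls within tol steps of the
        separately stored predicted time (all parameters are illustrative)."""
        predicted = pred_size if abs(reward_time - pred_time) <= tol else 0.0
        return reward_size - predicted

    # A cue predicts a reward of size 1.0 around step 20 (made-up values):
    print(prediction_error(20, 1.0, 20, 1.0))   # on time, fully predicted   -> 0.0
    print(prediction_error(18, 1.0, 20, 1.0))   # slightly early, tolerated  -> 0.0
    print(prediction_error(10, 1.0, 20, 1.0))   # much too early, surprising -> 1.0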

    In Search of the Neural Circuits of Intrinsic Motivation

    Children seem to acquire new know-how in a continuous and open-ended manner. In this paper, we hypothesize that an intrinsic motivation to progress in learning is at the origins of the remarkable structure of children's developmental trajectories. In this view, children engage in exploratory and playful activities for their own sake, not as steps toward other extrinsic goals. The central hypothesis of this paper is that intrinsically motivating activities correspond to expected decrease in prediction error. This motivation system pushes the infant to avoid both predictable and unpredictable situations in order to focus on the ones that are expected to maximize progress in learning. Based on a computational model and a series of robotic experiments, we show how this principle can lead to organized sequences of behavior of increasing complexity characteristic of several behavioral and developmental patterns observed in humans. We then discuss the putative circuitry underlying such an intrinsic motivation system in the brain and formulate two novel hypotheses. The first one is that tonic dopamine acts as a learning progress signal. The second is that this progress signal is directly computed through a hierarchy of microcortical circuits that act both as prediction and metaprediction systems.
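
    The following Python sketch illustrates the central principle under assumptions of this summary rather than the article's model: an agent tracks its recent prediction errors for three toy "activities" and preferentially samples the one whose error is decreasing, which keeps it away from both the already-mastered and the never-learnable activity.

    # Minimal sketch of the learning-progress principle: prefer activities whose
    # prediction error is currently *decreasing*. The three toy activities and
    # every constant are illustrative assumptions, not the article's model.
    import random

    random.seed(0)

    def error(activity, practice):
        """Toy prediction error produced by one interaction with an activity."""
        if activity == "mastered":              # already predictable: error near zero
            return 0.02 + 0.01 * random.random()
        if activity == "learnable":             # error shrinks the more it is practiced
            return 0.8 * (0.97 ** practice) + 0.02 * random.random()
        return 0.7 + 0.05 * random.random()     # "noise": never becomes predictable

    activities = ["mastered", "learnable", "noise"]
    practice = {a: 0 for a in activities}
    history = {a: [error(a, 0)] * 10 for a in activities}   # bootstrap samples
    choices = {a: 0 for a in activities}

    def progress(a):
        """Learning progress: recent decrease of the smoothed prediction error."""
        h = history[a][-10:]
        return sum(h[:5]) / 5 - sum(h[-5:]) / 5

    for _ in range(200):
        if random.random() < 0.2:               # occasional random exploration
            a = random.choice(activities)
        else:                                   # otherwise maximize expected progress
            a = max(activities, key=progress)
        practice[a] += 1
        history[a].append(error(a, practice[a]))
        choices[a] += 1

    print(choices)   # expect the "learnable" activity to be chosen most often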