
    The influence of dopamine on prediction, action and learning

    In this thesis I explore functions of the neuromodulator dopamine in the context of autonomous learning and behaviour. I first investigate dopaminergic influence within a simulated agent-based model, demonstrating how modulation of synaptic plasticity can enable reward-mediated learning that is both adaptive and self-limiting. I describe how this mechanism is driven by the dynamics of agent-environment interaction and consequently suggest roles for both complex spontaneous neuronal activity and specific neuroanatomy in the expression of early, exploratory behaviour. I then show how the observed response of dopamine neurons in the mammalian basal ganglia may also be modelled by similar processes involving dopaminergic neuromodulation and cortical spike-pattern representation within an architecture of counteracting excitatory and inhibitory neural pathways, reflecting gross mammalian neuroanatomy. Significantly, I demonstrate how combined modulation of synaptic plasticity and neuronal excitability enables specific (timely) spike-patterns to be recognised and selectively responded to by efferent neural populations, therefore providing a novel spike-timing-based implementation of the hypothetical ‘serial-compound’ representation suggested by temporal difference learning. I subsequently discuss more recent work, focused upon modelling those complex spike-patterns observed in cortex. Here, I describe neural features likely to contribute to the expression of such activity and subsequently present novel simulation software allowing for interactive exploration of these factors, in a more comprehensive neural model that implements both dynamical synapses and dopaminergic neuromodulation. I conclude by describing how the work presented ultimately suggests an integrated theory of autonomous learning, in which direct coupling of agent and environment supports a predictive coding mechanism, bootstrapped in early development by a more fundamental process of trial-and-error learning.
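
    As an illustration of the ‘serial-compound’ idea referenced above (a minimal sketch under assumed cue/reward times and learning parameters, not the thesis's spiking model), temporal-difference learning over a complete-serial-compound stimulus shows how the prediction error commonly compared to phasic dopamine migrates from the reward to the cue:

import numpy as np

n_steps = 20                 # time steps in a trial
cue_t, reward_t = 2, 15      # assumed cue onset and reward delivery times
gamma, alpha = 0.98, 0.1     # discount factor and learning rate
w = np.zeros(n_steps)        # one weight per serial-compound feature

def features(t):
    # 'Complete serial compound': a distinct feature for each step since cue onset.
    x = np.zeros(n_steps)
    if t >= cue_t:
        x[t - cue_t] = 1.0
    return x

deltas = []
for trial in range(500):
    trial_deltas = []
    for t in range(n_steps - 1):
        r = 1.0 if t + 1 == reward_t else 0.0
        delta = r + gamma * (w @ features(t + 1)) - (w @ features(t))  # TD error
        w += alpha * delta * features(t)
        trial_deltas.append(delta)
    deltas = trial_deltas    # keep the last trial's prediction-error profile

# Early in training the TD error peaks at reward delivery; after learning it
# peaks at cue onset, mirroring the shift of the phasic dopamine response.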

    Closed loop interactions between spiking neural network and robotic simulators based on MUSIC and ROS

    In order to properly assess the function and computational properties of simulated neural systems, it is necessary to account for the nature of the stimuli that drive the system. However, providing stimuli that are rich and yet both reproducible and amenable to experimental manipulations is technically challenging, and even more so if a closed-loop scenario is required. In this work, we present a novel approach to solve this problem, connecting robotics and neural network simulators. We implement a middleware solution that bridges the Robotic Operating System (ROS) to the Multi-Simulator Coordinator (MUSIC). This enables any robotic and neural simulators that implement the corresponding interfaces to be efficiently coupled, allowing real-time performance for a wide range of configurations. This work extends the toolset available for researchers in both neurorobotics and computational neuroscience, and creates the opportunity to perform closed-loop experiments of arbitrary complexity to address questions in multiple areas, including embodiment, agency, and reinforcement learning
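
    To illustrate the kind of closed loop such a bridge enables, here is a conceptual Python sketch only: the paper's coupling runs through MUSIC adapters, whereas this hypothetical rospy node stubs out the neural side, and the topic names, message types and encoder/decoder are assumptions.

import rospy
from std_msgs.msg import Float64MultiArray
from geometry_msgs.msg import Twist

latest_sensors = []          # most recent sensory reading from the robot simulator

def sensor_callback(msg):
    global latest_sensors
    latest_sensors = list(msg.data)

def encode(sensors):
    # Placeholder encoder: map sensor values to input firing rates (Hz).
    return [min(100.0, max(0.0, 50.0 + 10.0 * s)) for s in sensors]

def run_snn_step(input_rates):
    # Stand-in for the MUSIC-coupled spiking network; returns output rates.
    return input_rates

def decode(rates):
    # Placeholder decoder: two output populations drive linear/angular velocity.
    cmd = Twist()
    if len(rates) >= 2:
        cmd.linear.x = 0.01 * (rates[0] - rates[1])
        cmd.angular.z = 0.005 * (rates[0] + rates[1])
    return cmd

rospy.init_node('snn_bridge_sketch')
rospy.Subscriber('/robot/sensors', Float64MultiArray, sensor_callback)
cmd_pub = rospy.Publisher('/robot/cmd_vel', Twist, queue_size=1)
rate = rospy.Rate(100)       # closed-loop update at 100 Hz

while not rospy.is_shutdown():
    cmd_pub.publish(decode(run_snn_step(encode(latest_sensors))))
    rate.sleep()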

    Learning Aquatic Locomotion with Animats

    One of the challenges of researching spiking neural networks (SNN) is translation from temporal spiking behavior to classic controller output. While many encoding schemes exist to facilitate this translation, there are few benchmarks for neural networks that inherently utilize a temporal controller. In this work, we consider the common reinforcement learning problem of animat locomotion in an environment suited for evaluating SNNs. Using this problem, we explore novel methods of reward distribution as they impact learning. Hebbian learning, in the form of spike-timing-dependent plasticity (STDP), is modulated by a dopamine signal and affected by reward-induced neural activity. Different reward strategies are parameterized and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is used to find the best strategies for fixed animat morphologies. The contribution of this work is two-fold: to cast the problem of animat locomotion in a form directly applicable to simple temporal controllers, and to demonstrate novel methods for reward-modulated Hebbian learning.
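
    For readers unfamiliar with reward-modulated Hebbian learning, the following is a minimal sketch of a standard three-factor rule (dopamine-gated STDP with an eligibility trace, in the spirit of Izhikevich, 2007); the parameters are illustrative and not necessarily those used in this paper.

dt = 1.0                       # time step (ms)
tau_plus = tau_minus = 20.0    # STDP time constants (ms)
tau_c, tau_d = 1000.0, 200.0   # eligibility-trace and dopamine decay constants (ms)
A_plus, A_minus = 1.0, 1.2     # potentiation / depression amplitudes
lr = 0.002                     # learning rate

w, c, d = 0.5, 0.0, 0.0        # synaptic weight, eligibility trace, dopamine level
x_pre = x_post = 0.0           # low-pass-filtered pre/post spike trains

def step(pre_spike, post_spike, reward):
    """Advance one synapse by one time step of dopamine-modulated STDP."""
    global w, c, d, x_pre, x_post
    x_pre += -x_pre / tau_plus * dt + (1.0 if pre_spike else 0.0)
    x_post += -x_post / tau_minus * dt + (1.0 if post_spike else 0.0)
    # Classic STDP writes into the eligibility trace rather than the weight.
    if post_spike:
        c += A_plus * x_pre        # pre-before-post pairing: potentiating tag
    if pre_spike:
        c -= A_minus * x_post      # post-before-pre pairing: depressing tag
    c += -c / tau_c * dt
    d += reward - d / tau_d * dt   # reward releases dopamine, which then decays
    w += lr * c * d * dt           # three-factor update: eligibility x dopamine
    return w

# A pre-before-post pairing followed ~0.5 s later by reward still strengthens w,
# because the eligibility trace bridges the delay to the dopamine signal.
step(True, False, 0.0)
step(False, True, 0.0)
for _ in range(500):
    step(False, False, 0.0)
step(False, False, 1.0)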

    A closed-loop framework for decision-making and learning in primate prefrontal circuits, through computational modelling and virtual experimentation

    This thesis attempts to build a computational systems-level framework that would help to develop an understanding of the organization of the prefrontal cortex (PFC) and basal ganglia (BG) systems and their functional interactions in the process of decision-making and goal-directed behaviour in humans. A videogame environment with an artificial agent, Minecraft, is used to design experiments to test the framework in an environment that could be made more complex and realistic, if necessary. Malmo, a platform developed by Microsoft, allows communication with the videogame Minecraft to design scenarios in the environment and control the behaviour of the agent. The framework, along with virtual experimentation, forms a closed-loop architecture for studying high-level animal behaviour. It is pointed out that the generic principles behind flexible animal behaviours also give insights into developing artificial intelligence (AI) that is more general and autonomous in the nature of its learning, in addition to current AI systems that are specialized in a particular task.

    Behaviour, of a human or an animal, is a pattern of responses to a certain stimulus (physical or abstract). A response is essentially a choice among several possible options, or simply a choice between whether or not to make a choice from the available options. The neural correlates of decision-making in humans are an extensively sought-after question across multiple fields, ranging from behavioural psychology and economics to neuroscience and artificial intelligence (AI). Especially in the fields of neuroeconomics and AI, there is a huge pursuit to understand the underpinnings of decision-making in the brain. With rapidly growing interest in understanding the neural substrates of decision-making, learning and behaviour, at least in higher-order mammals like rodents, non-human primates and humans, more research is leading to deeper questions about our understanding of decision-making itself. This is not so surprising, given that any species depends, to some degree or other, on mechanisms of action selection or decision-making for its survival in an uncertain environment. Humans are presumably the most flexible and adaptive decision-makers, who can learn the underlying structure of the world, even if that structure is hidden, and rapidly adapt their behaviour. The prefrontal cortex (PFC) has been at the forefront of this proposition and is believed to have facilitated this evolution towards a wider repertoire of behaviours that emerge from underlying primitive action-selection mechanisms. It is highlighted that studying complex, realistic decision-making in ecological scenarios will require more sophisticated experimentation methods than the regular numerical simulations used. The experiments designed in Minecraft can be used to test the framework in an environment that could be made more complex and realistic, if necessary. The major added value of a virtual environment and an agent interacting in it is that the bodily characteristics of the agent (such as needs) can be emphasized and their role in value-based decision-making can be discussed. Subsequently, the framework, along with virtual experimentation, forms a closed-loop architecture for studying high-level animal behaviour.

    The neural systems framework in this work rests on the network dynamics between the subsystems of PFC and BG. The PFC is believed to play a crucial role in executive functions like planning, attention, goal-directed behaviour, etc.
    The BG are a group of sub-cortical nuclei that have been extensively studied in the field of motor control and action selection. Different regions of the PFC and structures within the BG are anatomically organized, including a respective sensory cortical region, in parallel and segregated loops (each of them referred to here as a CBG loop). These loops can be, at a high level, divided into three kinds: limbic loops, associative loops and sensori-motor loops. Imagine an animal interacting with stimuli in an environment. Some of the questions most pertinent to the current state of the animal with respect to the stimuli present are: (i) What is (the value of) this stimulus? (Preference) (ii) Why is this stimulus relevant to my current internal needs? (Need) (iii) Where is this stimulus located with respect to my reference in the current environment? (Orientation) (iv) How do I reach the ‘desired’ stimulus? (Approach) Limbic loops address the What? and Why? questions. Sensori-motor loops are concerned with the Where? and How? questions. Associative loops form a multi-modal association of the current state information, for instance which stimulus in the limbic loops is at which position represented in the motor loops. Furthermore, in each of these loops, as the subregion of PFC represents the chosen goal, the process of achieving the goal through sustained activation between the PFC subregion and the corresponding sensory cortical area is described. Virtual experimentation in particular helps to highlight this phenomenon by demonstrating flexible adjustments to the action plan once the goal is selected.

    First, a comprehensive framework with the above-mentioned parallel loops is implemented. All four loops are algorithmically implemented, describing the mutual influences between each of the prefrontal sub-regions. It is important to note that, although there is no explicit hierarchy built into the system among the loops, there are two levels of hierarchy that could implicitly arise. First, although the motor loops are free to make decisions in the action space, with sufficient learning in the limbic space the decisions in any of the limbic loops could lead the decisions in the sensori-motor space through the associative loop. Secondly, it is assumed that the fundamental motivation of the animal is internal homeostasis, that is, to maintain its internal needs within acceptable bounds. Thus, in certain situations, the internal motivation might lead the dynamics in the limbic loops, with the Why? loop for internal motivation biasing the What? loop, which might be more stimulus-driven when there is no pressing internal need. The inputs for the CBG loops are provided by the sensory perception of the framework, which communicates the information provided by Malmo from the videogame environment to the corresponding representations in the framework. Similarly, the output of the framework is transformed into appropriate Malmo representations of action commands that drive the agent in the environment. Since the cognitive framework is described by several biological constraints, several adaptations have been made in the way the Malmo platform is used, in terms of sensory perception of the environment and motor control of the agent.

    Next, we use this framework to study more closely the role of limbic loops in value-guided decision-making and goal-directed behaviour. The emphasis rests on the limbic loops; therefore the associative and sensori-motor loops are modelled algorithmically, with the help of the experimentation platform for motor control.
    As for the limbic loops, the orbitofrontal cortex (OFC) is part of a loop for preferences and the anterior cingulate cortex (ACC) part of a loop for internal needs. These loops are formed through their limbic counterpart in the BG, the ventral striatum (VS). The VS has been widely studied and reported to encode various substrates of value, forming an integral part of value-based decision-making. Simple scenarios are designed in the virtual environment using the agent, some objects and appetitive rewards in the environment. The limbic loops have been implemented according to existing computational models of decision-making in the BG and amygdala. Thus the framework and the experimental platform stand as a testbed for computational models of specific processes that have to fit into a bigger picture.

    Of the limbic loops, the role of the OFC has been studied closely. Across diverse studies spanning decades, the OFC has been implicated in almost all aspects of decision-making: state representation, outcome prediction, action selection, outcome evaluation and, primarily, learning. Furthermore, deficits or lesions of the OFC were argued to cause multiple behavioural impairments, such as impaired response inhibition for a stimulus that is no longer rewarding, or impaired learning when reward contingencies are reversed. With more advanced lesion techniques and keener analysis, several such observations were overturned. Nevertheless, the role of the OFC in value-based decision-making and learning is underlined time and again, while the exact ways in which it affects the process are still unknown. As part of this thesis, several outstanding observations about the role of the OFC in behaviour have been summarized by consolidating numerous experimental findings and reviews. To highlight a few, the OFC is implicated in: perceptual and value-based decision-making; different kinds of involvement at different phases within a single decision-making episode (trial), such as option presentation, action selection and outcome delivery; and learning stimulus-outcome (Pavlovian) and action-outcome (instrumental) associations. Neurons in the OFC were found to correlate vividly with the value of outcomes and, more interestingly, to express a phenomenon of range adaptation, adapting to changing ranges of values. The OFC is believed to learn a state-space representation of the task space so as to be able to access partially observable information for a decision. The structural heterogeneity of the OFC adds to the inherent complexity of studying its role in decision-making, learning and goal-directed behaviour. This has been studied in recent years, with studies focused on dissociating the roles of the lateral and medial subparts of the OFC; often, the ventromedial prefrontal cortex (vmPFC) is considered part of medial OFC. Bouret et al., 2010, Noonan et al., 2010, and Rudebeck & Murray, 2011 are some of the few comprehensive studies that clearly argued for separate roles of lateral and medial OFC.

    Lastly, to explain the findings of different roles for the lateral and medial regions of the OFC, an existing computational architecture of CBG loops, Pavlovian learning in the amygdala and multiple lines of evidence on amygdala-OFC-VS interactions are put together into a single model. The reinforcement learning rules have been adapted to accommodate appropriate credit assignment (correct outcome to correct chosen stimulus) and the value difference between the choice options. As a result, several findings from animal experiments studying these separable roles were replicated.
    In particular, in the context of the different roles of lateral and medial OFC in decision-making as a function of the value difference between options, distinct and dissociable roles of lateral and medial OFC were observed. Medial OFC seemed to be more crucial for the choice between two options whose values are close to each other, whereas lesions to medial OFC did not seem to affect the animal's performance when the values of the options are sufficiently far apart. On the contrary, and surprisingly, lateral OFC appeared to be crucial when the decisions are easy to make, whereas lesions to lateral OFC did not seem to affect the difficult choices where the values of the options are close to each other. Similar results were found in the performance of monkeys with lesions to lateral OFC and those with lesions to medial OFC. Dissociable roles in Pavlovian-instrumental transfer were also observed.

    Notwithstanding the detailed neural architectures and basic neuronal descriptions used in certain parts of this work, the neural mechanisms of all the behavioural paradigms were discussed at a very simplistic level. Throughout the work, only appetitive behaviour has been described, whereas most of the processes described are also known to account for aversive behaviours like avoiding punishments. In addition, the role of dopamine as the neurotransmitter facilitating learning has been extremely simplified. Furthermore, with multiple reinforcement learning systems involved in the framework, a detailed account is demanded of how dopamine could have a differential effect on these systems. One of the most important elements of behaviour that is not accounted for in the framework is memory. In fact, by complementing the framework with an existing computational account of a minimal working memory model, the mechanisms of sustained activity that maintain goals until they are achieved, and aspects like giving up if the goal has not been reached for a long time, can be explored further. Adding an explicit memory to store minimal spatial and episodic information would allow the framework to explain more flexible behaviours like purely goal-directed or opportunistic behaviours. However, that would require much more sophisticated implementations of the motor loops, in which a desired position can be navigated to. Nevertheless, the investigations into the evidence around the OFC offer great insight into understanding the very processes of decision-making and value computation in general. By venturing into a realm of bio-inspired adaptive learning in an embodied virtual agent, describing the principles of motivation, goal selection and self-evaluation, it is highlighted that the fields of reinforcement learning and artificial intelligence have a lot to gain from studying the role of prefrontal systems in decision-making.
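
    To make the four-loop idea concrete, the following toy sketch (purely illustrative; the stimuli, needs and numbers are assumptions rather than the thesis implementation) shows a ‘Why?’ loop biasing a ‘What?’ loop through need-weighted values, an associative mapping from identity to location, and a minimal ‘Where?/How?’ approach step.

needs = {'energy': 0.8, 'hydration': 0.2}          # internal state (Why? loop)
preferences = {'apple': {'energy': 0.9, 'hydration': 0.3},
               'water': {'energy': 0.0, 'hydration': 1.0}}   # learned values (What? loop)
positions = {'apple': (3, 1), 'water': (-2, 4)}    # associative loop: what -> where

def select_goal():
    # Need-weighted value: the Why? loop biases the What? loop.
    score = {s: sum(needs[n] * v for n, v in vals.items())
             for s, vals in preferences.items()}
    return max(score, key=score.get)

def approach(agent_xy, goal):
    # Sensori-motor loop: a minimal 'How?' -- step towards the goal location.
    gx, gy = positions[goal]
    ax, ay = agent_xy
    return (1 if gx > ax else -1 if gx < ax else 0,
            1 if gy > ay else -1 if gy < ay else 0)

goal = select_goal()            # 'apple' when the energy need dominates
move = approach((0, 0), goal)   # e.g. (1, 1): step towards the apple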

    Models and metaphors in neuroscience: the role of dopamine in reinforcement learning as a case study

    Neuroscience makes use of many metaphors in its attempt to explain the relationship between our brain and our behaviour. In this thesis I contrast the most commonly used metaphor - that of computation driven by neuron action potentials - with an alternative view which seeks to understand the brain in terms of an agent learning from the reward signalled by neuromodulators. To explore this reinforcement learning model I construct computational models to assess one of its key claims: that the neurotransmitter dopamine signals unexpected reward, and that this signal is used by the brain to learn control of our movements and drive goal-directed behaviour. In this thesis I develop a selection of computational models that are motivated by either theoretical concepts or experimental data relating to the effects of dopamine. The first model implements a published dopamine-modulated spike-timing-dependent plasticity mechanism but is unable to correctly solve the distal reward problem. I analyse why this model fails and suggest solutions. The second model, more closely linked to the empirical data, attempts to investigate the relative contributions of firing rate and synaptic conductances to synaptic plasticity. I use experimental data to estimate how model neurons will be affected by dopamine modulation, and use the resulting computational model to predict the effect of dopamine on synaptic plasticity. The results suggest that dopamine modulation of synaptic conductances is more significant than modulation of excitability. The third model demonstrates how simple assumptions about the anatomy of the basal ganglia and the electrophysiological effects of dopamine modulation can lead to reinforcement-learning-like behaviour. The model makes the novel prediction that working memory is an emergent feature of a reinforcement learning process. In the course of producing these models I find that both theoretically and empirically based models suffer from methodological problems that make it difficult to adequately support such fundamental claims as the reinforcement learning hypothesis. The conclusion that I draw from the modelling work is that it is neither possible nor desirable to falsify the theoretical models used in neuroscience. Instead I argue that models and metaphors can be valued by how useful they are, independently of their truth. As a result I suggest that we ought to encourage a plurality of models and metaphors in neuroscience. In Chapter 7 I attempt to put this into practice by reviewing the other transmitter systems that modulate dopamine release, and use this as a basis for exploring the context of dopamine modulation and reward-driven behaviour. I draw on evidence to suggest that dopamine modulation can be seen as part of an extended stress response, and that the function of dopamine is to encourage the individual to engage in behaviours that take it away from homeostasis. I also propose that the function of dopamine can be interpreted in terms of behaviourally defining self and non-self, much in the same way as inflammation and antibody responses are said to do in immunology.
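
    As an illustration of the claim that simple anatomical assumptions plus the electrophysiological effects of dopamine can yield reinforcement-learning-like behaviour, here is a hedged sketch in the spirit of standard Go/NoGo basal-ganglia accounts (e.g. Frank, 2005), not the thesis's own model; all parameters are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_actions = 2
go = np.full(n_actions, 0.5)     # direct-pathway (D1, 'Go') weights
nogo = np.full(n_actions, 0.5)   # indirect-pathway (D2, 'NoGo') weights
p_reward = np.array([0.8, 0.2])  # action 0 is usually rewarded
lr = 0.05

def select():
    drive = go - nogo                         # net disinhibition of each action channel
    p = np.exp(drive) / np.exp(drive).sum()   # softmax over action channels
    return rng.choice(n_actions, p=p)

for trial in range(500):
    a = select()
    reward = rng.random() < p_reward[a]
    da = 1.0 if reward else -1.0              # phasic dopamine burst vs dip
    go[a] = np.clip(go[a] + lr * da, 0, 1)    # burst strengthens Go for the chosen action
    nogo[a] = np.clip(nogo[a] - lr * da, 0, 1)  # and weakens NoGo (a dip does the opposite)

# After training, go - nogo is larger for action 0, so it is selected more often:
# reward-driven selection emerges from the two counteracting pathways.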

    The Brain is a Suitability Probability Processor: A macro model of our neural control system

    Our world is characterized by growing diversity and complexity, and the effort to manage our affairs well becomes increasingly difficult. This is true for all spheres of life, including culture, economy, technology, science, politics, the environment and the daily grind. A corresponding development occurs in our understanding of the brain, the crucial organ for keeping track of everything. The amount of domain-specific findings about this organ is growing dramatically, which takes place preferentially through highly specialized research. But the holistic understanding of the brain is challenged rather than supported by this development, resulting in a huge lack of knowledge at the systemic level of the neurosciences. Eckhard Schindler faces this dilemma by introducing a macro model of the brain. This is an attempt not only to improve the perception of our most crucial organ, but also to open a door to a better understanding of our species and to ease our lives again. Contents: Part 1 - The Brain as Suitability Probability Processor: Introduction; Neuro basics; Purpose, perception and motor control; Excitation, inhibition, pattern transformation and circuits; Memory; Homeostasis, pain, emotions and rewards; The SPP model; The emoti(onal-moti)vational system; The control levels of the central nervous system; The attention assessment controller (AAC); Efficiency through delegation and structuring; Universal suitability probability evaluation; Needs and library of associative-emotivational patterns; Higher needs; Needs and suitability probability evaluation; Suitability probability evaluation and evolution; The two types of consciousness; Conscious experiences; Individual and social consciousness; The 4DI model; A four-dimensional intelligence concept (4DI); Dynamics of the need hierarchy; Social emotivational dependency chains; The need for coherence; Artificial needs versus growth needs; Dynamics in the 3D tension field; 3D tensions in the affluent society; The tunnel vision paradox; Emotivational amplification adaptation; Fading consciousness in affluent contexts; About the integrative ingredient of 4DI; Toe-holds for other disciplines. Part 2 - Excursions to the current state of science: Introduction; Basal ganglia (BG) and frontal cortex; Emotion, motivation and memory; Cognitive control and emotions; Consciousness; Psychology; Brain and computer; The biggest open questions.

    Salience and motivated behaviour in schizophrenia

    Schizophrenia is a long-term psychotic disorder that affects approximately 1% of the population worldwide. Schizophrenia is characterised by negative symptoms, such as anhedonia and social withdrawal, and positive symptoms, such as hallucinations and delusions. The impact of schizophrenia reaches beyond the impaired social and cognitive function of the individual, affecting families and wider communities. Therefore, despite its low prevalence, there is a long history of multidisciplinary research investigating the causes of schizophrenia. The effect of antipsychotics in reducing the intensity of symptoms, through their antagonistic effect on dopamine, has led to dopaminergic theories of schizophrenia. One such theory is based on aberrant salience, the assignment of importance to stimuli that have no intrinsic or learned value or salience. The aberrant salience hypothesis links hyperdopaminergic activation to symptoms of schizophrenia through the intermediary effect of motivational salience. Specifically, it is proposed that hyperdopaminergic activation in schizophrenia creates an aberrant motivational association with a stimulus, leading to cognitive explanations for the unexplained importance that contribute to the development of symptoms. Behavioural and neural evidence supports heightened aberrant salience in schizophrenia, although specific measures of aberrant salience have yielded inconsistent results. There is also a large body of evidence suggesting that cognitive functions anchored in dopaminergic activation, such as reward processing and motivated behaviour, are impaired in schizophrenia. To date, however, the assumption that motivational salience mediates the relationship between hyperdopaminergic activation and aberrant salience has not been tested. The current project sought to elucidate the relationship between aberrant salience and motivational salience. The convergent validity among measures of aberrant salience (Salience Attribution Task and Aberrant Salience Inventory) and motivated behaviour (Effort Expenditure for Rewards Task and Stimulus Chase Task) was investigated in undergraduates. To assess whether aberrant salience, and the underlying relationship with motivational salience, is unique to schizophrenia, the same measures were completed by individuals diagnosed with schizophrenia, individuals experiencing symptoms of anxiety, and individuals unaffected by mental health conditions. Whereas schizophrenia was associated with heightened aberrant salience, the aberrant salience indices lacked specificity, sensitivity, and convergent validity. Furthermore, whereas schizophrenia was associated with maladaptive motivated behaviour, there was limited evidence supporting a relationship between measures of aberrant salience and motivational salience. The failure to find evidence of such a relationship may be due to issues with the aberrant salience measures or the underlying assumption that motivational salience mediates aberrant salience. Further research is needed to develop measures of aberrant salience that are anchored to known neural systems underlying salience processing.

    Predictive attenuation of tactile sensation.

    It has been proposed that, in order to enhance sensitivity to novel information, the brain removes predictable components of sensory input. This thesis describes a series of psychophysical and behavioural studies investigating predictive filtering in the perception of touch. Using a novel force-matching paradigm, we demonstrate that self-generated tactile sensations are perceived as weaker than the same stimuli externally imposed. This attenuation is shown to be temporally tuned to the expected time of contact and modulated by the certainty with which a sensation can be attributed to self-action. We confirm experimentally that this attenuation results from a predictive, rather than postdictive, mechanism. Such a mechanism may predict the sensory consequences of action based on an internal model of the environment and an efference copy of the motor command. We investigate how prediction is acquired in a new environment and the coordinate systems in which the new environment is internally represented. Using a novel protocol of transcranial magnetic stimulation, we find evidence to suggest that the efference copy signal underlying the prediction arises upstream of primary motor cortex. Patients with schizophrenia are found to show less attenuation than healthy controls, consistent with models of the disease that propose an underlying deficit in sensory prediction. These experimental findings are discussed in relation to potential neural mechanisms of sensory filtering, and the many proposed roles for predictive mechanisms in human sensory and motor systems are reviewed
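
    The following is a minimal sketch of the forward-model account described above (illustrative only; the gain and attenuation parameters are assumptions, not values fitted to these experiments), showing why a self-generated force is perceived as weaker than an identical externally imposed one.

def perceived_force(actual, motor_command, attribution_confidence,
                    forward_model_gain=1.0, attenuation=0.7):
    # Efference-copy prediction of the tactile consequence of the motor command.
    predicted = forward_model_gain * motor_command
    # Attenuation scales with how confidently the sensation is attributed to self-action.
    alpha = attenuation * attribution_confidence
    return max(0.0, actual - alpha * predicted)

# Self-generated touch (prediction available) is perceived as weaker than the
# same force imposed externally (no motor command, so nothing is subtracted).
self_generated = perceived_force(actual=2.0, motor_command=2.0, attribution_confidence=1.0)
external = perceived_force(actual=2.0, motor_command=0.0, attribution_confidence=0.0)
assert self_generated < external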