40 research outputs found

    A Tale of Two Animats: What does it take to have goals?

    Full text link
    What does it take for a system, biological or not, to have goals? Here, this question is approached in the context of in silico artificial evolution. By examining the informational and causal properties of artificial organisms ('animats') controlled by small, adaptive neural networks (Markov Brains), this essay discusses necessary requirements for intrinsic information, autonomy, and meaning. The focus lies on comparing two types of Markov Brains that evolved in the same simple environment: one with purely feedforward connections between its elements, the other with an integrated set of elements that causally constrain each other. While both types of brains 'process' information about their environment and are equally fit, only the integrated one forms a causally autonomous entity above a background of external influences. This suggests that to assess whether goals are meaningful for a system itself, it is important to understand what the system is, rather than what it does. Comment: This article is a contribution to the FQXi 2016-2017 essay contest "Wandering Towards a Goal".
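
    The contrast between the two animat types can be made concrete with a small logic-gate update rule. The sketch below is a minimal illustration assuming deterministic binary elements and hand-picked gates; the wiring diagrams, element names, and gate choices are placeholders, not the evolved networks analysed in the essay.

        def step(state, wiring, gates):
            """Advance all non-sensor elements one synchronous time step.
            state  : dict element -> 0/1
            wiring : dict element -> tuple of input elements
            gates  : dict element -> function mapping the input bits to 0/1
            """
            new_state = dict(state)
            for element, inputs in wiring.items():
                bits = tuple(state[i] for i in inputs)
                new_state[element] = gates[element](bits)
            return new_state

        AND = lambda bits: int(all(bits))
        XOR = lambda bits: int(sum(bits) % 2 == 1)

        # Feedforward animat: hidden elements read only sensors, the motor reads only hidden elements.
        feedforward = {"h1": ("s1", "s2"), "h2": ("s1", "s2"), "m1": ("h1", "h2")}
        # Integrated animat: the hidden elements also constrain each other (a recurrent loop).
        integrated = {"h1": ("s1", "h2"), "h2": ("s2", "h1"), "m1": ("h1", "h2")}

        gates = {"h1": AND, "h2": XOR, "m1": XOR}
        state = {"s1": 1, "s2": 0, "h1": 0, "h2": 0, "m1": 0}
        print(step(state, feedforward, gates), step(state, integrated, gates))

    Both wirings map sensor states to motor states, but only the integrated one contains elements whose causes and effects lie within the network itself, which is the property the essay ties to intrinsic meaning.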

    Changes of Mind in an Attractor Network of Decision-Making

    Get PDF
    Attractor networks successfully account for psychophysical and neurophysiological data in various decision-making tasks. In particular, their ability to model persistent activity, a property of many neurons involved in decision-making, distinguishes them from other approaches. Stable decision attractors, however, are hard to reconcile with changes of mind. Here we demonstrate that a biophysically realistic attractor network with spiking neurons, in its itinerant transients towards the choice attractors, can replicate changes of mind observed recently during a two-alternative random-dot motion (RDM) task. Based on the assumption that the brain continues to evaluate available evidence after the initiation of a decision, the network predicts neural activity during changes of mind and accurately simulates reaction times, performance, and the percentage of changes of mind as a function of difficulty. Moreover, the model suggests a low decision threshold and high incoming activity that drive the brain region involved in the decision-making process into a dynamical regime close to a bifurcation, a regime whose physiological relevance has so far lacked direct evidence. We thereby further support the general agreement of attractor networks with higher-level neural processes and offer experimental predictions to distinguish nonlinear attractor models from linear diffusion models.
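
    As a rough intuition for the mechanism, the toy rate model below implements mutual inhibition between two choice-selective populations driven by noisy evidence. It is a deliberately reduced caricature, not the paper's biophysically detailed spiking network, and all parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        dt, tau = 1.0, 20.0                      # time step and population time constant (ms)
        w_self, w_cross = 1.6, 1.8               # recurrent excitation and mutual inhibition
        f = lambda x: 1.0 / (1.0 + np.exp(-4.0 * (x - 1.0)))   # saturating transfer function

        def trial(coherence=0.05, steps=1500, sigma=0.35):
            r = np.zeros((steps, 2))
            drive = np.array([0.5 + coherence, 0.5 - coherence])   # evidence for choice 1 vs 2
            for t in range(1, steps):
                noise = sigma * rng.standard_normal(2)
                inp = drive + w_self * r[t - 1] - w_cross * r[t - 1][::-1] + noise
                r[t] = r[t - 1] + dt / tau * (-r[t - 1] + f(inp))
            return r

        r = trial()
        diff = r[:, 0] - r[:, 1]
        committed = np.sign(diff[np.abs(diff) > 0.1])   # provisional choice once the races separate
        print("reversals of the provisional choice:", np.count_nonzero(np.diff(committed)))

    On most runs the network settles directly into one choice attractor; when a noise-driven reversal of the provisional winner does occur during the transient, that is the model analogue of a change of mind, which the paper studies in full biophysical detail.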

    Dynamic excitatory and inhibitory gain modulation can produce flexible, robust and optimal decision-making

    Get PDF
    Behavioural and neurophysiological studies in primates have increasingly shown the involvement of urgency signals during the temporal integration of sensory evidence in perceptual decision-making. Neuronal correlates of such signals have been found in the parietal cortex, and in separate studies, demonstrated attention-induced gain modulation of both excitatory and inhibitory neurons. Although previous computational models of decision-making have incorporated gain modulation, their abstract forms do not permit an understanding of the contribution of inhibitory gain modulation. Thus, the effects of co-modulating both excitatory and inhibitory neuronal gains on decision-making dynamics and behavioural performance remain unclear. In this work, we incorporate time-dependent co-modulation of the gains of both excitatory and inhibitory neurons into our previous biologically based decision circuit model. We base our computational study in the context of two classic motion-discrimination tasks performed in animals. Our model shows that by simultaneously increasing the gains of both excitatory and inhibitory neurons, a variety of the observed dynamic neuronal firing activities can be replicated. In particular, the model can exhibit winner-take-all decision-making behaviour with higher firing rates and within a significantly more robust model parameter range. It also exhibits short-tailed reaction time distributions even when operating near a dynamical bifurcation point. The model further shows that neuronal gain modulation can compensate for weaker recurrent excitation in a decision neural circuit, and support decision formation and storage. Higher neuronal gain is also suggested in the more cognitively demanding reaction time than in the fixed delay version of the task. Using the exact temporal delays from the animal experiments, fast recruitment of gain co-modulation is shown to maximize reward rate, with a timescale that is surprisingly near the experimentally fitted value. Our work provides insights into the simultaneous and rapid modulation of excitatory and inhibitory neuronal gains, which enables flexible, robust, and optimal decision-making.
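
    The core manipulation, multiplying the input-output functions of both excitatory and inhibitory populations by a gain that rises within a trial, can be written down in a few lines. The ramp shape, time constant, and parameter values below are illustrative placeholders, not values fitted to the animal data in the paper.

        import numpy as np

        def gain_ramp(t_ms, g0=1.0, g1=1.5, tau=200.0):
            """Neuronal gain rising from g0 toward g1 with time constant tau (ms)."""
            return g0 + (g1 - g0) * (1.0 - np.exp(-t_ms / tau))

        def rate(current, gain, threshold=0.2):
            """Threshold-linear f-I curve scaled multiplicatively by the gain."""
            return gain * np.maximum(current - threshold, 0.0)

        I_exc, I_inh = 0.6, 0.5                 # sample input currents to the two populations
        for t_ms in (0.0, 300.0, 900.0):        # early, middle and late in a trial
            g = gain_ramp(t_ms)                 # the same ramp co-modulates E and I gains (1:1 here)
            print(f"t={t_ms:5.0f} ms  gain={g:.2f}  "
                  f"r_E={rate(I_exc, g):.3f}  r_I={rate(I_inh, g):.3f}")

    Because the same time course scales both transfer functions, excitatory and inhibitory drive grow together within the trial, which is the ingredient the model links to faster and more robust winner-take-all competition.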

    A Probabilistic, Distributed, Recursive Mechanism for Decision-making in the Brain

    Get PDF
    Decision formation recruits many brain regions, but the procedure they jointly execute is unknown. Here we characterize its essential composition, using as a framework a novel recursive Bayesian algorithm that makes decisions based on spike-trains with the statistics of those in sensory cortex (MT). Using it to simulate the random-dot-motion task, we demonstrate it quantitatively replicates the choice behaviour of monkeys, whilst predicting losses of otherwise usable information from MT. Its architecture maps to the recurrent cortico-basal-ganglia-thalamo-cortical loops, whose components are all implicated in decision-making. We show that the dynamics of its mapped computations match those of neural activity in the sensorimotor cortex and striatum during decisions, and forecast those of basal ganglia output and thalamus. This also predicts which aspects of neural dynamics are and are not part of inference. Our single-equation algorithm is probabilistic, distributed, recursive, and parallel. Its success at capturing anatomy, behaviour, and electrophysiology suggests that the mechanism implemented by the brain has these same characteristics.
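
    To make the flavour of such an algorithm concrete, here is a generic recursive Bayesian accumulator over Poisson spike counts with an MT-like rate difference. It is a sketch of the general idea only, not the paper's single-equation algorithm or its mapping onto cortico-basal-ganglia-thalamo-cortical loops, and every parameter is a placeholder.

        import numpy as np

        rng = np.random.default_rng(1)

        def run_trial(rates=(20.0, 10.0), true_choice=0, dt=0.01, threshold=0.95):
            """Recursive Bayes over two hypotheses, observing one spike count per time step.
            rates : mean MT-like firing rates (Hz) expected under hypothesis 0 and 1."""
            log_post = np.log(np.array([0.5, 0.5]))           # uniform prior
            lam = np.array(rates) * dt                        # expected counts per step
            for step in range(1, 2001):                       # cap the trial at 20 s
                count = rng.poisson(rates[true_choice] * dt)  # observed spike count this step
                # recursive update: add the log-likelihood of the new count under each hypothesis
                # (the log(count!) term is common to both hypotheses and drops out)
                log_post += count * np.log(lam) - lam
                log_post -= log_post.max()                    # keep the numbers well scaled
                post = np.exp(log_post) / np.exp(log_post).sum()
                if post.max() >= threshold:
                    break
            return int(post.argmax()), step * dt              # choice and decision time (s)

        print(run_trial())

    The update is recursive (the posterior after each step serves as the prior for the next), probabilistic, and parallel over hypotheses, the properties the abstract highlights.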

    A simple method for quantitative calcium imaging in unperturbed developing neurons.

    No full text
    Calcium imaging has been widely used to address questions of neuronal function and development. To gain deeper insights into the actions of calcium as a second messenger, but also to measure synaptic function, it is necessary to quantify the level of calcium at rest and during calcium transients. While quantification of calcium levels is straightforward when using ratiometric calcium indicators, these dyes have several drawbacks due to their short-wavelength excitation spectra, such as light scattering and cytotoxicity. In contrast, many single-wavelength indicators exhibit superior photostability, low phototoxicity, extended dynamic ranges, and very high signal-to-noise ratios. However, quantifying calcium levels in unperturbed neurons has not previously been achieved with these indicators. Here, we explore a new approach for determining the calcium concentration at rest, as well as calcium rises during evoked and spontaneous neuronal activity, in unperturbed developing neurons using a single-wavelength calcium indicator. We show that measuring the maximal fluorescence at the end of an imaging experiment allows calcium levels to be determined with high resolution. Specifically, we assessed the limits of calcium measurements with a CCD camera in small neuronal processes and found that even in small-diameter dendrites and spines the intracellular calcium concentration and its changes can be estimated accurately. This approach may not only allow quantitative mapping of patterns of neuronal activity with the resolution of single synapses and a few tens of milliseconds, but also facilitate investigation of the role of calcium as a second messenger.
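
    As a minimal sketch of the kind of calculation involved, the function below assumes the relation commonly used for single-wavelength (non-ratiometric) indicators once the saturating fluorescence F_max has been measured at the end of the experiment. The dissociation constant and dynamic range are placeholders and would have to come from an in-situ calibration of the particular dye used.

        def calcium_from_fluorescence(F, F_max, K_d=375.0, R_f=100.0):
            """Estimate [Ca2+] in nM from background-subtracted fluorescence F.
            F_max : fluorescence at saturating calcium, measured at the end of the experiment
            K_d   : dissociation constant of the indicator (nM); placeholder value
            R_f   : dynamic range of the indicator (F_max / F_min); placeholder value
            """
            ratio = F / F_max
            return K_d * (ratio - 1.0 / R_f) / (1.0 - ratio)

        # Example: a dendrite at 30% of its saturating fluorescence
        print(calcium_from_fluorescence(F=0.30, F_max=1.0))   # about 155 nM with these placeholders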

    Fitness and neural complexity of animats exposed to environmental change

    No full text

    Mathematica script for community dynamics model

    No full text
    This script runs in Wolfram Mathematica. It implements the model community described in the Appendix as an illustration of an autocatalytic ecological system.

    Schema therapy-informed social interaction training: interventional approach for adults with high-functioning autism

    No full text
    Autism spectrum disorders are neurodevelopmental disorders characterized by impairments of social interaction and communication. So-called high-functioning autism (HFA) is also characterized by these social difficulties, but in the absence of intellectual impairments. Individuals with HFA are not always diagnosed in childhood or adolescence, which might be due to the compensatory strategies these individuals can develop thanks to their intact intellectual capacities. In contrast to childhood and adolescence, very little is known about the effectiveness of psychotherapeutic treatment options for adults with HFA. Cognitive behavioral therapy suggests social competence training, but it is unclear whether this approach actually helps to change social interaction skills. In view of the promising findings on the effectiveness of schema therapy in various psychiatric disorders, we suggest that schema therapy-informed social interaction training (STISI) be applied in patients with HFA. A fundamental idea is to reduce the complexity of social interactions by teaching patients with HFA to identify schema coping behavior in their non-autistic interaction partners and to learn ways of responding to it. It is hypothesized, and initial feedback from patients indicates, that adults with HFA benefit from this strategy. It may help them to resolve interactional situations that could otherwise have negative consequences.

    Evolution of Integrated Causal Structures in Animats Exposed to Environments of Increasing Complexity

    No full text
    Natural selection favors the evolution of brains that can capture fitness-relevant features of the environment's causal structure. We investigated the evolution of small, adaptive logic-gate networks (“animats”) in task environments where falling blocks of different sizes have to be caught or avoided in a ‘Tetris-like’ game. Solving these tasks requires the integration of sensor inputs and memory. Evolved networks were evaluated using measures of information integration, including the number of evolved concepts and the total amount of integrated conceptual information. The results show that, over the course of the animats' adaptation, i) the number of concepts grows; ii) integrated conceptual information increases; iii) this increase depends on the complexity of the environment, especially on the requirement for sequential memory. These results suggest that the need to capture the causal structure of a rich environment, given limited sensors and internal mechanisms, is an important driving force for organisms to develop highly integrated networks (“brains”) with many concepts, leading to an increase in their internal complexity. © 2014 Albantakis et al.
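
    For orientation, the sketch below mimics the structure of such a block-catch task. The grid size, animat width, sensor placement, and scoring rule are illustrative guesses rather than the paper's exact settings, and the random policy stands in for an evolved logic-gate network, whose concepts and integrated conceptual information this sketch does not compute.

        import random

        WIDTH, HEIGHT = 16, 36                     # illustrative arena size

        def run_trial(policy, block_width, catch_it):
            """policy(sensors) -> -1, 0 or +1 (move left, stay, move right)."""
            block_x = random.randrange(WIDTH)
            animat_x = random.randrange(WIDTH)     # animat occupies animat_x .. animat_x + 2
            for _ in range(HEIGHT):
                # two sensors report whether the block's leading cell is above either edge of the animat
                sensors = (int(block_x == animat_x), int(block_x == (animat_x + 2) % WIDTH))
                animat_x = (animat_x + policy(sensors)) % WIDTH
                block_x = (block_x + 1) % WIDTH    # the block drifts sideways as it falls
            hit = any((block_x + i) % WIDTH in {(animat_x + j) % WIDTH for j in range(3)}
                      for i in range(block_width))
            return hit == catch_it                 # True if the animat did the right thing

        random_policy = lambda sensors: random.choice((-1, 0, 1))
        correct = sum(run_trial(random_policy, block_width=1, catch_it=True) for _ in range(100))
        print("trials handled correctly by a random policy:", correct, "/ 100")

    An animat that does better than chance must use internal elements to remember where and when the block was last seen, which is exactly the sequential-memory demand the paper links to growth in integrated conceptual information.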