
    Large-Scale Modeling – a Tool for Conquering the Complexity of the Brain

    Is there any hope of achieving a thorough understanding of higher functions such as perception, memory, thought and emotion, or is the stunning complexity of the brain a barrier that will limit such efforts for the foreseeable future? In this perspective we discuss methods for handling complexity and approaches to model building, and point to detailed large-scale models as a new contribution to the toolbox of the computational neuroscientist. We elucidate some aspects that distinguish large-scale models and some of the technological challenges they entail.

    BRAHMS: Novel middleware for integrated systems computation

    Biological computational modellers are becoming increasingly interested in building large, eclectic models, including components on many different computational substrates, both biological and non-biological. At the same time, the rise of the philosophy of embodied modelling is generating a need to deploy biological models as controllers for robots in real-world environments. Finally, robotics engineers are beginning to find value in adopting biomimetic control strategies for use on practical robots. Together with the ubiquitous desire to make good on past software development effort, these trends are throwing up new challenges of intellectual and technological integration (for example across scales, across disciplines, and even across time) - challenges that are unmet by existing software frameworks. Here, we outline these challenges in detail, and go on to describe a newly developed software framework, BRAHMS, that meets them. BRAHMS is a tool for integrating computational process modules into a viable, computable system: its generality and flexibility facilitate integration across barriers, such as those described above, in a coherent and effective way. We describe several cases where BRAHMS has been successfully deployed in practical situations, and show excellent performance in comparison with a monolithic development approach. Additional benefits of developing in the framework include source code self-documentation, automatic coarse-grained parallelisation, cross-language integration, data logging, and performance monitoring; dynamic load-balancing and 'pause and continue' execution are planned. BRAHMS is built on the nascent, and similarly general-purpose, model markup language, SystemML. This will, in future, also facilitate repeatability and accountability (same answers ten years from now), transparent automatic software distribution, and interfacing with other SystemML tools.
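    To make the idea of "integrating computational process modules into a viable, computable system" concrete, here is a purely illustrative Python sketch of modules with named ports wired into a stepped system. All class and method names are hypothetical; this is not the BRAHMS or SystemML API, only the general pattern the abstract describes.

```python
# Illustrative only: a toy "process modules wired into a computable system"
# pattern, loosely in the spirit described above. Class and method names are
# hypothetical and are NOT the BRAHMS or SystemML APIs.
from dataclasses import dataclass, field


@dataclass
class Module:
    """A computational process with named input and output ports."""
    name: str
    inputs: dict = field(default_factory=dict)   # port name -> latest value
    outputs: dict = field(default_factory=dict)  # port name -> latest value

    def step(self, t):
        raise NotImplementedError


class Oscillator(Module):
    def step(self, t):
        # Produce a simple time-varying signal on port "out".
        self.outputs["out"] = 1.0 if (t // 10) % 2 == 0 else 0.0


class Gain(Module):
    def step(self, t):
        # Scale whatever arrived on port "in".
        self.outputs["out"] = 2.0 * self.inputs.get("in", 0.0)


class System:
    """Wires module output ports to input ports and steps all modules in order."""
    def __init__(self, modules, links):
        self.modules = {m.name: m for m in modules}
        self.links = links  # list of ("src.port", "dst.port")

    def run(self, n_steps):
        for t in range(n_steps):
            for m in self.modules.values():
                m.step(t)
            for src, dst in self.links:
                (sm, sp), (dm, dp) = src.split("."), dst.split(".")
                self.modules[dm].inputs[dp] = self.modules[sm].outputs.get(sp)


if __name__ == "__main__":
    system = System([Oscillator("osc"), Gain("gain")], [("osc.out", "gain.in")])
    system.run(40)
    print(system.modules["gain"].outputs["out"])
```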

    Dopamine-modulated dynamic cell assemblies generated by the GABAergic striatal microcircuit

    The striatum, the principal input structure of the basal ganglia, is crucial to both motor control and learning. It receives convergent input from all over the neocortex, hippocampal formation, amygdala and thalamus, and is the primary recipient of dopamine in the brain. Within the striatum is a GABAergic microcircuit that acts upon these inputs, formed by the dominant medium-spiny projection neurons (MSNs) and fast-spiking interneurons (FSIs). There has been little progress in understanding the computations it performs, hampered by the non-laminar structure that prevents identification of a repeating canonical microcircuit. Here we begin to identify potential dynamically defined computational elements within the striatum. We construct a new three-dimensional model of the striatal microcircuit's connectivity, and instantiate this with our dopamine-modulated neuron models of the MSNs and FSIs. A new model of gap junctions between the FSIs is introduced and tuned to experimental data. We introduce a novel multiple spike-train analysis method, and apply this to the outputs of the model to find groups of synchronised neurons at multiple time-scales. We find that, with realistic in vivo background input, small assemblies of synchronised MSNs spontaneously appear, consistent with experimental observations, and that the number of assemblies and the time-scale of synchronisation are strongly dependent on the simulated concentration of dopamine. We also show that feed-forward inhibition from the FSIs counter-intuitively increases the firing rate of the MSNs. Such small cell assemblies, forming spontaneously only in the absence of dopamine, may contribute to the motor control problems seen in humans and animals following a loss of dopamine cells.
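    The abstract does not detail the spike-train analysis itself; as a purely illustrative sketch (not the authors' method), one simple way to look for synchronised groups at a chosen time-scale is to bin each spike train and greedily group neurons whose binned counts are strongly correlated. The bin width and correlation threshold below are arbitrary assumptions.

```python
# Illustrative sketch only (not the authors' analysis method): find groups of
# neurons whose binned spike counts are strongly correlated at one time-scale.
import numpy as np

def synchronised_groups(spike_times, t_stop, bin_ms=50.0, rho_min=0.5):
    """spike_times: list of 1-D arrays of spike times (ms), one per neuron."""
    edges = np.arange(0.0, t_stop + bin_ms, bin_ms)
    counts = np.array([np.histogram(st, edges)[0] for st in spike_times], float)
    rho = np.nan_to_num(np.corrcoef(counts))  # constant trains give NaN -> 0
    groups = []
    for i in range(len(spike_times)):
        # Join the first group whose members all correlate with neuron i.
        for g in groups:
            if all(rho[i, j] >= rho_min for j in g):
                g.append(i)
                break
        else:
            groups.append([i])
    return [g for g in groups if len(g) > 1]

# Toy example: neurons 0 and 1 fire together, neuron 2 fires independently.
rng = np.random.default_rng(0)
common = np.sort(rng.uniform(0, 1000, 40))
trains = [common, common + rng.normal(0, 2, 40), np.sort(rng.uniform(0, 1000, 40))]
print(synchronised_groups(trains, t_stop=1000.0))
```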

    Anthropocene, Capitalocene, Machinocene: Illusions of instrumental reason

    In their seminal work, Dialectic of Enlightenment, Horkheimer and Adorno interpreted capitalism as the irrational monetization of nature. In the present work, I analyze three 21st-century concepts, Anthropocene, Capitalocene and Machinocene, in light of Horkheimer and Adorno’s arguments and recent arguments from the philosophy of biology. The analysis reveals a remarkable prescience of the term “instrumental reason”, which is present in each of the three concepts in a profound and cryptic way. In my interpretation, the term describes the propensity of science based on the notion of physicalism to interpret nature as a machine analyzable and programmable by human reason. As a result, the Anthropocene concept is built around a mechanistic model, which may be captured by the metaphor of a car without brakes. In a similar fashion, the Machinocene concept predicts the emergence of a mechanical mind that will dominate nature in the near future. Finally, the Capitalocene concept turns a perfectly rational ambition to expand knowledge into an irrational obsession with over-knowledge, by employing institutionalized science as the engine of a capitalism without brakes. The common denominator of all three concepts is the irrational propensity to legitimize self-destruction. Potential avenues for countering the effects of “instrumental reason” are suggested.

    Run-Time Interoperability Between Neuronal Network Simulators Based on the MUSIC Framework

    MUSIC is a standard API allowing large-scale neuronal network simulators to exchange data within a parallel computer during runtime. A pilot implementation of this API has been released as open source. We report experiences from the implementation of MUSIC interfaces for two neuronal network simulators of different kinds, NEST and MOOSE. A multi-simulation of a cortico-striatal network model involving both simulators is performed, demonstrating how MUSIC can promote interoperability between models written for different simulators and how these can be re-used to build a larger model system. Benchmarks show that the MUSIC pilot implementation provides efficient data transfer in a cluster computer with good scaling. We conclude that MUSIC fulfills the design goal that it should be simple to adapt existing simulators to use MUSIC. In addition, since the MUSIC API enforces independence of the applications, the multi-simulation could be built from pluggable component modules without adapting the components to each other in terms of simulation time-step or topology of connections between the modules.
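    The key idea (applications that advance with their own internal time-steps but exchange data only at agreed communication points) can be sketched as follows. This toy Python sketch is a conceptual illustration only; the classes are hypothetical and bear no relation to the actual MUSIC C++ API or configuration files.

```python
# Illustrative sketch only: MUSIC-style run-time coupling, with two "simulators"
# that advance independently and exchange events through a port at fixed
# communication intervals. Hypothetical classes; not the MUSIC API.
class EventPort:
    """One-way channel carrying (time, neuron_id) events between applications."""
    def __init__(self):
        self.buffer = []

    def send(self, events):
        self.buffer.extend(events)

    def receive(self):
        events, self.buffer = self.buffer, []
        return events


class ToySimulator:
    def __init__(self, name, dt):
        self.name, self.dt, self.t = name, dt, 0.0
        self.incoming = []

    def advance_to(self, t_comm, out_port=None, in_port=None):
        if in_port is not None:
            self.incoming.extend(in_port.receive())
        spikes = []
        while self.t < t_comm:
            # A real simulator would integrate its neurons here; we just emit
            # one dummy spike per internal step from neuron 0.
            spikes.append((self.t, 0))
            self.t += self.dt
        if out_port is not None:
            out_port.send(spikes)


# "Multi-simulation": sender and receiver use different internal time-steps but
# only exchange data every 1.0 ms, at the agreed communication points.
port = EventPort()
pre, post = ToySimulator("cortex", dt=0.1), ToySimulator("striatum", dt=0.025)
for t_comm in [1.0, 2.0, 3.0]:
    pre.advance_to(t_comm, out_port=port)
    post.advance_to(t_comm, in_port=port)
print(len(post.incoming), "events delivered to", post.name)
```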

    An efficient automated parameter tuning framework for spiking neural networks

    As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes a difficult challenge. SNNs have been used to successfully model complex neural circuits that explore various neural phenomena such as neural plasticity, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well suited to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EAs) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4104 neurons was tuned to give V1 simple cell-like tuning curve responses and produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation showed a speedup of 65× for the GPU over the CPU, or 0.35 h per generation for the GPU vs. 23.5 h per generation for the CPU. Additionally, the parameter value solutions found in the tuned SNN were found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
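    As a minimal sketch of the evolutionary-algorithm side of such a framework (not the authors' GPU-accelerated implementation), the loop below evolves a population of parameter vectors against a placeholder fitness function; in practice `simulate_and_score` would run the SNN and score its responses against the desired tuning curves. The constants and the fitness function are assumptions for illustration.

```python
# Minimal sketch of evolutionary parameter tuning for an SNN: selection of an
# elite, uniform crossover, and Gaussian mutation over bounded parameters.
import numpy as np

rng = np.random.default_rng(1)
N_PARAMS, POP, GENERATIONS, ELITE = 8, 32, 50, 8
LO, HI = 0.0, 1.0  # assumed normalised parameter bounds

def simulate_and_score(params):
    # Placeholder fitness: in practice, run the SNN with these parameters and
    # score the match to desired responses (e.g. V1-like tuning curves).
    target = np.linspace(LO, HI, N_PARAMS)
    return -np.sum((params - target) ** 2)

population = rng.uniform(LO, HI, (POP, N_PARAMS))
for gen in range(GENERATIONS):
    fitness = np.array([simulate_and_score(p) for p in population])
    elite = population[np.argsort(fitness)[-ELITE:]]      # keep the best
    parents = elite[rng.integers(0, ELITE, (POP, 2))]      # random elite pairs
    cross = rng.random((POP, N_PARAMS)) < 0.5              # uniform crossover
    children = np.where(cross, parents[:, 0], parents[:, 1])
    children += rng.normal(0.0, 0.05, children.shape)      # Gaussian mutation
    population = np.clip(children, LO, HI)

best = population[np.argmax([simulate_and_score(p) for p in population])]
print("best parameters:", np.round(best, 3))
```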

    Apperceptive patterning: Artefaction, extensional beliefs and cognitive scaffolding

    In “Psychopower and Ordinary Madness” my ambition, as it relates to Bernard Stiegler’s recent literature, was twofold: 1) critiquing Stiegler’s work on exosomatization and artefactual posthumanism—or, more specifically, nonhumanism—to problematize approaches to media archaeology that rely upon technical exteriorization; 2) challenging how Stiegler engages with Giuseppe Longo and Francis Bailly’s conception of negative entropy. These efforts were directed by a prevalent techno-cultural qualifier: the rise of Synthetic Intelligence (including neural nets, deep learning, predictive processing and Bayesian models of cognition). This paper continues this project, but first directs a critical analytic lens at the Derridean practice of the ontologization of grammatization from which Stiegler emerges, while also distinguishing how metalanguages operate in relation to object-oriented environmental interaction by way of inferentialism. Stalking continental (Kapp, Simondon, Leroi-Gourhan, etc.) and analytic traditions (e.g., Carnap, Chalmers, Clark, Sutton, Novaes, etc.), we move from artefacts to AI and Predictive Processing so as to link theories related to technicity with philosophy of mind. Simultaneously drawing forth Robert Brandom’s conceptualization of the roles that commitments play in retrospectively reconstructing the social experiences that lead to our endorsement(s) of norms, we complement this account with Reza Negarestani’s deprivatized account of intelligence while analyzing the equipollent role between language and media (both digital and analog).

    Reconstructing the three-dimensional GABAergic microcircuit of the striatum

    A system's wiring constrains its dynamics, yet modelling of neural structures often overlooks the specific networks formed by their neurons. We developed an approach for constructing anatomically realistic networks and reconstructed the GABAergic microcircuit formed by the medium spiny neurons (MSNs) and fast-spiking interneurons (FSIs) of the adult rat striatum. We grew dendrite and axon models for these neurons and extracted probabilities for the presence of these neurites as a function of distance from the soma. From these, we found the probabilities of intersection between the neurites of two neurons given their inter-somatic distance, and used these to construct three-dimensional striatal networks. The MSN dendrite models predicted that half of all dendritic spines are within 100 μm of the soma. The constructed networks predict distributions of gap junctions between FSI dendrites, synaptic contacts between MSNs, and synaptic inputs from FSIs to MSNs that are consistent with current estimates. The models predict that, to achieve this, FSIs should be at most 1% of the striatal population. They also show that the striatum is sparsely connected: FSI-MSN and MSN-MSN contacts respectively form 7% and 1.7% of all possible connections. The models predict two striking network properties: the dominant GABAergic input to an MSN arises from neurons with somas at the edge of its dendritic field; and FSIs are interconnected on two different spatial scales, locally by gap junctions and distally by synapses. We show that both properties influence striatal dynamics: the most potent inhibition of an MSN arises from a region of striatum at the edge of its dendritic field; and the combination of local gap junction and distal synaptic networks between FSIs sets a robust input-output regime for the MSN population. Our models thus intimately link striatal micro-anatomy to its dynamics, providing a biologically grounded platform for further study.
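    A minimal sketch of the general construction principle, assuming a placeholder distance-dependent connection probability rather than the intersection probabilities fitted in the study: place somas in a volume, compute pairwise inter-somatic distances, and sample connections with a probability that falls off with distance.

```python
# Illustrative sketch: build a 3-D network by connecting neuron pairs with a
# probability that decays with inter-somatic distance. The exponential fall-off
# and its constants are placeholders, not the fitted intersection probabilities.
import numpy as np

rng = np.random.default_rng(42)
N = 500                      # neurons
SIDE_UM = 300.0              # cube side length (micrometres)
P0, LAMBDA_UM = 0.3, 80.0    # assumed peak probability and length constant

# Random somatic positions in a cube.
pos = rng.uniform(0.0, SIDE_UM, (N, 3))

# Pairwise inter-somatic distances.
diff = pos[:, None, :] - pos[None, :, :]
dist = np.sqrt((diff ** 2).sum(-1))

# Connection probability decays with distance; no self-connections.
p_conn = P0 * np.exp(-dist / LAMBDA_UM)
np.fill_diagonal(p_conn, 0.0)

# Sample the directed adjacency matrix and report how sparse the network is.
adj = rng.random((N, N)) < p_conn
print(f"connection density: {adj.mean():.3%}")
```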