106 research outputs found

    BRAHMS: Novel middleware for integrated systems computation

    Biological computational modellers are becoming increasingly interested in building large, eclectic models, including components on many different computational substrates, both biological and non-biological. At the same time, the rise of the philosophy of embodied modelling is generating a need to deploy biological models as controllers for robots in real-world environments. Finally, robotics engineers are beginning to find value in seconding biomimetic control strategies for use on practical robots. Together with the ubiquitous desire to make good on past software development effort, these trends are throwing up new challenges of intellectual and technological integration (for example across scales, across disciplines, and even across time) - challenges that are unmet by existing software frameworks. Here, we outline these challenges in detail, and go on to describe a newly developed software framework, BRAHMS, that meets them. BRAHMS is a tool for integrating computational process modules into a viable, computable system: its generality and flexibility facilitate integration across barriers, such as those described above, in a coherent and effective way. We go on to describe several cases where BRAHMS has been successfully deployed in practical situations. We also show excellent performance in comparison with a monolithic development approach. Additional benefits of developing in the framework include source code self-documentation, automatic coarse-grained parallelisation, cross-language integration, data logging and performance monitoring, and will include dynamic load-balancing and 'pause and continue' execution. BRAHMS is built on the nascent, and similarly general-purpose, model markup language, SystemML. This will, in future, also facilitate repeatability and accountability (same answers ten years from now), transparent automatic software distribution, and interfacing with other SystemML tools. (C) 2009 Elsevier Ltd. All rights reserved.

    Theory of Interaction of Memory Patterns in Layered Associative Networks

    A synfire chain is a network that can generate repeated spike patterns with millisecond precision. Although synfire chains with only one activity propagation mode have been intensively analyzed with several neuron models, those with several stable propagation modes have not been thoroughly investigated. By using the leaky integrate-and-fire neuron model, we constructed a layered associative network embedded with memory patterns. We analyzed the network dynamics with the Fokker-Planck equation. First, we addressed the stability of one memory pattern as a propagating spike volley. We showed that memory patterns propagate as pulse packets. Second, we investigated the activity when we activated two different memory patterns. Simultaneous activation of two memory patterns with the same strength led the propagating pattern to a mixed state. In contrast, when the activations had different strengths, the pulse packet converged to a two-peak state. Finally, we studied the effect of the preceding pulse packet on the following pulse packet. The following pulse packet was modified from its original activated memory pattern, and it converged to a two-peak state, a mixed state, or a non-spike state depending on the time interval.
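
    The pulse-packet propagation described above can be illustrated with a toy simulation. This is not the authors' model (they work analytically with the Fokker-Planck equation); the single-kick synapse, the parameter values, and the layer sizes below are all illustrative assumptions:

```python
import math
import random

def simulate_layer(spike_times, n_neurons=100, w=0.12, tau_m=10.0,
                   v_th=1.0, dt=0.1, t_max=50.0, noise=0.02, seed=0):
    """Propagate a spike volley through one layer of leaky
    integrate-and-fire neurons (Euler integration, delta synapses).
    Returns the first-spike time of each neuron that fires."""
    rng = random.Random(seed)
    times = sorted(spike_times)
    out = []
    for _ in range(n_neurons):
        v, t, idx = 0.0, 0.0, 0
        while t < t_max:
            # leak plus a small additive white-noise term
            v += dt * (-v / tau_m) + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            # deliver every input spike that falls in this time step
            while idx < len(times) and times[idx] <= t:
                v += w
                idx += 1
            if v >= v_th:
                out.append(t)   # fire once, then stop this neuron
                break
            t += dt
    return out

# input pulse packet: 80 spikes, Gaussian temporal spread 1 ms around t = 5 ms
rng = random.Random(1)
packet = [rng.gauss(5.0, 1.0) for _ in range(80)]
layer1 = simulate_layer(packet)
layer2 = simulate_layer(layer1)
```

    A strong, tight input packet makes essentially every neuron in the layer fire within a fraction of a millisecond, so the volley propagates as a pulse packet that stays synchronized from layer to layer.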

    Sparse and Dense Encoding in Layered Associative Network of Spiking Neurons

    A synfire chain is a simple neural network model that can propagate stable synchronous spikes, called a pulse packet, and has been widely researched. However, how synfire chains coexist in one network remains to be elucidated. We have studied the activity of a layered associative network of leaky integrate-and-fire neurons in whose connections we embed memory patterns by Hebbian learning. We analyzed the activity by the Fokker-Planck method. In our previous report, when half of the neurons belong to each memory pattern (memory pattern rate F=0.5), the temporal profile of the network activity splits into temporally clustered groups called sublattices under certain input conditions. In this study, we show that when the network is sparsely connected (F<0.5), synchronous firing of the memory patterns is promoted. On the contrary, a densely connected network (F>0.5) inhibits synchronous firing. The sparseness and denseness also affect the basin of attraction and the storage capacity of the embedded memory patterns. We show that sparsely (densely) connected networks enlarge (shrink) the basin of attraction and increase (decrease) the storage capacity.
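
    A minimal sketch of embedding binary memory patterns in a connection matrix with a Hebb rule, where each neuron belongs to a pattern with probability f (the memory pattern rate). The plain product rule and the pattern generator below are simplified stand-ins, not the paper's exact learning rule:

```python
import random

def random_pattern(n, f, rng):
    """Binary pattern: each neuron belongs with probability f."""
    return [1 if rng.random() < f else 0 for _ in range(n)]

def hebbian_weights(n, patterns):
    """Symmetric Hebbian connection matrix summed over all embedded
    patterns, with no self-connections (w[i][i] = 0)."""
    w = [[0.0] * n for _ in range(n)]
    for xi in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += xi[i] * xi[j]
    return w

rng = random.Random(0)
n, f = 50, 0.3                       # sparse case: f < 0.5
patterns = [random_pattern(n, f, rng) for _ in range(3)]
w = hebbian_weights(n, patterns)
```

    With sparser patterns (smaller f) the embedded patterns overlap less, which is one intuition for why sparse coding enlarges the basins of attraction in the study above.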

    Signal Propagation in Feedforward Neuronal Networks with Unreliable Synapses

    In this paper, we systematically investigate both synfire propagation and firing rate propagation in feedforward neuronal networks coupled in an all-to-all fashion. In contrast to most earlier work, where only reliable synaptic connections are considered, we mainly examine the effects of unreliable synapses on both types of neural activity propagation in this work. We first study networks composed of purely excitatory neurons. Our results show that both the successful transmission probability and excitatory synaptic strength largely influence the propagation of these two types of neural activities, and better tuning of these synaptic parameters makes the considered network support stable signal propagation. It is also found that noise has significant but different impacts on these two types of propagation. The additive Gaussian white noise has the tendency to reduce the precision of the synfire activity, whereas noise with appropriate intensity can enhance the performance of firing rate propagation. Further simulations indicate that the propagation dynamics of the considered neuronal network is not simply determined by the average amount of received neurotransmitter for each neuron in a time instant, but also largely influenced by the stochastic effect of neurotransmitter release. Second, we compare our results with those obtained in corresponding feedforward neuronal networks connected with reliable synapses but in a random coupling fashion. We confirm that some differences can be observed in these two different feedforward neuronal network models. Finally, we study the signal propagation in feedforward neuronal networks consisting of both excitatory and inhibitory neurons, and demonstrate that inhibition also plays an important role in signal propagation in the considered networks. Comment: 33 pages, 16 figures; Journal of Computational Neuroscience (published
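
    The effect of unreliable synapses on rate propagation can be caricatured in a few lines: each presynaptic spike is delivered only with a release probability, and a neuron fires if enough kicks arrive. This is a drastically simplified single-time-step sketch, not the paper's model; every parameter value is an illustrative assumption:

```python
import random

def propagate_rate(n_layers=10, n_per_layer=100, p_release=0.5,
                   w=0.05, v_th=1.0, p_input=0.6, seed=0):
    """Rate propagation through an all-to-all feedforward chain with
    stochastically releasing synapses.  Returns the fraction of
    active neurons in each layer."""
    rng = random.Random(seed)
    active = [rng.random() < p_input for _ in range(n_per_layer)]
    rates = [sum(active) / n_per_layer]
    for _ in range(n_layers - 1):
        n_spikes = sum(active)
        nxt = []
        for _ in range(n_per_layer):
            # binomial thinning: each of the n_spikes inputs is
            # actually delivered only with probability p_release
            delivered = sum(1 for _ in range(n_spikes)
                            if rng.random() < p_release)
            nxt.append(delivered * w >= v_th)
        active = nxt
        rates.append(sum(active) / n_per_layer)
    return rates

rates = propagate_rate()
```

    Even this toy version shows the paper's qualitative point: what matters is not only the average delivered input (n_spikes * p_release * w) but also the trial-to-trial fluctuations of stochastic release around that average.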

    The frequency of transforming growth factor-β (TGF-β) gene polymorphisms in a normal southern Iranian population

    Several single nucleotide polymorphisms (SNPs) of the transforming growth factor-β1 gene (TGFB1) have been reported. Determination of TGFB1 SNP allele frequencies in different ethnic groups is useful for both population genetic analyses and association studies with immunological diseases. In this study, five SNPs of TGFB1 were determined in 325 individuals from a normal southern Iranian population using the polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) method. This population was in Hardy-Weinberg equilibrium for these SNPs. Of the 12 constructed haplotypes, GTCGC and GCTGC were the most frequent in the normal southern Iranian population. Comparison of genotype and allele frequencies of TGFB1 SNPs between Iranian and other populations (meta-analysis) showed significant differences, and in this case the southern Iranian population seems genetically similar to Caucasoid populations. However, a neighbour-joining tree using Nei's genetic distances based on TGF-β1 allele frequencies showed that southern Iranians are genetically far from people from the USA, Germany, UK, Denmark and the Czech Republic. In conclusion, this is the first report of the distribution of TGFB1 SNPs in an Iranian population and the results of this investigation may provide useful information for both population genetic and disease studies. © 2008 The Authors
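
    The Hardy-Weinberg equilibrium check mentioned above is a standard one-degree-of-freedom chi-square test on the genotype counts of a biallelic SNP. A minimal sketch follows; the counts are invented for illustration and are not the study's data:

```python
def hardy_weinberg_chi2(n_aa, n_ab, n_bb):
    """Chi-square statistic for Hardy-Weinberg equilibrium from the
    genotype counts (AA, AB, BB) of a biallelic SNP."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)          # frequency of allele A
    q = 1 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_aa, n_ab, n_bb)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# hypothetical genotype counts for a sample of 325 individuals
chi2 = hardy_weinberg_chi2(120, 160, 45)
```

    A statistic below the 5% critical value of 3.84 (chi-square, 1 d.f.) means the genotype counts are consistent with Hardy-Weinberg equilibrium, as reported for the five SNPs in this population.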

    A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems

    In this paper we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim at the establishment of this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware-experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: The integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter is proven with a variety of experimental results
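
    One step in the biology-to-hardware mapping described above is translating biological neuron parameters into the value ranges a configurable hardware neuron actually supports. The sketch below is a toy illustration of that idea only; the parameter names and ranges are made up and this is not the actual PyNN-to-hardware translation code:

```python
def map_to_hardware(bio_params, hw_ranges):
    """Clamp each biological parameter into the range the hardware
    supports, recording which parameters had to be distorted."""
    hw, clamped = {}, []
    for name, value in bio_params.items():
        lo, hi = hw_ranges[name]
        mapped = min(max(value, lo), hi)
        if mapped != value:
            clamped.append(name)
        hw[name] = mapped
    return hw, clamped

# hypothetical parameters (PyNN-style names) and hardware ranges
bio = {"v_thresh": -50.0, "tau_m": 45.0}
ranges = {"v_thresh": (-60.0, -40.0), "tau_m": (1.0, 30.0)}
hw, clamped = map_to_hardware(bio, ranges)   # tau_m is out of range
```

    Reporting which parameters were clamped is exactly the kind of feedback the benchmark-based evaluation scheme above needs in order to quantify how far the hardware configuration deviates from the reference software simulation.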

    A Fokker-Planck formalism for diffusion with finite increments and absorbing boundaries

    Gaussian white noise is frequently used to model fluctuations in physical systems. In Fokker-Planck theory, this leads to a vanishing probability density near the absorbing boundary of threshold models. Here we derive the boundary condition for the stationary density of a first-order stochastic differential equation for additive finite-grained Poisson noise and show that the response properties of threshold units are qualitatively altered. Applied to the integrate-and-fire neuron model, the response turns out to be instantaneous rather than exhibiting low-pass characteristics, highly non-linear, and asymmetric for excitation and inhibition. The novel mechanism is exhibited on the network level and is a generic property of pulse-coupled systems of threshold units. Comment: Consists of two parts: main article (3 figures) plus supplementary text (3 extra figures)
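
    The key qualitative claim, that finite input increments leave a non-vanishing probability density near the absorbing threshold, can be checked numerically. The simulation below is a rough sketch under simplifying assumptions (a single Poisson input stream, illustrative parameters), not the paper's derivation:

```python
import random

def fraction_near_threshold(jump=0.1, rate=3.0, tau=10.0, v_th=1.0,
                            v_reset=0.0, dt=0.01, t_max=2000.0, seed=0):
    """Fraction of time an integrate-and-fire unit driven by finite
    Poisson jumps spends in the top 5% of the distance to threshold.
    For pure diffusion (Gaussian) input this fraction would shrink to
    zero; with finite increments it stays strictly positive."""
    rng = random.Random(seed)
    v = v_reset
    steps = int(t_max / dt)
    near = 0
    for _ in range(steps):
        v += dt * (-v / tau)               # leak toward rest
        if rng.random() < rate * dt:       # Poisson spike arrives
            v += jump                      # finite increment, not diffusion
        if v >= v_th:
            v = v_reset                    # fire and reset
        elif v > 0.95 * v_th:
            near += 1
    return near / steps

frac = fraction_near_threshold()
```

    Because the membrane can land just below threshold in a single finite jump, there is appreciable probability mass right at the boundary, which is why one more input spike can trigger an essentially instantaneous response.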

    Implementing Rules with Artificial Neurons

    Rule-based systems are an important class of computer languages. The brain, and more recently neuromorphic systems, is based on neurons. This paper describes a mechanism that converts a rule-based system, specified by a user, to spiking neurons. The system can then be run in simulated neurons, producing the same output. The conversion is done making use of binary cell assemblies and finite state automata. The binary cell assemblies, eventually implemented in neurons, implement the states. The rules are converted to a dictionary of facts and simple finite state automata. This is then cashed out to neurons. The neurons can be simulated on standard simulators, like NEST, or on neuromorphic hardware. Parallelism is a benefit of neural systems, and rule-based systems can take advantage of this parallelism. It is hoped that this work will support further exploration of parallel neural and rule-based systems, and su
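
    The rules-to-automaton step described above can be sketched symbolically before any neurons are involved: the current set of facts is the state, and firing every rule whose conditions hold gives the transition function. This is a toy stand-in for the paper's cell-assembly implementation, with a hypothetical rule base:

```python
def compile_rules(rules):
    """Compile if-then rules over symbolic facts into a transition
    function: state (frozenset of facts) -> next state, obtained by
    firing every rule whose conditions all hold.  Note that all rules
    fire in parallel within one step, mirroring the parallelism a
    neural implementation provides for free."""
    def step(facts):
        new = set(facts)
        for conditions, conclusion in rules:
            if all(c in facts for c in conditions):
                new.add(conclusion)
        return frozenset(new)
    return step

# hypothetical rule base
rules = [
    (("wet", "cold"), "ice"),
    (("ice",), "slippery"),
]
step = compile_rules(rules)
state = frozenset({"wet", "cold"})
state = step(state)   # first step derives "ice"
state = step(state)   # second step derives "slippery"
```

    In the paper's scheme, each such state would be realized as a binary cell assembly and each transition as the spiking dynamics that moves activity from one assembly to the next.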