
    Liquid state machine built of Hodgkin-Huxley neurons-pattern recognition and informational entropy

    Neural networks built of Hodgkin-Huxley neurons are examined. Such structures behave like liquid state machines: they can effectively process geometrical patterns presented to an artificial retina into precisely defined outputs. The output responses are analysed in two ways: by means of an artificial neural network and by calculating informational entropy.
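The entropy analysis mentioned above can be sketched as follows. This is a minimal illustration, not code from the paper: `response_entropy` is a hypothetical helper name, and it assumes the machine's output responses have been discretized into symbols.

```python
from collections import Counter
from math import log2

def response_entropy(responses):
    """Shannon entropy (in bits) of a list of discrete output responses.

    Illustrative sketch: assumes responses are already discretized symbols.
    """
    counts = Counter(responses)
    total = len(responses)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Two equiprobable responses carry one bit of information:
print(response_entropy(["A", "B", "A", "B"]))  # 1.0
```

A liquid whose outputs collapse onto a single response carries zero entropy, so this quantity gives a rough measure of how informative the readout is.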

    Computational ability of LSM ensemble in the model of mammalian visual system

    Ensembles of artificial Hodgkin-Huxley neural microcircuits are examined. The networks discussed in this article simulate the cortex of the primate visual system, using a modular architecture in which the cortex is divided into columns. The results of parallel simulations based on liquid computing theory are presented in some detail. A separation ability of groups of neural microcircuits is observed, and we show that this property may help explain some pattern recognition phenomena.

    Investigating Mammalian Visual System with methods of informational theory

    We examine a simple model of the mammalian visual system, simulated by means of several hundred Hodgkin-Huxley neurons, and investigate its signal processing properties. Methods taken from information theory are applied to the analysis of the dynamics of the primary visual cortex. The efficiency of such methods in two-dimensional movement detection is discussed in some detail.

    Hebbian encoding in the biological visual system

    We examined neural networks built of several hundred Hodgkin-Huxley neurons. The main aim of the research described below was to simulate memory processes occurring in the hippocampus and the biological visual system. In our model we chose the ancient Chinese I-Ching oracle as the set of input patterns. Maps of Hebbian weights appearing on the output device of the model can be analysed by artificial neural networks playing the role of a kind of visual consciousness.
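The Hebbian weight maps described above arise from a correlation-based update rule. A minimal sketch of the basic rule, assuming rate-coded pre- and postsynaptic activities (the helper name and learning rate are illustrative, not taken from the paper):

```python
def hebbian_update(w, pre, post, lr=0.01):
    """Basic Hebbian rule: strengthen w[i][j] when presynaptic unit j
    and postsynaptic unit i are active together (dw = lr * post_i * pre_j)."""
    return [[w[i][j] + lr * post[i] * pre[j] for j in range(len(pre))]
            for i in range(len(post))]

w = [[0.0, 0.0], [0.0, 0.0]]
w = hebbian_update(w, pre=[1.0, 0.0], post=[0.0, 1.0])
# only the synapse from active input 0 to active output 1 grows
print(w)  # [[0.0, 0.0], [0.01, 0.0]]
```

Repeated presentation of patterns such as the I-Ching hexagrams would then leave a weight map reflecting the input correlations, which is what the downstream networks analyse.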

    Parallel computing for brain simulation

    [Abstract] Background: The human brain is the most complex system in the known universe and therefore one of its greatest mysteries. It provides human beings with extraordinary abilities, yet how and why most of these abilities arise is still not understood. Aims: For decades, researchers have been trying to make computers reproduce these abilities, focusing both on understanding the nervous system and on processing data more efficiently than before. Their aim is to make computers process information similarly to the brain. Important technological developments and vast multidisciplinary projects have made it possible to create the first simulations with a number of neurons comparable to that of a human brain. Conclusion: This paper presents an up-to-date review of the main research projects that are trying to simulate and/or emulate the human brain. They employ different types of computational models using parallel computing: digital, analog, and hybrid models. The review covers the current applications of these works as well as future trends. It focuses on works seeking advances in neuroscience and on others pursuing new discoveries in computer science (neuromorphic hardware, machine learning techniques). Their most outstanding characteristics are summarized, and the latest advances and future plans are presented. In addition, this review points out the importance of considering not only neurons: computational models of the brain should also include glial cells, given the proven importance of astrocytes in information processing.
    Funding: Galicia. Consellería de Cultura, Educación e Ordenación Universitaria; GRC2014/049. Galicia. Consellería de Cultura, Educación e Ordenación Universitaria; R2014/039. Instituto de Salud Carlos III; PI13/0028

    Liquid computing and analysis of sound signals

    Liquid computing theory is a proposal for modelling the behaviour of neural microcircuits. It focuses on creating a group of neurons, known as the liquid layer, responsible for preprocessing the signal being analysed. Specific information is extracted by the readout layers: task-oriented groups of neurons taught to extract particular information from the state of the liquid layer. The LSMs have been used to analyse sound signals. The liquid layer was implemented in the PCSIM simulator, and the readout layer was prepared in the JNNS simulator. It could successfully recognise certain sounds despite noise. These results encourage further research into the computational potential of liquid state machines, including working in parallel with many readout layers.
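The liquid/readout split described above can be sketched in a few lines. This toy stands in for the PCSIM/JNNS setup and is not the paper's code: a fixed random nonlinear projection plays the role of the liquid layer, and a separately trained linear map plays the role of one task-oriented readout.

```python
import random

class ToyLiquid:
    """Fixed random projection standing in for the recurrent liquid layer;
    its weights are never trained -- only the readouts are."""
    def __init__(self, n_in, n_neurons, seed=0):
        rng = random.Random(seed)
        self.w = [[rng.uniform(-1, 1) for _ in range(n_in)]
                  for _ in range(n_neurons)]

    def state(self, x):
        # rectified linear response of each liquid neuron to the input
        return [max(0.0, sum(wi * xi for wi, xi in zip(row, x)))
                for row in self.w]

def readout(state, weights, bias=0.0):
    # one task-oriented linear readout, trained separately from the liquid
    return sum(w * s for w, s in zip(weights, state)) + bias

liq = ToyLiquid(n_in=2, n_neurons=4)
s1 = liq.state([1.0, 0.0])
s2 = liq.state([0.0, 1.0])
# distinct inputs should land on distinct liquid states (separation)
```

Because the liquid is fixed, several readouts can consume the same liquid state in parallel, which is the multi-readout direction the abstract mentions.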

    A Survey of Spiking Neural Network Accelerator on FPGA

    Due to its ability to implement customized topologies, the FPGA is increasingly used to deploy SNNs in both embedded and high-performance applications. In this paper, we survey state-of-the-art SNN implementations and their applications on FPGA. We collect the recent widely used spiking neuron models, network structures, and signal encoding formats, followed by an enumeration of related hardware design schemes for FPGA-based SNN implementations. Compared with previous surveys, this manuscript enumerates the application instances that applied the above-mentioned technical schemes in recent research. On that basis, we discuss the actual acceleration potential of implementing SNNs on FPGA. Finally, upcoming trends are discussed and a guideline is given for further advancement in related subjects.
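Among the spiking neuron models such surveys collect, the discrete-time leaky integrate-and-fire (LIF) update is the one most commonly mapped onto FPGA logic, since it reduces to a multiply, an add, and a compare per step. A reference-style sketch in Python (parameter values are illustrative, not from the survey):

```python
def lif_step(v, i_in, v_rest=0.0, v_thresh=1.0, leak=0.9):
    """One discrete-time leaky integrate-and-fire update.

    Decay toward rest, integrate input, fire-and-reset on threshold.
    Returns (new_membrane_potential, spiked).
    """
    v = v_rest + leak * (v - v_rest) + i_in
    if v >= v_thresh:
        return v_rest, True   # emit a spike and reset
    return v, False

v, spikes = 0.0, []
for _ in range(5):
    v, s = lif_step(v, i_in=0.4)
    spikes.append(s)
print(spikes)  # [False, False, True, False, False]
```

On hardware, the same update is typically realized in fixed-point arithmetic with the leak implemented as a shift, which is what makes the model cheap to replicate across many parallel units.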

    Neural simulation pipeline: Enabling container-based simulations on-premise and in public clouds

    In this study, we explore the simulation setup in computational neuroscience. We use GENESIS, a general-purpose simulation engine for sub-cellular components and biochemical reactions, realistic neuron models, large neural networks, and system-level models. GENESIS supports developing and running computer simulations but leaves a gap when it comes to setting up today's larger and more complex models. The field of realistic brain network models has outgrown the simplicity of the earliest models. The challenges include managing the complexity of software dependencies and various models, setting up model parameter values, storing the input parameters alongside the results, and providing execution statistics. Moreover, in the high performance computing (HPC) context, public cloud resources are becoming an alternative to expensive on-premises clusters. We present the Neural Simulation Pipeline (NSP), which facilitates large-scale computer simulations and their deployment to multiple computing infrastructures using an infrastructure as code (IaC) containerization approach. The authors demonstrate the effectiveness of NSP in a pattern recognition task programmed with GENESIS, through a custom-built visual system, called RetNet(8 × 5,1), that uses biologically plausible Hodgkin–Huxley spiking neurons. We evaluate the pipeline by performing 54 simulations executed on-premise, at the Hasso Plattner Institute's (HPI) Future Service-Oriented Computing (SOC) Lab, and through Amazon Web Services (AWS), the biggest public cloud service provider in the world. We report on non-containerized and containerized execution with Docker, as well as present the cost per simulation on AWS. The results show that our neural simulation pipeline can reduce entry barriers to neural simulations, making them more practical and cost-effective.

    Are probabilistic spiking neural networks suitable for reservoir computing?

    This study employs networks of stochastic spiking neurons as reservoirs for liquid state machines (LSM). We experimentally investigate the separation property of these reservoirs and show their ability to generalize classes of input signals. Like traditional LSMs, probabilistic LSMs (pLSM) have the separation property, enabling them to distinguish between different classes of input stimuli. Furthermore, our results indicate some potential advantages of non-deterministic LSMs, which improve upon the separation ability of the liquid. Three non-deterministic neural models are considered, and for each of them several parameter configurations are explored. We demonstrate some of the characteristics of pLSMs and compare them to their deterministic counterparts. pLSMs offer more flexibility due to their probabilistic parameters, resulting in better performance for some values of these parameters.
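One simple way to make a reservoir neuron non-deterministic, in the spirit of the probabilistic models above, is to replace the hard firing threshold with a sigmoidal firing probability. A hedged sketch (the function name and the `beta` steepness parameter are illustrative, not the paper's three models):

```python
import random
from math import exp

def stochastic_spike(v, v_thresh=1.0, beta=5.0, rng=random):
    """Probabilistic firing: instead of spiking iff v >= v_thresh,
    spike with probability sigmoid(beta * (v - v_thresh))."""
    p = 1.0 / (1.0 + exp(-beta * (v - v_thresh)))
    return rng.random() < p

rng = random.Random(0)
# far below threshold the neuron almost never fires,
# far above it almost always does
print(stochastic_spike(-10.0, rng=rng), stochastic_spike(10.0, rng=rng))
```

The probabilistic parameter (`beta` here) is the kind of extra knob the abstract refers to: tuning it changes how noisy the liquid's trajectories are, which can help or hurt separation depending on its value.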