30 research outputs found

    Autapses enable temporal pattern recognition in spiking neural networks

    Most sensory stimuli are temporal in structure. How action potentials encode the information incoming from sensory stimuli remains one of the central research questions in neuroscience. Although there is evidence that the precise timing of spikes represents information in spiking neuronal networks, information processing in spiking networks is still not fully understood. One feasible way to understand the working mechanism of a spiking network is to associate the structural connectivity of the network with the corresponding functional behaviour. This work demonstrates the structure-function mapping of spiking networks evolved (or handcrafted) for a temporal pattern recognition task. The task is to recognise a specific order of the input signals, so that the output neurone of the network spikes only for the correct order and remains silent for all others. The minimal networks obtained for this task revealed the twofold importance of autapses in recognition: first, autapses simplify the switching among different network states; second, autapses enable a network to maintain a network state, a form of memory. To show that the recognition task is accomplished by transitions between network states, we map the network states of a functional spiking neural network (SNN) onto the states of a finite-state transducer (FST), a formal model of computation that generates output symbols (here, spikes or no spikes at specific times) in response to input (here, a series of input signals). Finally, based on our understanding, we define rules for constructing the topology of a network handcrafted for recognising a subsequence of signals (a pattern) in a particular order. The analysis of minimal networks recognising patterns of different lengths (two to six) revealed a positive correlation between the pattern length and the number of autaptic connections in the network. Furthermore, in agreement with the behaviour of neurones in the network, we were able to associate the specific functional roles of locking, switching, and accepting with individual neurones.
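
    The abstract's mapping from network states to a finite-state transducer can be made concrete with a small sketch. The following Python snippet is an illustration, not the authors' implementation: it realises an FST that recognises the pattern "ABC" in a stream of signals, emitting 1 (spike) only on the transition that completes the pattern and 0 (silence) otherwise. The pattern, the example stream, and the fallback rule are illustrative assumptions.

        # Minimal FST sketch: states 0..len(PATTERN) count how much of the
        # pattern has been seen; output 1 (spike) is emitted only on the
        # transition that completes the pattern. Hypothetical illustration.
        PATTERN = "ABC"

        def step(state: int, symbol: str) -> tuple[int, int]:
            """Advance by one input symbol; return (next_state, output)."""
            if symbol == PATTERN[state]:
                next_state = state + 1
                if next_state == len(PATTERN):   # pattern completed: spike and reset
                    return 0, 1
                return next_state, 0
            # Simplified fallback (assumes the pattern has distinct symbols);
            # a KMP-style failure function would be needed in general.
            return (1, 0) if symbol == PATTERN[0] else (0, 0)

        def run(stream: str) -> list[int]:
            state, outputs = 0, []
            for s in stream:
                state, out = step(state, s)
                outputs.append(out)
            return outputs

        print(run("ABBBCCBABABCBBCAC"))  # a single 1, right after the embedded "ABC"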

    The Evolution, Analysis, and Design of Minimal Spiking Neural Networks for Temporal Pattern Recognition

    All sensory stimuli are temporal in structure. How a pattern of action potentials encodes the information received from sensory stimuli is an important research question in neuroscience. Although it is clear that information is carried by the number or the timing of spikes, information processing in the nervous system is poorly understood. The desire to understand information processing in the animal brain led to the development of spiking neural networks (SNNs), and understanding information processing in spiking neural networks may give us an insight into information processing in the animal brain. One way to understand the mechanisms which enable SNNs to perform a computational task is to associate the structural connectivity of the network with the corresponding functional behaviour. This work demonstrates the structure-function mapping of spiking networks evolved (or handcrafted) for recognising temporal patterns. The SNNs are composed of simple yet biologically meaningful adaptive exponential integrate-and-fire (AdEx) neurons. The computational task can be described as identifying a subsequence of three signals (say ABC) in a random input stream of signals ("ABBBCCBABABCBBCAC"). The topology and connection weights of the networks are optimised using a genetic algorithm such that the network output spikes only for the correct input pattern and remains silent for all others. The fitness function rewards the network output for spiking after receiving the correct pattern and penalises spikes elsewhere. To analyse the effect of noise, two types of noise are introduced during evolution: (i) random fluctuations of the membrane potential of neurons in the network at every network step, and (ii) random variations of the duration of the silent interval between input signals. It has been observed that evolution in the presence of noise produced networks that were robust to perturbation of neuronal parameters. Moreover, the networks also developed a form of memory, enabling them to maintain network states in the absence of input activity. It has been demonstrated that the network states of an evolved network have a one-to-one correspondence with the states of a finite-state transducer (FST), a model of computation for time-structured data. The analysis of networks indicated that the task of recognition is accomplished by transitions between network states. Evolution may overproduce synaptic connections; pruning these superfluous connections revealed pronounced structural similarities among individuals obtained from different independent runs. Moreover, the analysis of the pruned networks highlighted that memory is a property of self-excitation in the network: neurons with self-excitatory loops (also called autapses) can sustain spiking activity indefinitely in the absence of input activity. To recognise a pattern of length n, a network requires n+1 network states, where n states are maintained actively with autapses and the penultimate state is maintained passively by the absence of activity in the network. At the same time, the roles of the other connections in the network are identified.
Of particular interest, three interneurons in the network are found to have specialised roles: (i) the lock neuron is always active, preventing the output from spiking unless it is released by the penultimate signal of the correct pattern, allowing the output neuron to spike for the correct final signal; (ii) the switch neuron is responsible for switching the network between the inter-signal states and the start state; and (iii) the accept neuron produces spikes in the output neuron when the network receives the last correct input, and also sends a signal to the switch neuron, transforming the network back into the start state. Understanding how information is processed in the evolved networks led to handcrafting network topologies for recognising longer patterns. The proposed rules can extend network topologies to recognise temporal patterns of up to length six. To validate the handcrafted topology, a genetic algorithm is used to optimise its connection weights. It has been observed that the maximum number of active neurons representing a state in the network increases with the pattern length; therefore, the suggested rules can handcraft network topologies only up to length six. Handcrafting network topologies that represent a network state with a fixed number of active neurons requires further investigation.
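
    The fitness function described above (reward output spikes that follow the correct pattern, penalise spikes elsewhere) can be sketched as follows. This is a hedged Python illustration; the reward and penalty weights, the notion of a "target window" after each completed pattern, and the function names are assumptions rather than the thesis's actual parameters.

        # Illustrative fitness sketch: reward output spikes that fall inside the
        # window following a completed target pattern, penalise spikes elsewhere.
        def fitness(output_spike_times, target_windows, reward=1.0, penalty=0.5):
            """output_spike_times: iterable of spike times;
            target_windows: (start, end) intervals following correct patterns."""
            score = 0.0
            for t in output_spike_times:
                if any(start <= t <= end for start, end in target_windows):
                    score += reward      # spike where the pattern has just ended
                else:
                    score -= penalty     # spurious spike
            return score

        # Example: one correct pattern ends at t = 120, reward window of 10 steps
        print(fitness(output_spike_times=[125.0, 300.0], target_windows=[(120, 130)]))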

    Discrete and Continuous Optimization Based on Hierarchical Artificial Bee Colony Optimizer

    This paper presents a novel optimization algorithm, hierarchical artificial bee colony optimization (HABC), to tackle complex high-dimensional problems. In the proposed multilevel model, the higher-level species are aggregated from the subpopulations at the lower level. At the bottom level, each subpopulation, employing the canonical ABC method, searches for the optimum over its part of the dimensions in parallel; these partial solutions are combined into a complete solution for the upper level. At the same time, a comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability across species. Experiments are conducted on a set of 20 continuous and discrete benchmark problems. The experimental results demonstrate the remarkable performance of the HABC algorithm when compared with six other evolutionary algorithms.
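
    The multilevel decomposition can be illustrated with a short sketch: each lower-level subpopulation optimises one block of the decision vector, and the blocks are plugged into a context vector that forms the complete solution evaluated at the upper level. The Python sketch below is a hedged illustration of that decomposition only; the block optimiser is a plain random search used to keep the example self-contained (HABC uses the canonical ABC), and all names and parameters are assumptions.

        import numpy as np

        def optimise_block(evaluate, block_dim, iters=100):
            # Stand-in block optimiser (random search), not the ABC used by HABC.
            best_x, best_f = None, np.inf
            for _ in range(iters):
                x = np.random.uniform(-5, 5, block_dim)
                f = evaluate(x)
                if f < best_f:
                    best_x, best_f = x, f
            return best_x

        def hierarchical_optimise(objective, dim, n_blocks=4):
            block = dim // n_blocks
            context = np.zeros(dim)                  # current complete solution
            for b in range(n_blocks):
                lo, hi = b * block, (b + 1) * block
                def eval_block(x, lo=lo, hi=hi):
                    trial = context.copy()
                    trial[lo:hi] = x                 # plug the block into the context vector
                    return objective(trial)
                context[lo:hi] = optimise_block(eval_block, hi - lo)
            return context, objective(context)

        sphere = lambda x: float(np.sum(x ** 2))
        print(hierarchical_optimise(sphere, dim=8))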

    A new neural network training algorithm based on artificial bee colony algorithm for nonlinear system identification

    Artificial neural networks (ANNs), one of the most important artificial intelligence techniques, are used extensively in modeling many types of problems. A successful training process is required to create effective models with ANNs, and an effective training algorithm is essential for a successful training process. In this study, a new neural network training algorithm called the hybrid artificial bee colony algorithm based on an effective scout bee stage (HABCES) is proposed. The HABCES algorithm includes four fundamental changes. Arithmetic crossover is used in the solution generation mechanisms of the employed bee and onlooker bee stages, and the knowledge of the global best solution is utilized by this arithmetic crossover. The solution generation mechanism also has an adaptive step size. The limit is an important control parameter: in the standard ABC algorithm it is constant throughout the optimization, whereas in the HABCES algorithm it is determined dynamically depending on the number of generations. Unlike the standard ABC algorithm, the HABCES algorithm uses a solution generation mechanism based on the global best solution in the scout bee stage. Through these features, the HABCES algorithm has strong local and global convergence ability. Firstly, the performance of the HABCES algorithm was analyzed on the solution of global optimization problems. Then, applications to the training of ANNs were carried out: an ANN was trained using the HABCES algorithm for the identification of nonlinear static and dynamic systems. The performance of the HABCES algorithm was compared with the standard ABC, aABC and ABCES algorithms. The results showed that the performance of the HABCES algorithm was better in terms of solution quality and convergence speed. A performance increase of up to 69.57% was achieved by using the HABCES algorithm in the identification of static systems; the corresponding figure for the identification of dynamic systems is 46.82%.
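
    A hedged sketch of the kind of global-best-guided solution generation described for the employed and onlooker bee stages is given below: an arithmetic crossover between the current food source and the global best, perturbed toward a random neighbour with a step size that shrinks over the generations. The exact update rule, the step-size schedule, and all parameter values are assumptions for illustration, not the HABCES formulas.

        import numpy as np

        def generate_candidate(x_i, x_best, x_k, generation, max_gen, rng):
            alpha = rng.uniform(0.0, 1.0)              # arithmetic crossover weight
            step = 1.0 - generation / max_gen          # assumed adaptive step size (shrinks over time)
            phi = rng.uniform(-step, step, size=x_i.shape)
            crossover = alpha * x_i + (1.0 - alpha) * x_best
            return crossover + phi * (x_i - x_k)       # perturbation relative to a random neighbour

        rng = np.random.default_rng(0)
        x_i, x_best, x_k = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
        print(generate_candidate(x_i, x_best, x_k, generation=10, max_gen=100, rng=rng))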

    Classification and Performance Study of Task Scheduling Algorithms in Cloud Computing Environment

    Cloud computing has become very common in recent years and is growing rapidly due to its attractive benefits and features such as resource pooling, accessibility, availability, scalability, reliability, cost saving, security, flexibility, on-demand services, pay-per-use services, use from anywhere, quality of service, and resilience. With this rapid growth of cloud computing, many users may require services or need to execute their tasks simultaneously on the resources provided by service providers. To deliver these services with the best performance, minimum cost, response time, and makespan, and with effective use of resources, an intelligent and efficient task scheduling technique is required; task scheduling is considered one of the main and essential issues in the cloud computing environment. It is necessary for allocating tasks to the proper cloud resources and optimizing overall system performance. To this end, researchers have put huge effort into developing several classes of scheduling algorithms to suit the various computing environments and to satisfy the needs of various types of individuals and organizations. This research article provides a classification of proposed scheduling strategies and developed algorithms in the cloud computing environment, along with an evaluation of their performance. A comparison of the performance of these algorithms with existing ones is also given, and the future research work in the reviewed articles (where available) is pointed out. This research work includes a review of 88 task scheduling algorithms in the cloud computing environment, distributed over the seven scheduling classes suggested in this study. Each article deals with a novel scheduling technique and the performance improvement it introduces compared with previously existing task scheduling algorithms.
    Keywords: Cloud computing, Task scheduling, Load balancing, Makespan, Energy-aware, Turnaround time, Response time, Cost of task, QoS, Multi-objective. DOI: 10.7176/IKM/12-5-03. Publication date: September 30th, 2022.

    Gene selection for cancer classification with the help of bees


    Solving multiple sequence alignment problems by using a swarm intelligent optimization based approach

    In this article, the alignment of multiple sequences is examined through an improved, swarm intelligence based particle swarm optimization (PSO). PSO is a stochastic heuristic technique that has recently been applied to solving discrete optimization problems. The PSO approach is a nature-inspired technique based on intelligence and swarm movement, and each solution is encoded in the same way as a "chromosome" in a genetic algorithm (GA). Based on the optimization of the objective function, the fitness function is designed to maximize the suitable components of the sequence and to reduce the unsuitable components. The performance of the proposed system is assessed on a public benchmark data set, BAliBASE. The proposed system is compared with several existing approaches such as deoxyribonucleic acid (DNA) or ribonucleic acid (RNA) alignment (DIALIGN), PILEUP8, hidden Markov model training (HMMT), the rubber band technique-genetic algorithm (RBT-GA) and ML-PIMA. In many cases, the experimental results of the proposed system compare favourably with those of the other existing approaches.
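
    For reference, a generic PSO velocity and position update (not the paper's MSA-specific encoding or parameters) looks like the following hedged Python sketch; the inertia and acceleration coefficients are conventional illustrative values.

        import numpy as np

        def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
            # Pull the particle toward its personal best and the swarm's global best.
            if rng is None:
                rng = np.random.default_rng()
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            return pos + vel, vel

        # one update for a toy 2-dimensional particle
        rng = np.random.default_rng(0)
        pos, vel = rng.normal(size=2), np.zeros(2)
        print(pso_step(pos, vel, pbest=pos, gbest=np.zeros(2), rng=rng))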

    A modified scout bee for artificial bee colony algorithm and its performance on optimization problems

    The artificial bee colony (ABC) algorithm is one of the swarm intelligence algorithms used to solve optimization problems and is inspired by the foraging behaviour of honey bees. In this paper, an artificial bee colony with a rate-of-change technique, which models the behaviour of the scout bee to improve the exploration ability of the standard ABC, is introduced. The technique is called artificial bee colony rate of change (ABC-ROC) because the scout bee process depends on the rate of change of the performance graph, replacing the limit parameter. The performance of ABC-ROC is analysed on a set of benchmark problems, along with the effect of the colony size parameter. Furthermore, the performance of ABC-ROC is compared with state-of-the-art algorithms.
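
    A hedged sketch of a rate-of-change scout trigger of the kind described: instead of counting trials against a fixed limit parameter, the scout phase fires when the improvement of the best fitness over a window of generations falls below a threshold. The window size, threshold, and function names are assumptions for illustration, not the ABC-ROC definition.

        def scout_triggered(best_fitness_history, window=10, threshold=1e-6):
            """best_fitness_history: best-so-far fitness per generation (minimisation)."""
            if len(best_fitness_history) < window + 1:
                return False
            # average improvement per generation over the last `window` generations
            rate = (best_fitness_history[-window - 1] - best_fitness_history[-1]) / window
            return rate < threshold      # stagnation: re-initialise the food source

        print(scout_triggered([5.0, 4.0, 3.9] + [3.9] * 12))  # True: the search has stagnated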

    Kernel methods for Monte Carlo

    This thesis investigates the use of reproducing kernel Hilbert spaces (RKHS) in the context of Monte Carlo algorithms. The work proceeds in three main themes.
    Adaptive Monte Carlo proposals: We introduce and study two adaptive Markov chain Monte Carlo (MCMC) algorithms to sample from target distributions with non-linear support and intractable gradients. Our algorithms, generalisations of random walk Metropolis and Hamiltonian Monte Carlo, adaptively learn local covariance and gradient structure respectively, by modelling past samples in an RKHS. We further show how to embed these methods into the sequential Monte Carlo framework.
    Efficient and principled score estimation: We propose methods for fitting an RKHS exponential family model that work by fitting the gradient of the log density, the score, thus avoiding the need to compute a normalisation constant. While the problem is of general interest, here we focus on its embedding into the adaptive MCMC context from above. We improve the computational efficiency of an earlier solution with two novel fast approximation schemes without guarantees, and a low-rank, Nyström-like solution. The latter retains the consistency and convergence rates of the exact solution, at lower computational cost.
    Goodness-of-fit testing: We propose a non-parametric statistical test for goodness-of-fit. The measure is a divergence constructed via Stein's method using functions from an RKHS. We derive a statistical test, both for i.i.d. and non-i.i.d. samples, and apply the test to quantifying convergence of approximate MCMC methods, statistical model criticism, and evaluating accuracy in non-parametric score estimation.
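
    The goodness-of-fit measure described in the last theme, a divergence built from RKHS functions via Stein's method, can be sketched numerically. The snippet below is a hedged illustration of a kernel Stein discrepancy V-statistic with a Gaussian kernel and a standard normal target (score(x) = -x); the kernel choice, bandwidth, and target are assumptions chosen only to keep the example self-contained, not the thesis's experiments.

        import numpy as np

        def ksd_vstat(samples, score, sigma=1.0):
            """samples: (n, d) array; score: function returning grad log p(x), shape (n, d)."""
            n, d = samples.shape
            s = score(samples)                                  # score at each sample
            diff = samples[:, None, :] - samples[None, :, :]    # pairwise x - y, shape (n, n, d)
            sqdist = np.sum(diff ** 2, axis=-1)
            k = np.exp(-sqdist / (2 * sigma ** 2))              # Gaussian RBF kernel matrix
            grad_x_k = -diff / sigma ** 2 * k[..., None]        # gradient of k in its first argument
            grad_y_k = -grad_x_k                                # gradient of k in its second argument
            trace_term = (d / sigma ** 2 - sqdist / sigma ** 4) * k
            u = ((s @ s.T) * k
                 + np.einsum('id,ijd->ij', s, grad_y_k)
                 + np.einsum('jd,ijd->ij', s, grad_x_k)
                 + trace_term)                                  # Stein kernel u_p(x_i, x_j)
            return u.mean()

        rng = np.random.default_rng(0)
        x = rng.normal(size=(200, 1))
        print(ksd_vstat(x, score=lambda x: -x))   # small when the samples match the target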