
    PCSIM: A Parallel Simulation Environment for Neural Circuits Fully Integrated with Python

    The Parallel Circuit SIMulator (PCSIM) is a software package for the simulation of neural circuits. It is primarily designed for distributed simulation of large-scale networks of spiking point neurons. Although its computational core is written in C++, PCSIM's primary interface is implemented in the Python programming language, a powerful programming environment that lets the user integrate the neural circuit simulator with data analysis and visualization tools to manage the full neural modeling life cycle. The main focus of this paper is to describe PCSIM's full integration into Python and the benefits thereof. In particular, we investigate how the automatically generated bidirectional interface and PCSIM's object-oriented modular framework enable the user to adopt a hybrid modeling approach: using and extending PCSIM's functionality in either pure Python or C++, thus combining the advantages of both worlds. Furthermore, we describe several supplementary PCSIM packages written in pure Python and tailored towards setting up and analyzing neural simulations.
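
    The hybrid modeling idea is easiest to picture with a small sketch. The following pure-Python stand-in is an illustrative assumption, not PCSIM's actual API: a user-defined neuron class plays the role that a C++ simulation object would play behind the generated interface.

```python
# Hypothetical sketch of the hybrid modeling style described above.  The
# names (LifNeuron, Network, step, run) are illustrative assumptions and do
# not reproduce PCSIM's actual interface.

class LifNeuron:
    """Leaky integrate-and-fire point neuron written in pure Python; in
    PCSIM the same role could be filled by a C++ class exposed to Python."""

    def __init__(self, tau=20e-3, v_thresh=1.0, v_reset=0.0):
        self.tau, self.v_thresh, self.v_reset = tau, v_thresh, v_reset
        self.v = 0.0

    def step(self, i_in, dt):
        # Forward-Euler update of the membrane potential.
        self.v += dt * (-self.v / self.tau + i_in)
        if self.v >= self.v_thresh:
            self.v = self.v_reset
            return True   # spike emitted
        return False


class Network:
    """Container that advances all simulation objects in lockstep."""

    def __init__(self, neurons):
        self.neurons = neurons

    def run(self, i_in, dt, steps):
        spikes = []
        for t in range(steps):
            for idx, n in enumerate(self.neurons):
                if n.step(i_in, dt):
                    spikes.append((t * dt, idx))
        return spikes


if __name__ == "__main__":
    net = Network([LifNeuron() for _ in range(3)])
    print(net.run(i_in=60.0, dt=1e-3, steps=100))
```

    In the hybrid approach described above, analysis and visualization code can then consume the returned spike data directly in the same script.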

    A Flattening Approach for Attributed Type Graphs with Inheritance in Algebraic Graph Transformation

    The algebraic graph transformation approach was initiated in 1973 and supports the rule-based modification of graphs based on pushout constructions. The vertex and edge types used within the rules (or productions), as well as possible inheritance relationships defined between them, are specified in the type graph. However, the termination proof can only be accomplished for graph transformation systems without inheritance relationships. Thus, all graph transformation systems with inheritance relationships in the type graph must be flattened. To this end, the algebraic graph transformation approach provides a formal description of how to flatten the type graph as well as a definition of abstract and concrete productions. In this paper, we extend the definitions to also consider vertices with finer node types in negative application conditions, as well as positive application conditions. Furthermore, we prove the semantic equivalence of the original and the flattened graph transformation systems. The whole flattening algorithm is implemented in a prototype that supports an abstract or concrete flattening of a given graph transformation system; the prototype is finally evaluated in a case study.
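
    To make the flattening step concrete, here is a toy sketch. The data structures are illustrative assumptions, not the paper's formalism: an abstract production whose vertices are typed over a type graph with inheritance is expanded into one concrete production per combination of concrete subtypes.

```python
# Toy flattening of abstractly typed productions.  The inheritance relation
# and the production encoding are illustrative assumptions.

from itertools import product

# Inheritance in the type graph: abstract supertype -> concrete subtypes.
SUBTYPES = {
    "Vehicle": ["Car", "Truck"],
    "Person":  ["Driver"],
}

def concrete_types(t):
    """Concrete node types refining type t (t itself if already concrete)."""
    return SUBTYPES.get(t, [t])

def flatten(production):
    """Expand a production with abstractly typed vertices into the set of
    concrete productions obtained by substituting concrete subtypes."""
    names = list(production)
    choices = [concrete_types(production[n]) for n in names]
    return [dict(zip(names, combo)) for combo in product(*choices)]

# Abstract production with one Vehicle vertex and one Person vertex.
for p in flatten({"v1": "Vehicle", "v2": "Person"}):
    print(p)
# {'v1': 'Car', 'v2': 'Driver'}
# {'v1': 'Truck', 'v2': 'Driver'}
```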

    Spiking neurons and the induction of finite state machines

    We discuss in this short survey article some current mathematical models from neurophysiology for the computational units of biological neural systems: neurons and synapses. These models are contrasted with the computational units of common artificial neural network models, which reflect the state of knowledge in neurophysiology of 50 years ago. We discuss the problem of carrying out computations in circuits consisting of biologically realistic computational units, focusing on the biologically particularly relevant case of computations on time series. Finite state machines are frequently used in computer science as models for computations on time series; one may argue that they provide a reasonable common conceptual basis for analyzing computations in computers and in biological neural systems, although the emphasis in biological neural systems is shifted more towards asynchronous computation on analog time series. In the second half of this article, some new computer experiments and theoretical results are discussed that address the question of whether a biological neural system can, in principle, learn to behave like a given simple finite state machine.
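
    As a concrete reference point for the kind of target behavior studied in such learning experiments, the following sketch runs a binary time series through a two-state finite state machine; the parity machine chosen here is an illustrative assumption.

```python
# A Moore-style finite state machine processing a discretized time series.

def run_fsm(inputs, transitions, start):
    """Feed a symbol sequence through the FSM; return the state trajectory."""
    state, trace = start, [start]
    for symbol in inputs:
        state = transitions[(state, symbol)]
        trace.append(state)
    return trace

# Parity machine: the state tracks whether an even or odd number of
# spikes (1s) has been observed so far.
TRANSITIONS = {
    ("even", 0): "even", ("even", 1): "odd",
    ("odd", 0): "odd",   ("odd", 1): "even",
}

spike_train = [1, 0, 1, 1, 0]
print(run_fsm(spike_train, TRANSITIONS, "even"))
# ['even', 'odd', 'odd', 'even', 'odd', 'odd']
```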

    The origin and evolution of syntax errors in simple sequence flow models in BPMN

    How do syntax errors emerge? What is the earliest moment at which potential syntax errors can be detected? Which evolution do syntax errors go through during modeling? A provisional answer to these questions is formulated in this paper based on an investigation of a dataset containing the operational details of 126 modeling sessions. First, a list of the different potential syntax errors is composed. Second, a classification framework is built to categorize the errors according to their certainty and severity during modeling (i.e., in partial or complete models). Third, the origin and evolution of all syntax errors in the dataset are identified. These data are then used to collect a number of observations, which form a basis for future research.
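
    The classification step can be pictured with a small sketch. The enum values and the single example rule below are assumptions chosen for illustration; the paper's framework distinguishes errors by their certainty and severity in partial versus complete models.

```python
# Illustrative tagging of BPMN syntax issues by certainty and severity.

from dataclasses import dataclass
from enum import Enum

class Certainty(Enum):
    POTENTIAL = "may still be resolved while modeling continues"
    CERTAIN = "violates the syntax even in a partial model"

class Severity(Enum):
    WARNING = 1
    ERROR = 2

@dataclass
class SyntaxIssue:
    element: str
    description: str
    certainty: Certainty
    severity: Severity

def check_end_events(elements, model_complete):
    """Example rule: a process without an end event is only a potential
    error while the model is still being edited."""
    if "end_event" not in elements:
        return [SyntaxIssue(
            element="process",
            description="no end event",
            certainty=Certainty.CERTAIN if model_complete else Certainty.POTENTIAL,
            severity=Severity.ERROR,
        )]
    return []

print(check_end_events(["start_event", "task"], model_complete=False))
```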

Robust unsupervised domain adaptation for neural networks via moment alignment

    A novel approach to unsupervised domain adaptation for neural networks is proposed. It relies on metric-based regularization of the learning process. The metric-based regularization aims at domain-invariant latent feature representations by maximizing the similarity between domain-specific activation distributions. The proposed metric results from modifying an integral probability metric so that it becomes less translation-sensitive on a polynomial function space. The metric has an intuitive interpretation in the dual space as the sum of differences of higher-order central moments of the corresponding activation distributions. Under appropriate assumptions on the input distributions, error minimization is proven for the continuous case. As demonstrated by an analysis of standard benchmark experiments for sentiment analysis, object recognition, and digit recognition, the outlined approach is robust to parameter changes and achieves higher classification accuracies than comparable approaches. The source code is available at https://github.com/wzell/mann. (A preliminary version of this work appeared at ICLR.)
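
    The dual-space form of the metric, a sum of differences of higher-order central moments, is straightforward to compute. The following NumPy sketch follows that description; the moment order K and the value range [a, b] are assumptions, and the authors' own implementation lives at the repository linked above.

```python
# Sum of normalized differences of central moments of two activation samples.

import numpy as np

def cmd(x, y, k_max=5, a=0.0, b=1.0):
    """Central-moment-based distance between two sample matrices
    (rows = samples, columns = hidden activations with values in [a, b])."""
    mx, my = x.mean(axis=0), y.mean(axis=0)
    # First-order term: difference of means.
    dist = np.linalg.norm(mx - my) / (b - a)
    # Higher-order terms: differences of central moments 2..k_max.
    for k in range(2, k_max + 1):
        cx = ((x - mx) ** k).mean(axis=0)
        cy = ((y - my) ** k).mean(axis=0)
        dist += np.linalg.norm(cx - cy) / (b - a) ** k
    return dist

rng = np.random.default_rng(0)
src = rng.uniform(0.0, 1.0, size=(256, 16))   # source-domain activations
tgt = rng.uniform(0.2, 1.0, size=(256, 16))   # shifted target activations
print(cmd(src, tgt))
```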

    A Survey on Continuous Time Computations

    We provide an overview of theories of continuous time computation. These theories allow us to understand both the hardness of questions related to continuous time dynamical systems and the computational power of continuous time analog models. We survey the existing models, summarize results, and point to relevant references in the literature.

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, the learning environment, etc. Researchers have adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that emerged out of FNN optimization practice, such as evolving neural networks (NN), cooperative coevolution NN, complex-valued NN, deep learning, extreme learning machines, quantum NN, etc. Additionally, it provides interesting research challenges for future work to cope with the present information processing era.
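
    As a minimal instance of the weight-optimization viewpoint, the sketch below trains a tiny FNN with a (1+1) evolution strategy instead of gradient descent. The network size, mutation scale, and XOR task are illustrative assumptions.

```python
# (1+1)-ES weight optimization of a one-hidden-layer feedforward network.

import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])            # XOR targets

def forward(w, x):
    """tanh hidden layer, linear output; w packs both weight matrices."""
    w1, w2 = w[:12].reshape(3, 4), w[12:].reshape(5, 1)
    h = np.tanh(np.hstack([x, np.ones((len(x), 1))]) @ w1)   # bias appended
    return (np.hstack([h, np.ones((len(h), 1))]) @ w2).ravel()

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

w = rng.normal(0.0, 1.0, size=17)             # 12 + 5 weights
for _ in range(5000):
    mutant = w + rng.normal(0.0, 0.1, size=w.shape)
    if loss(mutant) <= loss(w):               # greedy (1+1)-ES acceptance
        w = mutant

print(loss(w), forward(w, X).round(2))
```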

    Computational modeling with spiking neural networks

    This chapter reviews recent developments in the area of spiking neural networks (SNN) and summarizes the main contributions to this research field. We give background information about the functioning of biological neurons and discuss the most important mathematical neural models along with neural encoding techniques, learning algorithms, and applications of spiking neurons. As a specific application, the functioning of the evolving spiking neural network (eSNN) classification method is presented in detail, and the principles of numerous eSNN-based applications are highlighted and discussed.
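
    One ingredient worth a concrete sketch is the rank-order coding commonly associated with eSNN-style models: earlier-firing inputs contribute more to the postsynaptic potential, the j-th spike in the firing order being weighted by mod**j. The modulation factor, threshold fraction, and example spike order below are illustrative assumptions that simplify the full eSNN training rule.

```python
# Simplified rank-order coding: PSP contribution decays with firing rank.

def rank_order_psp(firing_order, mod=0.9):
    """Postsynaptic potential from input indices ordered by spike time
    (earliest first): PSP = sum over ranks j of mod**j."""
    return sum(mod ** rank for rank, _ in enumerate(firing_order))

# Inputs 3, 0 and 2 fire in that order; input 1 stays silent.
psp = rank_order_psp([3, 0, 2])
threshold = 0.7 * rank_order_psp(range(4))    # fraction of the maximal PSP
print(psp, threshold, psp >= threshold)       # neuron fires: True
```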

    Financial time series prediction using spiking neural networks

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, is presented for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data of this kind. The performance of the spiking neural network was benchmarked against three systems: two "traditional" rate-encoded neural networks (a Multi-Layer Perceptron and a Dynamic Ridge Polynomial neural network) and a standard Linear Predictor Coefficients model. For this comparison, three non-stationary and noisy time series were used: IBM stock data, US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of annualised return and prediction error for 5-step-ahead predictions. These results were also supported by other relevant metrics such as maximum drawdown and signal-to-noise ratio. This work demonstrates the applicability of the Polychronous Spiking Network to financial data forecasting, which in turn indicates the potential of using such networks over traditional systems in difficult-to-manage non-stationary environments.
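
    Two of the evaluation metrics named above are easy to reproduce from a series of per-period returns; the sketch below shows one common way to compute them (the trading-day count and the example data are illustrative assumptions, not the paper's exact procedure).

```python
# Annualised return and maximum drawdown from simple per-period returns.

import numpy as np

def annualised_return(returns, periods_per_year=252):
    """Geometric annualisation of a series of simple per-period returns."""
    total = np.prod(1.0 + returns)
    return total ** (periods_per_year / len(returns)) - 1.0

def max_drawdown(returns):
    """Largest peak-to-trough decline of the cumulative equity curve."""
    equity = np.cumprod(1.0 + returns)
    peaks = np.maximum.accumulate(equity)
    return np.max((peaks - equity) / peaks)

r = np.array([0.01, -0.02, 0.015, 0.005, -0.01])
print(annualised_return(r), max_drawdown(r))
```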