
    CuPit - a parallel language for neural algorithms: language reference and tutorial

    CuPit is a parallel programming language with two main design goals: (1) to allow the simple, problem-adequate formulation of learning algorithms for neural networks, with a focus on algorithms that change the topology of the underlying neural network during the learning process, and (2) to allow the generation of efficient code for massively parallel machines from a completely machine-independent program description, in particular to maximize both data locality and load balancing even for irregular neural networks. The key to achieving these goals lies in the programming model: CuPit programs are object-centered, with the connections and nodes of a graph (the neural network) being the objects. Algorithms are based on parallel local computations in the nodes and connections and on communication along the connections (plus broadcast and reduction operations). This report describes the design considerations and the resulting language definition and discusses a tutorial example program in detail.
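    The object-centered model described above can be sketched outside CuPit itself; the following Python fragment (all class and method names are hypothetical, and the syntax is not CuPit's) only illustrates the idea of nodes and connections as objects, communication along connections, and a local reduction at each node.

        # Illustrative sketch of the object-centered model described above;
        # a Python stand-in, not CuPit syntax. All names are hypothetical.
        class Connection:
            def __init__(self, weight):
                self.weight = weight
                self.signal = 0.0              # value communicated along the connection

        class Node:
            def __init__(self):
                self.incoming = []             # connections ending at this node
                self.outgoing = []             # connections leaving this node
                self.activation = 0.0

            def local_compute(self):
                # parallel local computation: a reduction over incoming connection signals
                self.activation = sum(c.signal * c.weight for c in self.incoming)

        # a tiny, irregular "network": two input nodes feeding one output node
        inputs, output = [Node(), Node()], Node()
        for src, w in zip(inputs, (0.5, -0.3)):
            conn = Connection(w)
            src.outgoing.append(conn)
            output.incoming.append(conn)

        # communication along connections, then the local reduction at the node
        for src in inputs:
            src.activation = 1.0
            for conn in src.outgoing:
                conn.signal = src.activation
        output.local_compute()
        print(output.activation)               # 0.5*1.0 + (-0.3)*1.0 = 0.2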

    Small-variance asymptotics for Bayesian neural networks

    Bayesian neural networks (BNNs) are a rich and flexible class of models that have several advantages over standard feedforward networks, but they are typically expensive to train on large-scale data. In this thesis, we explore the use of small-variance asymptotics, an approach for deriving fast algorithms from probabilistic models, on various Bayesian neural network models. We first demonstrate how small-variance asymptotics reveals precise connections between standard neural networks and BNNs; for example, particular sampling algorithms for BNNs reduce to standard backpropagation in the small-variance limit. We then explore a more complex BNN in which the number of hidden units is additionally treated as a random variable in the model. While standard sampling schemes would be too slow to be practical, our asymptotic approach yields a simple method for extending standard backpropagation to the case where the number of hidden units is not fixed. We show on several data sets that the resulting algorithm has benefits over backpropagation on networks with a fixed architecture.
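    As a rough illustration of the small-variance limit mentioned above (a sketch under a simple Gaussian-likelihood assumption, not the thesis' derivation): scaling the negative log-likelihood by the noise variance and letting that variance go to zero leaves exactly the squared-error objective that standard backpropagation minimizes.

        # Sketch: for a Gaussian likelihood with noise variance sigma2, the negative
        # log-likelihood is 0.5*(y - f(x; w))**2 / sigma2 + 0.5*log(2*pi*sigma2);
        # sigma2 * NLL therefore tends to the plain squared error as sigma2 -> 0.
        import numpy as np

        rng = np.random.default_rng(0)
        x, y = rng.normal(size=3), 1.5
        w = rng.normal(size=3)

        def f(w, x):
            return np.tanh(w @ x)                      # a stand-in "network"

        def nll(w, x, y, sigma2):
            err = f(w, x) - y
            return 0.5 * err**2 / sigma2 + 0.5 * np.log(2 * np.pi * sigma2)

        for sigma2 in (1.0, 1e-3, 1e-6):
            # the scaled objective converges to 0.5 * err**2, the backprop loss
            print(sigma2, sigma2 * nll(w, x, y, sigma2))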

    Pseudorehearsal in actor-critic agents with neural network function approximation

    Catastrophic forgetting has a significant negative impact in reinforcement learning. The purpose of this study is to investigate how pseudorehearsal can change the performance of an actor-critic agent with neural-network function approximation. We tested the agent in a pole-balancing task and compared different pseudorehearsal approaches. We found that pseudorehearsal can assist learning and decrease forgetting.
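    A minimal sketch of the pseudorehearsal idea itself (illustrative only; the study's actor-critic agent and pole-balancing setup are not reproduced here): random pseudo-inputs are labelled with the network's current outputs and rehearsed alongside new data so that the old mapping is preserved.

        # Pseudorehearsal sketch on a toy linear "network" (all details hypothetical).
        import numpy as np

        rng = np.random.default_rng(0)
        W = rng.normal(size=(1, 4)) * 0.1

        def predict(W, X):
            return X @ W.T

        def sgd_step(W, X, y, lr=0.1):
            err = predict(W, X) - y                 # squared-error gradient step
            return W - lr * err.T @ X / len(X)

        # label random pseudo-inputs with the network's *current* responses
        pseudo_X = rng.normal(size=(32, 4))
        pseudo_y = predict(W, pseudo_X)

        # when new experience arrives, rehearse the pseudo-items in the same batch
        new_X, new_y = rng.normal(size=(8, 4)), rng.normal(size=(8, 1))
        W = sgd_step(W, np.vstack([new_X, pseudo_X]), np.vstack([new_y, pseudo_y]))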

    Using Optimized Features for Modified Optical Backpropagation Neural Network Model in Online Handwritten Character Recognition System

    One major problem encountered by researchers in developing character recognition systems is the selection of efficient (optimal) features. In this paper, Particle Swarm Optimization (PSO) is proposed for feature selection. The backpropagation algorithm has been reported to be an effective and widely used supervised training algorithm for multi-layered feedforward neural networks, but it has the shortcomings of long training time and entrapment in local minima. Several research works have been proposed to improve this algorithm, but some of them were based on the ‘learning parameter’, which in some cases slowed down the training process. Hence, this paper focuses on alleviating the problems of the standard backpropagation algorithm based on ‘error adjustment’. To this effect, PSO is integrated with a ‘Modified Optical Backpropagation (MOBP)’ neural network to enhance the performance of the classifier in terms of recognition accuracy and recognition time. Experiments were conducted on the MOBP neural network and PSO-based MOBP classifiers using 6,200 handwritten character samples (uppercase (A-Z) and lowercase (a-z) English letters and 10 digits (0-9)) collected from 100 subjects using a G-Pen 450 digitizer, and the system was tested with 100 character samples written by people who did not participate in the initial data acquisition. Experimental results are promising for the PSO-based MOBP classifier in terms of the performance measures.
    Keywords: Artificial Neural Network, Feature Extraction, Feature Selection, Particle Swarm Optimization, Modified Optical Backpropagation
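    A hedged sketch of PSO-based feature selection in the spirit of the abstract: the fitness function below is a synthetic stand-in (in the paper it would be the recognition performance of the MOBP classifier on the selected features), and all parameter values here are illustrative.

        # Binary PSO over feature masks (illustrative sketch, not the paper's exact setup).
        import numpy as np

        rng = np.random.default_rng(0)
        n_features, n_particles, n_iters = 20, 10, 30
        ideal = rng.integers(0, 2, n_features)       # hidden "useful feature" mask (toy)

        def fitness(mask):
            # stand-in for classifier accuracy on the features selected by `mask`
            return np.mean(mask == ideal)

        pos = rng.integers(0, 2, (n_particles, n_features)).astype(float)
        vel = np.zeros_like(pos)
        pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
        gbest = pbest[np.argmax(pbest_fit)].copy()

        for _ in range(n_iters):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            prob = 1.0 / (1.0 + np.exp(-vel))        # sigmoid -> bit-flip probability
            pos = (rng.random(pos.shape) < prob).astype(float)
            fit = np.array([fitness(p) for p in pos])
            improved = fit > pbest_fit
            pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
            gbest = pbest[np.argmax(pbest_fit)].copy()

        print("selected features:", np.flatnonzero(gbest))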

    Connectionist simulation of attitude learning: Asymmetries in the acquisition of positive and negative evaluations

    Connectionist computer simulation was employed to explore the notion that, if attitudes guide approach and avoidance behaviors, false negative beliefs are likely to remain uncorrected for longer than false positive beliefs. In Study 1, the authors trained a three-layer neural network to discriminate "good" and "bad" inputs distributed across a two-dimensional space. "Full feedback" training, whereby connection weights were modified to reduce error after every trial, resulted in perfect discrimination. "Contingent feedback," whereby connection weights were only updated following outputs representing approach behavior, led to several false negative errors (good inputs misclassified as bad). In Study 2, the network was redesigned to distinguish a system for learning evaluations from a mechanism for selecting actions. Biasing action selection toward approach eliminated the asymmetry between learning of good and bad inputs under contingent feedback. Implications for various attitudinal phenomena and biases in social cognition are discussed.
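    The full-feedback versus contingent-feedback contrast can be sketched with a toy learner (an illustrative stand-in, not the authors' three-layer network): under contingent feedback the learner only observes the outcome, and therefore only updates, when its evaluation is positive enough to trigger approach, which lets false negatives persist.

        # Toy illustration of full vs contingent feedback (all details hypothetical).
        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.uniform(-1, 1, (500, 2))
        good = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy "good"/"bad" layout in 2-D
        w0 = rng.normal(size=2) * 0.5                  # shared starting weights

        def train(contingent):
            w, b = w0.copy(), 0.0
            for _ in range(20):
                for x, y in zip(X, good):
                    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # evaluation of the input
                    if contingent and p <= 0.5:
                        continue                       # no approach -> no outcome, no update
                    w += 0.5 * (y - p) * x             # simple logistic update
                    b += 0.5 * (y - p)
            preds = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
            return np.mean((good == 1) & ~preds)       # false-negative rate

        print("false negatives, full feedback:      ", train(contingent=False))
        print("false negatives, contingent feedback:", train(contingent=True))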

    Energy Demand Analysis and Forecast

