45 research outputs found

    Classification of action potentials in multi unit intrafascicular recordings using neural network pattern recognition techniques

    Get PDF
    Journal Article
    Neural network pattern-recognition techniques were applied to the problem of identifying the sources of action potentials in multi-unit neural recordings made from intrafascicular electrodes implanted in cats. The network was a three-layer connectionist machine that used digitized action potentials as input. On average, the network was able to reliably separate 6 or 7 units per recording. As the number of units present in the recording increased beyond this limit, the number separable by the network remained roughly constant. The results demonstrate the utility of neural networks for classifying neural activity in multi-unit recordings.
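    A minimal sketch of this kind of spike classifier, assuming synthetic spike templates, noise level, and network sizes that are purely illustrative (none come from the paper):

```python
import numpy as np

# Hypothetical sketch: a three-layer network that takes digitized
# action-potential waveforms as input and outputs a unit label.
rng = np.random.default_rng(0)

t = np.linspace(0, 1, 32)
templates = np.stack([                       # two illustrative spike shapes
    np.exp(-((t - 0.3) / 0.05) ** 2) - 0.4 * np.exp(-((t - 0.5) / 0.1) ** 2),
    0.7 * np.exp(-((t - 0.5) / 0.08) ** 2) - 0.6 * np.exp(-((t - 0.7) / 0.1) ** 2),
])

def make_batch(n):
    labels = rng.integers(0, 2, n)
    x = templates[labels] + 0.05 * rng.standard_normal((n, 32))
    return x, labels

# Three layers: 32 inputs -> 16 hidden (tanh) -> 2 output units (softmax).
W1 = 0.1 * rng.standard_normal((32, 16)); b1 = np.zeros(16)
W2 = 0.1 * rng.standard_normal((16, 2)); b2 = np.zeros(2)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    z = h @ W2 + b2
    p = np.exp(z - z.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)

lr = 0.5
for _ in range(300):                         # plain gradient descent on cross-entropy
    x, y = make_batch(64)
    h, p = forward(x)
    d2 = p.copy(); d2[np.arange(64), y] -= 1; d2 /= 64
    d1 = (d2 @ W2.T) * (1 - h ** 2)          # backprop through tanh layer
    W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(0)
    W1 -= lr * x.T @ d1; b1 -= lr * d1.sum(0)

x, y = make_batch(200)
acc = (forward(x)[1].argmax(1) == y).mean()  # held-out classification accuracy
```

    With only two well-separated templates this toy setup classifies nearly perfectly; the paper's 6-to-7-unit ceiling appears only with many overlapping waveforms.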

    Distributed Hypothesis Testing, Attention Shifts and Transmitter Dynamics During the Self-Organization of Brain Recognition Codes

    Full text link
    BP (89-A-1204); Defense Advanced Research Projects Agency (90-0083); National Science Foundation (IRI-90-00530); Air Force Office of Scientific Research (90-0175, 90-0128); Army Research Office (DAAL-03-88-K0088)

    A Constructive, Incremental-Learning Network for Mixture Modeling and Classification

    Full text link
    Gaussian ARTMAP (GAM) is a supervised-learning adaptive resonance theory (ART) network that uses Gaussian-defined receptive fields. Like other ART networks, GAM incrementally learns and constructs a representation of sufficient complexity to solve a problem it is trained on. GAM's representation is a Gaussian mixture model of the input space, with learned mappings from the mixture components to output classes. We show a close relationship between GAM and the well-known Expectation-Maximization (EM) approach to mixture modeling. GAM outperforms an EM classification algorithm on a classification benchmark, thereby demonstrating the advantage of the ART match criterion for regulating learning, and of the ARTMAP match-tracking operation for incorporating environmental feedback in supervised learning situations.
    Office of Naval Research (N00014-95-1-0409)
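    The EM mixture-modeling baseline the abstract compares GAM against can be sketched on a toy problem; the two-component 1-D data and initialization here are assumptions for illustration:

```python
import numpy as np

# Illustrative EM for a two-component 1-D Gaussian mixture.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 0.8, 300)])

mu = np.array([-1.0, 1.0]); sigma = np.array([1.0, 1.0]); pi = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: posterior responsibility of each component for each point
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixing weights, means, and standard deviations
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

mu_sorted = np.sort(mu)   # recovered component means, ascending
```

    Unlike GAM, this EM loop revisits the whole batch each iteration and has no match criterion; GAM's incremental commitment of new categories is the contrast the abstract draws.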

    Sensor Signal Analysis By Neural Networks For Surveillance In Nuclear Reactors

    Get PDF
    The application of neural networks as a tool for reactor diagnostics is examined here. Reactor pump signals used in a wear-out monitoring system developed for early detection of the degradation of a pump shaft [17] are analyzed as a semi-benchmark test to study the feasibility of neural networks for monitoring and surveillance in nuclear reactors. The Adaptive Resonance Theory (ART 2 and ART 2-A) paradigm of neural networks is applied in this study. The signals include both collected signals and generated signals simulating the progress of wear. The wear-out monitoring system applies noise-analysis techniques and is capable of distinguishing these signals and providing a measure of the progress of the degradation. This paper presents the results of the analysis of these data and provides an evaluation of the performance of ART 2-A and ART 2 for reactor signal analysis. ART 2 is selected for its design principles, such as unsupervised learning, stability-plasticity, search-direct access, and the match-reset tradeoffs; ART 2-A is selected for its speed. Two simulators are built: one for ART 2 and the other for ART 2-A. Both paradigms succeed, and the study shows that ART 2-A is not only able to learn and distinguish the patterns from one another but also learns extremely fast despite the high-dimensional input spaces. © 1992 IEEE
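    A simplified clusterer in the spirit of ART 2-A can illustrate the mechanism: normalized inputs, a dot-product choice function, a vigilance match test, and fast prototype learning. The vigilance value, learning rate, and toy signal patterns are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def art2a_cluster(patterns, vigilance=0.9, beta=0.5):
    """Simplified ART 2-A-style unsupervised clustering (illustrative)."""
    prototypes, labels = [], []
    for p in patterns:
        p = p / np.linalg.norm(p)                    # normalize the input
        if prototypes:
            sims = [float(p @ w) for w in prototypes]
            j = int(np.argmax(sims))                 # best-matching category
        if not prototypes or sims[j] < vigilance:
            prototypes.append(p.copy())              # mismatch/reset: commit new category
            labels.append(len(prototypes) - 1)
        else:
            w = (1 - beta) * prototypes[j] + beta * p   # resonance: fast update
            prototypes[j] = w / np.linalg.norm(w)
            labels.append(j)
    return labels, prototypes

rng = np.random.default_rng(2)
base = np.eye(3)                                     # three distinct signal patterns
data = [base[i % 3] + 0.05 * rng.standard_normal(3) for i in range(30)]
labels, protos = art2a_cluster(data)                 # recovers three categories
```

    The vigilance parameter plays the stability-plasticity role the abstract mentions: raise it and the network commits more, finer-grained categories; lower it and existing prototypes absorb more inputs.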

    General Neural Networks Dynamics are a Superposition of Gradient-like and Hamiltonian-like Systems

    Get PDF
    This report presents a formalism that enables the dynamics of a broad class of neural networks to be understood. A number of previous works have analyzed the Lyapunov stability of neural network models. That type of analysis shows that the excursion of the solutions from a stable point is bounded. The purpose of this work is to present a model of the dynamics that also describes the phase-space behavior and the structural stability of the system. This is achieved by writing the general equations of the neural network dynamics as the sum of gradient-like and Hamiltonian-like systems. This paper develops some important properties of both gradient-like and Hamiltonian-like systems and then demonstrates that a broad class of neural network models is expressible in this form.
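    The decomposition described above can be sketched as follows; the symbols are illustrative, not the paper's notation:

```latex
% The state dynamics split into a gradient-like part and a Hamiltonian-like part:
\dot{x} = f(x)
        = \underbrace{-\nabla V(x)}_{\text{gradient-like}}
        + \underbrace{J\,\nabla H(x)}_{\text{Hamiltonian-like}},
\qquad J^{\mathsf{T}} = -J .
% The gradient-like part dissipates V:  \dot{V} = -\|\nabla V(x)\|^{2} \le 0,
% while the Hamiltonian-like part conserves H, since skew-symmetry of J gives
% \dot{H} = \nabla H(x)^{\mathsf{T}} J\,\nabla H(x) = 0 .
```

    The gradient-like term is what Lyapunov-style analyses capture (solutions descend toward stable points), while the Hamiltonian-like term accounts for the rotational, conservative component of the phase-space flow.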

    Reaction–diffusion chemistry implementation of associative memory neural network

    Get PDF
    Unconventional computing paradigms are typically very difficult to program. By implementing efficient parallel control architectures such as artificial neural networks, we show that it is possible to program unconventional paradigms with relative ease. The work presented implements correlation matrix memories (a form of artificial neural network based on associative memory) in reaction–diffusion chemistry, and shows that implementations of such artificial neural networks can be trained and act in a similar way to conventional implementations.
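    A minimal correlation matrix memory (CMM) works by superimposing outer products of binary key/value pairs during training and thresholding a matrix-vector product at recall. The pairs and the threshold rule below are illustrative assumptions, not the chemical implementation:

```python
import numpy as np

# Two binary key/value associations to store (illustrative).
keys = np.array([[1, 0, 0, 1], [0, 1, 1, 0]])
values = np.array([[1, 0], [0, 1]])

# Training: binary Hebbian superposition of outer products.
M = np.zeros((2, 4), dtype=int)
for x, y in zip(keys, values):
    M |= np.outer(y, x)

def recall(x):
    s = M @ x
    return (s >= x.sum()).astype(int)   # threshold at the number of set key bits

out0 = recall(keys[0])   # recovers values[0]
out1 = recall(keys[1])   # recovers values[1]
```

    Because both training (local OR updates) and recall (parallel sums plus a threshold) are simple and local, the scheme maps naturally onto parallel substrates such as the reaction–diffusion medium described in the abstract.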