
    Radar signal categorization using a neural network

    Neural networks were used to analyze a complex simulated radar environment containing noisy radar pulses generated by many different emitters. The network used is an energy-minimizing network (the BSB model), which forms energy minima (attractors in the network's dynamical system) based on learned input data. The system first determines how many emitters are present (the deinterleaving problem): pulses from individual simulated emitters give rise to separate stable attractors in the network. Once individual emitters are characterized, tentative identifications can be made from their observed parameters. As a test of this idea, a neural network was used to form a small database that could potentially make emitter identifications.
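
    To make the mechanism concrete, below is a minimal sketch of a BSB-style update together with a simple error-correcting learning rule. It is an illustration only: the dimensions, learning rate, feedback gain, and the random "pulse parameter" codes are hypothetical stand-ins, not the paper's configuration.

        import numpy as np

        def bsb_step(x, W, alpha=0.2):
            # Brain-State-in-a-Box update: linear feedback through W,
            # then clip the state back into the box [-1, 1]^n.
            return np.clip(x + alpha * (W @ x), -1.0, 1.0)

        def train_autoassociative(patterns, eta=0.05, epochs=100):
            # Widrow-Hoff style autoassociative learning: drive W p -> p
            # so each stored pattern becomes a stable attractor corner.
            n = patterns.shape[1]
            W = np.zeros((n, n))
            for _ in range(epochs):
                for p in patterns:
                    W += eta * np.outer(p - W @ p, p)
            return W

        rng = np.random.default_rng(0)
        emitters = np.sign(rng.standard_normal((3, 16)))  # hypothetical emitter codes
        W = train_autoassociative(emitters)
        x = emitters[0] + 0.4 * rng.standard_normal(16)   # noisy observed pulse
        for _ in range(100):
            x = bsb_step(x, W)
        print(np.mean(np.sign(x) == emitters[0]))         # fraction of bits recovered

    Starting from a noisy pulse descriptor, the iteration climbs into the corner of the box belonging to the nearest stored emitter, which is the attractor-based categorization the abstract describes.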

    NASA JSC neural network survey results

    A survey of artificial neural systems was conducted in support of NASA Johnson Space Center's Automatic Perception for Mission Planning and Flight Control Research Program. Several of the world's leading researchers contributed papers containing their most recent results on artificial neural systems. These papers were grouped into categories, and descriptive accounts of their results make up a large part of this report. Also included is material on sources of information about artificial neural systems, such as books, technical reports, and software tools.

    Convergence of Discrete-Time Cellular Neural Networks with Application to Image Processing

    The paper considers a class of discrete-time cellular neural networks (DT-CNNs) obtained by applying Euler's discretization scheme to standard CNNs. Let T be the DT-CNN interconnection matrix defined by the feedback cloning template. The paper shows that a DT-CNN is convergent, i.e. each solution tends to an equilibrium point, when T is symmetric and, in the case where T + E_n is not positive semidefinite (E_n being the n × n identity matrix), the step size of Euler's discretization scheme does not exceed a given bound. Two relevant properties are shown to hold as a consequence of the local and space-invariant interconnecting structure of a DT-CNN: (1) the bound on the step size can be easily estimated from the elements of the feedback cloning template alone; (2) the bound is independent of the DT-CNN dimension. These two properties make DT-CNNs very effective for computer simulation and for practical application to high-dimensional processing tasks. The results are proved via a Lyapunov approach and LaSalle's invariance principle, in combination with some fundamental inequalities satisfied by the projection operator onto a convex set. The results are compared with previous ones in the literature on the convergence of DT-CNNs, and also with those obtained for different neural network models such as the Brain-State-in-a-Box model. Finally, the convergence results are illustrated by applying some relevant 2D and 1D DT-CNNs to image processing tasks.
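
    The sketch below illustrates the Euler-discretized update and a template-only step-size estimate. The template and bias are a classical horizontal-line-detection choice; the Gershgorin-style step estimate is a conservative stand-in made for the sketch, not the sharper bound derived in the paper.

        import numpy as np

        sat = lambda x: np.clip(x, -1.0, 1.0)   # standard CNN output nonlinearity

        def apply_template(Y, A):
            # Space-invariant interconnection: correlate Y with the 3x3
            # feedback cloning template A (zero padding at the boundary).
            # This realizes the product T @ y on the image grid.
            H, W = Y.shape
            Yp = np.pad(Y, 1)
            out = np.zeros_like(Y)
            for di in range(3):
                for dj in range(3):
                    out += A[di, dj] * Yp[di:di + H, dj:dj + W]
            return out

        # A classical symmetric feedback template (horizontal line detection),
        # so the interconnection matrix T is symmetric as the theorem requires.
        A = np.array([[0.0, 0.0, 0.0],
                      [1.0, 2.0, 1.0],
                      [0.0, 0.0, 0.0]])
        I_bias = -1.0

        # Template-only step estimate: Gershgorin bounds every eigenvalue of T
        # by the absolute template sum r, for any image size, echoing the two
        # properties above (template-only, dimension-independent).
        r = np.abs(A).sum()
        tau = 1.0 / (1.0 + r)                        # conservative Euler step

        rng = np.random.default_rng(1)
        U = np.sign(rng.standard_normal((32, 32)))   # binary input image
        X = U.copy()                                 # initial state = input
        for _ in range(300):
            X = X + tau * (-X + apply_template(sat(X), A) + I_bias)
        # Each trajectory settles to an equilibrium; sat(X) is the output image.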

    Corticonic models of brain mechanisms underlying cognition and intelligence

    The concern of this review is brain theory or, more specifically, in its first part, a model of the cerebral cortex and the way it: (a) interacts with subcortical regions like the thalamus and the hippocampus to provide higher-level brain functions that underlie cognition and intelligence; (b) handles and represents dynamical sensory patterns imposed by a constantly changing environment; (c) copes with the enormous number of such patterns encountered in a lifetime by means of a dynamic memory that offers an immense number of stimulus-specific attractors for input patterns (stimuli) to select from; (d) selects an attractor through a process of “conjugation” of the input pattern with the dynamics of the thalamo-cortical loop; (e) distinguishes between redundant (structured) inputs and non-redundant (random) inputs that are void of information; (f) can perform categorical perception when there is access to the vast associative memory laid out in the association cortex, with the help of the hippocampus; and (g) makes use of “computation” at the edge of chaos and information-driven annealing to achieve all this. Other features and implications of the concepts presented for the design of computational algorithms and machines with brain-like intelligence are also discussed. The material and results presented suggest that a Parametrically Coupled Logistic Map Network (PCLMN) is a minimal model of the thalamo-cortical complex, and that marrying such a network to a suitable associative memory with re-entry or feedback forms a useful, albeit abstract, model of a cortical module of the brain that could facilitate building a simple artificial brain. In the second part of the review, the results of numerical simulations and the conclusions drawn in the first part are linked to the most directly relevant works and views of other workers. What emerges is a picture of brain dynamics on the mesoscopic and macroscopic scales that gives a glimpse of the nature of the long-sought-after brain code underlying intelligence and other higher-level brain functions. Physics of Life Reviews 4 (2007) 223–252.
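
    As a toy illustration of parametric coupling, the network below couples logistic maps through their parameters rather than their states. The sigmoid modulation law, parameter range, and random coupling weights are assumptions made for this sketch; the review's actual PCLMN equations are not reproduced here.

        import numpy as np

        def pclmn_step(x, C, mu_min=3.0, mu_max=4.0):
            # Parametrically coupled logistic maps: the coupling enters
            # through each node's logistic parameter mu_i, which is
            # modulated (here sigmoidally, an assumed law) by the weighted
            # activity of the other nodes. States stay inside [0, 1].
            drive = C @ x
            mu = mu_min + (mu_max - mu_min) / (1.0 + np.exp(-drive))
            return mu * x * (1.0 - x)

        rng = np.random.default_rng(2)
        n = 64
        C = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)  # hypothetical couplings
        x = rng.uniform(0.2, 0.8, n)                        # initial activity
        for _ in range(500):
            x = pclmn_step(x, C)
        # Different input patterns (injected via C or an external additive
        # drive) select different stimulus-specific dynamic attractors.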

    Hierarchical Associative Memory Based on Oscillatory Neural Network

    In this thesis we explore algorithms and develop architectures based on emerging nano-device technologies for cognitive computing tasks such as recognition, classification, and vision. In particular, we focus on pattern matching in high-dimensional vector spaces to address the nearest-neighbor search problem. Recent progress in nanotechnology provides novel nano-devices with special nonlinear response characteristics that fit cognitive tasks better than general-purpose computing. We build an associative memory (AM) by weakly coupling nano-oscillators into an oscillatory neural network, and design a hierarchical tree structure to organize groups of AM units. For hierarchical recognition, we first examine an architecture where image patterns are partitioned into different receptive fields and processed by individual AM units at the lower levels, then abstracted using sparse coding techniques for recognition at the higher levels. A second tree-structure model is developed as a more scalable AM architecture for large data sets. In this model, patterns are classified by hierarchical k-means clustering and organized into hierarchical clusters; recognition is then done by comparing input patterns against the centroids identified during clustering. The tree is explored in a "depth-only" manner until the closest image pattern is output. We also extend this search technique to incorporate a branch-and-bound algorithm. The models and corresponding algorithms are tested on two standard face recognition data sets. We show that the depth-only hierarchical model is very data-set dependent, performing with 97% or 67% recognition compared to a single large associative memory, while the branch-and-bound search increases time by only a factor of two compared to the depth-only search.
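
    The sketch below uses a plain nearest-centroid computation as a stand-in for the oscillator AM units, to make the depth-only search concrete. The branching factor, leaf size, and random feature vectors are hypothetical; each argmin over centroids stands in for one AM unit's degree-of-match operation.

        import numpy as np

        def build_tree(X, k=4, leaf_size=16, rng=None):
            # Hierarchical k-means: recursively split the patterns into k
            # clusters, storing each level's centroids.
            rng = np.random.default_rng(0) if rng is None else rng
            if len(X) <= leaf_size:
                return {"patterns": X}
            centroids = X[rng.choice(len(X), k, replace=False)].astype(float)
            for _ in range(10):                      # a few Lloyd iterations
                labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
                for j in range(k):
                    if np.any(labels == j):
                        centroids[j] = X[labels == j].mean(axis=0)
            keep = [j for j in range(k) if np.any(labels == j)]
            if len(keep) < 2:                        # degenerate split: stop here
                return {"patterns": X}
            return {"centroids": centroids[keep],
                    "children": [build_tree(X[labels == j], k, leaf_size, rng)
                                 for j in keep]}

        def depth_only_search(tree, q):
            # Descend to the single closest centroid at each level
            # ("depth-only"), then match within the reached leaf.
            while "centroids" in tree:
                j = np.argmin(((tree["centroids"] - q) ** 2).sum(-1))
                tree = tree["children"][j]
            P = tree["patterns"]
            return P[np.argmin(((P - q) ** 2).sum(-1))]

        rng = np.random.default_rng(3)
        X = rng.standard_normal((500, 64))        # stand-in for face feature vectors
        tree = build_tree(X, rng=rng)
        q = X[42] + 0.1 * rng.standard_normal(64) # noisy query
        print(np.array_equal(depth_only_search(tree, q), X[42]))  # often, not always, True

    Because the descent commits to one branch per level, a query near a cluster boundary can end in the wrong leaf, which is exactly the data-set dependence the abstract reports and the motivation for adding branch-and-bound.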

    Without Diagonal Nonlinear Requirements: The More General P

    Continuous-time recurrent neural networks (RNNs) play an important part in practical applications. Recently, owing to their ability to guarantee convergence of equilibria lying on the boundary between stability and instability, the critical dynamical behaviors of RNNs have drawn special attention. In this paper, a new asymptotic stability theorem and two corollaries are presented for the unified RNNs, that is, the UPPAM RNNs. The analysis results given in this paper hold under general P-critical conditions, which improve substantially upon the existing critical convergence and stability results; most importantly, the compulsory requirement of a diagonally nonlinear activation mapping found in most recent research is removed. As a result, the theory in this paper can be applied more generally.

    Building a Spiking Neural Network Model of the Basal Ganglia on SpiNNaker

    We present a biologically-inspired and scalable model of the Basal Ganglia (BG) simulated on the SpiNNaker machine, a biologically-inspired low-power hardware platform allowing parallel, asynchronous computing. Our BG model consists of six cell populations, where the neuro-computational unit is a conductance-based Izhikevich spiking neuron; the number of neurons in each population is proportional to that reported in the anatomical literature. This model is treated as a single channel of action selection in the BG and is scaled up to three channels with lateral cross-channel connections. When tested with two competing inputs, the three-channel model demonstrates action-selection behaviour. The SpiNNaker-based model is mapped exactly onto SpineML running on a conventional computer; both model responses show functional and qualitative similarity, validating the usability of SpiNNaker for simulating biologically plausible networks. Furthermore, the SpiNNaker-based model simulates in real time for time steps of 1 ms; power dissipated during model execution is ≈1.8 W.
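
    A minimal sketch of the neuro-computational unit is given below: a conductance-based Izhikevich neuron stepped with 1 ms Euler updates, matching the real-time step size mentioned above. The regular-spiking parameter set, reversal potentials, input statistics, and conductance decay are textbook or illustrative values, not the paper's population-specific tunings.

        import numpy as np

        def izhikevich_step(v, u, g_exc, g_inh, dt=1.0,
                            a=0.02, b=0.2, c=-65.0, d=8.0,
                            E_exc=0.0, E_inh=-80.0):
            # Reset neurons that crossed threshold on the previous step.
            fired = v >= 30.0
            v = np.where(fired, c, v)
            u = np.where(fired, u + d, u)
            # Conductance-based synaptic current, then Izhikevich dynamics.
            I = g_exc * (E_exc - v) + g_inh * (E_inh - v)
            v = v + dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
            u = u + dt * a * (b * v - u)
            return v, u, fired

        rng = np.random.default_rng(4)
        n = 100                                   # one scaled-down population
        v = np.full(n, -65.0)
        u = 0.2 * v
        g_exc = np.zeros(n)
        g_inh = np.zeros(n)
        spikes = 0
        for t in range(1000):                     # 1 s of 1 ms steps
            # Poisson excitatory drive with an assumed 5 ms conductance decay.
            g_exc = g_exc * np.exp(-1.0 / 5.0) + 0.05 * rng.poisson(0.8, n)
            v, u, fired = izhikevich_step(v, u, g_exc, g_inh)
            spikes += fired.sum()
        print(spikes / n, "mean spikes per neuron per second")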

    Nonlinear neural networks: Principles, mechanisms, and architectures
