92 research outputs found

    NASA JSC neural network survey results

    A survey of artificial neural systems was conducted in support of NASA Johnson Space Center's Automatic Perception for Mission Planning and Flight Control Research Program. Several of the world's leading researchers contributed papers containing their most recent results on artificial neural systems. These papers were grouped into categories, and descriptive accounts of the results make up a large part of this report. Also included is material on sources of information on artificial neural systems, such as books, technical reports, and software tools.

    Neural network based architectures for aerospace applications

    A brief history of neural network research is given and some simple concepts are described. In addition, some neural-network-based avionics research and development programs are reviewed. The need for the United States Air Force and NASA to assume a leadership role in supporting this technology is stressed.

    Image segmentation using a neural network

    An object extraction problem based on the Gibbs random field model is discussed. The maximum a posteriori (MAP) estimate of a scene based on a noise-corrupted realization is found to require computation that grows exponentially with the problem size. A neural network, a modified version of Hopfield's, is suggested for solving the problem. A single neuron is assigned to every pixel, and each neuron is connected only to its nearest neighbours. The energy function of the network is designed so that its minimum value corresponds to the MAP estimate of the scene. The dynamics of the network are described, and a possible hardware realization of a neuron is suggested. The technique is implemented on a set of noisy images and found to be highly robust and immune to noise.
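
    As a rough illustration of the approach described in this abstract (one neuron per pixel, nearest-neighbour couplings, an energy whose minimum corresponds to the MAP labelling), the following sketch performs binary object extraction on a noisy image by asynchronous, energy-decreasing updates. It is a simplified stand-in rather than the authors' exact network; the coupling strengths beta and lam and the +/-1 pixel coding are assumptions.

```python
# A minimal, hypothetical sketch (not the authors' exact formulation): object
# extraction from a noisy image with a Hopfield-like network that assigns one
# "neuron" (a spin in {-1, +1}) to every pixel and couples it to its four
# nearest neighbours.  The implied energy has an Ising-type smoothness term and
# a data-attachment term; each asynchronous update can only lower that energy.
import numpy as np

def map_segment(noisy, beta=1.0, lam=2.0, sweeps=10):
    """noisy: 2-D array whose clean values are roughly -1 (background) or +1 (object)."""
    x = np.where(noisy > 0, 1, -1)            # initial labelling from the data
    rows, cols = x.shape
    for _ in range(sweeps):
        for i in range(rows):
            for j in range(cols):
                nb = 0                        # sum of up to 4 nearest neighbours
                if i > 0:        nb += x[i - 1, j]
                if i < rows - 1: nb += x[i + 1, j]
                if j > 0:        nb += x[i, j - 1]
                if j < cols - 1: nb += x[i, j + 1]
                # Local field = smoothness coupling + data attachment; taking its
                # sign is the energy-decreasing (Hopfield-style) update.
                field = beta * nb + lam * noisy[i, j]
                x[i, j] = 1 if field >= 0 else -1
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = -np.ones((32, 32)); clean[8:24, 8:24] = 1        # square object
    noisy = clean + rng.normal(0, 0.8, clean.shape)          # additive noise
    print("pixel error rate:", np.mean(map_segment(noisy) != clean))
```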

    An analog feedback associative memory

    A method for the storage of analog vectors, i.e., vectors whose components are real-valued, is developed for the continuous-time Hopfield network. An important requirement is that each memory vector be an asymptotically stable (i.e., attractive) equilibrium of the network. Some of the limitations imposed by the continuous Hopfield model on the set of vectors that can be stored are pointed out. These limitations can be relieved by choosing a network containing hidden units as well as visible units. An architecture consisting of several hidden layers and a visible layer, connected in a circular fashion, is considered. It is proved that the two-layer case is guaranteed to store any set of analog vectors whose number does not exceed one plus the number of neurons in the hidden layer. A learning algorithm that correctly adjusts the locations of the equilibria and guarantees their asymptotic stability is developed. Simulation results confirm the effectiveness of the approach.
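
    As a toy illustration of the circular visible/hidden architecture (and not the paper's learning algorithm), the sketch below fits the two weight matrices by least squares so that each given analog vector, paired with a chosen hidden code, is an equilibrium of the continuous-time dynamics. Asymptotic stability, which the paper's method additionally guarantees, is not enforced here; the hidden codes and all parameters are assumptions.

```python
# Simplified sketch: a visible layer v and a hidden layer h connected in a circle,
# with weights chosen so that each memory pair (V[:, k], H[:, k]) is a fixed point
# of the Hopfield-type ODE  v' = -v + W_hv tanh(h),  h' = -h + W_vh tanh(v).
import numpy as np

def train_circular_memory(V, H):
    """V: (n_vis, m) analog memories; H: (n_hid, m) chosen hidden codes."""
    W_hv = V @ np.linalg.pinv(np.tanh(H))     # make V = W_hv @ tanh(H) hold in least squares
    W_vh = H @ np.linalg.pinv(np.tanh(V))     # make H = W_vh @ tanh(V) hold in least squares
    return W_hv, W_vh

def relax(W_hv, W_vh, v0, h0, steps=3000, dt=0.01):
    """Euler-integrate the circular two-layer dynamics from (v0, h0)."""
    v, h = v0.copy(), h0.copy()
    for _ in range(steps):
        v, h = v + dt * (-v + W_hv @ np.tanh(h)), h + dt * (-h + W_vh @ np.tanh(v))
    return v

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    V = rng.uniform(-0.8, 0.8, (6, 3))        # three analog memories in R^6
    H = 2.0 * np.eye(4)[:, :3]                # one assumed hidden code per memory
    W_hv, W_vh = train_circular_memory(V, H)
    # Equilibrium check: both residuals should be numerically zero.
    print(np.abs(V - W_hv @ np.tanh(H)).max(), np.abs(H - W_vh @ np.tanh(V)).max())
    # Relax from a slightly perturbed memory and report the distance to it.
    v_end = relax(W_hv, W_vh, V[:, 0] + 0.02 * rng.normal(size=6), H[:, 0])
    print(np.abs(v_end - V[:, 0]).max())
```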

    A Recurrent Cooperative/Competitive Field for Segmentation of Magnetic Resonance Brain Imagery

    The Grey-White Decision Network is introduced as an application of an on-center, off-surround recurrent cooperative/competitive network to the segmentation of magnetic resonance imaging (MRI) brain images. The three-layer dynamical system relaxes into a solution in which each pixel is labeled as grey matter, white matter, or "other" matter by considering raw input intensity, edge information, and neighbor interactions. This network is presented as an example of applying a recurrent cooperative/competitive field (RCCF) to a problem with multiple conflicting constraints. Simulations of the network and its phase-plane analysis are presented.
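
    As a generic illustration of a recurrent cooperative/competitive field (not the Grey-White Decision Network itself), the sketch below lets three label nodes per pixel compete through shunting on-center, off-surround dynamics while neighbouring pixels cooperate within each label. The class means and gains are assumptions, and the edge-information term used by the original network is omitted.

```python
# Toy RCCF: at every pixel, three label nodes ("grey", "white", "other") compete
# via shunting on-center, off-surround dynamics; same-label nodes at the four
# neighbouring pixels cooperate.  Evidence comes from closeness of the pixel
# intensity to assumed class means.
import numpy as np

def rccf_label(img, means=(0.3, 0.7, 0.0), A=1.0, B=1.0, coop=0.5, dt=0.05, steps=200):
    means = np.asarray(means)
    I = np.exp(-((img[..., None] - means) ** 2) / 0.02)     # bottom-up evidence, (H, W, 3)
    x = 0.1 * I                                              # initial activity
    f = lambda a: np.maximum(a, 0.0) ** 2                    # faster-than-linear signal
    for _ in range(steps):
        fx = f(x)
        nb = (np.roll(fx, 1, 0) + np.roll(fx, -1, 0) +       # cooperative neighbour
              np.roll(fx, 1, 1) + np.roll(fx, -1, 1)) / 4.0  # support (wrap-around)
        excite = fx + coop * nb + I                          # on-center excitation
        inhibit = fx.sum(axis=-1, keepdims=True) - fx        # off-surround (other labels)
        x = x + dt * (-A * x + (B - x) * excite - x * inhibit)   # shunting ODE step
    return x.argmax(axis=-1)                                 # label map with values 0/1/2

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    truth = np.zeros((40, 40)); truth[:, 20:] = 1            # two homogeneous regions
    img = np.where(truth == 1, 0.7, 0.3) + rng.normal(0, 0.1, truth.shape)
    print("agreement with ground truth:", np.mean(rccf_label(img) == truth))
```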

    Fault-tolerance of a neural network solving the traveling salesman problem

    This study presents the results of a fault-injection experiment that simulates a neural network solving the Traveling Salesman Problem (TSP). The network is based on a modified version of Hopfield and Tank's original method. We define a performance characteristic for the TSP that allows an overall assessment of solution quality across different city distributions and problem sizes. Five different 10-, 20-, and 30-city cases are used for the injection of up to 13 simultaneous stuck-at-0 and stuck-at-1 faults. The results of more than 4000 simulation runs show the extreme fault tolerance of the network, especially with respect to stuck-at-0 faults. One possible explanation for this surprising overall result is the redundancy of the problem representation.
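
    The sketch below illustrates the stuck-at fault-injection idea on a deliberately simplified, discrete stand-in for the Hopfield-Tank formulation: greedy energy descent over a city-by-position assignment matrix, with selected neurons clamped to 0 or 1. The penalty weights, the greedy update rule, and the tiny problem size are assumptions, and no attempt is made to reproduce the study's quantitative results.

```python
# Simplified fault-injection harness: a discrete Hopfield-Tank-style TSP energy is
# minimised by greedy per-neuron updates; "stuck-at" faults clamp chosen neurons of
# the city-by-position matrix to 0 or 1 before every update, so they never change.
import itertools
import numpy as np

def tsp_energy(V, D, A=4.0, B=4.0):
    row = ((V.sum(axis=1) - 1) ** 2).sum()       # each city in exactly one position
    col = ((V.sum(axis=0) - 1) ** 2).sum()       # each position holds exactly one city
    length = np.einsum('xy,xi,yi->', D, V, np.roll(V, -1, axis=1))   # tour length
    return A * row + B * col + length

def run(D, faults=None, sweeps=30, seed=0):
    """faults: dict mapping (city, position) -> stuck value 0.0 or 1.0."""
    n = len(D)
    V = (np.random.default_rng(seed).random((n, n)) < 0.5).astype(float)
    for _ in range(sweeps):
        for x, i in itertools.product(range(n), repeat=2):
            if faults and (x, i) in faults:
                V[x, i] = faults[(x, i)]          # stuck-at fault: never updated
                continue
            cand = []
            for b in (0.0, 1.0):                  # greedy, energy-decreasing choice
                W = V.copy(); W[x, i] = b
                cand.append((tsp_energy(W, D), b))
            V[x, i] = min(cand)[1]
    valid = np.all(V.sum(axis=0) == 1) and np.all(V.sum(axis=1) == 1)
    return V, bool(valid)

if __name__ == "__main__":
    pts = np.random.default_rng(3).random((6, 2))
    D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    for faults in (None, {(0, 0): 0.0, (2, 3): 1.0}):       # fault-free vs. two stuck neurons
        V, valid = run(D, faults)
        if valid:
            order = V.argmax(axis=0)              # city visited at each tour position
            length = sum(D[order[i], order[(i + 1) % len(order)]] for i in range(len(order)))
            print("faults:", faults, "-> valid tour of length", round(float(length), 3))
        else:
            print("faults:", faults, "-> constraints violated")
```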

    Computer-generated Fourier holograms based on pulse-density modulation


    Synaptic state matching: a dynamical architecture for predictive internal representation and feature perception

    Here we consider the possibility that a fundamental function of sensory cortex is the generation of an internal simulation of the sensory environment in real time. A logical elaboration of this idea leads to a dynamical neural architecture that oscillates between two fundamental network states, one driven by external input and the other by recurrent synaptic drive in the absence of sensory input. Synaptic strength is modified by a proposed synaptic state matching (SSM) process that ensures equivalence of spike statistics between the two network states. Remarkably, SSM, operating locally at individual synapses, generates accurate and stable network-level predictive internal representations, enabling pattern completion and unsupervised feature detection from noisy sensory input. SSM is a biologically plausible substrate for learning and memory because it brings together sequence learning, feature detection, synaptic homeostasis, and network oscillations under a single parsimonious computational framework. Beyond its utility as a potential model of cortical computation, artificial networks based on this principle have a remarkable capacity for internalizing dynamical systems, making them useful in a variety of application domains, including time-series prediction and machine intelligence.
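
    The toy sketch below is meant only to make the two-state idea concrete and is not the published SSM rule: a small rate network alternates between an input-driven ("open") step and a purely recurrent ("closed") step from the same previous state, and the weights are nudged to shrink the difference between the two responses, with instantaneous rates standing in for spike statistics. The network size, learning rate, and update rule are assumptions.

```python
# Rate-based toy: per time step, compute the externally driven state and the state
# produced by recurrent drive alone, then adjust the weights to reduce the mismatch
# between them.  The reported per-epoch mismatch is the quantity the update shrinks.
import numpy as np

def train_ssm_like(inputs, eta=0.05, epochs=400, seed=0):
    """inputs: (T, n) external drive patterns, presented cyclically."""
    n = inputs.shape[1]
    W = 0.01 * np.random.default_rng(seed).normal(size=(n, n))
    r = np.zeros(n)
    mismatch = []
    for _ in range(epochs):
        err = 0.0
        for I in inputs:
            r_open = np.tanh(W @ r + I)                  # input-driven state
            r_closed = np.tanh(W @ r)                    # recurrent drive alone
            W += eta * np.outer(r_open - r_closed, r)    # shrink the state mismatch
            err += np.abs(r_open - r_closed).mean()
            r = r_open                                   # follow the driven trajectory
        mismatch.append(err / len(inputs))
    return W, mismatch

if __name__ == "__main__":
    seq = np.sign(np.random.default_rng(5).normal(size=(4, 40)))   # cyclic input sequence
    W, mismatch = train_ssm_like(seq)
    print("mean open/closed mismatch, first vs last epoch:",
          round(mismatch[0], 3), round(mismatch[-1], 3))
```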

    Analog approach for the eigen-decomposition of positive definite matrices

    This paper proposes an analog approach for performing the eigen-decomposition of positive definite matrices. We show analytically and by simulation that the proposed circuit is guaranteed to converge to the desired eigenvectors and eigenvalues of positive definite matrices.
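
    The paper's circuit is not reproduced here; as a hedged illustration of the same idea, the sketch below integrates a related, well-known continuous-time flow (Oja's rule), under which w converges to a unit-norm principal eigenvector of a symmetric positive definite matrix and the Rayleigh quotient w^T A w converges to the largest eigenvalue. Deflating and repeating recovers the remaining pairs; the step size, iteration count, and test spectrum are assumptions.

```python
# Illustration via Oja's flow  dw/dt = A w - (w^T A w) w  (not the paper's circuit):
# Euler-integrating this ODE drives w to a principal eigenvector of the symmetric
# positive definite matrix A; deflation then exposes the next eigenpair.
import numpy as np

def oja_eigenpair(A, dt=0.01, steps=8000, seed=0):
    w = 0.1 * np.random.default_rng(seed).normal(size=len(A))
    for _ in range(steps):                        # Euler-integrated analog-style flow
        w += dt * (A @ w - (w @ A @ w) * w)
    return w @ A @ w, w / np.linalg.norm(w)       # (Rayleigh quotient, unit eigenvector)

def analog_eigh(A):
    A = A.copy()
    pairs = []
    for k in range(len(A)):
        lam, w = oja_eigenpair(A, seed=k)
        pairs.append((lam, w))
        A -= lam * np.outer(w, w)                 # deflate the component just found
    return pairs

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))
    A = Q @ np.diag([3.0, 2.0, 1.5, 1.0, 0.5]) @ Q.T       # known spectrum
    print([round(float(lam), 3) for lam, _ in analog_eigh(A)])
    print(np.round(np.linalg.eigvalsh(A)[::-1], 3))        # reference (descending)
```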
