1,171 research outputs found

    From Bidirectional Associative Memory to a noise-tolerant, robust Protein Processor Associative Memory

    Protein Processor Associative Memory (PPAM) is a novel architecture for learning associations incrementally and online and for performing fast, reliable, scalable hetero-associative recall. This paper presents a comparison of the PPAM with the Bidirectional Associative Memory (BAM), both with Kosko's original training algorithm and with the more popular Pseudo-Relaxation Learning Algorithm for BAM (PRLAB). It also compares the PPAM with a more recent associative memory architecture called SOIAM. Results of training for object avoidance are presented from simulations using Player/Stage and are verified by actual implementations on the E-Puck mobile robot. Finally, we show how the PPAM is capable of achieving an increase in performance without using the typical weighted-sum arithmetic operations, or indeed any arithmetic operations at all.
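    The BAM baseline this abstract compares against can be sketched with Kosko's outer-product training rule and bidirectional recall. This is a minimal illustrative sketch, not the paper's implementation; the bipolar pattern pairs are made up for the example:

    ```python
    import numpy as np

    def train_bam(X, Y):
        """Kosko's outer-product rule: W = sum_i outer(x_i, y_i) over bipolar pairs."""
        return sum(np.outer(x, y) for x, y in zip(X, Y))

    def recall(W, x, steps=10):
        """Bidirectional recall: bounce activity between the two layers until stable."""
        y = np.sign(x @ W)
        for _ in range(steps):
            x_new = np.sign(W @ y)
            y_new = np.sign(x_new @ W)
            if np.array_equal(y_new, y):
                break
            x, y = x_new, y_new
        return y

    # Two illustrative bipolar (+1/-1) pattern pairs
    X = [np.array([1, -1, 1, -1]), np.array([-1, 1, -1, 1])]
    Y = [np.array([1, 1, -1]), np.array([-1, -1, 1])]
    W = train_bam(X, Y)
    print(recall(W, X[0]))  # recovers the associated pattern Y[0]
    ```

    Kosko's rule stores pairs in a single pass but has limited capacity and noise tolerance, which is what motivates both PRLAB and the PPAM comparison in the paper.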

    The relaxation method for learning in artificial neural networks

    A new mathematical approach for deriving learning algorithms for various neural network models including the Hopfield model, Bidirectional Associative Memory, Dynamic Heteroassociative Neural Memory, and Radial Basis Function Networks is presented. The mathematical approach is based on the relaxation method for solving systems of linear inequalities. The newly developed learning algorithms are fast and they guarantee convergence to a solution in a finite number of steps. The new algorithms are highly insensitive to choice of parameters and the initial set of weights. They also exhibit high scalability on binary random patterns. Rigorous mathematical foundations for the new algorithms and their simulation studies are included.
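    The relaxation method the abstract builds on can be sketched for a generic system of linear inequalities a_i · w ≥ b_i: cyclically project the weight vector onto each violated half-space. A minimal sketch with an illustrative toy system (not the paper's algorithm or data):

    ```python
    import numpy as np

    def relaxation_solve(A, b, lam=1.0, max_sweeps=1000):
        """Relaxation method for A w >= b: for each violated inequality,
        move w toward its half-space by a step proportional to the slack.
        For a consistent system with positive margin this terminates in
        finitely many steps, which is the convergence guarantee the
        derived learning algorithms inherit."""
        w = np.zeros(A.shape[1])
        for _ in range(max_sweeps):
            violated = False
            for a_i, b_i in zip(A, b):
                slack = b_i - a_i @ w
                if slack > 0:  # inequality violated: project toward it
                    w = w + lam * slack / (a_i @ a_i) * a_i
                    violated = True
            if not violated:
                return w
        return w

    # Toy system: w1 + w2 >= 1 and w1 - w2 >= 1
    A = np.array([[1.0, 1.0], [1.0, -1.0]])
    b = np.array([1.0, 1.0])
    w = relaxation_solve(A, b)
    print(w)  # satisfies both inequalities
    ```

    Training an associative memory reduces to such a system by requiring each stored pattern's post-synaptic sum to clear a margin with the correct sign, one inequality per unit per pattern.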

    NASA JSC neural network survey results

    A survey of Artificial Neural Systems was conducted in support of NASA Johnson Space Center's Automatic Perception for Mission Planning and Flight Control Research Program. Several of the world's leading researchers contributed papers containing their most recent results on artificial neural systems. These papers were grouped into categories, and descriptive accounts of the results make up a large part of this report. Also included is material on sources of information on artificial neural systems, such as books, technical reports, and software tools.

    Brain-inspired self-organization with cellular neuromorphic computing for multimodal unsupervised learning

    Cortical plasticity is one of the main features that enable our ability to learn and adapt in our environment. Indeed, the cerebral cortex self-organizes itself through structural and synaptic plasticity mechanisms that are very likely at the basis of an extremely interesting characteristic of human brain development: multimodal association. In spite of the diversity of the sensory modalities, like sight, sound and touch, the brain arrives at the same concepts (convergence). Moreover, biological observations show that one modality can activate the internal representation of another modality when both are correlated (divergence). In this work, we propose the Reentrant Self-Organizing Map (ReSOM), a brain-inspired neural system based on the reentry theory using Self-Organizing Maps and Hebbian-like learning. We propose and compare different computational methods for unsupervised learning and inference, then quantify the gain of the ReSOM in a multimodal classification task. The divergence mechanism is used to label one modality based on the other, while the convergence mechanism is used to improve the overall accuracy of the system. We perform our experiments on a constructed written/spoken digits database and a DVS/EMG hand gestures database. The proposed model is implemented on a cellular neuromorphic architecture that enables distributed computing with local connectivity. We show the gain of the so-called hardware plasticity induced by the ReSOM, where the system's topology is not fixed by the user but learned through the system's experience via self-organization.
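    The Hebbian-like cross-map association underlying the divergence mechanism can be sketched as follows: two pre-trained unimodal maps are linked by a weight matrix that is reinforced whenever their winner units co-activate, so one modality can later activate the other's internal representation. Map sizes and winner pairs below are illustrative toy values, not the paper's setup:

    ```python
    import numpy as np

    n_a, n_b = 4, 4           # units per unimodal map (toy sizes)
    H = np.zeros((n_a, n_b))  # Hebbian cross-map association weights

    def hebbian_update(H, wa, wb, eta=0.1):
        """Strengthen the link between co-active winners wa (map A) and wb (map B)."""
        H[wa, wb] += eta
        return H

    # Correlated winner pairs, as if produced by two SOMs on paired multimodal inputs
    pairs = [(0, 1), (0, 1), (2, 3), (2, 3), (0, 1)]
    for wa, wb in pairs:
        H = hebbian_update(H, wa, wb)

    def diverge(H, wa):
        """Divergence: activate map B's representation from map A's winner,
        e.g. to transfer labels from one modality to the other."""
        return int(np.argmax(H[wa]))

    print(diverge(H, 0))  # the B-unit most often co-active with A-unit 0
    ```

    Convergence works in the opposite direction: both modalities vote through the learned links and the combined activity is classified, which is how the ReSOM improves overall accuracy over either modality alone.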

    Financial distress prediction using the hybrid associative memory with translation

    This paper presents an alternative technique for financial distress prediction systems. The method is based on a type of neural network called the hybrid associative memory with translation. While many different neural network architectures have successfully been used to predict credit risk and corporate failure, the power of associative memories for financial decision-making has not yet been explored in any depth. The performance of the hybrid associative memory with translation is compared to four traditional neural networks, a support vector machine and a logistic regression model in terms of their prediction capabilities. The experimental results over nine real-life data sets show that the associative memory proposed here constitutes an appropriate solution for bankruptcy and credit risk prediction, performing significantly better than the rest of the models under class imbalance and data overlapping conditions in terms of the true positive rate and the geometric mean of true positive and true negative rates. This work has partially been supported by the Mexican CONACYT through the Postdoctoral Fellowship Program [232167], the Spanish Ministry of Economy [TIN2013-46522-P], the Generalitat Valenciana [PROMETEOII/2014/062] and the Mexican PRODEP [DSA/103.5/15/7004]. We would like to thank the Reviewers for their valuable comments and suggestions, which have helped to improve the quality of this paper substantially.
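    The imbalance-aware metric the abstract reports, the geometric mean of the true positive and true negative rates, can be sketched directly; the skewed toy labels below are illustrative, mimicking a bankruptcy data set where distressed firms are rare:

    ```python
    import numpy as np

    def gmean_score(y_true, y_pred):
        """Geometric mean of sensitivity (TPR) and specificity (TNR).
        Unlike plain accuracy, it collapses to 0 if either class is
        entirely misclassified, which is why it suits imbalanced data."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        tpr = np.mean(y_pred[y_true == 1] == 1)   # true positive rate
        tnr = np.mean(y_pred[y_true == 0] == 0)   # true negative rate
        return float(np.sqrt(tpr * tnr))

    # Imbalanced toy labels: 2 positives (distressed), 8 negatives (healthy)
    y_true = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
    y_pred = [1, 0, 0, 0, 0, 0, 0, 0, 1, 0]
    # TPR = 1/2, TNR = 7/8, so g-mean = sqrt(0.4375) ≈ 0.661
    print(round(gmean_score(y_true, y_pred), 3))  # 0.661
    ```

    Note that a degenerate classifier predicting "healthy" for every firm would score 80% accuracy on these labels but a g-mean of 0, which illustrates why the paper evaluates under this metric.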