
    The relaxation method for learning in artificial neural networks

    A new mathematical approach for deriving learning algorithms for various neural network models, including the Hopfield model, Bidirectional Associative Memory, Dynamic Heteroassociative Neural Memory, and Radial Basis Function Networks, is presented. The approach is based on the relaxation method for solving systems of linear inequalities. The newly developed learning algorithms are fast and guarantee convergence to a solution in a finite number of steps. They are highly insensitive to the choice of parameters and the initial set of weights, and they exhibit high scalability on binary random patterns. Rigorous mathematical foundations for the new algorithms and their simulation studies are included.
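    As a point of reference, below is a minimal sketch of the relaxation (Agmon-Motzkin) method for a system of linear inequalities A w >= b, the core primitive this abstract names; the function name, margin handling, and step choices are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def relaxation_solve(A, b, lam=1.0, max_sweeps=1000):
    """Find w with A @ w >= b by repeatedly projecting onto the most
    violated half-space, scaled by a relaxation factor 0 < lam <= 2."""
    w = np.zeros(A.shape[1])
    for _ in range(max_sweeps):
        residuals = A @ w - b            # residual >= 0 means satisfied
        i = np.argmin(residuals)         # index of most violated inequality
        if residuals[i] >= 0:
            return w                     # all inequalities hold: finished
        a = A[i]
        # move w toward the hyperplane a . w = b_i
        w = w + lam * (-residuals[i]) / (a @ a) * a
    raise RuntimeError("no solution found within the sweep budget")

# Toy connection to associative memories: stability of a stored bipolar
# pattern x under weights W requires x_j * (W x)_j > 0 for every unit j,
# i.e. one linear inequality per unit, solvable by the routine above.
```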

    Multilayer optical learning networks

    A new approach to learning in a multilayer optical neural network based on holographically interconnected nonlinear devices is presented. The proposed network can learn the interconnections that form a distributed representation of a desired pattern transformation operation. The interconnections are formed in an adaptive and self-aligning fashion as volume holographic gratings in photorefractive crystals. Parallel arrays of globally space-integrated inner products diffracted by the interconnecting hologram illuminate arrays of nonlinear Fabry-Perot etalons for fast thresholding of the transformed patterns. A phase-conjugated reference wave interferes with a backward-propagating error signal to form holographic interference patterns, which are time-integrated in the volume of a photorefractive crystal to slowly modify and learn the appropriate self-aligning interconnections. This multilayer system performs an approximate implementation of the backpropagation learning procedure in a massively parallel, high-speed nonlinear optical network.
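    Numerically, the time-integrated interference between the forward-propagating activations and the backward-propagating error signal acts (up to the optics) like the outer-product weight update of electronic backpropagation. The sketch below shows that update; the variable names and sizes are illustrative assumptions, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)        # forward beam: activations of one layer
err = rng.standard_normal(4)      # backward beam: error at the next layer
eta = 0.01                        # effective recording rate of the crystal

# change in grating strength ~ interference term between the two beams,
# i.e. the backprop update delta_W = eta * error (outer) activation
delta_W = eta * np.outer(err, x)  # shape (4, 8), one entry per interconnect
```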

    Financial distress prediction using the hybrid associative memory with translation

    This paper presents an alternative technique for financial distress prediction systems. The method is based on a type of neural network called the hybrid associative memory with translation. While many different neural network architectures have been used successfully to predict credit risk and corporate failure, the power of associative memories for financial decision-making has not yet been explored in any depth. The prediction performance of the hybrid associative memory with translation is compared to that of four traditional neural networks, a support vector machine, and a logistic regression model. The experimental results over nine real-life data sets show that the proposed associative memory constitutes an appropriate solution for bankruptcy and credit risk prediction, performing significantly better than the other models under class imbalance and data overlapping conditions in terms of the true positive rate and the geometric mean of the true positive and true negative rates.

    This work has been partially supported by the Mexican CONACYT through the Postdoctoral Fellowship Program [232167], the Spanish Ministry of Economy [TIN2013-46522-P], the Generalitat Valenciana [PROMETEOII/2014/062], and the Mexican PRODEP [DSA/103.5/15/7004]. We would like to thank the reviewers for their valuable comments and suggestions, which have helped to substantially improve the quality of this paper.
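    For concreteness, here is a small sketch of the two evaluation measures the study reports under class imbalance: the true positive rate and the geometric mean of the true positive and true negative rates. The function name and the toy labels are illustrative.

```python
import numpy as np

def tpr_tnr_gmean(y_true, y_pred):
    """Return (TPR, TNR, g-mean) for binary labels with 1 = distressed."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tpr = tp / (tp + fn)              # sensitivity on the distressed class
    tnr = tn / (tn + fp)              # specificity on the healthy class
    return tpr, tnr, np.sqrt(tpr * tnr)

print(tpr_tnr_gmean([1, 1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 1]))
# -> (0.5, 0.75, 0.6123...): g-mean penalizes ignoring the minority class
```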

    Learning in Artificial Neural Systems

    This paper presents an overview and analysis of learning in Artificial Neural Systems (ANSs). It begins with a general introduction to neural networks and connectionist approaches to information processing. The basis for learning in ANSs is then described and compared with classical machine learning. While similar in some ways, ANS learning deviates from tradition in its dependence on the modification of individual weights to bring about changes in a knowledge representation distributed across the connections of a network. This unique form of learning is analyzed from two aspects: the selection of an appropriate network architecture for representing the problem, and the choice of a suitable learning rule capable of reproducing the desired function within the given network. The various network architectures are classified and then identified with explicit restrictions on the types of functions they are capable of representing. The learning rules, i.e., the algorithms that specify how the network weights are modified, are similarly taxonomized, and where possible, the limitations inherent to specific classes of rules are outlined.
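    To make "modification of individual weights" concrete, here is the classic perceptron rule, one of the error-driven learning rules such surveys taxonomize; the network size and data are illustrative assumptions, not an example from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))                        # input patterns
t = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)   # separable targets

w = np.zeros(3)     # the entire "knowledge" lives in these weights
eta = 0.1
for _ in range(50):                  # training epochs
    for x, target in zip(X, t):
        y = float(x @ w > 0)         # threshold-unit output
        w += eta * (target - y) * x  # per-weight, error-driven change
```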

    Associative learning on imbalanced environments: An empirical study

    Associative memories have emerged as a powerful computational neural network model for several pattern classification problems. Like most traditional classifiers, these models assume that the classes share similar prior probabilities. However, in many real-life applications the ratios of prior probabilities between classes are extremely skewed. Although the literature provides numerous studies examining the performance degradation of renowned classifiers in different imbalanced scenarios, this effect has not yet been supported by a thorough empirical study in the context of associative memories. In this paper, we focus on the applicability of associative neural networks to the classification of imbalanced data. The key questions addressed here are whether these models perform better than, the same as, or worse than other popular classifiers, how the level of imbalance affects their performance, and whether distinct resampling strategies produce a different impact on the associative memories. In order to answer these questions and gain further insight into the feasibility and efficiency of the associative memories, a large-scale experimental evaluation with 31 databases, seven classification models, and four resampling algorithms is carried out, along with a non-parametric statistical test to discover any significant differences between each pair of classifiers.

    This work has been partially supported by the Mexican Science and Technology Council (CONACYT, Mexico) through the Postdoctoral Fellowship Program (232167), the Mexican PRODEP (DSA/103.5/15/7004), the Spanish Ministry of Economy (TIN2013-46522-P), and the Generalitat Valenciana (PROMETEOII/2014/062).
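    As background, here is a minimal sketch of one resampling strategy of the kind such studies compare: random oversampling of the minority class until the classes are balanced. It is a generic illustration, not one of the paper's four exact algorithms.

```python
import numpy as np

def random_oversample(X, y, minority_label=1, seed=0):
    """Duplicate random minority-class rows until class sizes match."""
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X), np.asarray(y)
    minority = np.flatnonzero(y == minority_label)
    majority = np.flatnonzero(y != minority_label)
    extra = rng.choice(minority, size=len(majority) - len(minority),
                       replace=True)             # sampled with replacement
    keep = np.concatenate([majority, minority, extra])
    return X[keep], y[keep]
```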

    Theoretical study of information capacity of Hopfield neural network and its application to expert database system

    Conventional computer systems can solve complex mathematical problems very fast, yet they cannot efficiently perform high-level intelligent functions of the human brain such as pattern recognition, categorization, and associative memory.

    A neural network has been proposed as a computational structure for modeling the high-level intelligent functions of the human brain. Recently, neural networks have attracted considerable attention as a novel computational system because of the following expected benefits, often considered generic characteristics of the human brain: (1) massive parallelism, (2) learning as a means of efficient knowledge acquisition, and (3) robustness arising from distributed information processing.

    Neural networks are studied from different points of view in many disciplines, such as psychology, mathematics, statistics, physics, engineering, computer science, neuroscience, biology, and linguistics. Depending on the discipline, neural networks go by diverse names: artificial neural networks, connectionism, PDP (parallel distributed processing) models, adaptive systems, adaptive networks, and neurocomputers.

    We study neural networks from the computer scientist's point of view. The objectives of this research are: (1) to provide a global picture of the current state of the art by surveying a score of neural networks chronologically and functionally, (2) to provide a theoretical justification for well-known empirical results about the information capacity of the Hopfield neural network, and (3) to provide an experimental logical database system using the Hopfield neural network as an inference engine.
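    A small empirical sketch of the capacity question studied here: store P random bipolar patterns in an N-unit Hopfield network with the Hebbian outer-product rule and check one-step stability. The well-known empirical limit the thesis seeks to justify is roughly P ≈ 0.14 N; the parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 10                            # P/N = 0.1, below the ~0.14 limit
patterns = rng.choice([-1, 1], size=(P, N))

W = (patterns.T @ patterns) / N           # Hebbian outer-product storage
np.fill_diagonal(W, 0)                    # no self-connections

# a stored pattern is a fixed point if one synchronous update leaves it intact
stable = all(np.array_equal(np.sign(W @ p), p) for p in patterns)
print("all stored patterns stable:", stable)   # usually True at this load
```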

    Gene selection and disease prediction from gene expression data using a two-stage hetero-associative memory

    In general, gene expression microarrays consist of a vast number of genes and very few samples, which represents a critical challenge for disease prediction and diagnosis. This paper develops a two-stage algorithm that integrates feature selection and prediction by extending a type of hetero-associative neural network. In the first stage, the algorithm generates the associative memory, whereas the second stage picks the most relevant genes. With the purpose of illustrating the applicability and efficiency of the proposed method, we use four different gene expression microarray databases and compare the classification performance against that of other renowned classifiers built on the whole (original) feature (gene) space. The experimental results show that the two-stage hetero-associative memory is quite competitive with standard classification models regarding overall accuracy, sensitivity, and specificity. In addition, it also produces a significant decrease in computational effort and an increase in the biological interpretability of microarrays, because worthless (irrelevant and/or redundant) genes are discarded.
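    Below is a hedged sketch of the two-stage idea as the abstract describes it: build a hetero-associative memory from expression vectors to class labels with an outer-product rule, then rank genes by how strongly their weights drive the class outputs. This is only a plausible reading of the abstract, not the authors' exact model.

```python
import numpy as np

def train_memory(X, Y):
    """Stage 1. X: (samples, genes), Y: (samples, classes), bipolar {-1,+1}.
    Returns the hetero-associative weight matrix W of shape (classes, genes)."""
    return Y.T @ X

def top_genes(W, k=10):
    """Stage 2. Rank genes by total absolute weight across the classes
    and keep the k strongest; the rest are treated as worthless."""
    relevance = np.abs(W).sum(axis=0)       # one score per gene
    return np.argsort(relevance)[::-1][:k]  # indices of the k top genes
```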

    Associative neural networks: properties, learning, and applications.

    by Chi-sing Leung. Thesis (Ph.D.)--Chinese University of Hong Kong, 1994. Includes bibliographical references (leaves 236-244).

    Contents:

    Chapter 1. Introduction
        1.1 Background of Associative Neural Networks
        1.2 A Distributed Encoding Model: Bidirectional Associative Memory
        1.3 A Direct Encoding Model: Kohonen Map
        1.4 Scope and Organization
        1.5 Summary of Publications

    Part I. Bidirectional Associative Memory: Statistical Properties and Learning

    Chapter 2. Introduction to Bidirectional Associative Memory
        2.1 Bidirectional Associative Memory and its Encoding Method
        2.2 Recall Process of BAM
        2.3 Stability of BAM
        2.4 Memory Capacity of BAM
        2.5 Error Correction Capability of BAM
        2.6 Chapter Summary

    Chapter 3. Memory Capacity and Statistical Dynamics of First Order BAM
        3.1 Introduction
        3.2 Existence of Energy Barrier
        3.3 Memory Capacity from Energy Barrier
        3.4 Confidence Dynamics
        3.5 Numerical Results from the Dynamics
        3.6 Chapter Summary

    Chapter 4. Stability and Statistical Dynamics of Second Order BAM
        4.1 Introduction
        4.2 Second order BAM and its Stability
        4.3 Confidence Dynamics of Second Order BAM
        4.4 Numerical Results
        4.5 Extension to higher order BAM
        4.6 Verification of the conditions of Newman's Lemma
        4.7 Chapter Summary

    Chapter 5. Enhancement of BAM
        5.1 Background
        5.2 Review on Modifications of BAM
            5.2.1 Change of the encoding method
            5.2.2 Change of the topology
        5.3 Householder Encoding Algorithm
            5.3.1 Construction from Householder Transforms
            5.3.2 Construction from iterative method
            5.3.3 Remarks on HCA
        5.4 Enhanced Householder Encoding Algorithm
            5.4.1 Construction of EHCA
            5.4.2 Remarks on EHCA
        5.5 Bidirectional Learning
            5.5.1 Construction of BL
            5.5.2 The Convergence of BL and the memory capacity of BL
            5.5.3 Remarks on BL
        5.6 Adaptive Ho-Kashyap Bidirectional Learning
            5.6.1 Construction of AHKBL
            5.6.2 Convergent Conditions for AHKBL
            5.6.3 Remarks on AHKBL
        5.7 Computer Simulations
            5.7.1 Memory Capacity
            5.7.2 Error Correction Capability
            5.7.3 Learning Speed
        5.8 Chapter Summary

    Chapter 6. BAM under Forgetting Learning
        6.1 Introduction
        6.2 Properties of Forgetting Learning
        6.3 Computer Simulations
        6.4 Chapter Summary

    Part II. Kohonen Map: Applications in Data Compression and Communications

    Chapter 7. Introduction to Vector Quantization and Kohonen Map
        7.1 Background on Vector Quantization
        7.2 Introduction to LBG Algorithm
        7.3 Introduction to Kohonen Map
        7.4 Chapter Summary

    Chapter 8. Applications of Kohonen Map in Data Compression and Communications
        8.1 Use Kohonen Map to design Trellis Coded Vector Quantizer
            8.1.1 Trellis Coded Vector Quantizer
            8.1.2 Trellis Coded Kohonen Map
            8.1.3 Computer Simulations
        8.2 Kohonen Map: Combined Vector Quantization and Modulation
            8.2.1 Impulsive Noise in the received data
            8.2.2 Combined Kohonen Map and Modulation
            8.2.3 Computer Simulations
        8.3 Error Control Scheme for the Transmission of Vector Quantized Data
            8.3.1 Motivation and Background
            8.3.2 Trellis Coded Modulation
            8.3.3 Combined Vector Quantization, Error Control, and Modulation
            8.3.4 Computer Simulations
        8.4 Chapter Summary

    Chapter 9. Conclusion

    Bibliography
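    For orientation, here is a minimal sketch of the distributed encoding model the thesis analyzes: Kosko's bidirectional associative memory with outer-product encoding and bidirectional recall until a stable pair is reached. The sizes and function names are illustrative assumptions.

```python
import numpy as np

def bam_train(X, Y):
    """X: (pairs, n) and Y: (pairs, m), both bipolar {-1,+1}.
    Returns the (n, m) connection matrix from the outer-product rule."""
    return X.T @ Y

def bam_recall(W, x, steps=20):
    """Bounce activations between the two layers until they stop changing."""
    y = np.sign(x @ W)                       # forward pass: X-layer to Y-layer
    for _ in range(steps):
        x_new = np.sign(W @ y)               # backward pass: Y-layer to X-layer
        y_new = np.sign(x_new @ W)
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            break                            # reached a stable (x, y) pair
        x, y = x_new, y_new
    return x, y
```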