465 research outputs found

    Associative neural networks: properties, learning, and applications.

    by Chi-sing Leung. Thesis (Ph.D.)--Chinese University of Hong Kong, 1994. Includes bibliographical references (leaves 236-244).
    Chapter 1  Introduction
      1.1  Background of Associative Neural Networks
      1.2  A Distributed Encoding Model: Bidirectional Associative Memory
      1.3  A Direct Encoding Model: Kohonen Map
      1.4  Scope and Organization
      1.5  Summary of Publications
    Part I  Bidirectional Associative Memory: Statistical Properties and Learning
    Chapter 2  Introduction to Bidirectional Associative Memory
      2.1  Bidirectional Associative Memory and its Encoding Method
      2.2  Recall Process of BAM
      2.3  Stability of BAM
      2.4  Memory Capacity of BAM
      2.5  Error Correction Capability of BAM
      2.6  Chapter Summary
    Chapter 3  Memory Capacity and Statistical Dynamics of First Order BAM
      3.1  Introduction
      3.2  Existence of Energy Barrier
      3.3  Memory Capacity from Energy Barrier
      3.4  Confidence Dynamics
      3.5  Numerical Results from the Dynamics
      3.6  Chapter Summary
    Chapter 4  Stability and Statistical Dynamics of Second Order BAM
      4.1  Introduction
      4.2  Second Order BAM and its Stability
      4.3  Confidence Dynamics of Second Order BAM
      4.4  Numerical Results
      4.5  Extension to Higher Order BAM
      4.6  Verification of the Conditions of Newman's Lemma
      4.7  Chapter Summary
    Chapter 5  Enhancement of BAM
      5.1  Background
      5.2  Review on Modifications of BAM
        5.2.1  Change of the encoding method
        5.2.2  Change of the topology
      5.3  Householder Encoding Algorithm
        5.3.1  Construction from Householder Transforms
        5.3.2  Construction from iterative method
        5.3.3  Remarks on HCA
      5.4  Enhanced Householder Encoding Algorithm
        5.4.1  Construction of EHCA
        5.4.2  Remarks on EHCA
      5.5  Bidirectional Learning
        5.5.1  Construction of BL
        5.5.2  The Convergence of BL and the Memory Capacity of BL
        5.5.3  Remarks on BL
      5.6  Adaptive Ho-Kashyap Bidirectional Learning
        5.6.1  Construction of AHKBL
        5.6.2  Convergent Conditions for AHKBL
        5.6.3  Remarks on AHKBL
      5.7  Computer Simulations
        5.7.1  Memory Capacity
        5.7.2  Error Correction Capability
        5.7.3  Learning Speed
      5.8  Chapter Summary
    Chapter 6  BAM under Forgetting Learning
      6.1  Introduction
      6.2  Properties of Forgetting Learning
      6.3  Computer Simulations
      6.4  Chapter Summary
    Part II  Kohonen Map: Applications in Data Compression and Communications
    Chapter 7  Introduction to Vector Quantization and Kohonen Map
      7.1  Background on Vector Quantization
      7.2  Introduction to LBG Algorithm
      7.3  Introduction to Kohonen Map
      7.4  Chapter Summary
    Chapter 8  Applications of Kohonen Map in Data Compression and Communications
      8.1  Use Kohonen Map to design Trellis Coded Vector Quantizer
        8.1.1  Trellis Coded Vector Quantizer
        8.1.2  Trellis Coded Kohonen Map
        8.1.3  Computer Simulations
      8.2  Kohonen Map: Combined Vector Quantization and Modulation
        8.2.1  Impulsive Noise in the Received Data
        8.2.2  Combined Kohonen Map and Modulation
        8.2.3  Computer Simulations
      8.3  Error Control Scheme for the Transmission of Vector Quantized Data
        8.3.1  Motivation and Background
        8.3.2  Trellis Coded Modulation
        8.3.3  Combined Vector Quantization, Error Control, and Modulation
        8.3.4  Computer Simulations
      8.4  Chapter Summary
    Chapter 9  Conclusion
    Bibliography
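    The thesis above centres on the bidirectional associative memory (BAM) and its encoding and recall processes (Chapters 2-3). As a rough illustration of the classical scheme it builds on, here is a minimal sketch of Kosko-style correlation encoding and bidirectional recall for bipolar patterns; the variable names and toy patterns are mine, not the thesis's, and the thesis's enhanced encoding algorithms (HCA, EHCA, BL, AHKBL) are not shown.

```python
import numpy as np

def bam_encode(X, Y):
    """Correlation (outer-product) encoding: W = sum_k y_k x_k^T."""
    return sum(np.outer(y, x) for x, y in zip(X, Y))

def bam_recall(W, x, iters=10):
    """Bidirectional recall: alternate y = sgn(W x) and x = sgn(W^T y)."""
    for _ in range(iters):
        y = np.sign(W @ x)   # no zero activations occur for these patterns
        x = np.sign(W.T @ y)
    return x, y

# Two bipolar (+1/-1) pattern pairs to store.
X = [np.array([1, -1, 1, -1, 1, -1]), np.array([1, 1, 1, -1, -1, -1])]
Y = [np.array([1, 1, -1, -1]), np.array([1, -1, 1, -1])]
W = bam_encode(X, Y)
x, y = bam_recall(W, X[0].copy())   # recall should reach the stored pair (X[0], Y[0])
```

With well-separated patterns like these, recall converges to a stable stored pair in one forward-backward pass; the thesis's capacity analysis concerns how many such pairs can be stored reliably.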

    Nonlinear Systems

    Nonlinear systems are a challenging subject for theoretical modeling, technical analysis, and numerical simulation in physics and mathematics, as well as in many other fields, because highly correlated nonlinear phenomena, evolving over a large range of time scales and length scales, govern the underlying systems and processes in their spatiotemporal evolution. Indeed, available data, whether physical, biological, or financial, and technologically complex systems, including stochastic systems such as mechanical or electronic devices, can be treated within the same conceptual framework, both analytically and through computer simulation, using effective methods of nonlinear dynamics. The aim of this Special Issue of Open Mathematics is to highlight papers on the dynamics, control, optimization, and applications of nonlinear systems, a subject that has recently seen impressive growth in applications across engineering, economics, biology, and medicine. Original papers relating to the objectives above are especially welcome. Potential topics include, but are not limited to: stability analysis of discrete and continuous dynamical systems; nonlinear dynamics in biological complex systems; stability and stabilization of stochastic systems; mathematical models in statistics and probability; synchronization of oscillators and chaotic systems; optimization methods of complex systems; reliability modeling and system optimization; computation and control over networked systems

    Formal concept matching and reinforcement learning in adaptive information retrieval

    The superiority of the human brain in information retrieval (IR) tasks seems to come firstly from its ability to read and understand the concepts, ideas or meanings central to documents, in order to reason out the usefulness of documents to information needs, and secondly from its ability to learn from experience and be adaptive to the environment. In this work we attempt to incorporate these properties into the development of an IR model to improve document retrieval. We investigate the applicability of concept lattices, which are based on the theory of Formal Concept Analysis (FCA), to the representation of documents. This allows the use of more elegant representation units, as opposed to keywords, in order to better capture concepts/ideas expressed in natural language text. We also investigate the use of a reinforcement learning strategy to learn and improve document representations, based on the information present in query statements and user relevance feedback. Features or concepts of each document/query, formulated using FCA, are weighted separately with respect to the documents they are in, and organised into separate concept lattices according to a subsumption relation. Furthermore, each concept lattice is encoded in a two-layer neural network structure known as a Bidirectional Associative Memory (BAM), for efficient manipulation of the concepts in the lattice representation. This avoids implementation drawbacks faced by other FCA-based approaches. Retrieval of a document for an information need is based on concept matching between concept lattice representations of a document and a query. The learning strategy works by strengthening the similarity of relevant documents and weakening that of non-relevant documents for each query, depending on the relevance judgements of the users on retrieved documents.
Our approach is radically different from existing FCA-based approaches in the following respects: concept formulation; weight assignment to object-attribute pairs; the representation of each document in a separate concept lattice; and encoding concept lattices in BAM structures. Furthermore, in contrast to the traditional relevance feedback mechanism, our learning strategy makes use of relevance feedback information to enhance document representations, thus making the document representations dynamic and adaptive to the user interactions. The results obtained on the CISI, CACM and ASLIB Cranfield collections are presented and compared with published results. In particular, the performance of the system is shown to improve significantly as the system learns from experience.
    The School of Computing, University of Plymouth, UK
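    The abstract's central construct is the concept lattice of Formal Concept Analysis. As a minimal illustration of what a formal concept is (not the thesis's actual concept-formulation or weighting scheme), the sketch below enumerates all formal concepts of a tiny document-term context by brute force; the documents and terms are invented for the example.

```python
from itertools import combinations

# Toy document-term context: documents (objects) x terms (attributes).
context = {
    "d1": {"neural", "memory"},
    "d2": {"neural", "lattice"},
    "d3": {"neural", "memory", "lattice"},
}
terms = set().union(*context.values())

def intent(docs):   # terms shared by every document in the set
    return set.intersection(*(context[d] for d in docs)) if docs else set(terms)

def extent(ts):     # documents containing every term in the set
    return {d for d, attrs in context.items() if ts <= attrs}

# A pair (A, B) is a formal concept iff intent(A) == B and extent(B) == A.
# Closing every document subset reaches all concepts of this small context.
concepts = set()
for r in range(len(context) + 1):
    for docs in combinations(sorted(context), r):
        b = intent(set(docs))
        a = extent(b)
        concepts.add((frozenset(a), frozenset(b)))
```

Ordering these concepts by extent inclusion yields the concept lattice; the thesis's contribution is in how such lattices are weighted, kept one-per-document, and encoded into BAM structures, none of which is attempted here.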

    A review of learning in biologically plausible spiking neural networks

    Artificial neural networks have been used as a powerful processing tool in various areas such as pattern recognition, control, robotics, and bioinformatics. Their wide applicability has encouraged researchers to improve artificial neural networks by investigating the biological brain. Neurological research has significantly progressed in recent years and continues to reveal new characteristics of biological neurons. New technologies can now capture temporal changes in the internal activity of the brain in more detail and help clarify the relationship between brain activity and the perception of a given stimulus. This new knowledge has led to a new type of artificial neural network, the Spiking Neural Network (SNN), that draws more faithfully on biological properties to provide higher processing abilities. A review of recent developments in learning of spiking neurons is presented in this paper. First, the biological background of SNN learning algorithms is reviewed. The important elements of a learning algorithm, such as the neuron model, synaptic plasticity, information encoding and SNN topologies, are then presented. Next, a critical review of state-of-the-art learning algorithms for SNNs using single and multiple spikes is presented. Additionally, deep spiking neural networks are reviewed, and challenges and opportunities in the SNN field are discussed.
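    The neuron model is listed above as one of the key elements of an SNN learning algorithm. As a rough sketch of the most common choice, the leaky integrate-and-fire (LIF) model, here is a discrete-time simulation; the time constants and threshold are arbitrary illustrative values, not taken from the review.

```python
def lif_sim(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
    """Leaky integrate-and-fire: tau * dv/dt = -(v - v_rest) + I(t),
    with a spike emitted and the membrane reset when v crosses threshold."""
    v, spikes = v_rest, []
    for t, i_t in enumerate(input_current):
        v += dt / tau * (-(v - v_rest) + i_t)  # forward-Euler membrane update
        if v >= v_thresh:
            spikes.append(t)                   # record spike time
            v = v_rest                         # reset after firing
    return spikes

# Constant supra-threshold drive produces regular spiking.
spikes = lif_sim([1.5] * 200)
```

Under constant input the model fires periodically; learning rules such as STDP, discussed in the review, then adjust synaptic weights as a function of the relative timing of such spikes.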

    Scalable large margin pairwise learning algorithms

    2019 Summer. Includes bibliographical references. Classification is a major task in machine learning and data mining applications. Many of these applications involve building a classification model using a large volume of imbalanced data. In such an imbalanced learning scenario, the area under the ROC curve (AUC) has proven to be a reliable performance measure to evaluate a classifier. Therefore, it is desirable to develop scalable learning algorithms that maximize the AUC metric directly. The kernelized AUC maximization machines have established a superior generalization ability compared to linear AUC machines. However, the computational cost of the kernelized machines hinders their scalability. To address this problem, we propose a large-scale nonlinear AUC maximization algorithm that learns a batch linear classifier on an approximate feature space computed via the k-means Nyström method. The proposed algorithm is shown empirically to achieve AUC classification performance comparable to, or even better than, the kernel AUC machines, while its training time is faster by several orders of magnitude. However, the computational complexity of the linear batch model compromises its scalability when training sizable datasets. Hence, we develop second-order online AUC maximization algorithms based on a confidence-weighted model. The proposed algorithms exploit second-order information to improve the convergence rate and implement a fixed-size buffer to address the multivariate nature of the AUC objective function. We also extend our online linear algorithms to consider an approximate feature map constructed using random Fourier features in an online setting. The results show that our proposed algorithms outperform, or are at least comparable to, the competing online AUC maximization methods. Despite their scalability, we notice that online first- and second-order AUC maximization methods are prone to suboptimal convergence.
This can be attributed to the limitation of the hypothesis space. A potential improvement can be attained by learning stochastic online variants. However, the vanilla stochastic methods also suffer from slow convergence because of the high variance introduced by the stochastic process. We address the problem of slow convergence by developing a fast-convergence stochastic AUC maximization algorithm. The proposed stochastic algorithm is accelerated using a unique combination of scheduled regularization update and scheduled averaging. The experimental results show that the proposed algorithm performs better than the state-of-the-art online and stochastic AUC maximization methods in terms of AUC classification accuracy. Moreover, we develop a proximal variant of our accelerated stochastic AUC maximization algorithm. The proposed method applies the proximal operator to the hinge loss function and therefore evaluates the gradient of the loss function at the approximated weight vector. Experiments on several benchmark datasets show that our proximal algorithm converges to the optimal solution faster than the previous AUC maximization algorithms.
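    The common core of the algorithms described above is direct optimization of a pairwise surrogate of the AUC. As a minimal sketch of that idea only (plain SGD on the pairwise hinge loss, with none of the thesis's Nyström approximation, confidence weighting, buffering, or acceleration), consider:

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_auc_sgd(X_pos, X_neg, epochs=50, lr=0.01, margin=1.0):
    """SGD on the pairwise hinge surrogate of 1 - AUC:
    mean over (pos, neg) pairs of max(0, margin - w.(x_pos - x_neg))."""
    w = np.zeros(X_pos.shape[1])
    for _ in range(epochs):
        for xp in X_pos:
            xn = X_neg[rng.integers(len(X_neg))]  # sample a negative partner
            d = xp - xn
            if margin - w @ d > 0:                # hinge active: step on this pair
                w += lr * d
    return w

# Linearly separable toy data: positives shifted by +2 in every coordinate.
X_neg = rng.normal(size=(100, 5))
X_pos = rng.normal(size=(100, 5)) + 2.0
w = pairwise_auc_sgd(X_pos, X_neg)
scores_p, scores_n = X_pos @ w, X_neg @ w
auc = np.mean([sp > sn for sp in scores_p for sn in scores_n])
```

The multivariate nature of the objective mentioned in the abstract is visible here: each gradient step touches a positive-negative pair rather than a single example, which is why online variants need a buffer of past examples.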

    Methods and practice of detecting selection in human cancers

    Cancer development and progression is an evolutionary process, and understanding these evolutionary dynamics is important for treatment and diagnosis, as how a cancer evolves determines its future prognosis. This thesis focuses on elucidating selective evolutionary pressures in cancers and somatic tissues using population genetics models and cancer genomics data. First, a model for the expected diversity in the absence of selection was developed. This neutral model of evolution predicts that under neutrality the frequency of subclonal mutations is expected to follow a power-law distribution. Surprisingly, more than 30% of cancers across multiple cohorts fitted this model. The next part of the thesis develops models to explore the effects of selection, given that these should be observable as deviations from the neutral prediction. For this I developed two approaches. The first approach investigated selection at the level of individual samples and showed that a characteristic pattern of clusters of mutations is observed in deep sequencing experiments. Using a mathematical model, information encoded within these clusters can be used to measure the relative fitness of subclones and the time they emerge during tumour evolution. With this I observed strikingly high fitness advantages of above 20% for subclones. The second approach enables measuring recurrent patterns of selection in cohorts of sequenced cancers using dN/dS, the ratio of non-synonymous to synonymous mutations, a method originally developed for molecular species evolution. This approach demonstrates how selection coefficients can be extracted by combining measurements of dN/dS with the size of mutational lineages. With this approach selection coefficients were again observed to be strikingly high. Finally, I looked at population dynamics in normal colonic tissue, given that many mutations accumulate in physiologically normal tissue.
I found that the current view of stem cell dynamics was unable to explain sequencing data from individual colonic crypts. New models were therefore proposed that introduce evolution on a longer time scale, suppressing the accumulation of mutations; these models appear consistent with the data.
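    The neutral prediction referred to above is that, in the absence of selection, subclonal mutations occur at frequency f with density proportional to 1/f², so the cumulative count M(f) of mutations above frequency f is linear in 1/f. The sketch below illustrates that power-law check on synthetic frequencies drawn exactly from the neutral density; it is an illustration of the model's prediction, not the thesis's actual fitting pipeline, and the frequency window is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample mutation frequencies from density ~ 1/f^2 on [f_min, f_max]
# by inverse-transform sampling of its CDF.
f_min, f_max = 0.1, 0.25
u = rng.random(5000)
freqs = 1.0 / (1.0 / f_min - u * (1.0 / f_min - 1.0 / f_max))

# Cumulative count M(f) of mutations at frequency >= f, fitted against 1/f.
grid = np.linspace(f_min, f_max, 50)
M = np.array([(freqs >= f).sum() for f in grid])
slope, intercept = np.polyfit(1.0 / grid, M, 1)
pred = slope / grid + intercept
r2 = 1 - ((M - pred) ** 2).sum() / ((M - M.mean()) ** 2).sum()  # goodness of linear fit
```

A real cohort analysis would apply this test to variant allele frequencies from sequencing data; samples whose M(f) departs from linearity in 1/f are the candidates for subclonal selection that the thesis's two approaches then quantify.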

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access two-volume set constitutes the proceedings of the 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2021, which was held during March 27 – April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg and changed to an online format due to the COVID-19 pandemic. The total of 41 full papers presented in the proceedings was carefully reviewed and selected from 141 submissions. The volumes also contain 7 tool papers, 6 tool demo papers, and 9 SV-Comp competition papers. The papers are organized in topical sections as follows: Part I: Game Theory; SMT Verification; Probabilities; Timed Systems; Neural Networks; Analysis of Network Communication. Part II: Verification Techniques (not SMT); Case Studies; Proof Generation/Validation; Tool Papers; Tool Demo Papers; SV-Comp Tool Competition Papers