9 research outputs found

    Pattern Recognition Using Associative Memories

    The human brain is extremely effective at performing pattern recognition, even in the presence of noisy or distorted inputs. Artificial neural networks attempt to imitate the structure of the brain, often with a view to mimicking its success. The binary correlation matrix memory (CMM) is a particular type of neural network that is capable of learning and recalling associations extremely quickly, as well as displaying a high storage capacity and having the ability to generalise from patterns already learned. CMMs have been used as a major component of larger architectures designed to solve a wide range of problems, such as rule chaining, character recognition, or more general pattern recognition. It is clear that the memory requirement of the CMMs will thus have a significant impact on the scalability of such architectures. A domain-specific language for binary CMMs is developed, alongside an implementation that uses an efficient storage mechanism which allows memory usage to scale linearly with the number of associations stored. An architecture for rule chaining is then examined in detail, showing that the problem of scalability is indeed settled, before identifying and resolving a number of important limitations to its capabilities. Finally, an architecture for pattern recognition is investigated, and a memory-efficient method to incorporate general invariance into this architecture is presented; this is specifically tested with scale invariance, although the mechanism can be used with other types of invariance such as skew or rotation.
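    As a concrete illustration of the binary CMM described above, the following is a minimal sketch (Python with NumPy) of training by OR-ing outer products and recalling with a fixed-weight threshold. The class and function names are illustrative only and do not reflect the thesis's ENAMeL implementation.

```python
# Minimal sketch of a binary correlation matrix memory (CMM), for illustration
# only; not the thesis's implementation. Inputs and outputs are fixed-weight
# binary vectors, and training ORs an outer product into the matrix.
import numpy as np

class BinaryCMM:
    def __init__(self, input_len, output_len):
        # The correlation matrix itself: one bit per (input, output) pair.
        self.M = np.zeros((input_len, output_len), dtype=np.uint8)

    def train(self, x, y):
        # Hebbian-style update: OR the outer product of the pair into the matrix.
        self.M |= np.outer(x, y).astype(np.uint8)

    def recall(self, x, weight):
        # Sum activity over the rows selected by the input, then threshold.
        sums = x.astype(np.int32) @ self.M
        # L-max style thresholding: keep the 'weight' strongest output bits.
        threshold = np.sort(sums)[-weight]
        return (sums >= threshold).astype(np.uint8)

# Example: associate two sparse patterns and recall one of them.
rng = np.random.default_rng(0)
def sparse_vector(length, weight):
    v = np.zeros(length, dtype=np.uint8)
    v[rng.choice(length, size=weight, replace=False)] = 1
    return v

x, y = sparse_vector(256, 4), sparse_vector(256, 4)
cmm = BinaryCMM(256, 256)
cmm.train(x, y)
assert np.array_equal(cmm.recall(x, 4), y)
```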

    ‘Quantum’ Parallel computation with neural networks

    Correlation matrix memories have been successfully applied to many domains. This work implements a production system put forward in [Austin, 2003] to demonstrate its viability as an efficient rule-chaining process. Background information on rule-chaining and CMMs is given, followed by a review of the proposed production system. Throughout the iterative development process, experimentation is performed in order to investigate the effects of changing the properties of the vectors used in this system. The results show that generating vectors using the algorithm proposed in [Baum, 1988], with a weight close to log2 of the vector length, provides the highest storage capacity. The simple system implemented in this work performs rule-chaining effectively. This leads to the conclusion that the proposed production system is viable, and that this area warrants further work.
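    A minimal sketch of the vector property the experiments point to: fixed-weight binary codes whose weight is close to log2 of the vector length. Random bit placement is used here as a simple stand-in and is not the exact construction from [Baum, 1988].

```python
# Sketch: fixed-weight binary code vectors whose weight is close to log2 of the
# vector length, the setting reported above as giving the highest storage
# capacity. Random bit placement is an assumption of this illustration.
import math
import numpy as np

def fixed_weight_vector(length, rng):
    weight = max(1, round(math.log2(length)))   # e.g. length 1024 -> weight 10
    v = np.zeros(length, dtype=np.uint8)
    v[rng.choice(length, size=weight, replace=False)] = 1
    return v

rng = np.random.default_rng(42)
tokens = {name: fixed_weight_vector(1024, rng) for name in ("A", "B", "C")}
print({name: int(v.sum()) for name, v in tokens.items()})  # each has weight 10
```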

    Full Implementation of an Estimation of Distribution Algorithm on a GPU

    We present an implementation of an Estimation of Distribution Algorithm – specifically a variant of the Bayesian Optimisation Algorithm (BOA) – using GPGPU. Every aspect of the algorithm is executed on the device, and it makes effective use of multiple GPU devices in a single machine. As with other EDAs, our implementation is generic in that it may be applied to any problem for which solutions may be represented as binary strings. For the purpose of this paper, we apply it to a particular problem known to be difficult for metaheuristic algorithms due to high interdependency between variables: finding the lowest energy state of an Ising Spin Glass. We show that our GPU implementation demonstrates a speedup in excess of 80x compared with an equivalent CPU implementation. To our knowledge, this is the first EDA to be implemented fully on the GPU.
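    For context, the sketch below shows the kind of fitness evaluation the benchmark implies: the energy of a 2D Ising Spin Glass over a binary-string solution. The random ±1 couplings and periodic boundaries are assumptions of this illustration, not details taken from the paper's GPU kernels.

```python
# Sketch of an Ising spin glass energy for a binary-string candidate solution.
# Bits {0, 1} are mapped to spins {-1, +1}; couplings J are random +/-1 values.
import numpy as np

def ising_energy(bits, J_right, J_down):
    s = 2 * bits.astype(np.int32) - 1                      # spins in {-1, +1}
    # Interaction with the right and lower neighbour (periodic boundaries).
    e_right = np.sum(J_right * s * np.roll(s, -1, axis=1))
    e_down = np.sum(J_down * s * np.roll(s, -1, axis=0))
    return -(e_right + e_down)

rng = np.random.default_rng(1)
n = 8
J_right = rng.choice([-1, 1], size=(n, n))
J_down = rng.choice([-1, 1], size=(n, n))
bits = rng.integers(0, 2, size=(n, n))                      # a random candidate
print("energy:", ising_energy(bits, J_right, J_down))
```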

    Hyper-quicksort: energy efficient sorting via the Templar framework for Template Method Hyper-heuristics

    Scalability remains an issue for program synthesis:
    - We don't yet know how to generate sizeable algorithms from scratch.
    - Generative approaches such as GP still work best at the scale of expressions (though there are some recent promising results).
    - Formal approaches require a strong mathematical background.
    - ... but human ingenuity already provides a vast repertoire of specialised algorithms, usually with known asymptotic behaviour.
    Given these limitations, how can we best use generative hyper-heuristics to improve upon human-designed algorithms?
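    To make the Template Method idea concrete, here is a small sketch of quicksort with a single developer-specified variation point (pivot selection) that a generative hyper-heuristic could fill in. The names are hypothetical and are not the Templar framework's API.

```python
# Sketch of a Template Method hyper-heuristic: the overall quicksort skeleton
# is fixed, while the pivot-selection "variation point" is a pluggable hook
# that a search process could replace. Illustrative names only.
import random

def quicksort(xs, choose_pivot):
    if len(xs) <= 1:
        return xs
    pivot = choose_pivot(xs)                    # the variation point
    left = [x for x in xs if x < pivot]
    mid = [x for x in xs if x == pivot]
    right = [x for x in xs if x > pivot]
    return quicksort(left, choose_pivot) + mid + quicksort(right, choose_pivot)

# Two candidate pivot heuristics a hyper-heuristic might evaluate.
def first_element(xs):
    return xs[0]

def median_of_three(xs):
    sample = sorted(random.sample(xs, min(3, len(xs))))
    return sample[len(sample) // 2]

data = [5, 3, 8, 1, 9, 2, 7]
print(quicksort(data, first_element))
print(quicksort(data, median_of_three))
```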

    A Rule Chaining Architecture Using a Correlation Matrix Memory

    This paper describes an architecture based on superimposed distributed representations and distributed associative memories which is capable of performing rule chaining. The use of a distributed representation allows the system to utilise memory efficiently, and the use of superposition reduces the time complexity of a tree search to O(d), where d is the depth of the tree. Our experimental results show that the architecture performs rule chaining effectively, but that further investigation is needed to address capacity considerations.
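    A small sketch of why superposition yields O(d) search: OR-ing the vectors for all nodes at one level lets a single CMM recall return the superposition of all their children, so each level costs one pass regardless of the branching factor. The Willshaw-style threshold at the input weight is an assumption of this illustration.

```python
# Sketch: one CMM recall over a superposition of inputs recovers the
# superposition of the corresponding outputs (plus possible crosstalk bits).
import numpy as np

rng = np.random.default_rng(2)
N, W = 512, 4                                    # vector length and weight

def pattern():
    v = np.zeros(N, dtype=np.uint8)
    v[rng.choice(N, size=W, replace=False)] = 1
    return v

parent_a, parent_b = pattern(), pattern()
child_a, child_b = pattern(), pattern()

# Train one CMM with both parent -> child associations.
M = (np.outer(parent_a, child_a) | np.outer(parent_b, child_b)).astype(np.uint8)

# Superimpose both parents and recall once.
superposed = parent_a | parent_b
sums = superposed.astype(np.int32) @ M
recalled = (sums >= W).astype(np.uint8)          # threshold at a single input's weight

# The result contains (at least) the superposition of both children.
assert np.all(recalled >= (child_a | child_b))
```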

    Embedded Dynamic Improvement

    We discuss the useful role that can be played by a subtype of improvement programming, which we term 'Embedded Dynamic Improvement'. In this approach, developer-specified variation points define the scope of improvement. A search framework is embedded at these variation points, facilitating the creation of adaptive software that can respond online to changes in its execution environment.
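    A minimal sketch of the idea, assuming a hypothetical VariationPoint wrapper rather than the authors' framework: alternatives registered at a variation point are measured as the program runs, and the cheapest one observed so far is selected online.

```python
# Illustrative sketch of a developer-specified variation point that adapts
# online: a tiny embedded search (greedy selection by observed running time)
# chooses among registered alternatives at run time. Hypothetical names only.
import time

class VariationPoint:
    def __init__(self, *alternatives):
        self.alternatives = list(alternatives)
        self.costs = {alt: 0.0 for alt in self.alternatives}
        self.calls = {alt: 0 for alt in self.alternatives}

    def __call__(self, *args):
        # Explore each alternative once, then exploit the cheapest so far.
        untried = [a for a in self.alternatives if self.calls[a] == 0]
        alt = untried[0] if untried else min(
            self.alternatives, key=lambda a: self.costs[a] / self.calls[a])
        start = time.perf_counter()
        result = alt(*args)
        self.costs[alt] += time.perf_counter() - start
        self.calls[alt] += 1
        return result

# Two interchangeable implementations of the same task at one variation point.
sort_point = VariationPoint(sorted, lambda xs: sorted(xs, reverse=True)[::-1])
for _ in range(100):
    sort_point(list(range(1000, 0, -1)))
```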

    Extending the Associative Rule Chaining Architecture for Multiple Arity Rules

    The Associative Rule Chaining Architecture uses distributed associative memories and superimposed distributed representations in order to perform rule chaining efficiently [Austin et al., 2012]. Previous work has focused on rules with only a single antecedent; in this work we extend the architecture to work with multiple-arity rules and show that it continues to operate effectively.
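    One simple way to illustrate a two-antecedent rule in a binary CMM is sketched below: superimpose both antecedent vectors as the input and raise the recall threshold to their combined weight, so the consequent appears only when both antecedents are presented. This is an illustration only; the paper's actual multiple-arity mechanism may differ.

```python
# Hedged sketch of a rule "a AND b -> c" in a binary CMM, using superposition
# of the antecedents and a raised recall threshold. Not the paper's mechanism.
import numpy as np

rng = np.random.default_rng(3)
N, W = 512, 4

def pattern():
    v = np.zeros(N, dtype=np.uint8)
    v[rng.choice(N, size=W, replace=False)] = 1
    return v

a, b, c = pattern(), pattern(), pattern()
both = a | b
M = np.outer(both, c).astype(np.uint8)           # store the rule a AND b -> c

def fires(inputs, threshold):
    sums = inputs.astype(np.int32) @ M
    return np.array_equal((sums >= threshold).astype(np.uint8), c)

threshold = int(both.sum())                      # combined antecedent weight
print(fires(both, threshold))                    # True: both antecedents present
print(fires(a, threshold))                       # False: only one antecedent
```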

    Improving the associative rule chaining architecture

    This paper describes improvements to the rule chaining architecture presented in [1]. The architecture uses distributed associative memories to allow the system to utilise memory efficiently, and superimposed distributed representations in order to reduce the time complexity of a tree search to O(d), where d is the depth of the tree. This new work reduces the memory required by the architecture, and can also further reduce the time complexity.

    ENAMeL: a language for binary correlation matrix memories: reducing the memory constraints of matrix memories

    Despite their relative simplicity, Correlation Matrix Memories (CMMs) are an active area of research, as they are able to be integrated into more complex architectures such as the Associative Rule Chaining Architecture (ARCA) [1]. In this architecture, CMMs are used to reduce the time complexity of a tree search from O(b^d) to O(d), where b is the branching factor and d is the depth of the tree. This paper introduces the Extended Neural Associative Memory Language (ENAMeL), a domain-specific language developed to ease the development of applications using correlation matrix memories (CMMs). We discuss various considerations required while developing the language, and techniques used to reduce the memory requirements of CMM-based applications. Finally, we show that the memory requirements of ARCA when using the ENAMeL interpreter compare favourably to our original results [1] run in MATLAB.
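    A sketch of the kind of storage scheme the linear memory-scaling claim suggests: keep, for each active input row, only the set of output indices that have been set, so memory grows with the number of stored associations rather than with the full matrix dimensions. This is illustrative and is not ENAMeL's actual internal representation.

```python
# Sketch of a sparse row-based CMM store: memory grows with the number of set
# bits (associations) rather than with input_len * output_len. Illustrative
# only; not ENAMeL's internal representation.
from collections import defaultdict

class SparseCMM:
    def __init__(self):
        self.rows = defaultdict(set)             # input index -> set of output indices

    def train(self, x_indices, y_indices):
        # x_indices, y_indices: positions of the 1-bits in the two binary vectors.
        for i in x_indices:
            self.rows[i].update(y_indices)

    def recall(self, x_indices, threshold):
        counts = defaultdict(int)
        for i in x_indices:
            for j in self.rows.get(i, ()):
                counts[j] += 1
        return {j for j, c in counts.items() if c >= threshold}

cmm = SparseCMM()
cmm.train({3, 17, 42, 99}, {5, 23, 61, 88})      # one association of weight-4 codes
print(cmm.recall({3, 17, 42, 99}, threshold=4))  # {5, 23, 61, 88}
```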