48 research outputs found

    Encoding techniques for complex information structures in connectionist systems

    Two general information-encoding techniques, called relative position encoding and pattern similarity association, are presented. They are claimed to be a convenient basis for the connectionist implementation of the complex, short-term information processing needed in common-sense reasoning, semantic/pragmatic interpretation of natural language utterances, and other types of high-level cognitive processing. The relationships of the techniques to other connectionist information-structuring methods, and also to methods used in computers, are discussed in detail, and the rich inter-relationships among these other connectionist and computer methods are clarified. The particular, simple forms that relative position encoding and pattern similarity association take in the author's own connectionist system, Conposit, are discussed in order to clarify some issues and to provide evidence that the techniques are indeed useful in practice.
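    To make the two techniques concrete, here is a toy, heavily simplified sketch in Python (not Barnden's Conposit itself): relative position encoding is illustrated by placing symbol patterns in neighbouring cells of a register grid so that their spatial offsets carry the structural roles, and pattern similarity association by treating two registers that hold near-identical patterns as referring to the same item. The grid size, pattern dimension, and similarity threshold are illustrative assumptions.

```python
# Toy illustration of relative position encoding and pattern similarity
# association, under simplifying assumptions (not Conposit itself).
import numpy as np

rng = np.random.default_rng(0)
DIM = 64                      # size of each register's activity pattern
GRID = np.zeros((4, 4, DIM))  # 4x4 grid of registers, all initially silent

def random_pattern():
    """A random sparse binary activity pattern standing in for a symbol."""
    return (rng.random(DIM) < 0.2).astype(float)

# Symbols for a tiny proposition, e.g. loves(john, mary).
loves, john, mary = random_pattern(), random_pattern(), random_pattern()

# Relative position encoding: place the predicate at one register and its two
# arguments in the registers immediately to its right; the *offsets*
# (0,+1) and (0,+2), not any explicit link, encode the argument roles.
GRID[1, 0], GRID[1, 1], GRID[1, 2] = loves, john, mary

# Pattern similarity association: elsewhere in the grid, a register holding a
# pattern similar to `john` is taken to co-refer with it.
GRID[3, 3] = john.copy()

def associated(p, q, threshold=0.9):
    """Two registers co-refer if their patterns are sufficiently similar."""
    denom = np.linalg.norm(p) * np.linalg.norm(q)
    return denom > 0 and (p @ q) / denom >= threshold

print(associated(GRID[1, 1], GRID[3, 3]))  # True: same item in two places
print(associated(GRID[1, 1], GRID[1, 2]))  # False: different items
```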

    The Philosophy and Neuroscience Movement

    A movement dedicated to applying neuroscience to traditional philosophical problems, and to using philosophical methods to illuminate issues in neuroscience, began about twenty-five years ago. Results in neuroscience have affected how we see traditional areas of philosophical concern such as perception, belief formation, and consciousness. There is an interesting interaction between some of the distinctive features of neuroscience and important general issues in the philosophy of science. And recent neuroscience has thrown up a few conceptual issues that philosophers are perhaps best trained to deal with. After sketching the history of the movement, we explore the relationships between neuroscience and philosophy and introduce some of the specific issues that have arisen.

    An evaluation of standard retrieval algorithms and a binary neural approach

    In this paper we evaluate a selection of data retrieval algorithms for storage efficiency, retrieval speed, and partial-matching capabilities using a large information retrieval dataset. We evaluate standard data structures, such as inverted file lists and hash tables, as well as a novel binary neural network that incorporates single-epoch training, superimposed coding, and associative matching in a binary matrix data structure. We identify the strengths and weaknesses of the approaches. From our evaluation, the novel neural network approach is superior with respect to training speed and partial-match retrieval time. From the results, we make recommendations for the appropriate usage of the novel neural approach. (C) 2001 Elsevier Science Ltd. All rights reserved.
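    The kind of binary neural approach the abstract describes can be sketched as a binary correlation-matrix (Willshaw-style) memory: pairs of sparse binary codes are stored in a single pass by superimposing their outer products into one binary matrix, and retrieval thresholds a matrix-vector product. The sketch below is an illustration under assumed vector sizes and a made-up term/document list, not the authors' exact system.

```python
# Minimal sketch of a binary correlation-matrix memory with single-epoch
# training, superimposed coding, and thresholded associative retrieval.
# Dimensions, sparsity, and the term/document lists are illustrative.
import numpy as np

rng = np.random.default_rng(1)
IN_DIM, OUT_DIM, ACTIVE = 256, 256, 4   # sparse binary codes with ACTIVE ones

def sparse_code(dim, active):
    v = np.zeros(dim, dtype=np.uint8)
    v[rng.choice(dim, size=active, replace=False)] = 1
    return v

# Assign each (hypothetical) index term a sparse input code and each document
# identifier a sparse output code.
terms = {w: sparse_code(IN_DIM, ACTIVE) for w in ["neural", "binary", "hash"]}
docs  = {d: sparse_code(OUT_DIM, ACTIVE) for d in ["doc1", "doc2", "doc3"]}

# Single-epoch training: superimpose the outer product of every
# term/document pair into one binary weight matrix (logical OR).
W = np.zeros((OUT_DIM, IN_DIM), dtype=np.uint8)
postings = [("neural", "doc1"), ("binary", "doc1"), ("hash", "doc2")]
for term, doc in postings:
    W |= np.outer(docs[doc], terms[term]).astype(np.uint8)

def recall(term_code, threshold=ACTIVE):
    """Associative retrieval: threshold the matrix-vector product."""
    activation = W @ term_code
    return (activation >= threshold).astype(np.uint8)

# Partial matching: compare the recalled pattern against every stored
# document code and report the overlaps.
recalled = recall(terms["neural"])
for name, code in docs.items():
    print(name, int(recalled @ code))   # doc1 scores highest for "neural"
```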

    Visuospatial Sequence Learning without Seeing

    Background: The ability to detect and integrate associations between unrelated items that are close in space and time is a key feature of human learning and memory. Learning sequential associations between non-adjacent visual stimuli (higher-order visuospatial dependencies) can occur either with or without awareness (explicit vs. implicit learning) of the products of learning. Existing behavioural and neurocognitive studies of explicit and implicit sequence learning, however, are based on conscious access to the sequence of target locations and, typically, on conditions where the locations for orienting, or motor, responses coincide with the locations of the target sequence. Methodology/Principal Findings: Dichoptic stimuli were presented in a novel sequence learning task using a mirror stereoscope to mask the eye of origin of visual input from conscious awareness. We demonstrate that conscious access to the sequence of target locations, and responses that coincide with the structure of the target sequence, are dispensable features when learning higher-order visuospatial sequences, as assessed by a recognition test, even though the trained and untrained recognition sequences were identical when viewed at a conscious, binocular level and differed only at the level of the masked sequential associations. Conclusions/Significance: These results demonstrate that unconscious processing can support perceptual learning of higher-order sequential associations through interocular integration of retinotopically based codes stemming from monocular eye-of-origin information. Furthermore, unlike other forms of perceptual associative learning, visuospatial attention did not need to be directed to the locations of the target sequence. More generally, the results pose a challenge to neural models of learning to account for a previously unknown capacity of the human visual system: the detection, learning, and recognition of higher-order sequential associations when observers are unable to see the target sequence or to perform responses that coincide with its structure.

    HeiDI: A model for Pavlovian learning and performance with reciprocal associations

    Associative treatments of how Pavlovian conditioning affects conditioned behavior are rudimentary: A simple ordinal mapping is held to exist between the strength of an association (V) between a conditioned stimulus (CS) and an unconditioned stimulus (US; i.e., V_CS-US) and conditioned behavior in a given experimental preparation. The inadequacy of this simplification is highlighted by recent studies that have taken multiple measures of conditioned behavior: Different measures of conditioned behavior provide the basis for drawing opposite conclusions about V_CS-US across individual animals. Here, we develop a simple model involving reciprocal associations between the CS and US (V_CS-US and V_US-CS) that simulates these qualitative individual differences in conditioned behavior. The new model, HeiDI (How excitation and inhibition Determine Ideo-motion), enables a broad range of phenomena to be accommodated, which are either beyond the scope of extant models or require them to appeal to additional (learning) processes. It also provides an impetus for new lines of inquiry and generates novel predictions.
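    The reciprocal-association idea can be illustrated with a toy delta-rule simulation in which the forward association V_CS-US grows toward an asymptote set by the salience of the US, and the reciprocal association V_US-CS grows toward an asymptote set by the salience of the CS. The salience values and update rule below are illustrative assumptions rather than the published HeiDI equations.

```python
# Toy sketch of reciprocal CS-US associations updated by a generic delta
# rule; constants and the update rule are illustrative, not HeiDI's own.

def simulate_conditioning(alpha_cs, alpha_us, n_trials=50):
    """Paired CS-US presentations with reciprocal delta-rule updates."""
    v_cs_us = 0.0   # forward association, CS -> US
    v_us_cs = 0.0   # reciprocal association, US -> CS
    history = []
    for _ in range(n_trials):
        v_cs_us += alpha_cs * (alpha_us - v_cs_us)   # forward update
        v_us_cs += alpha_us * (alpha_cs - v_us_cs)   # reciprocal update
        history.append((v_cs_us, v_us_cs))
    return history

# Two "individuals" differing only in CS salience end up with similar forward
# strengths but different reciprocal strengths, which is one way qualitative
# individual differences in conditioned behaviour could arise.
for alpha_cs in (0.2, 0.6):
    v_cs_us, v_us_cs = simulate_conditioning(alpha_cs, alpha_us=0.4)[-1]
    print(f"alpha_cs={alpha_cs}: V_CS-US={v_cs_us:.2f}, V_US-CS={v_us_cs:.2f}")
```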

    Towards Lifelong Reasoning with Sparse and Compressive Memory Systems

    Humans have a remarkable ability to remember information over long time horizons. When reading a book, we build up a compressed representation of the past narrative, such as the characters and events that have built up the story so far. We can do this even if they are separated from the current text by thousands of words, or by long stretches of time between readings. During our lives, we build up and retain memories that tell us where we live, what we have experienced, and who we are. Adding memory to artificial neural networks has been transformative in machine learning, allowing models to extract structure from temporal data and to model the future more accurately. However, the capacity for long-range reasoning in current memory-augmented neural networks is considerably limited in comparison to humans, despite access to powerful modern computers. This thesis explores two prominent approaches to scaling artificial memories to lifelong capacity: sparse access and compressive memory structures. With sparse access, only a very small subset of pertinent memories is inspected, retrieved, and updated. Sparse memory access is found to be beneficial for learning, allowing improved data efficiency and improved generalisation. From a computational perspective, sparsity allows scaling to memories with millions of entities on a simple CPU-based machine. It is shown that memory systems which compress the past into a smaller set of representations reduce redundancy, can speed up the learning of rare classes, and can improve upon classical data structures in database systems. Compressive memory architectures are also devised for sequence prediction tasks and are observed to significantly advance the state of the art in modelling natural language.
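    The sparse-access idea can be sketched in a few lines of NumPy: a query is compared against every key in a large slot-based memory, but only the top-k most similar slots are read from and, on writing, updated. The memory size, key dimension, value of k, and write rule below are illustrative assumptions, not the configurations used in the thesis.

```python
# Minimal sketch of sparse top-k memory access over a large slot-based
# external memory; sizes and the write rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
NUM_SLOTS, KEY_DIM, TOP_K = 1_000_000, 32, 8   # million-slot memory, tiny k

keys   = rng.standard_normal((NUM_SLOTS, KEY_DIM)).astype(np.float32)
values = rng.standard_normal((NUM_SLOTS, KEY_DIM)).astype(np.float32)

def sparse_read(query, k=TOP_K):
    """Read from only the k most relevant memory slots."""
    scores = keys @ query                       # similarity to every key
    top = np.argpartition(scores, -k)[-k:]      # indices of the top-k slots
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                    # softmax over the k slots only
    return weights @ values[top], top           # weighted read + touched slots

def sparse_write(new_key, new_value, touched):
    """Overwrite one of the slots touched by the last read (a simplistic
    stand-in for the least-recently-used rules used in practice)."""
    slot = touched[np.argmin(keys[touched] @ new_key)]  # weakest match
    keys[slot], values[slot] = new_key, new_value

query = rng.standard_normal(KEY_DIM).astype(np.float32)
read_vector, touched = sparse_read(query)
print(read_vector.shape, touched.shape)         # (32,) (8,)
```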

    Author index—Volumes 1–89
