
    Object Action Complexes as an Interface for Planning and Robot Control

    Much prior work on integrating high-level artificial intelligence planning technology with low-level robotic control has foundered on the significant representational differences between these two areas of research. We discuss a proposed solution to this representational discontinuity in the form of object-action complexes (OACs). Pairing actions and objects in a single interface representation captures the needs of both reasoning levels, and will enable machine learning of high-level action representations from low-level control representations.

    I. Introduction and Background. The difference between the representations that are effective for continuous control of robotic systems and those used in discrete symbolic AI presents a significant challenge for integrating AI planning research and robotics. These areas of research should be abl…
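    To make the pairing concrete, here is a minimal sketch of what an OAC-style interface record could look like: an object paired with a low-level action routine, a symbolic precondition, and a predicted effect a planner can reason over. All names and fields are illustrative assumptions, not the paper's actual formalism.

```python
# Hypothetical sketch of an OAC-style interface record: the planner sees
# (object_id, precondition, predicted_effect); the controller sees `action`.
from dataclasses import dataclass
from typing import Callable, Dict

State = Dict[str, float]  # e.g. continuous sensor readings keyed by name

@dataclass
class ObjectActionComplex:
    object_id: str                         # perceived object this OAC is grounded in
    action: Callable[[State], State]       # low-level controller routine
    precondition: Callable[[State], bool]  # symbolic applicability test
    predicted_effect: Dict[str, float]     # expected state change, for planning

    def execute(self, state: State) -> State:
        """Run the low-level action if its symbolic precondition holds."""
        if not self.precondition(state):
            raise ValueError(f"OAC for {self.object_id}: precondition not met")
        return self.action(state)

# Usage with made-up object and state names:
grasp_cup = ObjectActionComplex(
    object_id="cup-3",
    action=lambda s: {**s, "gripper_closed": 1.0},
    precondition=lambda s: s.get("cup_visible", 0.0) > 0.5,
    predicted_effect={"holding_cup": 1.0},
)
print(grasp_cup.execute({"cup_visible": 1.0}))
```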

    Towards Artificial Language Learning in a Potts Attractor Network

    It remains a mystery how children acquire natural languages: languages far beyond the few symbols that a young chimp struggles to learn, and with complex rules that incomparably surpass the repetitive structure of bird songs. How should one explain the emergence of such a capacity from the basic elements of the nervous system, namely neuronal networks? To understand the brain mechanisms underlying language, specifically sentence construction, different approaches have been attempted to implement an artificial neural network that encodes words and constructs sentences (see e.g. Hummel and Holyoak, 1997; Huyck, 2009; Velde and de Kamps, 2006; Stewart and Eliasmith, 2009). These attempts differ in how the sentence constituents are represented – either individually and locally, or in a distributed fashion – and in how these constituents are bound together.

    In LISA (Hummel and Holyoak, 1997), each sentence constituent (a word, a phrase, or even a proposition) is represented individually by a unit – intended to stand for a population of neurons (Hummel and Holyoak, 2003) – and the relevant constituents are activated synchronously in the construction of a sentence (or the inference of a proposition). Given the productivity of language – the ability of humans to create many possible sentences out of a limited vocabulary – this representation results in an exponential growth in the number of units needed to represent structure. To avoid this problem, Neural Blackboard Architectures (Velde and de Kamps, 2006) were proposed as systems endowed with dynamic bindings between assemblies of words, roles (e.g. theme or agent), and word categories (e.g. nouns or verbs). A neural blackboard architecture resembles a switchboard that wires sentence constituents together via circuits, using highly complex and meticulously (and unrealistically) organized connections.

    As opposed to these localist approaches, in a Vector Symbolic Architecture (Gayler, 2003; Plate, 1991) words are represented in a fully distributed fashion as vectors. Words are bound (and merged) together by algebraic operations in the vector space, e.g. tensor products (Smolensky, 1990) or circular convolution (Plate, 1991). To give a biological account, some steps have been taken towards a neural implementation of such operations (Stewart and Eliasmith, 2009). Another distributed approach implemented a simple recurrent neural network that predicts the next word in a sentence (Elman, 1991). Apart from the limited language size the network could deal with (Elman, 1993), this system lacked an explicit representation of syntactic constituents, resulting in a lack of grammatical knowledge in the network (Borensztajn, 2011; Velde and de Kamps, 2006). Despite all these attempts, there is still no neural model that addresses the challenges of language size, the semantics–syntax distinction, word binding, and word representation in a neurally plausible manner. We are exploring a novel approach to these challenges: first constructing an artificial language of intermediate complexity, and then implementing a neural network, as a simplified cortical model of sentence production, which stores the vocabulary and the grammar of the artificial language in a neurally inspired manner on two components, one semantic and one syntactic.
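    As a concrete illustration of the distributed binding operation mentioned above, the sketch below implements circular-convolution binding in the style of Plate's holographic reduced representations. The dimensionality and the similarity check are assumptions chosen for the demo.

```python
# Circular-convolution binding and approximate unbinding (Plate-style VSA).
import numpy as np

rng = np.random.default_rng(0)
n = 1024
agent = rng.normal(0, 1 / np.sqrt(n), n)  # distributed vector for a role
john = rng.normal(0, 1 / np.sqrt(n), n)   # distributed vector for a filler

def bind(a, b):
    """Circular convolution, computed in the Fourier domain."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(trace, a):
    """Approximate inverse: bind with the involution of `a`."""
    a_inv = np.concatenate(([a[0]], a[:0:-1]))  # reverse all but first element
    return bind(trace, a_inv)

trace = bind(agent, john)         # bound pair, same dimensionality as the parts
recovered = unbind(trace, agent)  # noisy copy of `john`
cos = recovered @ john / (np.linalg.norm(recovered) * np.linalg.norm(john))
print(cos)  # well above chance (~0 for unrelated vectors): recoverable by clean-up
```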
As the training language for the network, we have constructed BLISS (Pirmoradian and Treves, 2011), a scaled-down synthetic language of intermediate complexity, with about 150 words, 40 production rules, and a definition of semantics reduced to statistical dependence between words. Chapter 2 explains the details of the implementation of BLISS. As a sentence production model, we have implemented a Potts attractor neural network, whose units hypothetically represent patches of cortex. The choice of the Potts network for sentence production has been mainly motivated by the latching dynamics it exhibits (Kropff and Treves, 2006): an ability to spontaneously hop, or latch, across memory patterns stored as dynamical attractors, thus producing a long or even infinite sequence of patterns, at least in some regimes (Russo and Treves, 2012). The goal is to train the Potts network on a corpus of sentences in BLISS. This involves setting first the structure of the network, then the algorithm generating word representations, and finally the protocol to train the network on the specific transitions present in the BLISS corpus, using both auto- and hetero-associative learning rules. Chapter 3 explains the procedure we have adopted for word representation in the network. The last step utilizes the spontaneous latching dynamics of the Potts network, the word representations we have developed, and, crucially, hetero-associative weights favouring specific transitions, to generate, with a suitable associative training procedure, sentences "uttered" by the network. This final stage of spontaneous sentence production is explained in Chapter 4.
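The interplay of the two learning rules can be shown in a toy model. The sketch below uses binary Hopfield-style units rather than multi-state Potts units, and omits the adaptation (unit fatigue) that drives true latching; it only replays one stored sequence, with all parameter values assumed for the demo.

```python
# Auto-associative weights store each pattern as an attractor; a
# hetero-associative term biases the dynamics toward the next pattern.
import numpy as np

rng = np.random.default_rng(1)
n, n_patterns = 400, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n))

W_auto = (patterns.T @ patterns) / n             # Hebbian outer products
np.fill_diagonal(W_auto, 0)
W_hetero = (patterns[1:].T @ patterns[:-1]) / n  # pattern mu cues pattern mu+1

state, lam = patterns[0].copy(), 1.5             # lam > 1 so transitions win
for step in range(6):
    field = W_auto @ state + lam * (W_hetero @ state)
    state = np.where(field >= 0, 1, -1)
    print(step, np.round(patterns @ state / n, 2))  # overlap with each pattern
# The overlap peak moves 0 -> 1 -> 2 -> 3 -> 4 and then stays: a stored
# sequence replayed once the transition weights are trained in.
```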

    A History of Spike-Timing-Dependent Plasticity

    How learning and memory are achieved in the brain is a central question in neuroscience. Key to today's research into information storage in the brain is the concept of synaptic plasticity, a notion that has been heavily influenced by Hebb's (1949) postulate. Hebb conjectured that repeatedly and persistently co-active cells should increase connective strength among populations of interconnected neurons as a means of storing a memory trace, also known as an engram. Hebb was certainly not the first to make such a conjecture, as we show in this history. Nevertheless, literally thousands of studies into the classical frequency-dependent paradigm of cellular learning rules were directly inspired by the Hebbian postulate. In more recent years, however, a novel concept in cellular learning has emerged in which temporal order, rather than frequency, is emphasized. This new learning paradigm – known as spike-timing-dependent plasticity (STDP) – has rapidly gained tremendous interest, perhaps because of its combination of elegant simplicity, biological plausibility, and computational power. But what are the roots of today's STDP concept? Here, we discuss several centuries of diverse thinking, beginning with philosophers such as Aristotle, Locke, and Ribot, traversing, e.g., Lugaro's plasticità and Rosenblatt's perceptron, and culminating with the discovery of STDP. We highlight interactions between theoretical and experimental fields, showing how discoveries sometimes occurred in parallel, seemingly without much knowledge of the other field, and sometimes via concrete back-and-forth communication. We point out where future directions may lie, including interneuron STDP, the functional impact of STDP, its mechanisms and neuromodulatory regulation, and the linking of STDP to the developmental formation and continuous plasticity of neuronal networks.
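    For readers unfamiliar with the rule itself, the canonical pair-based form of STDP fits in a few lines. The exponential window below is the standard textbook formulation, and the parameter values are illustrative assumptions in the range typically fitted to data, not numbers taken from this review.

```python
# Pair-based STDP: potentiation when the presynaptic spike precedes the
# postsynaptic one, depression otherwise, decaying with the timing gap.
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for dt = t_post - t_pre, in milliseconds."""
    dt = np.asarray(dt_ms, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),    # pre before post: LTP
                    -a_minus * np.exp(dt / tau_minus))  # post before pre: LTD

print(stdp_dw([-40, -10, 10, 40]))  # depression for dt < 0, potentiation for dt > 0
```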

    Integer Sparse Distributed Memory and Modular Composite Representation

    Challenging AI applications, such as cognitive architectures, natural language understanding, and visual object recognition, share some basic operations, including pattern recognition, sequence learning, clustering, and association of related data. Both the representations used and the structure of a system significantly influence which tasks and problems are most readily supported. A memory model and a representation that facilitate these basic tasks would greatly improve the performance of these challenging AI applications.

    Sparse Distributed Memory (SDM), based on large binary vectors, has several desirable properties – auto-associativity, content addressability, distributed storage, and robustness to noisy inputs – that would facilitate the implementation of challenging AI applications. Here I introduce two variations on the original SDM, the Extended SDM and the Integer SDM, that significantly improve these desirable properties, as well as a new form of reduced description representation named MCR.

    Extended SDM, which uses word vectors larger than its address vectors, enhances hetero-associativity, improving the storage of sequences of vectors as well as of other data structures. A novel sequence learning mechanism is introduced, and several experiments demonstrate the capacity and sequence learning capability of this memory.

    Integer SDM uses modular integer vectors rather than binary vectors, improving the representational capability of the memory and its noise robustness. Several experiments show its capacity and noise robustness, and theoretical analyses of its capacity and fidelity are also presented.

    A reduced description represents a whole hierarchy by a single high-dimensional vector, from which individual items can be recovered and which can be used directly in complex calculations and procedures, such as making analogies; moreover, the hierarchy can be reconstructed from the single vector. Modular Composite Representation (MCR), a new reduced description model for the representations used in challenging AI applications, provides an attractive tradeoff between expressiveness and simplicity of operations. A theoretical analysis of its noise robustness, several experiments, and comparisons with similar models are presented.

    My implementations of these memories include an object-oriented version using a RAM cache, a version for distributed and multi-threaded execution, and a GPU version for fast vector processing.
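    The following sketch illustrates binding with modular integer vectors in the spirit of MCR as summarized above: elementwise addition modulo r binds two vectors, subtraction unbinds, and similarity uses a wrap-around distance. The exact MCR operations and parameters may differ; this is only an illustration with assumed values.

```python
# Modular integer vectors: exact unbinding via modular subtraction.
import numpy as np

rng = np.random.default_rng(2)
n, r = 1000, 16                  # dimensionality and modulus (assumed values)
x = rng.integers(0, r, n)
y = rng.integers(0, r, n)

bound = (x + y) % r              # binding: elementwise modular sum
recovered = (bound - x) % r      # unbinding recovers y exactly
assert np.array_equal(recovered, y)

def circular_similarity(a, b, r=r):
    """1.0 when equal, about 0.5 for unrelated random vectors."""
    d = np.abs(a - b)
    d = np.minimum(d, r - d)     # circular (wrap-around) distance per component
    return 1.0 - 2.0 * d.mean() / r

print(circular_similarity(recovered, y))              # 1.0
print(circular_similarity(rng.integers(0, r, n), y))  # ~0.5, unrelated vector
```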

    Towards neuro-inspired symbolic models of cognition: linking neural dynamics to behaviors through asynchronous communications

    A computational architecture modeling the relation between perception and action is proposed. Basic brain processes representing synaptic plasticity are first abstracted through asynchronous communication protocols and implemented as virtual microcircuits. These are used in turn to build mesoscale circuits embodying parallel cognitive processes. Encoding these circuits into symbolic expressions finally gives rise to neuro-inspired programs that are compiled into pseudo-code to be interpreted by a virtual machine. Quantitative evaluation measures are given by the modification of synapse weights over time. This approach is illustrated by models of simple forms of behavior exhibiting cognition up to the third level of animal awareness. As a potential benefit, symbolic models of emergent psychological mechanisms could lead to the discovery of the learning processes involved in the development of cognition. The executable specifications of an experimental platform allowing for the reproduction of the simulated experiments are given in the “Appendix”
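    A loose sketch of the general idea, with all names and mechanics being assumptions rather than the paper's actual protocol: two virtual units exchange asynchronous messages through a queue, a synapse's weight grows on near-coincident activity, and the weight trajectory over time serves as the quantitative evaluation measure.

```python
# Toy asynchronous "microcircuit": message passing plus weight tracking.
from collections import deque

class Synapse:
    def __init__(self, weight=0.1, rate=0.05):
        self.weight, self.rate, self.history = weight, rate, [weight]

    def on_coincidence(self):
        self.weight += self.rate * (1.0 - self.weight)  # bounded potentiation
        self.history.append(self.weight)

inbox = deque()          # asynchronous message channel between two units
synapse = Synapse()

for t in range(10):
    inbox.append(("pre_spike", t))    # presynaptic unit emits a message
    msg, t_pre = inbox.popleft()      # postsynaptic unit consumes it
    if msg == "pre_spike" and t - t_pre <= 1:
        synapse.on_coincidence()      # both units active close in time

print(synapse.history)   # weight modification over time = evaluation measure
```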

    Investigation of Different Video Compression Schemes Using Neural Networks

    Image/video compression has great significance in the communication of motion pictures and still images. The need for compression has resulted in the development of various techniques, including transform coding, vector quantization, and neural networks. In this thesis, neural network based methods are investigated to achieve good compression ratios while maintaining image quality. Parts of this investigation include motion detection and weight retraining. An adaptive technique is employed to improve video frame quality for a given compression ratio by frequently updating the weights obtained from training. More specifically, weight retraining is performed only when the error exceeds a given threshold value. Image quality is measured objectively, using the peak signal-to-noise ratio (PSNR) as the performance measure. Results show the improved performance of the proposed architecture compared to existing approaches. The proposed method is implemented in MATLAB, and results such as compression ratio versus signal-to-noise ratio are presented.
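    The thesis implements this in MATLAB; the Python sketch below only illustrates the control flow of the adaptive scheme described above. The coder, the retraining step, and the threshold value are stand-in assumptions: retrain the weights only when a frame's PSNR falls below a quality threshold.

```python
# Retrain-on-threshold loop with PSNR as the objective quality measure.
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def compress_decompress(frame, weights):
    return np.clip(frame + weights, 0, 255)   # stand-in for the neural coder

def retrain(weights, frame):
    return weights * 0.5                      # stand-in for a weight update

threshold_db = 30.0                           # assumed quality threshold
weights = np.full((64, 64), 12.0)
rng = np.random.default_rng(3)
frames = [rng.integers(0, 256, (64, 64)) for _ in range(5)]

for frame in frames:
    out = compress_decompress(frame, weights)
    if psnr(frame, out) < threshold_db:       # retrain only on poor frames
        weights = retrain(weights, frame)

print(psnr(frame, compress_decompress(frame, weights)))  # above threshold now
```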

    Précis of neuroconstructivism: how the brain constructs cognition

    Neuroconstructivism: How the Brain Constructs Cognition proposes a unifying framework for the study of cognitive development that brings together (1) constructivism (which views development as the progressive elaboration of increasingly complex structures), (2) cognitive neuroscience (which aims to understand the neural mechanisms underlying behavior), and (3) computational modeling (which proposes formal and explicit specifications of information processing). The guiding principle of our approach is context dependence, within and (in contrast to Marr [1982]) between levels of organization. We propose that three mechanisms guide the emergence of representations – competition, cooperation, and chronotopy – which themselves allow for two central processes: proactivity and progressive specialization. We suggest that the main outcome of development is partial representations, distributed across distinct functional circuits. This framework is derived by examining development at the level of single neurons, brain systems, and whole organisms. We use the terms encellment, embrainment, and embodiment to describe the higher-level contextual influences that act at each of these levels of organization. To illustrate these mechanisms in operation, we provide case studies in early visual perception, infant habituation, phonological development, and object representations in infancy. Three further case studies are concerned with interactions between levels of explanation: social development, atypical development and, within that, developmental dyslexia. We conclude that cognitive development arises from a dynamic, contextual change in embodied neural structures leading to partial representations across multiple brain regions and timescales, in response to proactively specified physical and social environments.