4,303 research outputs found

    Combinatorial representations of token sequences

    This paper presents new representations of token sequences, with and without associated quantities, in Euclidean space. The representations are free of assumptions about the nature of the sequences or the processes that generate them. Algorithms and applications from the domains of structured interviews and life histories are discussed. © Springer Science+Business Media Inc. 2005
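    The abstract leaves the construction unspecified, so the sketch below is a rough, hedged illustration only and not the paper's representation: it maps a token sequence with optional per-token quantities to a point in Euclidean space by counting tokens over a fixed alphabet. Unlike the paper's representations it discards order information, but it shows the kind of object being built. The alphabet and quantity weighting are illustrative assumptions.

        from collections import Counter
        from typing import Mapping, Optional, Sequence

        def sequence_to_vector(tokens: Sequence[str],
                               alphabet: Sequence[str],
                               quantities: Optional[Mapping[str, float]] = None) -> list:
            """Toy embedding of a token sequence into Euclidean space.

            Each coordinate is the (optionally quantity-weighted) count of one
            token from a fixed alphabet, so sequences become vectors in
            R^len(alphabet) and can be compared with Euclidean distance.
            """
            counts = Counter(tokens)
            weights = quantities or {}
            return [counts[t] * weights.get(t, 1.0) for t in alphabet]

        # Example: two short life-history-style sequences over a small alphabet.
        alphabet = ["school", "work", "family"]
        v1 = sequence_to_vector(["school", "work", "work"], alphabet)
        v2 = sequence_to_vector(["school", "family"], alphabet, quantities={"family": 2.0})
        print(v1, v2)  # [1.0, 2.0, 0.0] [1.0, 0.0, 2.0]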

    Training neural networks to encode symbols enables combinatorial generalization

    Combinatorial generalization - the ability to understand and produce novel combinations of already familiar elements - is considered to be a core capacity of the human mind and a major challenge to neural network models. A significant body of research suggests that conventional neural networks cannot solve this problem unless they are endowed with mechanisms specifically engineered for representing symbols. In this paper we introduce a novel way of representing symbolic structures in connectionist terms - the vectors approach to representing symbols (VARS) - which allows standard neural architectures to be trained to encode symbolic knowledge explicitly at their output layers. In two simulations, we show that neural networks can not only learn to produce VARS representations, but in doing so achieve combinatorial generalization in their symbolic and non-symbolic output. This adds to other recent work showing improved combinatorial generalization under specific training conditions, and raises the question of whether specific mechanisms or training routines are needed to support symbolic processing.
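    As a minimal sketch of the general idea, and not the VARS scheme itself, the snippet below gives a symbolic structure such as (relation, arg1, arg2) a fixed-length vector encoding by concatenating one-hot blocks, one per structural slot; a standard feed-forward network could then be trained with an ordinary loss to produce such vectors at its output layer. The vocabularies and slot layout here are illustrative assumptions.

        import numpy as np

        # Toy encoding of a symbolic structure as a fixed-length output target
        # (not the paper's VARS scheme): one block of units per structural slot.
        RELATIONS = ["above", "below", "left-of"]
        OBJECTS = ["circle", "square", "triangle"]

        def one_hot(item, vocab):
            v = np.zeros(len(vocab))
            v[vocab.index(item)] = 1.0
            return v

        def encode(relation, arg1, arg2):
            # Layout: [relation block | arg1 block | arg2 block].
            return np.concatenate([one_hot(relation, RELATIONS),
                                   one_hot(arg1, OBJECTS),
                                   one_hot(arg2, OBJECTS)])

        target = encode("above", "circle", "square")
        print(target)  # 9-dimensional vector with exactly three active units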

    Design for a Darwinian Brain: Part 1. Philosophy and Neuroscience

    Physical symbol systems are needed for open-ended cognition. A good way to understand physical symbol systems is by comparison of thought to chemistry: both have systematicity, productivity and compositionality. The state of the art in cognitive architectures for open-ended cognition is critically assessed. I conclude that a cognitive architecture that evolves symbol structures in the brain is a promising candidate to explain open-ended cognition. Part 2 of the paper presents such a cognitive architecture.
    Comment: Darwinian Neurodynamics. Submitted as a two-part paper to Living Machines 2013, Natural History Museum, London.

    Monomiality principle, Sheffer-type polynomials and the normal ordering problem

    We solve the boson normal ordering problem for $(q(a^\dag)a + v(a^\dag))^n$ with arbitrary functions $q(x)$ and $v(x)$ and integer $n$, where $a$ and $a^\dag$ are boson annihilation and creation operators satisfying $[a, a^\dag] = 1$. This consequently provides the solution for the exponential $e^{\lambda(q(a^\dag)a + v(a^\dag))}$, generalizing the shift operator. In the course of these considerations we define and explore the monomiality principle and find its representations. We exploit the properties of Sheffer-type polynomials, which constitute the inherent structure of this problem. In the end we give some examples illustrating the utility of the method and point out the relation to combinatorial structures.
    Comment: Presented at the 8th International School of Theoretical Physics "Symmetry and Structural Properties of Condensed Matter" (SSPCM 2005), Myczkowce, Poland. 13 pages, 31 references.
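    As a concrete anchor, and a standard result rather than anything specific to this paper, the simplest special case $q(x) = 1$, $v(x) = 0$ reduces the problem to normally ordering $(a^\dag a)^n$, where Stirling numbers of the second kind supply the combinatorial structure alluded to at the end of the abstract:

        % Worked special case (standard result): q(x) = 1, v(x) = 0.
        % Normal ordering of (a^\dag a)^n is governed by Stirling numbers of the
        % second kind S(n,k).
        \[
          (a^\dag a)^n \;=\; \sum_{k=1}^{n} S(n,k)\,(a^\dag)^k a^k,
          \qquad\text{e.g.}\qquad
          (a^\dag a)^2 \;=\; a^\dag a + (a^\dag)^2 a^2 ,
        \]
        % and the corresponding exponential has the well-known normally ordered form
        \[
          e^{\lambda a^\dag a} \;=\; {:}\,\exp\!\big[(e^{\lambda}-1)\,a^\dag a\big]\,{:}
        \]
        % where : ... : denotes normal ordering (all a^\dag moved to the left of all a).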