
    Monomiality principle, Sheffer-type polynomials and the normal ordering problem

    We solve the boson normal ordering problem for $(q(a^\dag)a + v(a^\dag))^n$ with arbitrary functions $q(x)$ and $v(x)$ and integer $n$, where $a$ and $a^\dag$ are boson annihilation and creation operators satisfying $[a, a^\dag] = 1$. This consequently provides the solution for the exponential $e^{\lambda(q(a^\dag)a + v(a^\dag))}$, generalizing the shift operator. In the course of these considerations we define and explore the monomiality principle and find its representations. We exploit the properties of Sheffer-type polynomials, which constitute the inherent structure of this problem. In the end we give some examples illustrating the utility of the method and point out the relation to combinatorial structures.
    Comment: Presented at the 8th International School of Theoretical Physics "Symmetry and Structural Properties of Condensed Matter" (SSPCM 2005), Myczkowce, Poland. 13 pages, 31 references
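For orientation, a worked special case (standard material on boson normal ordering, not drawn from this paper's new results): taking $q(x) = x$ and $v(x) = 0$ reduces the problem to normally ordering powers of the number operator $a^\dag a$, whose coefficients are the Stirling numbers of the second kind $S(n,k)$ — one instance of the combinatorial structures the abstract alludes to.

```latex
% Special case q(x) = x, v(x) = 0: powers of the number operator.
% S(n,k) are Stirling numbers of the second kind; the colons :...:
% denote normal ordering (all a^\dag moved to the left of all a).
\[
  (a^\dag a)^n = \sum_{k=1}^{n} S(n,k)\,(a^\dag)^k a^k,
  \qquad
  e^{\lambda a^\dag a} = {:}\, e^{(e^{\lambda}-1)\,a^\dag a} \,{:}
\]
```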

    Characterizing the Shape of Activation Space in Deep Neural Networks

    The representations learned by deep neural networks are difficult to interpret, in part due to their large parameter space and the complexities introduced by their multi-layer structure. We introduce a method for computing persistent homology over the graphical activation structure of neural networks, which provides access to the task-relevant substructures activated throughout the network for a given input. This topological perspective provides unique insights into the distributed representations encoded by neural networks in terms of the shape of their activation structures. We demonstrate the value of this approach by showing an alternative explanation for the existence of adversarial examples. By studying the topology of network activations across multiple architectures and datasets, we find that adversarial perturbations do not add activations that target the semantic structure of the adversarial class, as previously hypothesized. Rather, adversarial examples are explainable as alterations to the dominant activation structures induced by the original image, suggesting that the class representations learned by deep networks are problematically sparse on the input space.
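To make the persistent-homology machinery concrete, here is a minimal, self-contained sketch: 0-dimensional persistence (connected-component bars) over a weighted graph, computed with a union-find as edges enter in filtration order. The function name, the filtration convention, and the step of turning activations into a weighted graph are all assumptions for illustration, not the paper's pipeline; real pipelines typically use a library such as GUDHI or Ripser and go beyond dimension 0.

```python
# Minimal sketch: 0-dimensional persistent homology of a weighted
# graph via union-find. Edge weights act as the filtration value;
# building such a graph from network activations is assumed done.
from typing import List, Tuple

def zero_dim_persistence(
    num_vertices: int,
    edges: List[Tuple[int, int, float]],  # (u, v, filtration weight)
) -> List[Tuple[float, float]]:
    """Return (birth, death) bars for connected components.

    All vertices are born at filtration value 0.0; an edge added at
    weight w that merges two components kills one of them at w.
    Components that never die get death = float('inf').
    """
    parent = list(range(num_vertices))

    def find(x: int) -> int:
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    bars: List[Tuple[float, float]] = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv          # merge: one component dies here
            bars.append((0.0, w))
    # components that survive the whole filtration persist forever
    roots = {find(x) for x in range(num_vertices)}
    bars.extend((0.0, float("inf")) for _ in roots)
    return bars

# Usage: a toy 4-node graph with two clusters joined by a weak edge.
if __name__ == "__main__":
    edges = [(0, 1, 0.1), (2, 3, 0.2), (1, 2, 0.9)]
    print(zero_dim_persistence(4, edges))
    # [(0.0, 0.1), (0.0, 0.2), (0.0, 0.9), (0.0, inf)]
```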

    struc2vec: Learning Node Representations from Structural Identity

    Structural identity is a concept of symmetry in which network nodes are identified according to the network structure and their relationship to other nodes. Structural identity has been studied in theory and practice over the past decades, but only recently has it been addressed with representation learning techniques. This work presents struc2vec, a novel and flexible framework for learning latent representations for the structural identity of nodes. struc2vec uses a hierarchy to measure node similarity at different scales and constructs a multilayer graph to encode structural similarities and generate structural context for nodes. Numerical experiments indicate that state-of-the-art techniques for learning node representations fail to capture stronger notions of structural identity, while struc2vec performs far better on this task, as it overcomes limitations of prior approaches. As a consequence, struc2vec also improves performance on classification tasks that depend more on structural identity.
    Comment: 10 pages, KDD 2017, Research Track
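A rough sketch of the core idea: struc2vec judges two nodes structurally similar when the ordered degree sequences of their surroundings match, hop level by hop level. The paper compares those sequences with dynamic time warping; the padded L1 comparison and all names below are simplifying assumptions, not the paper's implementation.

```python
# Sketch of struc2vec's notion of structural distance: compare the
# sorted degree sequences of the "rings" of nodes at each hop
# distance k. (The real paper uses dynamic time warping; the padded
# L1 comparison here is a simplifying assumption.)
import networkx as nx

def ring_degree_sequences(G: nx.Graph, node, max_hops: int):
    """Sorted degree sequence of nodes exactly k hops from `node`."""
    dist = nx.single_source_shortest_path_length(G, node, cutoff=max_hops)
    rings = {k: [] for k in range(max_hops + 1)}
    for v, k in dist.items():
        rings[k].append(G.degree(v))
    return {k: sorted(seq) for k, seq in rings.items()}

def structural_distance(G: nx.Graph, u, v, max_hops: int = 2) -> float:
    """Accumulated degree-sequence dissimilarity over hop levels."""
    ru, rv = (ring_degree_sequences(G, n, max_hops) for n in (u, v))
    total = 0.0
    for k in range(max_hops + 1):
        a, b = ru[k], rv[k]
        n = max(len(a), len(b))
        a = a + [0] * (n - len(a))   # pad: a missing node ~ degree 0
        b = b + [0] * (n - len(b))
        total += sum(abs(x - y) for x, y in zip(a, b))
    return total

# Usage: the two hubs of a barbell graph are structurally identical
# even though they are far apart in the graph.
if __name__ == "__main__":
    G = nx.barbell_graph(5, 3)
    print(structural_distance(G, 4, 8))   # hub vs. hub: 0.0
    print(structural_distance(G, 4, 6))   # hub vs. path node: large
```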

    A Lambda Term Representation Inspired by Linear Ordered Logic

    We introduce a new nameless representation of lambda terms inspired by ordered logic. At a lambda abstraction, the number and relative positions of all occurrences of the bound variable are stored, and each application carries the additional information of where to cut the variable context into function and argument parts. This way, complete information about free variable occurrences is available at each subterm without requiring a traversal, and environments can be kept exact, assigning values only to variables that actually occur in the associated term. Our approach avoids space leaks in interpreters that build function closures. In this article, we prove correctness of the new representation and present an experimental evaluation of its performance in a proof checker for the Edinburgh Logical Framework.
    Keywords: representation of binders, explicit substitutions, ordered contexts, space leaks, Logical Framework.
    Comment: In Proceedings LFMTP 2011, arXiv:1110.668
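To make the "exact environments" point concrete, here is a minimal sketch, assuming a plain named-variable term type rather than the paper's nameless ordered representation; the datatypes and evaluator are illustrative only. The key move is that a closure keeps bindings solely for variables free in the abstraction, so it cannot pin the rest of its defining environment in memory.

```python
# Minimal sketch of exact environments: closures retain bindings only
# for variables that actually occur free in the lambda, avoiding the
# space leak where a closure keeps its whole defining environment
# alive. Term type and evaluator are illustrative assumptions, not
# the paper's nameless ordered representation.
from dataclasses import dataclass
from typing import Dict, Union

@dataclass
class Var:
    name: str

@dataclass
class Lam:
    param: str
    body: "Term"

@dataclass
class App:
    fn: "Term"
    arg: "Term"

Term = Union[Var, Lam, App]

def free_vars(t: Term) -> set:
    """Set of free variable names of a term."""
    if isinstance(t, Var):
        return {t.name}
    if isinstance(t, Lam):
        return free_vars(t.body) - {t.param}
    return free_vars(t.fn) | free_vars(t.arg)

@dataclass
class Closure:
    lam: Lam
    env: Dict[str, "Closure"]  # restricted to free vars of the lambda

def eval_term(t: Term, env: Dict[str, Closure]) -> Closure:
    if isinstance(t, Var):
        return env[t.name]
    if isinstance(t, Lam):
        # Exactness: keep only the bindings the body can ever reach.
        needed = free_vars(t)
        return Closure(t, {x: v for x, v in env.items() if x in needed})
    clo = eval_term(t.fn, env)
    arg = eval_term(t.arg, env)
    return eval_term(clo.lam.body, {**clo.env, clo.lam.param: arg})

# Usage: ((lambda x. lambda y. x) z) evaluated in a scope that also
# binds an unrelated 'w' -- the resulting closure keeps only 'x'.
if __name__ == "__main__":
    const = Lam("x", Lam("y", Var("x")))
    ident = Closure(Lam("d", Var("d")), {})
    result = eval_term(App(const, Var("z")), {"z": ident, "w": ident})
    print(sorted(result.env))   # ['x'] -- 'w' is not retained
```

Note that the paper's representation stores the occurrence information in the term itself, so nothing like `free_vars` is recomputed at run time; the recomputation above only keeps the sketch short.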