697 research outputs found

    Semantic Tagging with Deep Residual Networks

    We propose a novel semantic tagging task, sem-tagging, tailored for the purpose of multilingual semantic parsing, and present the first tagger using deep residual networks (ResNets). Our tagger uses both word and character representations and includes a novel residual bypass architecture. We evaluate the tagset both intrinsically, on the new task of semantic tagging, and on Part-of-Speech (POS) tagging. Our system, consisting of a ResNet and an auxiliary loss function predicting our semantic tags, significantly outperforms prior results on English Universal Dependencies POS tagging (95.71% accuracy on UD v1.2 and 95.67% accuracy on UD v1.3). Comment: COLING 2016, camera-ready version.
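    The abstract describes an architecture that combines word and character representations, residual (ResNet) blocks, a residual bypass, and an auxiliary objective, with the semantic tags serving as the auxiliary task when POS tagging is evaluated. Below is a minimal PyTorch-style sketch of that general shape, not the authors' implementation: the class names, dimensions, character pooling, bypass-by-concatenation, and the auxiliary loss weight are all illustrative assumptions.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Feed-forward block with an identity bypass (the core ResNet idea)."""
    def __init__(self, dim):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.ff(x)  # residual connection

class Tagger(nn.Module):
    """Toy tagger: word + character embeddings, residual stack, two heads."""
    def __init__(self, n_words, n_chars, n_semtags, n_postags, dim=64):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, dim)
        self.char_emb = nn.Embedding(n_chars, dim)
        self.blocks = nn.Sequential(*[ResidualBlock(dim) for _ in range(3)])
        self.pos_head = nn.Linear(2 * dim, n_postags)  # evaluated task (POS)
        self.sem_head = nn.Linear(2 * dim, n_semtags)  # auxiliary semantic tags

    def forward(self, word_ids, char_ids):
        w = self.word_emb(word_ids)              # (batch, seq, dim)
        c = self.char_emb(char_ids).mean(dim=2)  # crude char pooling -> (batch, seq, dim)
        h = self.blocks(w + c)
        # "bypass": feed the untransformed word embedding to the classifiers too
        h = torch.cat([h, w], dim=-1)
        return self.pos_head(h), self.sem_head(h)

# Toy forward/backward pass on random ids.
model = Tagger(n_words=100, n_chars=30, n_semtags=70, n_postags=17)
words = torch.randint(0, 100, (2, 5))      # 2 sentences, 5 tokens each
chars = torch.randint(0, 30, (2, 5, 8))    # 8 characters per token
pos_logits, sem_logits = model(words, chars)
ce = nn.functional.cross_entropy
loss = ce(pos_logits.view(-1, 17), torch.randint(0, 17, (10,))) \
     + 0.1 * ce(sem_logits.view(-1, 70), torch.randint(0, 70, (10,)))  # aux weight is ad hoc
loss.backward()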

    Expressive Power of Abstract Meaning Representations

    The syntax of abstract meaning representations (AMRs) can be defined recursively, and a systematic translation to first-order logic (FOL) can be specified, including a proper treatment of negation. AMRs without recurrent variables are in the decidable two-variable fragment of FOL. The current definition of AMRs has limited expressive power for universal quantification (up to one universal quantifier per sentence). A simple extension of the AMR syntax and translation to FOL provides the means to represent projection and scope phenomena.
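    As a concrete illustration of the recursive translation the abstract refers to, here is a minimal Python sketch, not the paper's formal definition: a toy AMR is given as nested (variable, concept, [(role, child)]) tuples and mapped to an existentially closed conjunction; re-entrant variables appear as plain strings, and negation and universal quantification are not handled. The data structure and output syntax are illustrative assumptions.

def translate(node, conjuncts, variables):
    """Recursively collect concept and role conjuncts plus the bound variables."""
    var, concept, edges = node
    variables.append(var)
    conjuncts.append(f"{concept}({var})")
    for role, child in edges:
        if isinstance(child, tuple):                  # nested AMR node
            conjuncts.append(f"{role}({var},{child[0]})")
            translate(child, conjuncts, variables)
        else:                                         # re-entrant variable or constant
            conjuncts.append(f"{role}({var},{child})")

def amr_to_fol(amr):
    conjuncts, variables = [], []
    translate(amr, conjuncts, variables)
    quantifiers = "".join(f"Ex {v}." for v in variables)
    return quantifiers + " (" + " & ".join(conjuncts) + ")"

# "The boy wants to go": (w / want-01 :ARG0 (b / boy) :ARG1 (g / go-01 :ARG0 b))
amr = ("w", "want-01", [("ARG0", ("b", "boy", [])),
                        ("ARG1", ("g", "go-01", [("ARG0", "b")]))])
print(amr_to_fol(amr))
# Ex w.Ex b.Ex g. (want-01(w) & ARG0(w,b) & boy(b) & ARG1(w,g) & go-01(g) & ARG0(g,b))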

    Convergence guarantees for forward gradient descent in the linear regression model

    Renewed interest in the relationship between artificial and biological neural networks motivates the study of gradient-free methods. Considering the linear regression model with random design, in this work we theoretically analyze the biologically motivated (weight-perturbed) forward gradient scheme, which is based on a random linear combination of the gradient. If d denotes the number of parameters and k the number of samples, we prove that the mean squared error of this method converges for $k \gtrsim d^2\log(d)$ with rate $d^2\log(d)/k$. Compared to the dimension dependence $d$ for stochastic gradient descent, an additional factor $d\log(d)$ occurs.
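    For concreteness, below is a minimal NumPy sketch of a weight-perturbed forward gradient update on a random-design linear regression; the step-size schedule, noise level, and constants are ad hoc choices for illustration, not the settings analyzed in the paper. The forward gradient (grad . xi) xi, with xi standard normal, is an unbiased estimate of the single-sample gradient and requires no backward pass.

import numpy as np

rng = np.random.default_rng(0)
d, k = 10, 100_000                     # number of parameters and of samples/steps
theta_star = rng.normal(size=d)        # ground-truth regression vector
theta = np.zeros(d)

for t in range(1, k + 1):
    x = rng.normal(size=d)             # random design vector
    y = x @ theta_star + rng.normal()  # noisy response
    grad = 2.0 * x * (x @ theta - y)   # gradient of the single-sample squared loss
    xi = rng.normal(size=d)            # random perturbation direction
    fwd = (grad @ xi) * xi             # forward gradient: unbiased gradient estimate
    theta -= fwd / (d * d + t)         # decaying step size (ad hoc choice)

print("empirical mean squared error:", np.mean((theta - theta_star) ** 2))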

    The Meaning Factory at SemEval-2017 Task 9: Producing AMRs with Neural Semantic Parsing

