    Syntactic Topic Models

    The syntactic topic model (STM) is a Bayesian nonparametric model of language that discovers latent distributions of words (topics) that are both semantically and syntactically coherent. The STM models dependency parsed corpora where sentences are grouped into documents. It assumes that each word is drawn from a latent topic chosen by combining document-level features and the local syntactic context. Each document has a distribution over latent topics, as in topic models, which provides the semantic consistency. Each element in the dependency parse tree also has a distribution over the topics of its children, as in latent-state syntax models, which provides the syntactic consistency. These distributions are convolved so that the topic of each word is likely under both its document and syntactic context. We derive a fast posterior inference algorithm based on variational methods. We report qualitative and quantitative studies on both synthetic data and hand-parsed documents. We show that the STM is a more predictive model of language than current models based only on syntax or only on topics.
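    A minimal sketch of the combination step described above, assuming the "convolution" amounts to an element-wise product of the two distributions followed by renormalisation; the names theta_d, pi_parent and topic_word are illustrative, not taken from the paper:

        import numpy as np

        def sample_word_topic(theta_d, pi_parent, topic_word, rng):
            """Sample a topic and word for one node of a dependency tree.

            theta_d    : document-level topic distribution, shape (K,)
            pi_parent  : parent node's distribution over child topics, shape (K,)
            topic_word : per-topic word distributions, shape (K, V)

            The topic must be likely under BOTH the document and the
            syntactic context, so the two distributions are multiplied
            element-wise and renormalised before sampling.
            """
            combined = theta_d * pi_parent
            combined /= combined.sum()
            k = rng.choice(len(combined), p=combined)
            w = rng.choice(topic_word.shape[1], p=topic_word[k])
            return k, w

        # Toy example: 3 topics, 5 word types.
        rng = np.random.default_rng(0)
        theta_d = np.array([0.7, 0.2, 0.1])    # the document favours topic 0
        pi_parent = np.array([0.1, 0.8, 0.1])  # the syntactic context favours topic 1
        topic_word = rng.dirichlet(np.ones(5), size=3)
        print(sample_word_topic(theta_d, pi_parent, topic_word, rng))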

    Latent-Variable Synchronous CFGs for Hierarchical Translation

    Data-driven refinement of non-terminal categories has been demonstrated to be a reliable technique for improving monolingual parsing with PCFGs. In this paper, we extend these techniques to learn latent refinements of single-category synchronous grammars, so as to improve translation performance. We compare two estimators for this latent-variable model: one based on EM, the other a spectral algorithm based on the method of moments. We evaluate their performance on a Chinese–English translation task. The results indicate that we can achieve significant gains over the baseline with both approaches; in particular, the moments-based estimator is faster than EM and performs better.
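    As a rough illustration of what scoring under latent refinements involves, the sketch below marginalises the latent subcategories of the single category X over a fixed derivation tree. The rule_scores parameterisation is hypothetical, and fitting it (by EM or by the method of moments, as the paper compares) is not shown:

        import numpy as np

        rng = np.random.default_rng(1)
        L = 4  # number of latent refinements of the single category X

        def inside(node, rule_scores):
            """Inside score over latent states for one derivation-tree node.

            node: (rule_id, [child nodes]); the derivation is fixed, and only
            the latent refinements of X are marginalised out.
            Each rule with c child X slots has a score tensor of shape
            (L,) + (L,) * c: axis 0 is the parent state, one axis per child.
            Returns a length-L vector: subtree score per parent latent state.
            """
            rule_id, children = node
            score = rule_scores[rule_id]
            for child in children:
                v = inside(child, rule_scores)                   # (L,)
                score = np.tensordot(score, v, axes=([1], [0]))  # sum out that child
            return score  # (L,)

        # Toy grammar: rule 0 has two child X slots; rules 1 and 2 are lexical.
        rule_scores = {
            0: rng.random((L, L, L)),
            1: rng.random(L),
            2: rng.random(L),
        }
        tree = (0, [(1, []), (2, [])])
        root = inside(tree, rule_scores)
        print(root.sum())  # total derivation score, latent states marginalised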

    Bayesian Synchronous Grammar Induction

    We present a novel method for inducing synchronous context-free grammars (SCFGs) from a corpus of parallel string pairs. SCFGs can model equivalence between strings in terms of substitutions, insertions and deletions, and the reordering of substrings. We develop a non-parametric Bayesian model and apply it to a machine translation task, using priors to replace the various heuristics commonly used in this field. Using a variational Bayes training procedure, we learn the latent structure of translation equivalence through the induction of synchronous grammar categories for phrasal translations, showing improvements in translation performance over maximum likelihood models.
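    For intuition on how a prior can replace a maximum-likelihood ratio, here is a sketch of the standard mean-field variational update for Dirichlet-multinomial rule probabilities, the kind of update a variational Bayes training procedure applies on each iteration; the function name and toy counts are illustrative, not the paper's:

        import numpy as np
        from scipy.special import digamma

        def vb_rule_weights(expected_counts, alpha):
            """One variational update for a multinomial over grammar rules.

            expected_counts : E[count of each rule] from the E-step, shape (R,)
            alpha           : symmetric Dirichlet prior concentration

            Returns sub-normalised weights exp(E[log p(rule)]) under the
            variational posterior, used in place of the ML ratio count/total;
            the prior damps rules with little evidence.
            """
            post = expected_counts + alpha
            return np.exp(digamma(post) - digamma(post.sum()))

        counts = np.array([10.0, 3.0, 0.2, 0.0])
        print(vb_rule_weights(counts, alpha=0.5))  # rare rules are damped
        print(counts / counts.sum())               # ML estimate, for contrast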
