Online Adaptor Grammars with Hybrid Inference
Adaptor grammars are a flexible, powerful formalism for defining nonparametric, unsupervised models of grammar productions. This flexibility comes at the cost of expensive inference. We address the difficulty of inference through an online algorithm which uses a hybrid of Markov chain Monte Carlo and variational inference. We show that this inference strategy improves scalability without sacrificing performance on unsupervised word segmentation and topic modeling tasks.
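To make the hybrid strategy concrete, here is a minimal, illustrative sketch in Python: local latent variables are resampled by Monte Carlo conditioned on the current global estimate, and the global parameters receive a stochastic variational update. It uses a toy mixture of multinomials rather than adaptor grammars (where the local step would sample parse trees), and `hybrid_online_step` and its arguments are hypothetical names, not the paper's code.

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)

def hybrid_online_step(lam, docs, n_total, rho, n_samples=10):
    """One online hybrid update on a minibatch (toy sketch).

    lam     : (K, V) variational Dirichlet parameters (global state)
    docs    : list of (V,) word-count vectors (the minibatch)
    n_total : corpus size, used to rescale minibatch statistics
    rho     : step size of the stochastic variational update
    """
    elog_beta = digamma(lam) - digamma(lam.sum(axis=1, keepdims=True))
    stats = np.zeros_like(lam)
    for x in docs:
        # Local step (Monte Carlo): sample component assignments given
        # the current global estimate and accumulate sufficient stats.
        logp = elog_beta @ x
        p = np.exp(logp - logp.max())
        p /= p.sum()
        for z in rng.choice(len(lam), size=n_samples, p=p):
            stats[z] += x / n_samples
    # Global step (variational): blend the old parameters with the
    # minibatch estimate, scaled up as if it were the whole corpus.
    lam_hat = 1.0 + (n_total / len(docs)) * stats
    return (1.0 - rho) * lam + rho * lam_hat
```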
Models, Inference, and Implementation for Scalable Probabilistic Models of Text
Unsupervised probabilistic Bayesian models are powerful tools for statistical analysis, especially in the areas of information retrieval, document analysis and text processing. Despite their success, inference in these models is often slow due to entangled, mutually dependent latent variables, and their parameter spaces are usually very large. As data from various media sources (for example, the internet, electronic books and digital films) become widely accessible, the lack of scalability of these unsupervised probabilistic Bayesian models becomes a critical bottleneck.
The primary focus of this dissertation is to speed up the inference process in unsupervised probabilistic Bayesian models. There are two common solutions to scale an algorithm up to large data: parallelization and streaming. The former achieves scalability by distributing the data and the computation to multiple machines. The latter assumes data arrive in a stream and updates the model gradually after each observation; it is able to scale to larger datasets because it usually takes only one pass over the entire data.
In this dissertation, we examine both approaches. We first demonstrate the effectiveness of the parallelization approach on a class of unsupervised Bayesian models, topic models, exemplified by latent Dirichlet allocation (LDA). We propose a fast parallel implementation using variational inference on the MapReduce framework, referred to as Mr. LDA. We show that parallelization enables topic models to handle significantly larger datasets. We further show that our implementation, unlike highly tuned and specialized implementations, is easily extensible. We demonstrate two extensions possible with this scalable framework: 1) informed priors to guide topic discovery and 2) extracting topics from a multilingual corpus.
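A minimal sketch of the mapper/reducer split this describes, assuming a toy re-implementation in Python rather than Mr. LDA's actual code: each mapper runs the per-document variational E-step against the current global topic-word parameters, and reducers sum the emitted sufficient statistics for the driver's M-step. In the real MapReduce setting, statistics are emitted keyed by topic so the reduction parallelizes.

```python
import numpy as np
from scipy.special import digamma

def lda_e_step_mapper(doc, lam, alpha=0.1, n_iter=20):
    """Mapper: per-document variational E-step for LDA.
    doc: (V,) word-count vector; lam: (K, V) topic-word parameters."""
    elog_beta = digamma(lam) - digamma(lam.sum(axis=1, keepdims=True))
    gamma = np.ones(len(lam))
    for _ in range(n_iter):
        elog_theta = digamma(gamma) - digamma(gamma.sum())
        # phi[k, v] is proportional to exp(E[log theta_k] + E[log beta_kv])
        log_phi = elog_theta[:, None] + elog_beta
        phi = np.exp(log_phi - log_phi.max(axis=0))
        phi /= phi.sum(axis=0)                 # normalise over topics
        gamma = alpha + (phi * doc).sum(axis=1)
    return phi * doc                           # (K, V) sufficient statistics

def reduce_and_m_step(stats_list, eta=0.01):
    """Reducer + M-step: sum the mappers' statistics and update the
    global topic-word parameters."""
    return eta + sum(stats_list)
```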
We propose polylingual tree-based topic models to infer topics in multilingual corpora. We then propose three different methods to infer the latent variables. We examine the effectiveness of these inference methods on the task of machine translation, in which we use the proposed model to extract domain knowledge that considers both source and target languages. We apply it to a large collection of aligned Chinese-English sentences and show that our model yields significant improvements in BLEU score over strong baselines.
Other than parallelization, another approach to deal with scalability is to learn parameters in an online streaming setting. Although many online algorithms have been proposed for LDA, they all overlook a fundamental but challenging problem: the vocabulary is constantly evolving over time. To address this problem, we propose an online LDA with infinite vocabulary (infvoc LDA). We derive online hybrid inference for our model and propose heuristics to dynamically order, expand, and contract the set of words in our vocabulary. We show that our algorithm is able to discover better topics by incorporating new words into the vocabulary and constantly refining the topics over time.
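As one hedged illustration of what "expand and contract" can mean in a streaming setting (this is not the infvoc procedure itself, which orders words using corpus statistics and scores unseen words under a character-level base distribution):

```python
from collections import Counter

def update_vocabulary(vocab, new_tokens, max_size=10000):
    """Illustrative expand/contract heuristic: grow the vocabulary with
    newly observed words, then keep only the top-ranked entries so the
    active set stays bounded as the stream evolves."""
    vocab.update(new_tokens)                        # expand with new words
    if len(vocab) > max_size:                       # contract by rank
        vocab = Counter(dict(vocab.most_common(max_size)))
    return vocab

vocab = Counter()
vocab = update_vocabulary(vocab, "new words arrive in a stream".split())
```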
In addition to LDA, we also show the generality of the online hybrid inference framework by applying it to adaptor grammars, a broader class of models subsuming LDA. With appropriate grammar rules, an adaptor grammar reduces exactly to the LDA model, yet it offers the flexibility to alter or extend LDA with different grammar rules. We develop online hybrid inference for adaptor grammars and show that our method discovers high-quality structure more quickly than both MCMC and variational inference methods.
A summary of the 2012 JHU CLSP Workshop on Zero Resource Speech Technologies and Models of Early Language Acquisition
We summarize the accomplishments of a multi-disciplinary workshop exploring the computational and scientific issues surrounding zero resource (unsupervised) speech technologies and related models of early language acquisition. Centered around the tasks of phonetic and lexical discovery, we consider unified evaluation metrics, present two new approaches for improving speaker independence in the absence of supervision, and evaluate the application of Bayesian word segmentation algorithms to automatic subword unit tokenizations. Finally, we present two strategies for integrating zero resource techniques into supervised settings, demonstrating the potential of unsupervised methods to improve mainstream technologies.
Unsupervised Morphological Segmentation and Part-of-Speech Tagging for Low-Resource Scenarios
With the high cost of manually labeling data and the increasing interest in low-resource languages, for which human annotators might not even be available, unsupervised approaches have become essential for processing a typologically diverse set of languages, whether high-resource or low-resource. In this work, we propose new fully unsupervised approaches for two tasks in morphology: unsupervised morphological segmentation and unsupervised cross-lingual part-of-speech (POS) tagging, which are essential subtasks for several downstream NLP applications, such as machine translation, speech recognition, information extraction and question answering.
We propose a new unsupervised morphological-segmentation approach that utilizes Adaptor Grammars (AGs), nonparametric Bayesian models that generalize probabilistic context-free grammars (PCFGs), where a PCFG models word structure in the task of morphological segmentation. We implement the approach as a publicly available morphological-segmentation framework, MorphAGram, that enables unsupervised morphological segmentation through the use of several proposed language-independent grammars. In addition, the framework allows for the use of scholar knowledge, when available, in the form of affixes that can be seeded into the grammars. The framework handles scholar-seeded knowledge that is either generated from language resources, possibly by someone who does not know the language (weak linguistic priors), or provided by an expert in the underlying language (strong linguistic priors). Another form of linguistic priors is the design of a grammar that models language-dependent specifications. We also propose a fully unsupervised learning setting that approximates the effect of scholar-seeded knowledge through self-training. Moreover, since no single grammar works best across all languages, we propose an approach that picks a nearly optimal configuration (a learning setting and a grammar) for an unseen language, i.e., one that was not part of development. Finally, we examine multilingual learning for unsupervised morphological segmentation in low-resource setups.
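For illustration, here is a schematic of the kind of language-independent segmentation grammar the abstract describes, written as a Python string; the concrete grammar-file syntax used by MorphAGram and py-cfg, including how adapted nonterminals and rule probabilities are marked, differs in detail.

```python
# Schematic only: real grammar files mark adapted nonterminals and
# probabilities with their own notation.
SEGMENTATION_GRAMMAR = """
Word     -> Prefixes Stem Suffixes
Prefixes -> Prefix Prefixes |          # zero or more prefixes
Suffixes -> Suffix Suffixes |          # zero or more suffixes
Prefix   -> Chars                      # adapted: whole prefixes are cached
Stem     -> Chars                      # adapted
Suffix   -> Chars                      # adapted
Chars    -> Char | Char Chars
Char     -> 'a' | 'b' | 'c' | ...      # one rule per character in the alphabet
"""
# Scholar-seeded priors would add concrete rules such as Prefix -> 'un'
# or Suffix -> 's' before learning begins.
```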
For unsupervised POS tagging, two cross-lingual approaches have been widely adopted: 1) annotation projection, where POS annotations are projected across an aligned parallel text from a source language, for which a POS tagger is available, to the target language before training a POS model; and 2) zero-shot model transfer, where a model of a source language is directly applied to texts in the target language. We propose an end-to-end architecture for unsupervised cross-lingual POS tagging via annotation projection in truly low-resource scenarios that do not assume access to parallel corpora that are large in size or represent a specific domain. We integrate and expand the best practices in alignment and projection and design a rich neural architecture that exploits non-contextualized and transformer-based contextualized word embeddings, affix embeddings and word-cluster embeddings. Additionally, since parallel data might be available between the target language and multiple source languages, as in the case of the Bible, we propose different approaches for learning from multiple sources. Finally, we combine our work on unsupervised morphological segmentation and unsupervised cross-lingual POS tagging by conducting unsupervised stem-based cross-lingual POS tagging via annotation projection, which relies on the stem as the core unit of abstraction for alignment and projection, a choice that benefits low-resource morphologically complex languages. We also examine morpheme-based alignment and projection, the use of linguistic priors towards better POS models and the use of segmentation information as learning features in the neural architecture.
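The core projection step can be sketched in a few lines of Python; real pipelines, including the architecture described above, add alignment filtering, voting across multiple links, and handling of unaligned words. The helper `project_tags` is illustrative, not part of the system.

```python
def project_tags(src_tags, alignment, tgt_len):
    """Copy POS tags across word-alignment links from the tagged source
    sentence to the target sentence.
    alignment: list of (src_index, tgt_index) links."""
    projected = [None] * tgt_len
    for s, t in alignment:
        if projected[t] is None:     # keep one tag per target word
            projected[t] = src_tags[s]
    return projected

# Toy example: a three-word source sentence aligned one-to-one.
print(project_tags(["DET", "NOUN", "VERB"], [(0, 0), (1, 1), (2, 2)], 3))
```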
We conduct a comprehensive evaluation and analysis to assess the performance of our approaches to unsupervised morphological segmentation and unsupervised POS tagging, and show that they achieve state-of-the-art performance for the two morphology tasks when evaluated on a large set of languages of different typologies: analytic, fusional, agglutinative and synthetic/polysynthetic.
A computational framework of human causal generalization
How do people decide how general a causal relationship is, in terms of the entities or situations it applies to? How can people make these difficult judgments in a fast, efficient way? To address these questions, I designed a novel online experiment interface that systematically measures how people generalize causal relationships, and developed a computational modeling framework that combines program induction (about the hidden causal laws) with non-parametric category inference (about their domains of influence) to account for unique patterns in human causal generalization. In particular, by introducing adaptor grammars to standard Bayesian-symbolic models, this framework formalizes conceptual bootstrapping as a general online inference algorithm that gives rise to compositional causal concepts.
Chapter 2 investigates one-shot causal generalization, where I find that participants' inferences are shaped by the order of the generalization questions they are asked. Chapter 3 looks into few-shot cases, and finds an asymmetry in the formation of causal categories: participants preferentially identify causal laws with features of the agent objects rather than recipients, but this asymmetry disappears when visual cues to causal agency are challenged. The proposed modeling approach can explain both the generalization-order effect and the causal asymmetry, outperforming a naïve Bayesian account while providing a computationally plausible mechanism for real-world causal generalization. Chapter 4 further extends this framework with adaptor grammars, using a dynamic conceptual repertoire that is enriched over time, allowing the model to cache and later reuse elements of earlier insights. This model predicts systematically different learned concepts when the same evidence is processed in different orders, and across four experiments people's learning outcomes indeed closely resembled this model's, differing significantly from alternative accounts.
Investigating Language Impact in Bilingual Approaches for Computational Language Documentation
For endangered languages, data collection campaigns have to accommodate the challenge that many of them are from oral tradition, and producing transcriptions is costly. Therefore, it is fundamental to translate them into a widely spoken language to ensure interpretability of the recordings. In this paper we investigate how the choice of translation language affects the subsequent documentation work and potential automatic approaches which will work on top of the produced bilingual corpus. To answer this question, we use the MaSS multilingual speech corpus (Boito et al., 2020) to create 56 bilingual pairs that we apply to the task of low-resource unsupervised word segmentation and alignment. Our results highlight that the choice of language for translation influences word segmentation performance, and that different lexicons are learned from different aligned translations. Lastly, this paper proposes a hybrid approach for bilingual word segmentation, combining boundary clues extracted from a non-parametric Bayesian model (Goldwater et al., 2009a) with the attentional word segmentation neural model from Godard et al. (2018). Our results suggest that incorporating these clues into the neural models' input representation increases their translation and alignment quality, especially for challenging language pairs.
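One plausible reading of "incorporating these clues into the input representation", sketched in Python under the assumption that the Bayesian segmenter yields one boundary probability per character position; the paper's exact feature scheme may differ.

```python
import numpy as np

def add_boundary_clues(char_embeddings, boundary_probs):
    """Append each character position's Bayesian-segmenter boundary
    probability as an extra feature dimension of the neural model input.
    char_embeddings: (T, d) array; boundary_probs: length-T sequence."""
    probs = np.asarray(boundary_probs, dtype=float)[:, None]   # (T, 1)
    return np.concatenate([char_embeddings, probs], axis=1)    # (T, d+1)
```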
Hierarchical Bayesian Nonparametric Models for Power-Law Sequences
Sequence data that exhibits power-law behavior in its marginal and conditional distributions arises frequently from natural processes, with natural language text being a prominent example. We study probabilistic models for such sequences based on a hierarchical non-parametric Bayesian prior, develop inference and learning procedures for making these models useful in practice and applicable to large, real-world data sets, and empirically demonstrate their excellent predictive performance. In particular, we consider models based on the infinite-depth variant of the hierarchical Pitman-Yor process (HPYP) language model [Teh, 2006b] known as the Sequence Memoizer, as well as Sequence Memoizer-based cache language models and hybrid models combining the HPYP with neural language models. We empirically demonstrate that these models perform well on language modelling and data compression tasks.
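The building block behind these models is the Pitman-Yor predictive distribution, whose discount parameter produces the power-law behavior the abstract highlights. A minimal sketch for a single restaurant (one node of the HPYP; the Sequence Memoizer stacks these along suffix contexts), with hypothetical argument names:

```python
def pyp_predictive(word, custs, tabls, theta, d, base_prob):
    """Predictive probability of `word` in one Pitman-Yor restaurant.
    custs/tabls: dicts mapping word -> customer and table counts;
    theta: concentration; d: discount (drives the power law);
    base_prob: the parent (base) distribution's probability of `word`."""
    c = sum(custs.values())                    # total customers
    t = sum(tabls.values())                    # total tables
    reuse = max(custs.get(word, 0) - d * tabls.get(word, 0), 0.0)
    new = theta + d * t                        # mass routed to the base
    return (reuse + new * base_prob) / (theta + c)

# With an empty restaurant, everything backs off to the base distribution:
assert pyp_predictive("the", {}, {}, theta=1.0, d=0.8, base_prob=0.1) == 0.1
```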