An Unsolicited Soliloquy on Dependency Parsing
Programa Oficial de Doutoramento en Computación (5009V01)
[Abstract]
This thesis presents work on dependency parsing covering two distinct lines of research. The
first aims to develop parsers efficient enough to parse large amounts of data quickly while
still maintaining decent accuracy. We investigate two techniques to achieve this: the first is
a cognitively inspired method and the second uses model distillation. The first technique
proved to be utterly dismal, while the second was somewhat of a success.
The second line of research presented in this thesis evaluates parsers, and this too is done in
two ways. First, we evaluate what causes variation in parsing performance across different
algorithms and across different treebanks. This evaluation is grounded in dependency displacements
(the directed distance between a dependent and its head) and the displacement
distributions associated with parsing algorithms and those found in treebanks. This work
sheds some light on the variation in performance across both algorithms and treebanks.
The second part of this line focuses on the utility of part-of-speech tags when used with
parsing systems and questions the standard position of assuming that they might help but
certainly won't hurt.
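To make the notion of dependency displacement concrete, the following sketch (not taken from the thesis; the toy sentence and attachment choices are invented) computes the displacement distribution of a small treebank in Python:

    from collections import Counter

    def displacement_distribution(sentences):
        """sentences: list of sentences, each a list of (position, head) pairs,
        1-indexed, with head 0 marking the root."""
        counts = Counter()
        for sentence in sentences:
            for position, head in sentence:
                if head == 0:                      # skip the artificial root attachment
                    continue
                counts[head - position] += 1       # signed distance from dependent to head
        return counts

    # Toy treebank: "She reads old books" with She<-reads, reads=root,
    # old<-books, books<-reads.
    toy = [[(1, 2), (2, 0), (3, 4), (4, 2)]]
    print(displacement_distribution(toy))          # Counter({1: 2, -2: 1})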
This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150) and from the Centro de Investigación de Galicia (CITIC), which is funded by the Xunta de Galicia and the European Union (ERDF - Galicia 2014-2020 Program) through grant ED431G 2019/01.
Viability of Sequence Labeling Encodings for Dependency Parsing
Programa Oficial de Doutoramento en Computación (5009V01)
[Abstract]
This thesis presents new methods for recasting dependency parsing as
a sequence labeling task yielding a viable alternative to the traditional
transition- and graph-based approaches. It is shown that sequence labeling
parsers provide several advantages for dependency parsing, such as:
(i) a good trade-off between accuracy and parsing speed, (ii) genericity,
which enables running a parser in generic sequence labeling software,
and (iii) pluggability, which allows using full parse trees as features for
downstream tasks.
The backbone of dependency parsing as sequence labeling is the encoding,
which serves as a linearization method for mapping dependency trees into
discrete labels, such that each token in a sentence is associated with a
label. We introduce three encoding families: (i) head-selection, (ii)
bracketing-based and (iii) transition-based encodings, which are
differentiated by the way they represent a dependency tree as a sequence
of labels. We empirically examine the viability of the encodings and
provide an analysis of their facets.
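As an illustration of what such an encoding looks like, the sketch below shows a naive member of the head-selection family, where each token's label is the signed offset to its head plus its dependency relation. It is a simplified variant written for this summary, not the exact encodings of the thesis (which, for instance, also use offsets relative to part-of-speech tags):

    def encode(heads, deprels):
        """heads[i] is the 1-indexed head of token i+1 (0 = root)."""
        return [f"{head - i}@{rel}"
                for i, (head, rel) in enumerate(zip(heads, deprels), start=1)]

    def decode(labels):
        return [(i + int(label.split("@")[0]), label.split("@")[1])
                for i, label in enumerate(labels, start=1)]

    # "She reads old books": She->reads, reads->ROOT, old->books, books->reads
    labels = encode([2, 0, 4, 2], ["nsubj", "root", "amod", "obj"])
    print(labels)           # ['1@nsubj', '-2@root', '1@amod', '-2@obj']
    print(decode(labels))   # [(2, 'nsubj'), (0, 'root'), (4, 'amod'), (2, 'obj')]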
Furthermore, we explore the feasibility of leveraging external complementary
data in order to enhance parsing performance. Our sequence
labeling parser is endowed with two kinds of representations. First,
we exploit the complementary nature of dependency and constituency
parsing paradigms and enrich the parser with representations from both
syntactic abstractions. Secondly, we use human language processing
data to guide our parser with representations from eye movements.
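A rough sketch of how such complementary signals can enter a sequence labeling parser is simple concatenation of per-token vectors; the dimensionalities and feature names below are invented for illustration:

    import torch

    n_tokens = 6
    word_emb = torch.randn(n_tokens, 100)          # standard word embeddings
    constituency_emb = torch.randn(n_tokens, 20)   # embeddings of constituency-derived labels
    gaze_features = torch.randn(n_tokens, 4)       # e.g. fixation counts and reading times

    token_repr = torch.cat([word_emb, constituency_emb, gaze_features], dim=-1)
    print(token_repr.shape)   # torch.Size([6, 124]); fed to the sequence labeling model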
Overall, the results show that recasting dependency parsing as sequence
labeling is a viable approach that is fast and accurate and provides
a practical alternative for integrating syntax in NLP tasks.
This work has been carried out thanks to funding from the European Research Council (ERC),
under the European Union's Horizon 2020 research and innovation programme (FASTPARSE,
grant agreement No 714150).
Character-based Neural Semantic Parsing
Humans and computers do not speak the same language. A lot of day-to-day tasks would be vastly more efficient if we could communicate with computers using natural language instead of relying on an interface. It is necessary, then, that the computer does not see a sentence as a collection of individual words, but instead can understand the deeper, compositional meaning of the sentence. A way to tackle this problem is to automatically assign a formal, structured meaning representation to each sentence, which is easy for computers to interpret. There have been quite a few attempts at this before, but these approaches were usually heavily reliant on predefined rules, word lists or representations of the syntax of the text. This made the general usage of these methods quite complicated. In this thesis we employ an algorithm that can learn to automatically assign meaning representations to texts, without using any such external resource. Specifically, we use a type of artificial neural network called a sequence-to-sequence model, in a process that is often referred to as deep learning. The devil is in the details, but we find that this type of algorithm can produce high quality meaning representations, with better performance than the more traditional methods. Moreover, a main finding of the thesis is that, counter-intuitively, it is often better to represent the text as a sequence of individual characters, and not words. This is likely the case because it helps the model in dealing with spelling errors, unknown words and inflections.
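The sketch below is a minimal character-level sequence-to-sequence model in PyTorch, meant only to illustrate reading the sentence and producing the meaning representation one character at a time; the toy target string and hyperparameters are invented, and the models used in the thesis are considerably richer:

    import torch
    import torch.nn as nn

    class CharSeq2Seq(nn.Module):
        """Encode the sentence character by character; decode the meaning
        representation character by character (teacher forcing)."""
        def __init__(self, n_chars, emb=64, hidden=256):
            super().__init__()
            self.src_embed = nn.Embedding(n_chars, emb)
            self.tgt_embed = nn.Embedding(n_chars, emb)
            self.encoder = nn.GRU(emb, hidden, batch_first=True)
            self.decoder = nn.GRU(emb, hidden, batch_first=True)
            self.out = nn.Linear(hidden, n_chars)

        def forward(self, src_chars, tgt_chars):
            _, state = self.encoder(self.src_embed(src_chars))
            decoded, _ = self.decoder(self.tgt_embed(tgt_chars), state)
            return self.out(decoded)

    chars = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz (),.'")}
    def as_ids(text):
        return torch.tensor([[chars[c] for c in text.lower() if c in chars]])

    model = CharSeq2Seq(n_chars=len(chars))
    # "exam(pass(speaker))" is a made-up stand-in for a formal meaning representation.
    logits = model(as_ids("I passed my exam"), as_ids("exam(pass(speaker))"))
    print(logits.shape)   # (1, target length, number of characters)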
Multiword expressions at length and in depth
The annual workshop on multiword expressions has taken place since 2001 in conjunction with major computational linguistics conferences and attracts the attention of an ever-growing community working on a variety of languages, linguistic phenomena and related computational processing issues. MWE 2017 took place in Valencia, Spain, and represented a vibrant panorama of the current research landscape on the computational treatment of multiword expressions, featuring many high-quality submissions. Furthermore, MWE 2017 included the first shared task on multilingual identification of verbal multiword expressions. The shared task, with extended communal work, has developed important multilingual resources and mobilised several research groups in computational linguistics worldwide. This book contains extended versions of selected papers from the workshop. Authors worked hard to include detailed explanations, broader and deeper analyses, and new exciting results, which were thoroughly reviewed by an internationally renowned committee. We hope that this distinctly joint effort will provide a meaningful and useful snapshot of the multilingual state of the art in multiword expressions modelling and processing, and will be a point of reference for future work.
Luonnollisen kielen syntaksin parsiminen neuroverkoilla (Parsing the syntax of natural language with neural networks)
Conversational user interfaces have made a strong appearance during the last couple of years. Most of this growth can be seen in customer service moving from phone to chat. Despite the hype surrounding conversational user interfaces, they often cannot surpass traditional alternatives for the use cases they are applied to without actually understanding the language the user speaks. This has created a large demand for artificial intelligence (AI) which can understand users in their natural language.
Natural language used by humans is tremendously complex, even though people rarely realize it. Understanding something this complex often requires a system which divides the problem into smaller sub-problems and tackles those, a divide-and-conquer paradigm. Natural language processing tools are often built as pipelines where more information is mined from the text in each step. Finding syntactic features, such as part of speech and lemma, is one such step, and it is the focus of this thesis.
The main objective of this thesis was to build a neural network architecture which can classify lemmas and parts of speech for the input text. The research hypothesis was to determine whether such an architecture could be modified to do both tasks at the same time and whether such a change would improve the classification performance of the model.
Experiments with the Finnish Universal Dependencies dataset revealed that lemmatization benefits from jointly learning to POS-tag, but POS-tagging performance could not be improved. The best absolute lemmatization results were obtained by using correct POS tags as input features, but since these are not available for live predictions, the result is of no practical use.
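A minimal sketch of such a joint architecture is given below, assuming lemmatization is cast as classification over lemma-transformation rules (a common design choice; the abstract does not spell out the actual architecture, so the layers and sizes here are purely illustrative):

    import torch
    import torch.nn as nn

    class JointTagger(nn.Module):
        """Shared encoder with two heads: POS tags and lemma-transformation classes."""
        def __init__(self, vocab_size, n_pos, n_lemma_rules, emb=64, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb)
            self.encoder = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
            self.pos_head = nn.Linear(2 * hidden, n_pos)
            self.lemma_head = nn.Linear(2 * hidden, n_lemma_rules)

        def forward(self, token_ids):
            states, _ = self.encoder(self.embed(token_ids))
            return self.pos_head(states), self.lemma_head(states)

    model = JointTagger(vocab_size=1000, n_pos=17, n_lemma_rules=300)
    tokens = torch.randint(0, 1000, (2, 6))              # 2 toy sentences of 6 tokens
    pos_logits, lemma_logits = model(tokens)
    loss = (nn.functional.cross_entropy(pos_logits.reshape(-1, 17),
                                        torch.randint(0, 17, (12,)))
            + nn.functional.cross_entropy(lemma_logits.reshape(-1, 300),
                                          torch.randint(0, 300, (12,))))
    loss.backward()                                      # both tasks update the shared encoder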
General methods for fine-grained morphological and syntactic disambiguation
We present methods for improved handling of morphologically
rich languages (MRLs), where we define
MRLs as languages that
are morphologically more complex than English. Standard
algorithms for language modeling, tagging and parsing have
problems with the productive nature of such
languages. Consider for example the possible forms of a
typical English verb like work that generally has four different
forms: work, works, working
and worked. Its Spanish counterpart trabajar
has 6 different forms in present
tense: trabajo, trabajas, trabaja, trabajamos, trabajáis
and trabajan and more than 50 different forms when
including the different tenses, moods (indicative,
subjunctive and imperative) and participles. Such a high
number of forms leads to sparsity issues: In a recent
Wikipedia dump of more than 400 million tokens we find that
20 of these forms occur only twice or less and that 10 forms
do not occur at all. This means that even if we only need
unlabeled data to estimate a model and even when looking at
a relatively common and frequent verb, we do not have enough
data to make reasonable estimates for some of its
forms. However, if we decompose an unseen form such
as trabajaréis 'you will work', we find that it
is trabajar in future tense and second person
plural. This allows us to make the predictions that are
needed to decide on the grammaticality (language modeling)
or syntax (tagging and parsing) of a sentence.
In the first part of this thesis, we develop
a morphological language model. A language model
estimates the grammaticality and coherence of a
sentence. Most language models used today are word-based
n-gram models, which means that they estimate the
transitional probability of a word following a history, the
sequence of the (n - 1) preceding words. The probabilities
are estimated from the frequencies of the history and the
history followed by the target word in a huge text
corpus. If either of the sequences is unseen, the length of
the history has to be reduced. This leads to a less accurate
estimate as less context is taken into account.
Our morphological language model estimates an additional
probability from the morphological classes of the
words. These classes are built automatically by extracting
morphological features from the word forms. To this end, we
use unsupervised segmentation algorithms to find the
suffixes of word forms. Such an algorithm might for example
segment trabajaréis into trabaja
and réis and we can then estimate the properties
of trabajaréis from other word forms with the same or
similar morphological properties. The data-driven nature of
the segmentation algorithms allows them to not only find
inflectional suffixes (such as -réis), but also more
derivational phenomena such as the head nouns of compounds
or even endings such as -tec, which identify
technology oriented companies such
as Vortec, Memotec and Portec and would
not be regarded as a morphological suffix by traditional
linguistics. Additionally, we extract shape features such as
if a form contains digits or capital characters. This is
important because many rare or unseen forms are proper
names or numbers and often do not have meaningful
suffixes. Our class-based morphological model is then
interpolated with a word-based model to combine the
generalization capabilities of the first and the high
accuracy in case of sufficient data of the second.
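A heavily simplified sketch of such an interpolation is shown below. The thesis induces morphological classes with unsupervised segmentation and shape features; here a fixed three-character suffix stands in for the class, and the toy probabilities are invented:

    def suffix_class(word, k=3):
        """Stand-in for the induced morphological class of a word form."""
        return word[-k:]

    # Toy conditional probabilities (in practice estimated from smoothed n-gram counts).
    word_lm = {(("voy", "a"), "trabajar"): 0.10}          # P_word(w | history)
    class_lm = {(("voy", "a"), "jar"): 0.25}              # P_class(c(w) | c(history))
    word_given_class = {("jar", "trabajar"): 0.02}        # P(w | c(w))

    def interpolated_prob(word, history, lam=0.7):
        class_hist = tuple(suffix_class(w) for w in history)
        p_word = word_lm.get((history, word), 0.0)
        p_class = (class_lm.get((class_hist, suffix_class(word)), 0.0)
                   * word_given_class.get((suffix_class(word), word), 0.0))
        return lam * p_word + (1 - lam) * p_class

    print(interpolated_prob("trabajar", ("voy", "a")))    # 0.7*0.10 + 0.3*0.25*0.02 = 0.0715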
We evaluate our model across 21 European languages and find
improvements between 3% and 11% in perplexity, a standard
language modeling evaluation measure. Improvements are
highest for languages with more productive and complex
morphology such as Finnish and Estonian, but also visible
for languages with a relatively simple morphology such as
English and Dutch. We conclude that a morphological
component yields consistent improvements for all the tested
languages and argue that it should be part of every language
model.
Dependency trees represent the syntactic structure of a
sentence by attaching each word to its syntactic head, the
word it is directly modifying. Dependency parsing
is usually tackled using heavily lexicalized (word-based)
models and a thorough morphological preprocessing is
important for optimal performance, especially for MRLs. We
investigate if the lack of morphological features can be
compensated by features induced using hidden Markov
models with latent annotations (HMM-LAs)
and find this to be the case for German. HMM-LAs were
proposed as a method to increase part-of-speech tagging
accuracy. The model splits the observed part-of-speech tags
(such as verb and noun) into subtags. An expectation
maximization algorithm is then used to fit the subtags to
different roles. A verb tag for example might be split into
an auxiliary verb and a full verb subtag. Such a split is
usually beneficial because these two verb classes have
different contexts. That is, a full verb might follow an
auxiliary verb, but usually not another full verb.
For German and English, we find that our model leads to
consistent improvements over a parser
not using subtag features. Looking at the labeled attachment
score (LAS), the percentage of words attached to the correct head with the correct label,
we observe an improvement from 90.34 to 90.75 for English
and from 87.92 to 88.24 for German. For German, we
additionally find that our model achieves almost the same
performance (88.24) as a model using tags annotated by a
supervised morphological tagger (LAS of 88.35). We also find
that the German latent tags correlate with
morphology. Articles for example are split by their
grammatical case.
We also investigate the part-of-speech tagging accuracies of
models using the traditional treebank tagset and models
using induced tagsets of the same size and find that the
latter outperform the former, but are in turn outperformed
by a discriminative tagger.
Furthermore, we present a method for fast and
accurate morphological tagging. While
part-of-speech tagging annotates tokens in context with
their respective word categories, morphological tagging
produces a complete annotation containing all the relevant
inflectional features such as case, gender and tense. A
complete reading is represented as a single tag. As a
reading might consist of several morphological features the
resulting tagset usually contains hundreds or even thousands
of tags. This is an issue for many decoding algorithms such
as Viterbi which have runtimes depending quadratically on
the number of tags. In the case of morphological tagging,
the problem can be avoided by using a morphological
analyzer. A morphological analyzer is a manually created
finite-state transducer that produces the possible
morphological readings of a word form. This analyzer can be
used to prune the tagging lattice and to allow for the
application of standard sequence labeling algorithms. The
downside of this approach is that such an analyzer is not
available for every language or might not have the coverage
required for the task. Additionally, the output tags of some
analyzers are not compatible with the annotations of the
treebanks, which might require some manual mapping of the
different annotations or even to reduce the complexity of
the annotation.
To avoid this problem we propose to use the posterior
probabilities of a conditional random field (CRF)
lattice to prune the space of possible
taggings. At the zero-order level the posterior
probabilities of a token can be calculated independently
from the other tokens of a sentence. The necessary
computations can thus be performed in linear time. The
features available to the model at this time are similar to
the features used by a morphological analyzer (essentially
the word form and features based on it), but also include
the immediate lexical context. As the ambiguity of word
types varies substantially, we just fix the average number of
readings after pruning by dynamically estimating a
probability threshold. Once we obtain the pruned lattice, we
can add tag transitions and convert it into a first-order
lattice. The quadratic forward-backward computations are now
executed on the remaining plausible readings and thus
efficient. We can now continue pruning and extending the
lattice order at a relatively low additional runtime cost
(depending on the pruning thresholds). The training of the
model can be implemented efficiently by applying stochastic
gradient descent (SGD). The CRF gradient can be calculated
from a lattice of any order as long as the correct reading
is still in the lattice. During training, we thus run the
lattice pruning until we either reach the maximal order or
until the correct reading is pruned. If the reading is
pruned we perform the gradient update with the highest order
lattice still containing the reading. This approach is
similar to early updating in the structured perceptron
literature and forces the model to learn how to keep the
correct readings in the lower order lattices. In practice,
we observe a high number of lower updates during the first
training epoch and almost exclusively higher order updates
during later epochs.
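The pruning idea can be sketched as follows, assuming the zero-order marginals have already been computed; the tag names and probabilities are invented, and the exact thresholding scheme in the thesis may differ in detail:

    def prune_lattice(posteriors, target_avg_readings=4):
        """posteriors: one {tag: marginal probability} dict per token.
        Keep only tags above a threshold chosen so that the average number of
        surviving readings per token is roughly the target."""
        all_probs = sorted((p for tok in posteriors for p in tok.values()), reverse=True)
        budget = min(len(all_probs), round(target_avg_readings * len(posteriors)))
        threshold = all_probs[budget - 1] if budget else 0.0
        pruned = []
        for tok in posteriors:
            kept = {t: p for t, p in tok.items() if p >= threshold}
            if not kept:                          # never prune away every reading
                best = max(tok, key=tok.get)
                kept = {best: tok[best]}
            pruned.append(kept)
        return pruned

    # A highly ambiguous token next to an almost unambiguous one:
    posteriors = [{"Noun.Nom.Sg": 0.5, "Noun.Acc.Sg": 0.3, "Verb.3.Sg": 0.2},
                  {"Adj.Nom.Sg": 0.95, "Noun.Nom.Sg": 0.05}]
    print(prune_lattice(posteriors, target_avg_readings=2))
    # -> the ambiguous token keeps three readings, the unambiguous one only one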
We evaluate our CRF tagger on six languages with different
morphological properties. We find that for languages with a
high word form ambiguity such as German, the pruning results
in a moderate drop in tagging accuracy while for languages
with less ambiguity such as Spanish and Hungarian the loss
due to pruning is negligible. However, our pruning strategy
allows us to train higher order models (order > 1), which give
substantial improvements for all languages and also
outperform unpruned first-order models. That is, the model
might lose some of the correct readings during pruning, but
is also able to solve more of the harder cases that require
more context. We also find our model to substantially and
significantly outperform a number of frequently used taggers
such as Morfette and SVMTool.
Based on our morphological tagger we develop a simple method
to increase the performance of a state-of-the-art
constituency parser. A constituency tree
describes the syntactic properties of a sentence by
assigning spans of text to a hierarchical bracket
structure. Petrov et al. developed a
language-independent approach for the automatic annotation
of accurate and compact grammars. Their implementation --
known as the Berkeley parser -- gives state-of-the-art results
for many languages such as English and German. For some MRLs
such as Basque and Korean, however, the parser gives
unsatisfactory results because of its simple unknown word
model. This model maps unknown words to a small number of
signatures (similar to our morphological classes). These
signatures do not seem expressive enough for many of the
subtle distinctions made during parsing. We propose to
replace rare words by the morphological reading generated by
our tagger instead. The motivation is twofold. First, our
tagger has access to a number of lexical and sublexical
features not available during parsing. Second, we expect
the morphological readings to contain most of the
information required to make the correct parsing decision
even though we know that things such as the correct
attachment of prepositional phrases might require some
notion of lexical semantics.
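A schematic version of this preprocessing step is given below; the frequency threshold is an invented value and a small dictionary stands in for the context-aware CRF tagger:

    from collections import Counter

    RARE = 1   # illustrative frequency threshold

    def replace_rare(sentences, morph_reading):
        """Replace rare word forms by their morphological reading before parsing."""
        freq = Counter(tok for sent in sentences for tok in sent)
        return [[tok if freq[tok] > RARE else morph_reading(tok) for tok in sent]
                for sent in sentences]

    toy_readings = {"trabajaréis": "Verb.Fut.2.Pl", "grande": "Adj.Sg",
                    "vieja": "Adj.Fem.Sg", "en": "Prep"}

    sents = [["la", "casa", "grande"], ["la", "casa", "vieja"],
             ["trabajaréis", "en", "la", "casa"]]
    print(replace_rare(sents, lambda tok: toy_readings.get(tok, "Unknown")))
    # frequent forms ("la", "casa") are kept, rare ones become their readings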
In experiments on the SPMRL 2013 dataset
of nine MRLs we find our method to give improvements for all
languages except French for which we observe a minor drop in
the Parseval score of 0.06. For Hebrew, Hungarian and
Basque we find substantial absolute improvements of 5.65,
11.87 and 15.16, respectively.
We also performed an extensive evaluation on the utility of
word representations for morphological tagging. Our goal was
to reduce the drop in performance that is caused when a
model trained on a specific domain is applied to some other
domain. This problem is usually addressed by domain adaption
(DA). DA adapts a model towards a specific domain using a
small amount of labeled or a huge amount of unlabeled data
from that domain. However, this procedure requires us to
train a model for every target domain. Instead we are trying
to build a robust system that is trained on domain-specific
labeled and domain-independent or general unlabeled data. We
believe word representations to be key in the development of
such models because they allow us to leverage unlabeled
data efficiently. We compare data-driven representations to
manually created morphological analyzers. We understand
data-driven representations as models that cluster word
forms or map them to a vectorial representation. Examples
heavily used in the literature include Brown clusters,
Singular Value Decompositions of count
vectors and neural-network-based
embeddings. We create a test suite of
six languages consisting of in-domain and out-of-domain test
sets. To this end we converted annotations for Spanish and
Czech and annotated the German part of the Smultron
treebank with a morphological layer. In
our experiments on these data sets we find Brown clusters to
outperform the other data-driven representations. Regarding
the comparison with morphological analyzers, we find Brown
clusters to give slightly better performance in
part-of-speech tagging, but to be substantially outperformed
in morphological tagging.
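As an illustration of how Brown clusters are typically plugged into a tagger (the abstract does not list the exact feature templates used), prefixes of a word's cluster bit string are used as features at several granularities, so the model can back off from the full path:

    # Invented bit strings; real clusters are induced from large unlabeled corpora.
    brown_path = {"trabajo": "110100101", "trabajas": "110100110", "Vortec": "0111010001"}

    def cluster_features(word, prefixes=(4, 6, 10)):
        path = brown_path.get(word)
        if path is None:
            return ["brown=UNK"]
        return [f"brown{p}={path[:p]}" for p in prefixes]

    print(cluster_features("trabajas"))
    # ['brown4=1101', 'brown6=110100', 'brown10=110100110']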
Implicit emotion detection in text
In text, emotion can be expressed explicitly, using emotion-bearing words (e.g. happy, guilty), or implicitly, without emotion-bearing words. Existing approaches focus on the detection of explicitly expressed emotion in text. However, there are various ways to express and convey emotions without the use of these emotion-bearing words. For example, given two sentences: “The outcome of my exam makes me happy” and “I passed my exam”, both sentences express happiness, with the first expressing it explicitly and the other implying it. In this thesis, we investigate implicit emotion detection in text. We propose a rule-based approach for implicit emotion detection, which can be used without labeled corpora for training. Our results show that our approach outperforms the lexicon matching method consistently and gives competitive performance in comparison to supervised classifiers. Given that emotions such as guilt and admiration often require the identification of blameworthiness and praiseworthiness, we also propose an approach for the detection of blame and praise in text, using an adapted psychology model, the Path Model of Blame. The lack of a benchmark dataset led us to construct a corpus containing comments on individuals' emotional experiences annotated as blame, praise or other. Since implicit emotion detection might be useful for conflict-of-interest (CoI) detection in Wikipedia articles, we built a CoI corpus and explored various features including linguistic and stylometric, presentation, bias and emotion features. Our results show that emotion features are important when using Naive Bayes, but the best performance is obtained with SVM on linguistic and stylometric features only. Overall, we show that a rule-based approach can be used to detect implicit emotion in the absence of labelled data; it is feasible to adopt the psychology Path Model of Blame for blame/praise detection from text, and implicit emotion detection is beneficial for CoI detection in Wikipedia articles.
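The sketch below is a generic illustration (not the thesis pipeline) of combining a linguistic feature group, here word n-grams, with a few hand-crafted stylometric counts in a linear SVM; the toy texts, labels and feature extractors are invented:

    import numpy as np
    from sklearn.base import BaseEstimator, TransformerMixin
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import FeatureUnion, make_pipeline
    from sklearn.svm import LinearSVC

    class StylometricFeatures(BaseEstimator, TransformerMixin):
        """A few crude stylometric counts: length, exclamations, capitalised words."""
        def fit(self, X, y=None):
            return self
        def transform(self, X):
            return np.array([[len(t), t.count("!"),
                              sum(w[0].isupper() for w in t.split())] for t in X],
                            dtype=float)

    features = FeatureUnion([("linguistic", TfidfVectorizer(ngram_range=(1, 2))),
                             ("stylometric", StylometricFeatures())])
    model = make_pipeline(features, LinearSVC())

    texts = ["Our groundbreaking product is simply the best!",
             "The company was founded in 1998 and is based in Berlin."]
    model.fit(texts, ["coi", "neutral"])
    print(model.predict(["This amazing firm leads the market!"]))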
Modeling Dependencies in Natural Languages with Latent Variables
In this thesis, we investigate the use of latent variables to model complex dependencies in natural languages. Traditional models, which have a fixed parameterization, often make strong independence assumptions that lead to poor performance. This problem is often addressed by incorporating additional dependencies into the model (e.g., using higher order N-grams for language modeling). These added dependencies can increase data sparsity and/or require expert knowledge, together with trial and error, in order to identify and incorporate the most important dependencies (as in lexicalized parsing models). Traditional models, when developed for a particular genre, domain, or language, are also often difficult to adapt to another.
In contrast, previous work has shown that latent variable models, which automatically learn dependencies in a data-driven way, are able to flexibly adjust the number of parameters based on the type and the amount of training data available. We have created several different types of latent variable models for a diverse set of natural language processing applications, including novel models for part-of-speech tagging, language modeling, and machine translation, and an improved model for parsing. These models perform significantly better than traditional models. We have also created and evaluated three different methods for improving the performance of latent variable models. While these methods can be applied to any of our applications, we focus our experiments on parsing.
The first method involves self-training, i.e., we train models using a combination of gold standard training data and a large amount of automatically labeled training data. We conclude from a series of experiments that the latent variable models benefit much more from self-training than conventional models, apparently due to their flexibility to adjust their model parameterization to learn more accurate models from the additional automatically labeled training data.
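Schematically, and with a stand-in scikit-learn classifier on synthetic data instead of a latent variable parser on treebanks, the self-training recipe looks as follows:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_gold, y_gold = rng.normal(size=(20, 5)), rng.integers(0, 2, size=20)
    X_unlabeled = rng.normal(size=(200, 5))

    model = LogisticRegression().fit(X_gold, y_gold)   # 1) train on gold-standard data
    pseudo = model.predict(X_unlabeled)                # 2) automatically label extra data
    X_all = np.vstack([X_gold, X_unlabeled])           # 3) retrain on the combination
    y_all = np.concatenate([y_gold, pseudo])
    model = LogisticRegression().fit(X_all, y_all)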
The second method takes advantage of the variability among latent variable models to combine multiple models for enhanced performance. We investigate several different training protocols to combine self-training with model combination. We conclude that these two techniques are complementary to each other and can be effectively combined to train very high quality parsing models.
The third method replaces the generative multinomial lexical model of latent variable grammars with a feature-rich log-linear lexical model to provide a principled solution to address data sparsity, handle out-of-vocabulary words, and exploit overlapping features during model induction. We conclude from experiments that the resulting grammars are able to effectively parse three different languages.
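A toy version of such a log-linear lexical model is sketched below; the feature templates and weights are invented and far simpler than those in the thesis, but they show how overlapping surface features can score out-of-vocabulary words:

    import numpy as np

    def lexical_features(word):
        """Illustrative feature templates: suffixes, capitalisation, digit shape."""
        return {f"suffix3={word[-3:]}", f"suffix1={word[-1:]}",
                f"cap={word[0].isupper()}", f"digit={any(c.isdigit() for c in word)}"}

    def emission_probs(tag, candidates, weights):
        """P(word | tag) proportional to exp(sum of weights of active features)."""
        scores = np.array([sum(weights.get((tag, f), 0.0) for f in lexical_features(w))
                           for w in candidates])
        exp = np.exp(scores - scores.max())
        return exp / exp.sum()

    weights = {("VERB", "suffix3=ing"): 2.0, ("NOUN", "suffix1=s"): 1.0}
    words = ["working", "workers", "Vortec"]
    print(dict(zip(words, emission_probs("VERB", words, weights))))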
This work contributes to natural language processing by creating flexible and effective latent variable models for several different languages. Our investigation of self-training, model combination, and log-linear models also provides insights into the effective application of these machine learning techniques to other disciplines.