54 research outputs found
An Arabic CCG approach for determining constituent types from Arabic Treebank
Converting a treebank into a CCGbank opens the respective language to the sophisticated tools developed for Combinatory Categorial Grammar (CCG) and enriches cross-linguistic development. The conversion is primarily a three-step process: determining constituents' types, binarization, and category conversion. Usually, this process involves a preprocessing step on the Treebank of choice for correcting brackets and normalizing tags for any changes that were introduced during the manual annotation, as well as extracting morpho-syntactic information that is necessary for determining constituents' types. In this article, we describe the required preprocessing step on the Arabic Treebank, as well as how to determine Arabic constituents' types. We conducted an experiment on parts 1 and 2 of the Penn Arabic Treebank (PATB) aimed at converting the PATB into an Arabic CCGbank. The performance of our algorithm when applied to ATB1v2.0 & ATB2v2.0 was 99% identification of head nodes and 100% coverage over the Treebank data.
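The binarization step named in this abstract can be sketched as follows. This is an illustrative right-branching strategy around a head child; the function name, the tuple representation of trees, and the peeling order are assumptions for exposition, not the authors' exact algorithm.

```python
def binarize(label, children, head_index):
    """Turn an n-ary node into a binary tree, keeping the head child
    (as determined in the first conversion step) inside the innermost node.

    children   -- list of subtrees (plain strings here, for simplicity)
    head_index -- index of the head child among `children`
    """
    if len(children) <= 2:
        return (label, *children)
    if head_index > 0:
        # Peel non-head children off the left, one at a time.
        return (label, children[0],
                binarize(label, children[1:], head_index - 1))
    # Head is leftmost: peel children off the right instead.
    return (label, binarize(label, children[:-1], head_index), children[-1])

# A three-child NP with the noun as head becomes right-branching:
binarize("NP", ["DET", "ADJ", "NOUN"], head_index=2)
# -> ("NP", "DET", ("NP", "ADJ", "NOUN"))
```

The head index matters: with the head leftmost (e.g. a verb in a VP), the same function branches to the left instead, so the head still ends up in the deepest binary node.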
Unsupervised grammar induction with Combinatory Categorial Grammars
Language is a highly structured medium for communication. An idea starts in the speaker's mind (semantics) and is transformed into a well-formed, intelligible sentence via the specific syntactic rules of a language. We aim to discover the fingerprints of this process in the choice and location of words used in the final utterance. What is unclear is how much of this latent process can be discovered from the linguistic signal alone and how much requires shared non-linguistic context, knowledge, or cues.
Unsupervised grammar induction is the task of analyzing strings in a language to discover the latent syntactic structure of the language without access to labeled training data. Successes in unsupervised grammar induction shed light on the amount of syntactic structure that is discoverable from raw or part-of-speech tagged text. In this thesis, we present a state-of-the-art grammar induction system based on Combinatory Categorial Grammars. Our choice of syntactic formalism enables the first labeled evaluation of an unsupervised system. This allows us to perform an in-depth analysis of the system's linguistic strengths and weaknesses. In order to completely eliminate reliance on any supervised systems, we also examine how performance is affected when we use induced word clusters instead of gold-standard POS tags. Finally, we perform a semantic evaluation of induced grammars, providing unique insights into future directions for unsupervised grammar induction systems.
Transition-based combinatory categorial grammar parsing for English and Hindi
Given a natural language sentence, parsing is the task of assigning it a grammatical
structure, according to the rules within a particular grammar formalism. Different
grammar formalisms like Dependency Grammar, Phrase Structure Grammar, Combinatory
Categorial Grammar, Tree Adjoining Grammar are explored in the literature for
parsing. For example, given a sentence like "John ate an apple", parsers based on the
widely used dependency grammars find grammatical relations, such as that "John" is
the subject and "apple" is the object of the action "ate". We mainly focus on Combinatory
Categorial Grammar (CCG) in this thesis.
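To make the formalism concrete: in CCG, each word carries a category, and categories combine by a small set of rules such as forward (>) and backward (<) application. The string-matching sketch below is a deliberate simplification (real CCG categories nest and need a proper parser); the function names and the flat category strings are assumptions for illustration.

```python
def forward_apply(left, right):
    """X/Y  Y  =>  X  (forward application); None if inapplicable."""
    if left.endswith("/" + right):
        return left[: -len("/" + right)].strip("()")
    return None

def backward_apply(left, right):
    """Y  X\\Y  =>  X  (backward application); None if inapplicable."""
    if right.endswith("\\" + left):
        return right[: -len("\\" + left)].strip("()")
    return None

# "John ate an apple":
#   John = NP, ate = (S\NP)/NP, an = NP/N, apple = N
np_obj = forward_apply("NP/N", "N")        # an + apple     -> "NP"
vp = forward_apply("(S\\NP)/NP", np_obj)   # ate + [an apple] -> "S\\NP"
s = backward_apply("NP", vp)               # John + [ate an apple] -> "S"
```

Deriving `S` for the whole sentence simultaneously recovers the dependencies the abstract mentions: the NP consumed by the backslash argument is the subject, and the one consumed by the forward slash is the object.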
In this thesis, we present an incremental algorithm for parsing CCG for two diverse
languages: English and Hindi. English is a fixed-word-order, SVO (Subject-Verb-Object),
morphologically simple language, whereas Hindi, though predominantly
an SOV (Subject-Object-Verb) language, has free word order and is morphologically
rich. Developing an incremental parser for Hindi is particularly challenging since the
predicate needed to resolve dependencies comes at the end. As previously available
shift-reduce CCG parsers use English CCGbank derivations which are mostly right
branching and non-incremental, we design our algorithm based on the dependencies
resolved rather than the derivation. Our novel algorithm builds a dependency graph in
parallel to the CCG derivation which is used for revealing the unbuilt structure without
backtracking. Though we use dependencies for meaning representation and CCG for
parsing, our revealing technique can be applied to other meaning representations like
lambda expressions and for non-CCG parsing like phrase structure parsing.
Any statistical parser requires three major modules: data, parsing algorithm and
learning algorithm. This thesis is broadly divided into three parts, each dealing with
one major module of the statistical parser. In Part I, we design a novel algorithm
for converting a dependency treebank to a CCGbank. We create a Hindi CCGbank with
96% coverage using this algorithm. We also perform a cross-formalism experiment
where we show that CCG supertags can improve widely used dependency parsers.
We experiment with two popular dependency parsers (Malt and MST) for two diverse
languages: English and Hindi. For both languages, CCG categories improve the overall
accuracy of both parsers by around 0.3-0.5% in all experiments. For both parsers,
we see larger improvements specifically on dependencies at which they are known
to be weak: long distance dependencies for Malt, and verbal arguments for MST.
The result is particularly interesting in the case of the fast greedy parser (Malt), since
improving its accuracy without significantly compromising speed is relevant for large
scale applications such as parsing the web.
We present a novel algorithm for incremental transition-based CCG parsing for
English and Hindi, in Part II. Incremental parsers have potential advantages for applications
like language modeling for machine translation and speech recognition. We
introduce two new actions in the shift-reduce paradigm for revealing the required information
during parsing. We also analyze the impact of a beam and look-ahead for
parsing. In general, using a beam and/or look-ahead gives better results than not using
them. We also show that the incremental CCG parser is more useful than a non-incremental
version for predicting relative sentence complexity. Given a pair of sentences
from Wikipedia and Simple Wikipedia, we build a classifier which predicts whether one
sentence is simpler or more complex than the other. We show that features from a CCG parser
in general, and an incremental CCG parser in particular, are more useful than those from a
chart-based phrase structure parser, both in terms of speed and accuracy.
In Part III, we develop the first neural network based training algorithm for parsing
CCG. We also study the impact of neural network based tagging models, and greedy
versus beam-search parsing, by using a structured neural network model. In greedy
settings, neural network models give significantly better results than the perceptron
models and are also over three times faster. Using a narrow beam, the structured neural
network model gives consistently better results than the basic neural network model.
For English, the structured neural network gives performance similar to the structured
perceptron parser, but for Hindi, the structured perceptron is still the winner.
CCG Parsing and Multiword Expressions
This thesis presents a study about the integration of information about
Multiword Expressions (MWEs) into parsing with Combinatory Categorial Grammar
(CCG). We build on previous work which has shown the benefit of adding
information about MWEs to syntactic parsing by implementing a similar pipeline
with CCG parsing. More specifically, we collapse MWEs to one token in training
and test data in CCGbank, a corpus which contains sentences annotated with CCG
derivations. Our collapsing algorithm, however, can only deal with MWEs when they
form a constituent in the data, which is one of the limitations of our approach.
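The collapsing step described here can be sketched as below. The representation of recognizer output as `(start, end)` index spans and the underscore-joined merged token are assumptions about the pipeline, not the thesis's exact implementation; the constituency requirement mentioned above is taken as given (each span is a constituent).

```python
def collapse_mwes(tokens, mwe_spans):
    """Merge each recognized MWE span into a single token.

    tokens    -- list of word strings
    mwe_spans -- list of (start, end) index pairs, end exclusive;
                 spans are assumed non-overlapping constituents
    """
    span_end = {start: end for start, end in mwe_spans}
    out, i = [], 0
    while i < len(tokens):
        if i in span_end:
            # Collapse the whole span to one underscore-joined token.
            out.append("_".join(tokens[i:span_end[i]]))
            i = span_end[i]
        else:
            out.append(tokens[i])
            i += 1
    return out

collapse_mwes(["He", "kicked", "the", "bucket"], [(1, 4)])
# -> ["He", "kicked_the_bucket"]
```

Applying the same function to both training and test sentences of CCGbank is what separates the "training effect" and "parsing effect" the abstract goes on to measure.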
We study the effect of collapsing training and test data. A parsing effect
can be obtained if collapsed data help the parser in its decisions and a
training effect can be obtained if training on the collapsed data improves
results. We also collapse the gold standard and show that our model
significantly outperforms the baseline model on our gold standard, which
indicates that there is a training effect. We show that the baseline model
performs significantly better on our gold standard when the data are collapsed
before parsing than when the data are collapsed after parsing, which indicates
that there is a parsing effect. We show that these results can lead to improved
performance on the non-collapsed standard benchmark, although we fail to show
that the improvement is significant. We conclude that, despite the limited settings,
there are noticeable improvements from using MWEs in parsing. We discuss ways
in which the incorporation of MWEs into parsing can be improved and hypothesize
that this will lead to more substantial results.
We finally show that turning the MWE recognition part of the pipeline into an
experimental part is useful, as we obtain different results with
different recognizers. Comment: MSc thesis, The University of Edinburgh, 2014, School of Informatics,
MSc Artificial Intelligence
Evaluating Parsers with Dependency Constraints
Many syntactic parsers now score over 90% on English in-domain evaluation, but the remaining errors have been challenging to address and difficult to quantify. Standard parsing metrics provide a consistent basis for comparison between parsers, but do not illuminate what errors remain to be addressed. This thesis develops a constraint-based evaluation for dependency and Combinatory Categorial Grammar (CCG) parsers to address this deficiency. We examine the constrained and cascading impact, representing the direct and indirect effects of errors on parsing accuracy. This identifies errors that are the underlying source of problems in parses, compared to those which are a consequence of those problems. Kummerfeld et al. (2012) propose a static post-parsing analysis to categorise groups of errors into abstract classes, but this cannot account for cascading changes resulting from repairing errors, or limitations which may prevent the parser from applying a repair. In contrast, our technique is based on enforcing the presence of certain dependencies during parsing, whilst allowing the parser to choose the remainder of the analysis according to its grammar and model. We draw constraints for this process from gold-standard annotated corpora, grouping them into abstract error classes such as NP attachment, PP attachment, and clause attachment. By applying constraints from each error class in turn, we can examine how parsers respond when forced to correctly analyse each class. We show how to apply dependency constraints in three parsers: the graph-based MSTParser (McDonald and Pereira, 2006) and the transition-based ZPar (Zhang and Clark, 2011b) dependency parsers, and the C&C CCG parser (Clark and Curran, 2007b). Each is widely-used and influential in the field, and each generates some form of predicate-argument dependencies. We compare the parsers, identifying common sources of error, and differences in the distribution of errors between constrained and cascaded impact. 
Our work allows us to contrast the implementations of each parser, and how they respond to constraint application. Using our analysis, we experiment with new features for dependency parsing, which encode the frequency of proposed arcs in large-scale corpora derived from scanned books. These features are inspired by and extend the work of Bansal and Klein (2011). We target these features at the most notable errors, and show how they address some, but not all, of the difficult attachments across newswire and web text. CCG parsing is particularly challenging, as different derivations do not always generate different dependencies. We develop dependency hashing to address semantically redundant parses in n-best CCG parsing, and demonstrate its necessity and effectiveness. Dependency hashing substantially improves the diversity of n-best CCG parses, and improves a CCG reranker when used for creating training and test data. We show the intricacies of applying constraints to C&C, and describe instances where applying constraints causes the parser to produce a worse analysis. These results illustrate how algorithms which are relatively straightforward for constituency and dependency parsers are non-trivial to implement in CCG. This work has explored dependencies as constraints in dependency and CCG parsing. We have shown how dependency hashing can efficiently eliminate semantically redundant CCG n-best parses, and presented a new evaluation framework based on enforcing the presence of dependencies in the output of the parser. By otherwise allowing the parser to proceed as it would have, we avoid the assumptions inherent in other work. We hope this work will provide insights into the remaining errors in parsing, and target efforts to address those errors, creating better syntactic analysis for downstream applications.
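The dependency-hashing idea in this abstract admits a compact sketch: derivations whose predicate-argument dependency sets hash identically are treated as duplicates in the n-best list. The `(head, relation, dependent)` triple format and the function names are assumptions, not the C&C parser's internals.

```python
def dep_hash(dependencies):
    """Order-independent hash of a set of (head, relation, dependent) triples."""
    return hash(frozenset(dependencies))

def filter_redundant(nbest):
    """Keep only the first derivation per distinct dependency set.

    nbest -- list of (derivation, dependencies) pairs, best-first
    """
    seen, kept = set(), []
    for derivation, deps in nbest:
        h = dep_hash(deps)
        if h not in seen:
            seen.add(h)
            kept.append((derivation, deps))
    return kept

nbest = [
    ("deriv_a", [("ate", "nsubj", "John"), ("ate", "dobj", "apple")]),
    ("deriv_b", [("ate", "dobj", "apple"), ("ate", "nsubj", "John")]),  # same deps
    ("deriv_c", [("saw", "nsubj", "John"), ("saw", "dobj", "apple")]),
]
filter_redundant(nbest)  # drops deriv_b, keeping deriv_a and deriv_c
```

Because the hash is computed over a frozenset, derivational differences that permute the same dependencies (the "semantically redundant" parses of the abstract) collapse to one entry, which is what restores diversity to the n-best list.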
Semi-supervised lexical acquisition for wide-coverage parsing
State-of-the-art parsers suffer from incomplete lexicons, as evidenced by the fact
that they all contain built-in methods for dealing with out-of-lexicon items at parse
time. Since new labelled data is expensive to produce and no amount of it will conquer
the long tail, we attempt to address this problem by leveraging the enormous amount of
raw text available for free, and expanding the lexicon offline, with a semi-supervised
word learner. We accomplish this with a method similar to self-training, where a fully
trained parser is used to generate new parses with which the next generation of parser
is trained.
This thesis introduces Chart Inference (CI), a two-phase word-learning method
with Combinatory Categorial Grammar (CCG), operating on the level of the partial
parse as produced by a trained parser. CI uses the parsing model and lexicon to identify
the CCG category type for one unknown word in a context of known words by inferring
the type of the sentence using a model of end punctuation, then traversing the chart
from the top down, filling in each empty cell as a function of its mother and its sister.
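The top-down cell-filling step just described has a simple core: if the mother's category and one daughter's category are known, CCG application determines the other daughter. The sketch below shows only that inference for flat category strings (a real implementation must parenthesize nested results); the function name and interface are assumptions.

```python
def infer_unknown(mother, sister, sister_side):
    """Infer the category of an unknown daughter from its mother and sister.

    sister_side -- 'left' if the known sister precedes the unknown word,
                   'right' if it follows it
    """
    if sister_side == "left":
        # sister + unknown => mother, so unknown = mother\sister
        # (backward application consumes the sister on the left)
        return mother + "\\" + sister
    # unknown + sister => mother, so unknown = mother/sister
    # (forward application consumes the sister on the right)
    return mother + "/" + sister

# "John ___": mother S, left sister NP  => the unknown verb needs S\NP
infer_unknown("S", "NP", "left")    # -> "S\\NP"
# "___ dog": mother NP, right sister N => the unknown determiner needs NP/N
infer_unknown("NP", "N", "right")   # -> "NP/N"
```

Iterating this from the sentence-level cell downward, as the abstract describes, eventually assigns a candidate category to the one unknown word at a leaf.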
We first specify the CI algorithm, and then compare it to two baseline word-learning
systems over a battery of learning tasks. CI is shown to outperform the
baselines in every task, and to function in a number of applications, including grammar
acquisition and domain adaptation. This method performs consistently better than
self-training, and improves upon the standard POS-backoff strategy employed by the
baseline StatCCG parser by adding new entries to the lexicon.
The first learning task establishes lexical convergence over a toy corpus, showing
that CI's ability to accurately model a target lexicon is more robust to initial conditions
than either of the baseline methods. We then introduce a novel natural language corpus
based on children's educational materials, which is fully annotated with CCG derivations.
We use this corpus as a testbed to establish that CI is capable in principle of
recovering the whole range of category types necessary for a wide-coverage lexicon.
The complexity of the learning task is then increased using the CCGbank corpus,
a CCG version of the Penn Treebank, showing that CI improves as its initial seed corpus
grows. The next experiment uses CCGbank as the seed and attempts to recover
missing question-type categories in the TREC question answering corpus. The final
task extends the coverage of the CCGbank-trained parser by running CI over the raw
text of the Gigaword corpus. Where appropriate, a fine-grained error analysis is also
undertaken to supplement the quantitative evaluation of parser performance with
deeper reasoning about the linguistic properties of the lexicon and parsing model.
Trustworthy Formal Natural Language Specifications
Interactive proof assistants are computer programs carefully constructed to
check a human-designed proof of a mathematical claim with high confidence in
the implementation. However, this only validates truth of a formal claim, which
may have been mistranslated from a claim made in natural language. This is
especially problematic when using proof assistants to formally verify the
correctness of software with respect to a natural language specification. The
translation from informal to formal remains a challenging, time-consuming
process that is difficult to audit for correctness.
This paper shows that it is possible to build support for specifications
written in expressive subsets of natural language, within existing proof
assistants, consistent with the principles used to establish trust and
auditability in proof assistants themselves. We implement a means to provide
specifications in a modularly extensible formal subset of English, and have
them automatically translated into formal claims, entirely within the Lean
proof assistant. Our approach is extensible (placing no permanent restrictions
on grammatical structure), modular (allowing information about new words to be
distributed alongside libraries), and produces proof certificates explaining
how each word was interpreted and how the sentence's structure was used to
compute the meaning.
We apply our prototype to the translation of various English descriptions of
formal specifications from a popular textbook into Lean formalizations; all can
be translated correctly with a modest lexicon with only minor modifications
related to lexicon size. Comment: arXiv admin note: substantial text overlap with arXiv:2205.0781
Wide-coverage parsing for Turkish
Wide-coverage parsing is an area that attracts much attention in natural language processing
research. This is because it is the first step to many other applications
in natural language understanding, such as question answering.
Supervised learning using human-labelled data is currently the best performing
method. Therefore, there is great demand for annotated data. However, human annotation
is very expensive, and the amount of annotated data is always much less than
is needed to train well-performing parsers. This is the motivation behind making the
best use of the data available. Turkish presents a challenge both because syntactically
annotated Turkish data is relatively scarce and because Turkish is highly agglutinative, hence
unusually sparse at the whole-word level.
METU-Sabancı Treebank is a dependency treebank of 5620 sentences with surface
dependency relations and morphological analyses for words. We show that including
even the crudest forms of morphological information extracted from the data boosts
the performance of both generative and discriminative parsers, contrary to received
opinion concerning English.
We induce word-based and morpheme-based CCG grammars from the Turkish dependency
treebank. We use these grammars to train a state-of-the-art CCG parser that
predicts long-distance dependencies in addition to the ones that other parsers are capable
of predicting. We also use the correct CCG categories as simple features in a
graph-based dependency parser and show that this improves the parsing results.
We show that a morpheme-based CCG lexicon for Turkish is able to solve many
problems such as conflicts of semantic scope, recovering long-range dependencies,
and obtaining smoother statistics from the models. CCG handles linguistic phenomena
such as local and long-range dependencies more naturally and effectively than other linguistic
theories, while potentially supporting semantic interpretation in parallel. Using
morphological information and a morpheme-cluster-based lexicon improves performance
both quantitatively and qualitatively for Turkish.
We also provide an improved version of the treebank, which will be released by
kind permission of METU and Sabancı.
- …