Capturing translational divergences with a statistical tree-to-tree aligner
Parallel treebanks, which comprise paired source-target parse trees aligned at sub-sentential level, could be useful
for many applications, particularly data-driven machine translation. In this paper, we focus on how translational
divergences are captured within a parallel treebank using a fully automatic statistical tree-to-tree aligner. We
observe that while the algorithm performs well at the phrase level, performance on lexical-level alignments
is compromised by an inappropriate bias towards precision rather than coverage. This preference for high precision
rather than broad coverage when expressing translational divergences through tree alignment stands in
direct opposition to the situation for SMT word-alignment models. We suggest that this has implications not only
for tree alignment itself but also for the broader area of inducing syntax-aware models for SMT.
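The precision-versus-coverage trade-off described above can be made concrete by scoring a set of predicted node alignments against a gold standard. The following toy sketch (the node identifiers and alignment sets are invented for illustration, not drawn from the paper) shows how a high-precision, low-coverage aligner looks under these metrics:

```python
def precision_recall(predicted, gold):
    """Precision and recall of predicted node alignments against gold
    alignments, each a set of (source-node, target-node) id pairs."""
    tp = len(predicted & gold)  # alignments both sets agree on
    return tp / len(predicted), tp / len(gold)

# Toy data: the aligner proposes few links, but all of them are correct.
pred = {("s1", "t1"), ("s2", "t3")}
gold = {("s1", "t1"), ("s2", "t3"), ("s4", "t2"), ("s5", "t5")}
p, r = precision_recall(pred, gold)
print(p, r)  # perfect precision, but only half the gold links covered
```

Here precision is 1.0 while recall (coverage) is 0.5: the profile the abstract attributes to the tree-aligner, and the opposite of what SMT word-alignment models typically aim for.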
Active learning and the Irish treebank
We report on our ongoing work in developing the Irish Dependency Treebank, describe the results of two Inter-Annotator Agreement (IAA) studies, demonstrate improvements in annotation consistency which have a knock-on effect on parsing accuracy, and present the final set of dependency labels. We then go on to investigate the extent to which active learning can play a role in treebank and parser development by comparing an active learning bootstrapping approach to a passive approach in which sentences are chosen at random for manual revision. We show that active learning outperforms passive learning, but when annotation effort is taken into account, it is not clear how much of an advantage the active learning approach has. Finally, we present results which suggest that adding automatic parses to the training data along with manually revised parses in an active learning setup does not greatly affect parsing accuracy.
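The active-versus-passive comparison above boils down to how sentences are selected for manual revision. A minimal sketch, assuming an uncertainty-based selection criterion (the pool data and scoring function are hypothetical, not the paper's actual setup):

```python
import random

def select_passive(pool, k):
    """Passive baseline: pick k sentences uniformly at random."""
    return random.sample(pool, k)

def select_active(pool, k, uncertainty):
    """Active learning: pick the k sentences the current parser is
    least confident about, so annotation effort targets hard cases."""
    return sorted(pool, key=uncertainty, reverse=True)[:k]

# Toy pool of (sentence_id, parser_confidence) pairs.
pool = [("s1", 0.95), ("s2", 0.40), ("s3", 0.70), ("s4", 0.20)]
chosen = select_active(pool, 2, uncertainty=lambda s: 1.0 - s[1])
print([sid for sid, _ in chosen])  # the two lowest-confidence sentences
```

The abstract's caveat is that the hard cases selected this way also cost more annotation effort to revise, which can erode the apparent advantage over random selection.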
Irish treebanking and parsing: a preliminary evaluation
Language resources are essential for linguistic research and the development of NLP applications. Low-density languages, such as Irish, therefore lack significant research in this area. This paper describes the early stages in the development of new language resources for Irish – namely the first Irish dependency treebank and the first Irish statistical dependency parser. We present the methodology behind building our new treebank and the steps we take to leverage the few existing resources. We discuss language-specific choices made when defining our dependency labelling scheme, and describe interesting Irish language characteristics such as prepositional attachment, copula and clefting. We manually develop a small treebank of 300 sentences based on an existing POS-tagged corpus and report an inter-annotator agreement of 0.7902. We train MaltParser to achieve preliminary parsing results for Irish and describe a bootstrapping approach for further stages of development.
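An inter-annotator agreement figure like the 0.7902 reported above can be computed, in its simplest form, as the fraction of tokens on which two annotators assign the same head and dependency label. The sketch below is illustrative only (the toy annotations and labels are invented, and the paper may use a different agreement measure):

```python
def attachment_agreement(ann_a, ann_b):
    """Fraction of tokens for which two annotators assign the same
    (head index, dependency label) pair -- a simple labelled
    attachment-style agreement measure."""
    assert len(ann_a) == len(ann_b)
    matches = sum(1 for a, b in zip(ann_a, ann_b) if a == b)
    return matches / len(ann_a)

# Each annotation: one (head index, label) pair per token.
a = [(2, "subj"), (0, "top"), (2, "obj"), (3, "padjunct")]
b = [(2, "subj"), (0, "top"), (3, "obj"), (3, "padjunct")]
print(round(attachment_agreement(a, b), 4))  # 0.75: one token disagrees
```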
A Maximum-Entropy Partial Parser for Unrestricted Text
This paper describes a partial parser that assigns syntactic structures to
sequences of part-of-speech tags. The program uses the maximum entropy
parameter estimation method, which allows a flexible combination of different
knowledge sources: the hierarchical structure, parts of speech and phrasal
categories. In effect, the parser goes beyond simple bracketing and recognises
even fairly complex structures. We give accuracy figures for different
applications of the parser.
Comment: 9 pages, LaTeX
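The "flexible combination of different knowledge sources" in a maximum-entropy model comes from summing feature weights inside an exponential and normalising. A minimal sketch of that conditional model (the feature names, weights, and labels below are made up for illustration; they are not from the paper):

```python
import math

def maxent_prob(features, weights, labels):
    """Conditional maximum-entropy model: P(label | context) is
    proportional to exp(sum of the weights of the active features),
    where features are (feature, label) indicator pairs."""
    scores = {y: math.exp(sum(weights.get((f, y), 0.0) for f in features))
              for y in labels}
    z = sum(scores.values())  # normalising constant
    return {y: s / z for y, s in scores.items()}

# Hypothetical weights combining POS and phrasal-category evidence.
weights = {("prev=DT", "NP-start"): 1.2,
           ("tag=NN", "NP-start"): 0.8,
           ("tag=NN", "NP-cont"): 0.3}
probs = maxent_prob(["prev=DT", "tag=NN"], weights, ["NP-start", "NP-cont"])
print(max(probs, key=probs.get))  # the two features jointly favour NP-start
```

Because evidence from each knowledge source enters only through additive feature weights, new sources (hierarchical structure, phrasal categories) can be added without redesigning the model.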
Chunk Tagger - Statistical Recognition of Noun Phrases
We describe a stochastic approach to partial parsing, i.e., the recognition
of syntactic structures of limited depth. The technique utilises Markov Models,
but goes beyond usual bracketing approaches, since it is capable of recognising
not only the boundaries, but also the internal structure and syntactic category
of simple as well as complex NP's, PP's, AP's and adverbials. We compare
tagging accuracy for different applications and encoding schemes.Comment: 7 pages, LaTe
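A Markov-model tagger of this kind is typically decoded with the Viterbi algorithm over structural tags. The following is a toy sketch with invented probabilities (a two-state begin/continue encoding, not the paper's richer tag set):

```python
def viterbi(obs, states, start, trans, emit):
    """Standard Viterbi decoding: the most probable tag sequence for
    an observation sequence under a first-order Markov model."""
    v = [{s: start[s] * emit[s].get(obs[0], 0.0) for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev, p = max(((r, v[-1][r] * trans[r][s]) for r in states),
                          key=lambda x: x[1])
            col[s], ptr[s] = p * emit[s].get(o, 0.0), prev
        v.append(col)
        back.append(ptr)
    last = max(v[-1], key=v[-1].get)
    path = [last]
    for ptr in reversed(back):       # follow backpointers
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Toy model: "B" = phrase-begin, "I" = phrase-internal.
states = ["B", "I"]
start = {"B": 0.7, "I": 0.3}
trans = {"B": {"B": 0.3, "I": 0.7}, "I": {"B": 0.4, "I": 0.6}}
emit = {"B": {"DT": 0.6, "NN": 0.2}, "I": {"DT": 0.1, "NN": 0.7}}
print(viterbi(["DT", "NN"], states, start, trans, emit))  # ['B', 'I']
```

Recovering internal structure and category, as the abstract describes, amounts to enriching the tag inventory beyond plain begin/inside bracketing, while the decoding machinery stays the same.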
Statistical mechanics of ontology based annotations
We present a statistical mechanical theory of the process of annotating an
object with terms selected from an ontology. The term selection process is
formulated as an ideal lattice gas model, but in a highly structured
inhomogeneous field. The model enables us to explain patterns recently observed
in real-world annotation data sets, in terms of the underlying graph structure
of the ontology. By relating the external field strengths to the information
content of each node in the ontology graph, the statistical mechanical model
also allows us to propose a number of practical metrics for assessing the
quality of both the ontology and the annotations that arise from its use.
Using the statistical mechanical formalism we also study an ensemble of
ontologies of differing size and complexity; an analysis not readily performed
using real data alone. Focusing on regular tree ontology graphs we uncover a
rich set of scaling laws describing the growth in the optimal ontology size as
the number of objects being annotated increases. In doing so we provide a
further possible measure for assessment of ontologies.
Comment: 27 pages, 5 figures
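The information content of an ontology node, which the abstract relates to the external field strengths of the lattice-gas model, is conventionally defined as IC(t) = -log p(t), where p(t) is the fraction of annotations that use term t (or any of its descendants). A toy sketch with invented counts:

```python
import math

def information_content(counts, total):
    """Information content of each ontology term: IC(t) = -log p(t),
    with p(t) the fraction of annotations covered by term t.
    Rarer, more specific terms carry more information."""
    return {t: -math.log(c / total) for t, c in counts.items()}

# Toy annotation counts for a 3-node tree; the root covers everything.
counts = {"root": 100, "childA": 60, "childB": 40}
ic = information_content(counts, total=100)
print(round(ic["root"], 3), round(ic["childA"], 3))
```

The root, covering every annotation, has zero information content, while less frequently used children score higher; this is the kind of node weighting the statistical mechanical model maps onto field strengths.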