657 research outputs found
A syntactified direct translation model with linear-time decoding
Recent syntactic extensions of statistical translation models work with a synchronous context-free or tree-substitution grammar extracted from an automatically parsed parallel corpus. The decoders accompanying these extensions typically exceed quadratic time complexity. This paper extends the Direct Translation Model 2 (DTM2) with syntax while maintaining linear-time decoding. We employ a linear-time parsing algorithm based on an eager, incremental interpretation of Combinatory Categorial Grammar
(CCG). As every input word is processed, the local parsing decisions resolve ambiguity eagerly, by selecting a single
supertag-operator pair for extending the dependency parse incrementally. Alongside translation features extracted from
the derived parse tree, we explore syntactic features extracted from the incremental derivation process. Our empirical experiments show that our model significantly
outperforms the state-of-the-art DTM2 system
Supertagged phrase-based statistical machine translation
Until quite recently, extending Phrase-based Statistical Machine Translation (PBSMT) with syntactic structure caused system performance to deteriorate. In this work we show that incorporating lexical syntactic descriptions in the form of supertags can yield significantly better PBSMT systems. We describe a novel PBSMT model that integrates
supertags into the target language model and the target side of the translation model. Two kinds of supertags are employed: those from Lexicalized Tree-Adjoining Grammar
and Combinatory Categorial Grammar. Despite the differences between these two approaches, the supertaggers give similar improvements. In addition to supertagging, we also explore the utility of a surface global grammaticality measure based on combinatory operators. We perform various experiments on the Arabic-to-English NIST 2005 test set addressing issues such as sparseness, scalability and the utility of system subcomponents. Our best result (0.4688 BLEU) improves by 6.1% relative to a state-of-the-art PBSMT model, which compares very favourably with the leading systems on the NIST 2005 task
Efficient Normal-Form Parsing for Combinatory Categorial Grammar
Under categorial grammars that have powerful rules like composition, a simple
n-word sentence can have exponentially many parses. Generating all parses is
inefficient and obscures whatever true semantic ambiguities are in the input.
This paper addresses the problem for a fairly general form of Combinatory
Categorial Grammar, by means of an efficient, correct, and easy to implement
normal-form parsing technique. The parser is proved to find exactly one parse
in each semantic equivalence class of allowable parses; that is, spurious
ambiguity (as carefully defined) is shown to be both safely and completely
eliminated.
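To make the spurious ambiguity concrete: once forward composition is available, even a three-category sequence admits two derivations with identical semantics. The toy enumerator below is an illustration only (categories as plain strings, two simplified rules), not the normal-form parser the abstract describes:

```python
# Forward application (>): X/Y + Y -> X
def fapply(left, right):
    return left[: -(len(right) + 1)] if left.endswith("/" + right) else None

# Forward composition (>B): X/Y + Y/Z -> X/Z
def fcompose(left, right):
    if "/" in right:
        y, z = right.split("/", 1)
        x = fapply(left, y)
        if x is not None:
            return x + "/" + z
    return None

def parses(cats):
    # enumerate every binary derivation over the category sequence
    if len(cats) == 1:
        yield cats[0]
        return
    for i in range(1, len(cats)):
        for left in parses(cats[:i]):
            for right in parses(cats[i:]):
                for rule in (fapply, fcompose):
                    result = rule(left, right)
                    if result is not None:
                        yield result

# "A/B B/C C" derives A both as A/B (B/C C) and as (A/B >B B/C) C:
print(sum(1 for p in parses(["A/B", "B/C", "C"]) if p == "A"))  # → 2
```

A normal-form parser would return exactly one representative of this equivalence class; here the count grows exponentially with longer composition chains.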
A syntactic language model based on incremental CCG parsing
Syntactically-enriched language models (parsers) constitute a promising component in applications such as machine translation and speech recognition. To maintain a useful level of accuracy, existing parsers are non-incremental and must span a combinatorially growing space of possible structures as every input word is processed. This prohibits their incorporation into standard linear-time decoders. In this paper, we present an incremental, linear-time dependency parser based on Combinatory Categorial Grammar (CCG) and classification techniques. We devise a deterministic transform of CCGbank canonical derivations into incremental ones, and train our parser on this data. We discover that a cascaded, incremental version provides an appealing balance between efficiency and accuracy
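The eager, single-analysis strategy this abstract describes can be sketched in a few lines. This is a toy illustration under simplified assumptions (string-matched categories with parentheses omitted, greedy reduction standing in for the trained classifier), not the authors' parser:

```python
# Toy sketch of eager incremental CCG parsing: one supertag decision per
# word plus greedy combination, so decoding stays linear in sentence length.
# Categories are plain strings with parentheses omitted, so "S\NP/NP"
# stands in for the transitive-verb category (S\NP)/NP.

def fapply(left, right):
    # forward application: X/Y + Y -> X
    return left[: -(len(right) + 1)] if left.endswith("/" + right) else None

def bapply(left, right):
    # backward application: Y + X\Y -> X
    return right[: -(len(left) + 1)] if right.endswith("\\" + left) else None

def reduce_eagerly(stack):
    # resolve ambiguity eagerly: combine the top two categories whenever
    # a rule applies, keeping a single partial analysis at all times
    while len(stack) >= 2:
        combined = fapply(stack[-2], stack[-1]) or bapply(stack[-2], stack[-1])
        if combined is None:
            break
        stack[-2:] = [combined]
    return stack

def parse_incremental(tagged_words):
    stack = []
    for word, supertag in tagged_words:
        stack.append(supertag)   # the single supertag chosen for this word
        reduce_eagerly(stack)
    return stack

print(parse_incremental([("John", "NP"), ("likes", "S\\NP/NP"), ("Mary", "NP")]))
# → ['S']
```

Because each word triggers at most a constant amount of reduction work on average, the whole sentence is processed in a single left-to-right pass.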
Learning the Semantics of Manipulation Action
In this paper we present a formal computational framework for modeling
manipulation actions. The introduced formalism leads to semantics of
manipulation action and has applications to both observing and understanding
human manipulation actions as well as executing them with a robotic mechanism
(e.g. a humanoid robot). It is based on a Combinatory Categorial Grammar. The
goal of the introduced framework is to: (1) represent manipulation actions with
both syntax and semantic parts, where the semantic part employs
λ-calculus; (2) enable a probabilistic semantic parsing schema to learn
the λ-calculus representation of manipulation action from an annotated
action corpus of videos; (3) use (1) and (2) to develop a system that visually
observes manipulation actions and understands their meaning while it can reason
beyond observations using propositional logic and axiom schemata. The
experiments conducted on a publicly available large manipulation action dataset
validate the theoretical framework and our implementation
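The pairing of syntactic categories with λ-calculus semantics that this abstract describes can be mimicked directly with closures. A minimal sketch; the predicate and argument names (grasp, knife, lefthand) are invented for illustration and do not come from the paper's action corpus:

```python
# λ-calculus semantics for a manipulation action, modelled with Python
# closures: the action word "grasp" carries the meaning λo.λh.grasp(h, o),
# and each CCG combinatory step corresponds to a function application.
grasp = lambda obj: lambda hand: ("grasp", hand, obj)

# apply the action to its object, then to the acting effector:
action = grasp("knife")("lefthand")
print(action)  # → ('grasp', 'lefthand', 'knife')
```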
Disambiguation of Super Parts of Speech (or Supertags): Almost Parsing
In a lexicalized grammar formalism such as Lexicalized Tree-Adjoining Grammar
(LTAG), each lexical item is associated with at least one elementary structure
(supertag) that localizes syntactic and semantic dependencies. Thus a parser
for a lexicalized grammar must search a large set of supertags to choose the
right ones to combine for the parse of the sentence. We present techniques for
disambiguating supertags using local information such as lexical preference and
local lexical dependencies. The similarity between LTAG and Dependency grammars
is exploited in the dependency model of supertag disambiguation. The
performance results for various models of supertag disambiguation such as
unigram, trigram and dependency-based models are presented.
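The n-gram supertag disambiguation models mentioned above can be sketched as a standard Viterbi search over the candidate supertags each word admits. The sketch below reduces the trigram model to a bigram one for brevity, and every lexical entry and probability is invented:

```python
import math

# Each word admits several candidate supertags (log-probabilities invented).
LEXICON = {
    "the":   {"NP/N": 0.0},
    "flies": {"N": math.log(0.5), "S\\NP": math.log(0.5)},
}
# bigram transition log-probabilities between adjacent supertags
TRANS = {
    ("NP/N", "N"):     math.log(0.9),
    ("NP/N", "S\\NP"): math.log(0.1),
}

def viterbi(words):
    # best[tag] = (score, sequence) over prefixes ending in that supertag
    best = {tag: (s, [tag]) for tag, s in LEXICON[words[0]].items()}
    for word in words[1:]:
        new = {}
        for tag, emit in LEXICON[word].items():
            new[tag] = max(
                (score + TRANS.get((prev, tag), math.log(1e-6)) + emit,
                 seq + [tag])
                for prev, (score, seq) in best.items()
            )
        best = new
    return max(best.values())[1]

# local context resolves "flies" to its noun supertag after a determiner:
print(viterbi(["the", "flies"]))  # → ['NP/N', 'N']
```

Because most of the ambiguity is resolved by this local search, the downstream combinatory step is left with far fewer supertags to combine, which is the "almost parsing" effect the title refers to.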
Treebank-based acquisition of wide-coverage, probabilistic LFG resources: project overview, results and evaluation
This paper presents an overview of a project to acquire wide-coverage, probabilistic Lexical-Functional Grammar
(LFG) resources from treebanks. Our approach is based on an automatic annotation algorithm that annotates "raw" treebank trees with LFG f-structure information approximating to basic predicate-argument/dependency structure. From the f-structure-annotated treebank
we extract probabilistic unification grammar resources. We present the annotation algorithm, the extraction of
lexical information and the acquisition of wide-coverage and robust PCFG-based LFG approximations including
long-distance dependency resolution.
We show how the methodology can be applied to multilingual, treebank-based unification grammar acquisition. Finally
we show how simple (quasi-)logical forms can be derived automatically from the f-structures generated for the treebank trees
- …