Negation in Modern Standard Arabic: An LFG approach
Modern Standard Arabic (MSA) uses five different particles to express sentential negation: the invariant particle maa; the particle laa and its tensed counterparts lam (PAST) and lan (FUT); and laysa, which is marked only for SUBJ agreement. Partial analyses of these elements have been offered in other frameworks, notably Minimalism (Shlonsky, 1997; Benmamoun, 2000), but they have not to date received an analysis within LFG. We propose an approach to four of these particles; the fifth, maa, raises a number of additional issues, and we leave it to one side for reasons of space. The particles laa, lam, and lan show distinctions of TENSE, occur only with imperfective forms of the verb (excluding the perfective), and must immediately precede the verb itself. They are limited to occurrence in verbal sentences. We propose that the adjacency requirement follows from the fact that these negative particles are non-projecting words adjoined to the (imperfective) V. By contrast, laysa is a fully verbal element, and is thus a negative verb, occurring only with a present tense interpretation.
Solving headswitching translation cases in LFG-DOT
It has been shown that LFG-MT (Kaplan et al., 1989) has difficulties with headswitching data (Sadler et al., 1989, 1990; Sadler & Thompson, 1991). We revisit these arguments in this paper. Despite attempts at solving these problematic constructions using approaches based on linear logic (Van Genabith et al., 1998) and restriction (Kaplan & Wedekind, 1993), we point out further problems that these approaches introduce.
We then show how LFG-DOP (Bod & Kaplan, 1998) can be extended to serve as a novel hybrid model for MT, LFG-DOT (Way, 1999, 2001), which promises to improve upon the DOT model of translation (Poutsma, 1998, 2000) as well as LFG-MT. LFG-DOT improves the robustness of LFG-MT through the use of the LFG-DOP Discard operator, which produces generalized fragments by discarding certain f-structure features. LFG-DOT can, therefore, deal with ill-formed or previously unseen input where LFG-MT cannot. Finally, we demonstrate that LFG-DOT can cope with translational phenomena that prove problematic for other LFG-based models of translation.
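The generalization step described above can be illustrated schematically. The sketch below is not the authors' implementation: it treats an f-structure as a flat Python dict (a deliberate simplification; real f-structures are nested attribute-value matrices) and models a Discard-style operator that yields every fragment obtainable by deleting subsets of non-protected features, so that a fragment lacking, say, TENSE can still match previously unseen input. The feature names and the `protected` parameter are illustrative assumptions.

```python
from itertools import combinations

def discard_fragments(fstructure, protected=("PRED",)):
    """Yield generalized fragments of a (flat, simplified) f-structure
    by deleting every subset of its non-protected features.
    Protecting PRED is an illustrative choice, not a claim about LFG-DOP."""
    optional = [k for k in fstructure if k not in protected]
    for r in range(len(optional) + 1):
        for dropped in combinations(optional, r):
            yield {k: v for k, v in fstructure.items() if k not in dropped}

# Toy f-structure for English "saw" (features are hypothetical examples).
fs = {"PRED": "see<SUBJ,OBJ>", "TENSE": "past", "NUM": "sg"}
fragments = list(discard_fragments(fs))
# 4 fragments: the original, minus TENSE, minus NUM, and minus both.
```

With two optional features this yields 2^2 = 4 fragments; the most general one retains only PRED, which is what lets a fragment-based model back off gracefully on ill-formed or unseen input.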