15,590 research outputs found

    Filling Knowledge Gaps in a Broad-Coverage Machine Translation System

    Knowledge-based machine translation (KBMT) techniques yield high quality in domains with detailed semantic models, limited vocabulary, and controlled input grammar. Scaling up along these dimensions means acquiring large knowledge resources. It also means behaving reasonably when definitive knowledge is not yet available. This paper describes how we can fill various KBMT knowledge gaps, often using robust statistical techniques. We describe quantitative and qualitative results from JAPANGLOSS, a broad-coverage Japanese-English MT system.
    Comment: 7 pages, compressed and uuencoded postscript. To appear: IJCAI-9

    Early occipital sensitivity to syntactic category is based on form typicality

    Syntactic factors can rapidly affect behavioral and neural responses during language processing; however, the mechanisms that allow this rapid extraction of syntactically relevant information remain poorly understood. We addressed this issue using magnetoencephalography and found that an unexpected word category (e.g., “The recently princess . . . ”) elicits enhanced activity in visual cortex as early as 120 ms after exposure, and that this activity occurs as a function of the compatibility of a word’s form with the form properties associated with a predicted word category. Because no sensitivity to linguistic factors has previously been reported for words in isolation at this stage of visual analysis, we propose that predictions about upcoming syntactic categories are translated into form-based estimates, which are made available to sensory cortices. This finding may be a key component in elucidating the mechanisms that allow the extreme rapidity and efficiency of language comprehension.

    Textual Economy through Close Coupling of Syntax and Semantics

    We focus on the production of efficient descriptions of objects, actions, and events. We define a type of efficiency, textual economy, that exploits the hearer's recognition of inferential links to material elsewhere within a sentence. Textual economy leads to efficient descriptions because the material that supports such inferences has been included to satisfy independent communicative goals, and is therefore overloaded in Pollack's sense. We argue that achieving textual economy imposes strong requirements on the representation and reasoning used in generating sentences. The representation must support the generator's simultaneous consideration of syntax and semantics. Reasoning must enable the generator to assess quickly and reliably at any stage how the hearer will interpret the current sentence, with its (incomplete) syntax and semantics. We show that these representational and reasoning requirements are met in the SPUD system for sentence planning and realization.
    Comment: 10 pages, uses QobiTree.te
