5 research outputs found

    NLization of Nouns, Pronouns and Prepositions in Punjabi With EUGENE

    Abstract: The Universal Networking Language (UNL) has been used by various researchers as an interlingua approach to automatic machine translation (MT). The UNL system consists of two main components: the EnConverter (IAN), which converts text from a source language into UNL, and the DeConverter (EUGENE), which converts UNL into a target language. This paper presents the DeConversion generation rules used by the DeConverter and shows how they are applied in the generation of Punjabi sentences. It also reports the results of processing UNL input with the DeConverter EUGENE and its evaluation on UNL sentences involving nouns, pronouns and prepositions.
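    As a rough illustration of the pipeline the abstract describes (source text → UNL → target text), the sketch below encodes one UNL expression as relation triples and applies a toy deconversion step that orders the Punjabi words subject-object-verb. The relation labels follow the UNL convention, but the rule, the lexicon and the romanized Punjabi forms are hypothetical illustrations, not the actual EUGENE rule base.

# Minimal conceptual sketch of the UNL pipeline described above.
# The relation names (agt, obj) follow the UNL specification; the tiny
# "deconversion rule" and the Punjabi lexicon entries are illustrative only.

# A UNL expression is a set of binary semantic relations between
# Universal Words (UWs).
unl_graph = [
    ("agt", "eat(icl>consume)", "boy(icl>person)"),   # agent of "eat" is "boy"
    ("obj", "eat(icl>consume)", "apple(icl>fruit)"),  # object of "eat" is "apple"
]

# Hypothetical target-language dictionary: UW -> romanized Punjabi word.
punjabi_lexicon = {
    "eat(icl>consume)": "khanda",
    "boy(icl>person)": "munda",
    "apple(icl>fruit)": "seb",
}

def deconvert(graph, lexicon):
    """Very rough stand-in for a DeConverter: pick surface words for the
    agent, object and predicate, then order them SOV as Punjabi requires."""
    agent = obj = verb = None
    for rel, head, dep in graph:
        verb = lexicon.get(head, head)
        if rel == "agt":
            agent = lexicon.get(dep, dep)
        elif rel == "obj":
            obj = lexicon.get(dep, dep)
    return " ".join(w for w in (agent, obj, verb) if w)

print(deconvert(unl_graph, punjabi_lexicon))  # -> "munda seb khanda"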

    Amazigh Representation in the UNL Framework: Resource Implementation

    Abstract: This paper discusses the first steps undertaken to create the linguistic resources needed to incorporate the Amazigh language into the Universal Networking Language (UNL) framework for machine translation purposes. This universal interlanguage allows any source text to be translated into other languages supported by UNL by converting the meaning of the source text into a semantic graph. This encoding serves as a pivot interlanguage in translation systems. In this work, we focus on the morphological, syntactic and lexical mapping stages needed to build an “Amazigh dictionary” according to the UNL framework and the “UNL-Amazigh Dictionary”, both of which take part in the enconversion and deconversion processes.
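    The sketch below illustrates the kind of lexical mapping such a dictionary encodes: each entry ties a surface form to a Universal Word (UW) plus morphological attributes consulted during enconversion and deconversion. The Amazigh transliterations, attribute names and lookup helper are invented for illustration and do not reproduce the actual UNL-Amazigh Dictionary entries or format.

# Illustrative sketch of an Amazigh-to-UW lexical mapping for the UNL
# framework. The surface forms and attribute labels are hypothetical.
amazigh_dictionary = [
    {"headword": "argaz",   "uw": "man(icl>person)",   "attrs": ["N", "MASC", "SG"]},
    {"headword": "irgazen", "uw": "man(icl>person)",   "attrs": ["N", "MASC", "PL"]},
    {"headword": "tamghart", "uw": "woman(icl>person)", "attrs": ["N", "FEM", "SG"]},
]

def lookup_by_uw(uw, attrs, dictionary):
    """Deconversion-side lookup: return the surface form whose attributes
    cover the requested ones (a crude stand-in for morphological generation)."""
    for entry in dictionary:
        if entry["uw"] == uw and set(attrs) <= set(entry["attrs"]):
            return entry["headword"]
    return None

print(lookup_by_uw("man(icl>person)", ["PL"], amazigh_dictionary))  # -> "irgazen"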

    Getting Past the Language Gap: Innovations in Machine Translation

    In this chapter, we review state-of-the-art machine translation systems and discuss innovative methods for machine translation, highlighting the most promising techniques and applications. Machine translation (MT) has benefited from a revitalization over the last ten years or so, after a period of relatively slow activity. In 2005 the field received a jumpstart when a powerful, complete experimental package for building MT systems from scratch became freely available as a result of the unified efforts of the MOSES international consortium. Around the same time, hierarchical methods were introduced by Chinese researchers, which allowed syntactic information to be used in translation modeling. Furthermore, advances in the related field of computational linguistics, which made off-the-shelf taggers and parsers readily available, gave MT an additional boost. Yet there is still more progress to be made. For example, MT will be enhanced greatly when both syntax and semantics are on board: this remains a major challenge, though many advanced research groups are currently pursuing ways to meet it head-on. The next generation of MT will consist of a collection of hybrid systems. The outlook is also good for the mobile environment, as we look forward to more advanced and improved technologies, namely speech recognition and speech synthesis, that enable speech-to-speech machine translation on hand-held devices. We review all of these developments and point out in the final section some of the most promising research avenues for the future of MT.
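    For context on how phrase-based systems built with packages such as MOSES rank candidate translations, the sketch below shows a log-linear model of the kind these systems typically use: each candidate e for a source sentence f is scored as a weighted sum of feature functions, and decoding picks the highest-scoring one. The feature values and weights are invented for illustration and are not taken from the chapter.

import math

def score(features, weights):
    """Log-linear model score: weighted sum of log feature values."""
    return sum(weights[name] * math.log(value) for name, value in features.items())

# Invented weights for translation-model, lexical, language-model and
# length features.
weights = {"phrase_tm": 0.3, "lex_tm": 0.2, "lm": 0.4, "length": 0.1}

# Two invented candidates differing only in how well the language model
# scores their word order.
candidates = {
    "the house is small": {"phrase_tm": 0.40, "lex_tm": 0.35, "lm": 0.010, "length": 0.9},
    "small is the house": {"phrase_tm": 0.40, "lex_tm": 0.35, "lm": 0.001, "length": 0.9},
}

best = max(candidates, key=lambda e: score(candidates[e], weights))
print(best)  # the candidate the language model prefers: "the house is small"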
