
    A Review Paper on Various Search Engines (Google, Yahoo, Altavista, Ask and Bing)

    Search engines are used on the web as tools for information retrieval. Because the web is a huge repository of heterogeneous and unstructured data, search engines are needed to filter relevant information from irrelevant content. A search engine usually consists of a crawling module, a page repository, an indexing module, a querying module and a ranking module; the intercommunication between these modules describes the working methodology of a search engine. This paper presents a comparative analysis of five major search engines, i.e. Google, Yahoo, Altavista, Ask and Bing, in tabular form based on a set of features. The features include search operators, web search, image search, video search, news search, maps, book search, advanced search, changing the background, changing search settings, displaying the number of results, shopping, translation services, multi-language support, questions/answers, directory, advertising programs, business services, themes, case sensitivity, finance, safe search, search pad, careers and preferences. Google stands out as the best among these search engines; it works on the PageRank algorithm, where PageRank is a numeric value that determines the importance of a web page by calculating the number of backlinks.
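    The abstract describes PageRank only in one sentence. As an illustration (not taken from the paper), the following is a minimal power-iteration sketch in Python; the damping factor of 0.85 is the commonly cited default and the toy link graph is invented for the example:

```python
import numpy as np

def pagerank(adjacency, damping=0.85, iterations=50):
    """Power-iteration PageRank over a link graph.

    adjacency[i][j] = 1 if page i links to page j.
    """
    n = len(adjacency)
    A = np.array(adjacency, dtype=float)
    out_degree = A.sum(axis=1)
    # Treat dangling pages (no outlinks) as linking to every page.
    A[out_degree == 0] = 1.0
    out_degree = A.sum(axis=1)
    M = A / out_degree[:, None]          # row-stochastic transition matrix
    rank = np.full(n, 1.0 / n)
    for _ in range(iterations):
        rank = (1 - damping) / n + damping * (M.T @ rank)
    return rank

# Tiny illustrative graph: page 2 receives the most backlinks
# and therefore ends up with the highest rank.
links = [[0, 1, 1],
         [0, 0, 1],
         [1, 0, 0]]
print(pagerank(links))
```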

    Limitations of Cross-Lingual Learning from Image Search

    Cross-lingual representation learning is an important step in making NLP scale to all the world's languages. Recent work on bilingual lexicon induction suggests that it is possible to learn cross-lingual representations of words based on similarities between the images associated with those words. However, that work focused on the translation of selected nouns only. In our work, we investigate whether the meanings of other parts of speech, in particular adjectives and verbs, can be learned in the same way. We also experiment with combining the representations learned from visual data with embeddings learned from textual data. Our experiments across five language pairs indicate that previous work does not scale to the problem of learning cross-lingual representations beyond simple nouns.
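    The bilingual-lexicon-induction idea the abstract builds on can be sketched as follows. This is an illustrative reconstruction, not the authors' pipeline: it assumes precomputed image feature vectors (e.g. CNN activations) for the images that image search returns for each word, and matches words across languages by the cosine similarity of their averaged features:

```python
import numpy as np

def induce_translations(src_images, tgt_images):
    """Match words across languages by the similarity of their image sets.

    src_images / tgt_images: dicts mapping a word to an array of
    image feature vectors, shape (n_images, dim).
    Returns the best-matching target word for each source word.
    """
    def centroid(features):
        # Represent a word by the mean of its normalized image features.
        f = features / np.linalg.norm(features, axis=1, keepdims=True)
        c = f.mean(axis=0)
        return c / np.linalg.norm(c)

    src_vecs = {w: centroid(f) for w, f in src_images.items()}
    tgt_vecs = {w: centroid(f) for w, f in tgt_images.items()}

    matches = {}
    for src_word, sv in src_vecs.items():
        # Cosine similarity reduces to a dot product on unit vectors.
        scores = {tgt_word: float(sv @ tv) for tgt_word, tv in tgt_vecs.items()}
        matches[src_word] = max(scores, key=scores.get)
    return matches
```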

    PLuTO: MT for online patent translation

    PLuTO (Patent Language Translation Online) is a partially EU-funded commercialization project that specializes in the automatic retrieval and translation of patent documents. At the core of the PLuTO framework is a machine translation (MT) engine through which web-based translation services are offered. The fully integrated PLuTO architecture includes a translation engine coupling MT with translation memories (TM), and a patent search and retrieval engine. In this paper, we first describe the motivating factors behind the provision of such a service. Following this, we give an overview of the PLuTO framework as a whole, with particular emphasis on the MT components, and provide a real-world use-case scenario in which the PLuTO MT services are exploited.
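    The abstract mentions coupling MT with translation memories but does not spell out the matching logic, so the following is only a plausible sketch: a fuzzy TM lookup (here via difflib's SequenceMatcher, with an invented 0.8 threshold) that falls back to the MT engine when no sufficiently close match exists:

```python
from difflib import SequenceMatcher

def translate(segment, translation_memory, mt_engine, threshold=0.8):
    """Return a TM hit when a close match exists, otherwise fall back to MT.

    translation_memory: list of (source, target) segment pairs.
    mt_engine: callable taking a source segment, returning a translation.
    """
    best_score, best_target = 0.0, None
    for source, target in translation_memory:
        score = SequenceMatcher(None, segment, source).ratio()
        if score > best_score:
            best_score, best_target = score, target
    if best_score >= threshold and best_target is not None:
        return best_target            # high-confidence fuzzy TM match
    return mt_engine(segment)         # no good TM match: use MT

# Example wiring with a placeholder MT engine (hypothetical data).
tm = [("The invention relates to a valve.",
       "Die Erfindung betrifft ein Ventil.")]
print(translate("The invention relates to a valve.", tm, lambda s: "<MT output>"))
```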

    MATREX: the DCU MT system for WMT 2009

    In this paper, we describe the machine translation system we entered in the evaluation campaign of the Fourth Workshop on Statistical Machine Translation at EACL 2009. We describe the modular design of our multi-engine MT system, with particular focus on the components used in this participation. We participated in the translation task for the French–English and English–French translation directions, in which we employed our multi-engine architecture to translate. We also participated in the system combination task, which was carried out using the MBR decoder and the Confusion Network decoder. We report results on the provided development and test sets.
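    System combination by Minimum Bayes-Risk (MBR) decoding, as named in the abstract, picks from the component engines' outputs the hypothesis with the highest expected utility against the other hypotheses. The sketch below is a simplified stand-in: bigram precision replaces the sentence-level metric a real MBR decoder would use, hypothesis probabilities are assumed uniform, and the confusion-network stage is omitted:

```python
from collections import Counter

def ngram_overlap(hyp, ref, n=2):
    """Simple n-gram precision used here as a stand-in MBR utility."""
    h, r = hyp.split(), ref.split()
    h_ngrams = Counter(tuple(h[i:i + n]) for i in range(len(h) - n + 1))
    r_ngrams = Counter(tuple(r[i:i + n]) for i in range(len(r) - n + 1))
    matches = sum((h_ngrams & r_ngrams).values())
    return matches / max(1, sum(h_ngrams.values()))

def mbr_select(hypotheses):
    """Pick the hypothesis with the highest expected similarity to the rest.

    hypotheses: candidate translations from the component engines.
    """
    def expected_utility(h):
        return sum(ngram_overlap(h, other) for other in hypotheses
                   if other is not h)
    return max(hypotheses, key=expected_utility)

# The "consensus" candidate wins because it overlaps most with the others.
candidates = ["the cat sat on the mat",
              "the cat sat on a mat",
              "a cat is on the mat"]
print(mbr_select(candidates))
```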

    An Empirical Analysis of NMT-Derived Interlingual Embeddings and their Use in Parallel Sentence Identification

    End-to-end neural machine translation has overtaken statistical machine translation in terms of translation quality for some language pairs, especially those with large amounts of parallel data. Besides this palpable improvement, neural networks provide several new properties. A single system can be trained to translate between many languages at almost no additional cost other than training time. Furthermore, the internal representations learned by the network serve as a new semantic representation of words, or sentences, which, unlike standard word embeddings, are learned in an essentially bilingual or even multilingual context. In view of these properties, the contribution of the present work is two-fold. First, we systematically study the NMT context vectors, i.e. the output of the encoder, and their power as an interlingua representation of a sentence. We assess their quality and effectiveness by measuring similarities across translations, as well as between semantically related and semantically unrelated sentence pairs. Second, as an extrinsic evaluation of the first point, we identify parallel sentences in comparable corpora, obtaining an F1 of 98.2% on data from a shared task when using only NMT context vectors; using context vectors jointly with similarity measures, F1 reaches 98.9%.
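    The parallel-sentence identification in the second contribution can be approximated as follows. This sketch assumes mean-pooled encoder outputs as the sentence-level context vector and an invented cosine threshold of 0.9; the paper's actual similarity measures and thresholds may differ:

```python
import numpy as np

def sentence_vector(encoder_states):
    """Mean-pool per-token encoder context vectors into one unit vector."""
    v = np.asarray(encoder_states).mean(axis=0)
    return v / np.linalg.norm(v)

def find_parallel(src_states, tgt_states, threshold=0.9):
    """Flag sentence pairs whose pooled context vectors nearly coincide.

    src_states / tgt_states: lists of (n_tokens, dim) encoder outputs,
    one entry per sentence, from a shared multilingual NMT encoder.
    Returns (src_index, tgt_index, similarity) triples.
    """
    src = np.stack([sentence_vector(s) for s in src_states])
    tgt = np.stack([sentence_vector(t) for t in tgt_states])
    similarity = src @ tgt.T                 # cosine similarity matrix
    pairs = []
    for i in range(similarity.shape[0]):
        j = int(similarity[i].argmax())
        if similarity[i, j] >= threshold:
            pairs.append((i, j, float(similarity[i, j])))
    return pairs
```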