
    Limitations of Cross-Lingual Learning from Image Search

    Cross-lingual representation learning is an important step in making NLP scale to all the world's languages. Recent work on bilingual lexicon induction suggests that it is possible to learn cross-lingual representations of words based on similarities between the images associated with those words. However, that work focused on the translation of selected nouns only. In our work, we investigate whether the meanings of other parts of speech, in particular adjectives and verbs, can be learned in the same way. We also experiment with combining the representations learned from visual data with embeddings learned from textual data. Our experiments across five language pairs indicate that previous work does not scale to the problem of learning cross-lingual representations beyond simple nouns.
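
    A minimal sketch of the image-based lexicon induction setup the abstract describes, assuming each word in either language comes with a set of image feature vectors (e.g. from a pretrained CNN). The feature extraction, the toy vocabularies, and the max-over-images scoring rule are illustrative assumptions, not the paper's exact pipeline.

    ```python
    import numpy as np

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def word_similarity(images_a, images_b):
        """Visual similarity between two words: average of the best cosine
        match each image on one side attains on the other side (one common
        choice in image-based lexicon induction)."""
        sims = np.array([[cosine(a, b) for b in images_b] for a in images_a])
        return 0.5 * (sims.max(axis=1).mean() + sims.max(axis=0).mean())

    def induce_lexicon(source_images, target_images):
        """For each source word, pick the target word whose image set is
        most visually similar. Arguments map word -> list of feature vectors."""
        lexicon = {}
        for s_word, s_imgs in source_images.items():
            scores = {t_word: word_similarity(s_imgs, t_imgs)
                      for t_word, t_imgs in target_images.items()}
            lexicon[s_word] = max(scores, key=scores.get)
        return lexicon

    # Toy usage with random stand-ins for real CNN features.
    rng = np.random.default_rng(0)
    en = {"dog": [rng.normal(size=64) for _ in range(5)]}
    de = {"Hund": [rng.normal(size=64) for _ in range(5)],
          "laufen": [rng.normal(size=64) for _ in range(5)]}
    print(induce_lexicon(en, de))
    ```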

    SL(2,R)/U(1) Supercoset and Elliptic Genera of Non-compact Calabi-Yau Manifolds

    We first discuss the relationship between the SL(2,R)/U(1) supercoset and N=2 Liouville theory and establish a precise correspondence between their representations. We show that the discrete unitary representations of the SL(2,R)/U(1) theory correspond exactly to those massless representations of N=2 Liouville theory which are closed under modular transformations and were studied in our previous work hep-th/0311141. It is known that toroidal partition functions of the SL(2,R)/U(1) theory (2D Black Hole) contain two parts, coming from continuous and discrete representations. The contribution of the continuous representations is proportional to the space-time volume and diverges in the infinite-volume limit, while the part from the discrete representations is volume-independent. In order to exhibit the contribution of the discrete representations clearly, we consider the elliptic genus, which projects out the contributions of the continuous representations. Making use of the SL(2,R)/U(1) supercoset, we compute elliptic genera for various non-compact space-times such as the conifold, ALE spaces, Calabi-Yau 3-folds with A_n singularities, etc. We find that these elliptic genera in general have complicated modular properties and are not Jacobi forms, in contrast to the case of compact Calabi-Yau manifolds.
    Comment: 39 pages, no figure; v2: references added, minor corrections; v3: typos corrected, to appear in JHEP; v4: typos corrected in eqs. (3.22) and (3.44)
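
    For orientation (this is the textbook definition, not a result specific to the paper): the elliptic genus being computed is the standard N=2 trace over the Ramond-Ramond sector,

    ```latex
    % Standard N=2 elliptic genus: a trace over the RR sector with (-1)^F
    % inserted, so right-moving contributions cancel except for ground states.
    \mathcal{Z}(\tau, z)
      = \mathrm{Tr}_{\mathrm{RR}}\,
        (-1)^{F}\, y^{J_0}\,
        q^{L_0 - \frac{c}{24}}\,
        \bar{q}^{\bar{L}_0 - \frac{c}{24}},
    \qquad
    q = e^{2\pi i \tau}, \quad y = e^{2\pi i z},
    ```

    where J_0 is the left-moving U(1) R-charge. For a compact Calabi-Yau target this trace is a weak Jacobi form; the abstract's point is that the non-compact models above fail exactly this property.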

    The Latent Relation Mapping Engine: Algorithm and Experiments

    Many AI researchers and cognitive scientists have argued that analogy is the core of cognition. The most influential work on computational modeling of analogy-making is Structure Mapping Theory (SMT) and its implementation in the Structure Mapping Engine (SME). A limitation of SME is its requirement for complex hand-coded representations. We introduce the Latent Relation Mapping Engine (LRME), which combines ideas from SME and Latent Relational Analysis (LRA) in order to remove the requirement for hand-coded representations. LRME builds analogical mappings between lists of words, using a large corpus of raw text to automatically discover the semantic relations among the words. We evaluate LRME on a set of twenty analogical mapping problems, ten based on scientific analogies and ten based on common metaphors. LRME achieves human-level performance on the twenty problems. We compare LRME with a variety of alternative approaches and find that they are not able to reach the same level of performance.
    Comment: related work available at http://purl.org/peter.turney
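
    A toy sketch of the kind of mapping search LRME performs: score every bijection between the source and target word lists by how relationally similar the corresponding word pairs are, and keep the best. The relational-similarity function is a stub; the real engine derives it from corpus statistics via LRA.

    ```python
    from itertools import permutations

    def relational_similarity(pair_a, pair_b):
        """Stub for LRA-style relational similarity between two word pairs.
        The real engine builds pattern vectors for each pair from a large
        corpus and compares them; here we fake it with a lookup table."""
        FAKE_SIMS = {
            (("sun", "planet"), ("nucleus", "electron")): 0.9,
            (("planet", "sun"), ("electron", "nucleus")): 0.9,
        }
        return FAKE_SIMS.get((pair_a, pair_b), 0.1)

    def best_mapping(source, target):
        """Brute-force search over all bijections source -> target, feasible
        for the short word lists used in analogical mapping problems."""
        best, best_score = None, float("-inf")
        for perm in permutations(target):
            mapping = dict(zip(source, perm))
            score = sum(
                relational_similarity((a, b), (mapping[a], mapping[b]))
                for a in source for b in source if a != b
            )
            if score > best_score:
                best, best_score = mapping, score
        return best, best_score

    # Toy usage: the classic solar-system / atom analogy.
    print(best_mapping(("sun", "planet"), ("nucleus", "electron")))
    ```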