5,178 research outputs found

    From Frequency to Meaning: Vector Space Models of Semantics

    Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.
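    The term-document class of VSM mentioned above can be sketched in a few lines: documents become columns of a count matrix, and document similarity is the cosine between columns. The toy corpus below is illustrative, not from the survey.

```python
import numpy as np

# Toy corpus: three "documents".
docs = [
    "dogs chase cats",
    "cats chase mice",
    "stocks rise on earnings",
]

# Build the vocabulary and a term-document count matrix
# (rows = terms, columns = documents).
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}
X = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        X[index[w], j] += 1

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Document similarity = cosine between column vectors.
sim_animal = cosine(X[:, 0], X[:, 1])  # share "chase" and "cats"
sim_cross = cosine(X[:, 0], X[:, 2])   # no shared terms
```

    Real systems add weighting (e.g. tf-idf) and dimensionality reduction on top of this raw count matrix, but the column-cosine idea is the same.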

    HSAS: Hindi Subjectivity Analysis System

    With the development of Web 2.0, documents expressing users' opinions, attitudes and sentiments in textual form have become abundant. This user-generated textual content is an important source of information for organizations and governments to make sound decisions. Textual information can be categorized into two types: facts and opinions. Subjectivity analysis is the automatic extraction of subjective information from the opinions posted by users; it divides the content into subjective and objective sentences. Most work in subjectivity analysis exists for English-language data, but with the introduction of the Unicode standard UTF-8, Hindi-language content on the web is growing rapidly. In this paper, the Hindi Subjectivity Analysis System (HSAS) is proposed. It explores two different methods of generating a subjectivity lexicon using available English-language resources, and their comparative evaluation on the task of subjectivity analysis at the sentence level. The first method uses the English-language OpinionFinder subjectivity lexicon. The second method starts from a small seed word list in Hindi and expands it to generate a subjectivity lexicon. Different evaluation strategies are used to validate the lexicon. We achieved 71.4% agreement with human annotators and ~80% classification accuracy on a parallel English-Hindi data set. Extensive simulations conducted on the test dataset confirm the validity of the suggested method.
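    The seed-expansion idea in the second method can be sketched as a breadth-first walk over a synonym resource. The seed words and the synonym table below are illustrative English stand-ins, not the paper's actual Hindi seed list or lexical resource.

```python
# Minimal sketch of seed-list expansion for a subjectivity lexicon.
seeds = {"good", "bad"}
synonyms = {
    "good": {"great", "fine"},
    "bad": {"awful"},
    "great": {"excellent"},
}

def expand(seeds, synonyms, rounds=2):
    """Breadth-first expansion: repeatedly add synonyms of known subjective words."""
    lexicon = set(seeds)
    for _ in range(rounds):
        new = set()
        for w in lexicon:
            new |= synonyms.get(w, set())
        lexicon |= new
    return lexicon

lexicon = expand(seeds, synonyms)  # two rounds reach "excellent" via "great"
```

    In practice each round risks semantic drift, so real systems cap the number of rounds or filter candidates against held-out annotations.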

    Translating Common English and Chinese Verb-Noun Pairs in Technical Documents with Collocational and Bilingual Information


    The interaction of knowledge sources in word sense disambiguation

    Word sense disambiguation (WSD) is a computational linguistics task likely to benefit from the tradition of combining different knowledge sources in artificial intelligence research. An important step in the exploration of this hypothesis is to determine which linguistic knowledge sources are most useful and whether their combination leads to improved results. We present a sense tagger which uses several knowledge sources. Tested accuracy exceeds 94% on our evaluation corpus. Our system attempts to disambiguate all content words in running text rather than limiting itself to a restricted vocabulary of words. It is argued that this approach is more likely to assist the creation of practical systems.
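    One common way to combine knowledge sources for WSD is weighted voting over candidate senses. The source names, senses, scores and weights below are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: combine per-sense scores from several knowledge sources
# by weighted voting, then pick the highest-scoring sense.
def combine(scores_per_source, weights):
    totals = {}
    for source, scores in scores_per_source.items():
        w = weights[source]
        for sense, s in scores.items():
            totals[sense] = totals.get(sense, 0.0) + w * s
    return max(totals, key=totals.get)

scores = {
    "pos_tagger":  {"bank/finance": 0.6, "bank/river": 0.4},
    "collocation": {"bank/finance": 0.9, "bank/river": 0.1},
    "domain":      {"bank/finance": 0.7, "bank/river": 0.3},
}
weights = {"pos_tagger": 1.0, "collocation": 2.0, "domain": 1.0}
best = combine(scores, weights)
```

    The interesting empirical question, which the abstract raises, is how to set the weights: uniform voting, learned weights, or a meta-classifier over the source outputs.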

    WordNet: An Electronic Lexical Reference System Based on Theories of Lexical Memory

    This paper describes WordNet, an on-line lexical reference system whose design is based on psycholinguistic theories of human lexical organization and memory. English nouns, verbs, and adjectives are organized into synonym sets ("synsets"), each representing one underlying lexical concept. Synonym sets are then related via three principal conceptual relations: hyponymy, meronymy, and antonymy. Verbs are additionally specified for the presupposition relations that hold among them, and for their most common semantic/syntactic frames. By attempting to mirror the organization of the mental lexicon, WordNet strives to serve the linguistically unsophisticated user.
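    The hyponymy relation described above forms a taxonomy that can be walked upward from a word to ever more general concepts. The tiny in-memory graph below is an illustrative sketch, not actual WordNet data (the real resource stores relations between synsets, not single words).

```python
# Toy hypernym links (hypernymy is the inverse of hyponymy):
# each word points to its more general concept.
hypernym = {
    "dog": "canine",
    "canine": "mammal",
    "mammal": "animal",
    "cat": "feline",
    "feline": "mammal",
}

def hypernym_chain(word):
    """Walk up the hypernym links until reaching a root concept."""
    chain = [word]
    while chain[-1] in hypernym:
        chain.append(hypernym[chain[-1]])
    return chain

chain = hypernym_chain("dog")  # dog -> canine -> mammal -> animal
```

    Queries like "are a dog and a cat both mammals?" reduce to checking whether the two chains share a node, which is the basis of many WordNet-based similarity measures.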

    Supervised and unsupervised methods for learning representations of linguistic units

    Word representations, also called word embeddings, are generic representations, often high-dimensional vectors. They map the discrete space of words into a continuous vector space, which allows us to handle rare or even unseen events, e.g. by considering the nearest neighbors. Many Natural Language Processing tasks can be improved by word representations if we extend the task-specific training data with the general knowledge incorporated in the word representations. The first publication investigates a supervised, graph-based method to create word representations. This method leads to a graph-theoretic similarity measure, CoSimRank, with equivalent formalizations that show CoSimRank's close relationship to Personalized PageRank and SimRank. The new formalization is efficient because it can use the graph-based word representation to compute a single node similarity without having to compute the similarities of the entire graph. We also show how we can take advantage of fast matrix multiplication algorithms. In the second publication, we use existing unsupervised methods for word representation learning and combine these with semantic resources by learning representations for non-word objects like synsets and entities. We also investigate improved word representations which incorporate the semantic information from the resource. The method is flexible in that it can take any word representations as input and does not need an additional training corpus. A sparse tensor formalization guarantees efficiency and parallelizability. In the third publication, we introduce a method that learns an orthogonal transformation of the word representation space that focuses the information relevant for a task in an ultradense subspace whose dimensionality is smaller by a factor of 100 than that of the original space. We use ultradense representations for a Lexicon Creation task in which words are annotated with three types of lexical information: sentiment, concreteness and frequency. The final publication introduces a new calculus for the interpretable ultradense subspaces, including polarity, concreteness, frequency and part-of-speech (POS). The calculus supports operations like "−1 × hate = love" and "give me a neutral word for greasy" (i.e., oleaginous), and extends existing analogy computations like "king − man + woman = queen".
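    The analogy computation mentioned at the end of the abstract is vector arithmetic followed by a cosine nearest-neighbor search. The 2-d vectors below are chosen by hand so that the classic analogy holds; real embeddings are learned and high-dimensional.

```python
import numpy as np

# Hand-crafted toy embeddings (illustrative only).
vec = {
    "king":  np.array([0.9, 0.8]),
    "man":   np.array([0.9, 0.1]),
    "woman": np.array([0.1, 0.1]),
    "queen": np.array([0.1, 0.8]),
}

def analogy(a, b, c):
    """Return the word whose vector is closest (by cosine) to vec[a] - vec[b] + vec[c],
    excluding the three query words themselves."""
    target = vec[a] - vec[b] + vec[c]
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return max((w for w in vec if w not in (a, b, c)),
               key=lambda w: cos(vec[w], target))

result = analogy("king", "man", "woman")
```

    The thesis's ultradense calculus generalizes this: after an orthogonal transformation, single interpretable dimensions (polarity, concreteness, frequency) can be negated or zeroed directly, which is what makes operations like "−1 × hate" well-defined.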

    Using Bilingual Semantic Information in Chinese-Korean Word Alignment


    Improving the translation environment for professional translators

    When using computer-aided translation systems in a typical, professional translation workflow, there are several stages at which there is room for improvement. The SCATE (Smart Computer-Aided Translation Environment) project investigated several of these aspects, both from a human-computer interaction point of view and from a purely technological one. This paper describes the SCATE research with respect to improved fuzzy matching, parallel treebanks, the integration of translation memories with machine translation, quality estimation, terminology extraction from comparable texts, the use of speech recognition in the translation process, and human-computer interaction and interface design for the professional translation environment. For each of these topics, we describe the experiments we performed and the conclusions drawn, providing an overview of the highlights of the entire SCATE project.
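    Fuzzy matching against a translation memory, the first topic listed above, can be sketched with a string-similarity ratio: retrieve the stored source segment most similar to the new segment and reuse its translation as a starting point. The segments and the 0.6 threshold below are illustrative assumptions, not SCATE's actual matcher.

```python
import difflib

# Toy translation memory: source segment -> stored translation.
tm = {
    "The printer is out of paper.": "Der Drucker hat kein Papier mehr.",
    "Restart the application.": "Starten Sie die Anwendung neu.",
}

def best_fuzzy_match(segment, tm, threshold=0.6):
    """Return the most similar TM source segment and its score,
    or (None, score) if no match clears the threshold."""
    best, best_score = None, 0.0
    for src in tm:
        score = difflib.SequenceMatcher(None, segment, src).ratio()
        if score > best_score:
            best, best_score = src, score
    return (best, best_score) if best_score >= threshold else (None, best_score)

match, score = best_fuzzy_match("The printer is out of ink.", tm)
```

    Production matchers typically work on tokens rather than characters and weight by term importance; the SCATE work on "improved fuzzy matching" explores richer linguistic signals than this character-level baseline.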