12 research outputs found

    Learning Analogies and Semantic Relations

    We present an algorithm for learning from unlabeled text, based on the Vector Space Model (VSM) of information retrieval, that can solve verbal analogy questions of the kind found in the Scholastic Aptitude Test (SAT). A verbal analogy has the form A:B::C:D, meaning "A is to B as C is to D"; for example, mason:stone::carpenter:wood. SAT analogy questions provide a word pair, A:B, and the problem is to select the most analogous word pair, C:D, from a set of five choices. The VSM algorithm correctly answers 47% of a collection of 374 college-level analogy questions (random guessing would yield 20% correct). We motivate this research by relating it to work in cognitive science and linguistics, and by applying it to a difficult problem in natural language processing, determining semantic relations in noun-modifier pairs. The problem is to classify a noun-modifier pair, such as "laser printer", according to the semantic relation between the noun (printer) and the modifier (laser). We use a supervised nearest-neighbour algorithm that assigns a class to a given noun-modifier pair by finding the most analogous noun-modifier pair in the training data. With 30 classes of semantic relations, on a collection of 600 labeled noun-modifier pairs, the learning algorithm attains an F value of 26.5% (random guessing: 3.3%). With 5 classes of semantic relations, the F value is 43.2% (random: 20%). The performance is state-of-the-art for these challenging problems.
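
    To make the pattern-frequency idea concrete, here is a minimal sketch of the cosine comparison at the heart of the VSM approach; the joining patterns, the counts, and the candidate pairs below are invented for illustration, and the corpus-counting step described in the abstract is omitted.

    ```python
    import numpy as np

    # Hypothetical joining patterns; in the VSM approach these are predefined,
    # and their corpus frequencies characterize the relation of a word pair.
    # Each position in the vectors below corresponds to one of these patterns.
    PATTERNS = ["X cuts Y", "X works with Y", "X made of Y", "X lives in Y"]

    def cosine(u, v):
        """Cosine similarity between two pattern-frequency vectors."""
        denom = np.linalg.norm(u) * np.linalg.norm(v)
        return float(u @ v) / denom if denom else 0.0

    # Invented frequency vectors (one count per pattern above).
    stem = np.array([12.0, 30.0, 2.0, 0.0])          # mason:stone
    choices = {
        "carpenter:wood": np.array([10.0, 25.0, 3.0, 0.0]),
        "teacher:chalk":  np.array([0.0, 8.0, 1.0, 0.0]),
        "bird:nest":      np.array([0.0, 1.0, 0.0, 14.0]),
    }

    # Answer the analogy by picking the choice whose relation vector is closest.
    best = max(choices, key=lambda pair: cosine(stem, choices[pair]))
    print(best)  # carpenter:wood
    ```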

    Human-Level Performance on Word Analogy Questions by Latent Relational Analysis

    This paper introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, machine translation, and information retrieval. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason/stone is analogous to the pair carpenter/wood; the relations between mason and stone are highly similar to the relations between carpenter and wood. Past work on semantic similarity measures has mainly been concerned with attributional similarity. For instance, Latent Semantic Analysis (LSA) can measure the degree of similarity between two words, but not between two relations. Recently the Vector Space Model (VSM) of information retrieval has been adapted to the task of measuring relational similarity, achieving a score of 47% on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) the patterns are derived automatically from the corpus (they are not predefined), (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data (it is also used this way in LSA), and (3) automatically generated synonyms are used to explore reformulations of the word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the average human score of 57%. On the related problem of classifying noun-modifier relations, LRA achieves similar gains over the VSM, while using a smaller corpus.
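
    A minimal sketch of the SVD-smoothing step is given below, assuming a toy pair-by-pattern count matrix with made-up values; LRA's automatic pattern induction and synonym reformulation steps are not shown.

    ```python
    import numpy as np

    # Toy pair-by-pattern frequency matrix (rows: word pairs, columns: patterns).
    # Real LRA derives the patterns from the corpus and uses a much larger,
    # sparser matrix; these counts are invented for illustration.
    X = np.array([
        [12.0, 30.0, 2.0, 0.0],   # mason:stone
        [10.0, 25.0, 3.0, 0.0],   # carpenter:wood
        [ 0.0,  8.0, 1.0, 0.0],   # teacher:chalk
        [ 0.0,  1.0, 0.0, 14.0],  # bird:nest
    ])

    # Truncated SVD keeps the k strongest latent dimensions, smoothing the counts.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 2
    X_smooth = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    def cosine(u, v):
        denom = np.linalg.norm(u) * np.linalg.norm(v)
        return float(u @ v) / denom if denom else 0.0

    # Relational similarity is the cosine between smoothed row vectors.
    print(cosine(X_smooth[0], X_smooth[1]))  # mason:stone vs carpenter:wood
    ```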

    Collocation analysis for UMLS knowledge-based word sense disambiguation

    BACKGROUND: The effectiveness of knowledge-based word sense disambiguation (WSD) approaches depends in part on the information available in the reference knowledge resource. Off the shelf, these resources are not optimized for WSD and might lack terms to model the context properly. In addition, they might include noisy terms which contribute to false positives in the disambiguation results.
    METHODS: We analyzed some collocation types which could improve the performance of knowledge-based disambiguation methods. Collocations are obtained by extracting candidate collocations from MEDLINE and then assigning them to one of the senses of an ambiguous word. We performed this assignment either using semantic group profiles or a knowledge-based disambiguation method. In addition to collocations, we used second-order features from a previously implemented approach. Specifically, we measured the effect of these collocations in two knowledge-based WSD methods. The first method, AEC, uses the knowledge from the UMLS to collect examples from MEDLINE which are used to train a Naïve Bayes approach. The second method, MRD, builds a profile for each candidate sense based on the UMLS and compares the profile to the context of the ambiguous word. We have used two WSD test sets which contain disambiguation cases which are mapped to UMLS concepts. The first one, the NLM WSD set, was developed manually by several domain experts and contains words with high frequency occurrence in MEDLINE. The second one, the MSH WSD set, was developed automatically using the MeSH indexing in MEDLINE. It contains a larger set of words and covers a larger number of UMLS semantic types.
    RESULTS: The results indicate an improvement after the use of collocations, although the approaches have different performance depending on the data set. In the NLM WSD set, the improvement is larger for the MRD disambiguation method using second-order features. Assignment of collocations to a candidate sense based on UMLS semantic group profiles is more effective in the AEC method. In the MSH WSD set, the increment in performance is modest for all the methods. Collocations combined with the MRD disambiguation method have the best performance. The MRD disambiguation method and second-order features provide an insignificant change in performance. The AEC disambiguation method gives a modest improvement in performance. Assignment of collocations to a candidate sense based on knowledge-based methods has better performance.
    CONCLUSIONS: Collocations improve the performance of knowledge-based disambiguation methods, although results vary depending on the test set and method used. Generally, the AEC method is sensitive to query drift. Using AEC, just a few selected terms provide a large improvement in disambiguation performance. The MRD method handles noisy terms better but requires a larger set of terms to improve performance.
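
    For intuition only, the sketch below mimics the profile-matching idea behind an MRD-style method: each candidate sense gets a bag of weighted profile terms (invented here, not drawn from the UMLS), the context of the ambiguous word is scored against each profile, and the best-scoring sense is chosen. Collocations assigned to a sense would simply add further weighted terms to its profile.

    ```python
    from collections import Counter

    # Invented sense profiles for the ambiguous token "cold"; a real MRD-style
    # profile would be built from UMLS concept definitions and relations.
    profiles = {
        "common cold (disorder)":      Counter({"virus": 3, "cough": 2, "fever": 2, "rhinitis": 1}),
        "cold temperature (finding)":  Counter({"temperature": 3, "exposure": 2, "hypothermia": 2}),
    }

    def score(profile, context_tokens):
        """Overlap score: sum of profile weights for terms seen in the context."""
        return sum(profile[t] for t in context_tokens if t in profile)

    context = "patient presents with cough and fever consistent with a viral cold".split()

    # Choose the sense whose profile best matches the ambiguous word's context.
    best_sense = max(profiles, key=lambda s: score(profiles[s], context))
    print(best_sense)  # common cold (disorder)
    ```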

    From Frequency to Meaning: Vector Space Models of Semantics

    Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.
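
    As a small, self-contained example of the first matrix class surveyed here, the snippet below builds a toy term-document matrix from three invented "documents" and compares documents by the cosine of their column vectors; word-context and pair-pattern matrices follow the same recipe with different choices of rows and columns.

    ```python
    import numpy as np

    # Three toy documents; rows of the matrix are terms, columns are documents.
    docs = [
        "the mason cuts the stone",
        "the carpenter cuts the wood",
        "semantic vectors model word meaning",
    ]
    vocab = sorted({w for d in docs for w in d.split()})
    X = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

    def cosine(u, v):
        denom = np.linalg.norm(u) * np.linalg.norm(v)
        return float(u @ v) / denom if denom else 0.0

    # Document similarity: cosine between column vectors of the term-document matrix.
    print(cosine(X[:, 0], X[:, 1]))  # the two craft documents share vocabulary
    print(cosine(X[:, 0], X[:, 2]))  # little overlap with the semantics document
    ```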

    Mejoras en la usabilidad de la web a través de una estructura complementaria (Improving Web usability through a complementary structure)

    The Web has motivated the development of tools that, with varying degrees of sophistication and precision, manipulate its contents. To do so, these tools must deal with a series of problems related to the imperfect and changing nature of all human activity, which is reflected in phenomena such as ambiguities, contradictions, and errors in the stored texts. This thesis presents a proposal for complementing content management on the Web and thereby supporting the information retrieval process. A prototype named Web Intelligent Handler (WIH) is introduced; it implements a set of basic algorithms for handling certain morpho-syntactic features of Spanish texts and, on that basis, obtaining a condensed, alternative representation of their content. Within this framework, a new weighting metric is defined to reflect part of the morpho-syntactic essence of syntagms, along with an interaction scheme between modules that governs how the texts are processed. The thesis also explores the ability of the proposed algorithms to handle texts viewed as collections of syntagms subject to factors such as contradictions, ambiguities, and errors. A further contribution is the possibility of evaluating text styles and writing profiles mathematically and automatically: three styles are proposed (literary, technical, and message), together with four writer profiles (document, discussion forum, Web index, and blog text). When measured with the morpho-syntactic metric defined here, the three styles and four profiles behave as distinct degrees on graduated scales of style and profile, respectively. Using the same metric, it is also possible to obtain an approximate, automatic assessment of the quality of any text; this rating is invariant to word count, topic, and profile, but related to the style of the piece in question.
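
    The abstract does not spell out the WIH weighting metric, so the sketch below only gestures at the general idea with two crude surface ratios; the feature set, and the suggestion that such ratios separate styles, are assumptions made purely for illustration.

    ```python
    import re

    # Crude surface ratios as a stand-in for a morpho-syntactic style metric.
    # The actual WIH metric is not described in the abstract; these features
    # (average word length, punctuation density) are illustrative assumptions.
    def style_features(text: str) -> dict:
        tokens = re.findall(r"\w+|[.,;:!?]", text)
        words = [t for t in tokens if t[0].isalnum()]
        n_words = len(words) or 1
        return {
            "avg_word_length": sum(len(w) for w in words) / n_words,
            "punct_per_word": (len(tokens) - len(words)) / n_words,
        }

    print(style_features("The Web has motivated a set of tools for content handling, "
                         "with several levels of sophistication and precision."))
    ```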

    The Descent of Hierarchy, and Selection in Relational Semantics

    In many types of technical texts, meaning is embedded in noun compounds. A language understanding program needs to be able to interpret these in order to ascertain sentence meaning.