8 research outputs found

    Extended Vector Space Model with Semantic Relatedness on Java Archive Search Engine

    Byte code as an information source is a novel approach that enables a Java archive search engine to be built without relying on any resource other than the Java archive itself [1]. Unfortunately, its effectiveness is not considerably high, since some relevant documents may not be retrieved because of vocabulary mismatch. In this research, a vector space model (VSM) is extended with semantic relatedness to overcome the vocabulary-mismatch issue in a Java archive search engine. Aiming for the most effective retrieval model, several retrieval-model variants are also proposed and evaluated: summing over all related terms, substituting a non-existing term with its most related term, logarithmic normalization, context-specific relatedness, and low-ranking query-related retrieved documents. In general, semantic relatedness improves recall at the cost of reduced precision. We also propose a scheme that takes advantage of relatedness without suffering its disadvantage (VSM + considering non-retrieved documents as low-rank retrieved documents using semantic relatedness). This scheme ensures that relatedness scores are always ranked lower than standard exact-match scores, and it yields 1.754% higher effectiveness than our standard VSM.
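The final scheme described above can be sketched in a few lines: rank documents by standard VSM exact matching first, then append documents that exact matching missed, scored by semantic relatedness, so a relatedness hit can never outrank an exact hit. The relatedness table and all terms below are invented for illustration; the paper derives its terms from Java bytecode.

```python
import math
from collections import Counter

# Hypothetical relatedness scores; in the paper these would come from a
# semantic-relatedness measure over bytecode-derived terms.
RELATED = {("car", "automobile"): 0.8, ("automobile", "car"): 0.8}

def cosine(q, d):
    num = sum(q[t] * d[t] for t in set(q) & set(d))
    den = (math.sqrt(sum(v * v for v in q.values()))
           * math.sqrt(sum(v * v for v in d.values())))
    return num / den if den else 0.0

def search(query_terms, docs):
    q = Counter(query_terms)
    vecs = [Counter(d) for d in docs]
    # Standard VSM: documents with a non-zero exact-match score.
    hits = sorted(((i, cosine(q, v)) for i, v in enumerate(vecs)
                   if cosine(q, v) > 0), key=lambda x: -x[1])
    retrieved = {i for i, _ in hits}
    # Non-retrieved documents become low-rank results, scored by the best
    # relatedness between any query term and any document term.
    misses = sorted(((i, max((RELATED.get((qt, dt), 0.0)
                              for qt in q for dt in v), default=0.0))
                     for i, v in enumerate(vecs) if i not in retrieved),
                    key=lambda x: -x[1])
    return hits + [(i, s) for i, s in misses if s > 0]

docs = [["car", "engine"], ["automobile", "engine"], ["banana"]]
ranking = [i for i, _ in search(["car"], docs)]
```

Note that the "automobile" document gets relatedness score 0.8, higher than the exact hit's cosine of about 0.71, yet still ranks below it, which is exactly the guarantee the scheme makes.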

    From Frequency to Meaning: Vector Space Models of Semantics

    Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories, and we take a detailed look at a specific open-source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.
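The first of the survey's three matrix classes can be illustrated with a toy term-document matrix: rows are terms, columns are documents, and cells are raw counts, so terms that occur in the same documents get similar rows. The three "documents" below are invented.

```python
import math

# Toy corpus: each document is a list of tokens.
docs = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
    "stocks fell sharply today".split(),
]

def row(term):
    """Term-document row vector: how often the term occurs in each document."""
    return [d.count(term) for d in docs]

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0
```

For example, `row("the")` is `[2, 2, 0]`, and `cosine(row("cat"), row("sat"))` exceeds `cosine(row("cat"), row("stocks"))` because "cat" and "sat" share a document while "cat" and "stocks" do not. Word-context and pair-pattern matrices follow the same pattern with different columns.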

    The Word-Space Model: using distributional analysis to represent syntagmatic and paradigmatic relations between words in high-dimensional vector spaces

    The word-space model is a computational model of word meaning that utilizes the distributional patterns of words, collected over large text data, to represent semantic similarity between words in terms of spatial proximity. The model has been in use for over a decade and has demonstrated its mettle in numerous experiments and applications. It is now on the verge of moving from research environments to practical deployment in commercial systems. Although the model is extensively used and intensively investigated, our theoretical understanding of it remains unclear. The question this dissertation attempts to answer is: what kind of semantic information does the word-space model acquire and represent? The answer is derived through an identification and discussion of the three main theoretical cornerstones of the word-space model: the geometric metaphor of meaning, the distributional methodology, and the structuralist meaning theory. It is argued that the word-space model acquires and represents two different types of relations between words, syntagmatic and paradigmatic relations, depending on how the distributional patterns of words are used to accumulate word spaces. The difference between syntagmatic and paradigmatic word spaces is empirically demonstrated in a number of experiments, including comparisons with thesaurus entries, association norms, a synonym test, a list of antonym pairs, and a record of part-of-speech assignments.
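The dissertation's central distinction can be sketched on invented data: a syntagmatic space relates words that occur together (first-order co-occurrence), while a paradigmatic space relates words that occur in the same contexts (second-order co-occurrence), even if they never co-occur.

```python
# Toy corpus of tokenized sentences (invented).
sents = [
    "i drink cold beer".split(),
    "i drink cold juice".split(),
    "beer tastes bitter".split(),
]

def syntagmatic(w1, w2):
    """First-order: number of sentences in which w1 and w2 co-occur."""
    return sum(1 for s in sents if w1 in s and w2 in s)

def paradigmatic(w1, w2):
    """Second-order: number of co-occurring neighbours shared by w1 and w2."""
    def neighbours(w):
        return {t for s in sents if w in s for t in s if t != w}
    return len(neighbours(w1) & neighbours(w2))
```

Here "beer" and "juice" never co-occur (syntagmatic count 0), yet they share the neighbours "i", "drink", and "cold" (paradigmatic count 3), so the two accumulation strategies place them at very different distances.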

    Constructing semantic space models from parsed corpora

    Traditional vector-based models use word co-occurrence counts from large corpora to represent lexical meaning. In this paper we present a novel approach to constructing semantic spaces that takes syntactic relations into account. We introduce a formalisation for this class of models and evaluate their adequacy on two modelling tasks: semantic priming and automatic discrimination of lexical relations.
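The idea of a syntax-aware semantic space can be sketched as follows: instead of bag-of-words windows, each co-occurrence feature is typed by the syntactic relation linking the two words. The dependency triples below are hand-written stand-ins for parser output, not the paper's actual data.

```python
from collections import defaultdict

# (head, relation, dependent) triples, as a parser might produce them.
triples = [
    ("drink", "obj", "beer"),
    ("drink", "obj", "juice"),
    ("spill", "obj", "beer"),
    ("cold", "amod", "beer"),
]

def vector(word):
    """Represent a word by counts of its (relation, co-occurring word) features."""
    v = defaultdict(int)
    for head, rel, dep in triples:
        if dep == word:
            v[(rel, head)] += 1
        if head == word:
            v[(rel, dep)] += 1
    return dict(v)
```

In this space "juice" shares the feature `("obj", "drink")` with "beer", so the two are similar as objects of drinking, while a plain window model would also credit accidental adjacency.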

    Exploiting Cross-Lingual Representations For Natural Language Processing

    Traditional approaches to supervised learning require a generous amount of labeled data for good generalization. While such annotation-heavy approaches have proven useful for some Natural Language Processing (NLP) tasks in high-resource languages (like English), they are unlikely to scale to languages where collecting labeled data is difficult and time-consuming. Translating supervision available in English is also not a viable solution, because developing a good machine translation system requires expensive-to-annotate resources that are not available for most languages. In this thesis, I argue that cross-lingual representations are an effective means of extending NLP tools to languages beyond English without resorting to generous amounts of annotated data or expensive machine translation. These representations can be learned in an inexpensive manner, often from signals completely unrelated to the task of interest. I begin with a review of different ways of inducing such representations using a variety of cross-lingual signals, and study algorithmic approaches to using them in a diverse set of downstream tasks. Examples of such tasks covered in this thesis include learning representations to transfer a trained model across languages for document classification, assist in monolingual lexical semantics like word sense induction, identify asymmetric lexical relationships like hypernymy between words in different languages, or combine supervision across languages through a shared feature space for cross-lingual entity linking. In all these applications, the representations make information expressed in other languages available in English, while requiring minimal additional supervision in the language of interest.
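The transfer setting described above can be sketched minimally: English training documents and a foreign document live in one shared embedding space, so a classifier fitted on English vectors applies unchanged to the foreign one. All vectors and labels below are invented; a real system would obtain them from a cross-lingual embedding method.

```python
import math

# English training documents as (vector, label) pairs in the shared space.
en_train = [([0.9, 0.1], "sports"), ([0.1, 0.9], "politics")]

def nearest_label(vec, train):
    """Classify by the nearest labeled training vector (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(train, key=lambda t: dist(vec, t[0]))[1]

foreign_doc = [0.8, 0.2]   # a non-English document embedded in the same space
label = nearest_label(foreign_doc, en_train)
```

No labeled data in the foreign language is used at any point; the cross-lingual representation alone carries the supervision across.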

    Meaning in Distributions: A Study on Computational Methods in Lexical Semantics

    This study investigates the connection between lexical items' distributions and their meanings from the perspective of computational distributional operations. When applying computational methods in meaning-related research, it is customary to refer to the so-called distributional hypothesis, according to which differences in distributions and meanings are mutually correlated. However, making use of such a hypothesis requires a critical explication of the concept of distribution and plausible arguments for why any particular distributional structure is connected to a particular meaning-related phenomenon. In broad strokes, the present study seeks to chart the major differences in how the concept of distribution is conceived in the structuralist/autonomous and usage-based/functionalist theoretical families of contemporary linguistics. The two theoretical positions on distributions are studied to identify how meanings could enter them as enabling or constraining factors. The empirical part of the study comprises two case studies. In the first, three pairs of antonymous adjectives (köyhä/rikas 'poor/rich', sairas/terve 'sick/healthy', vanha/nuori 'old/young') are studied distributionally. Very narrow bag-of-words vector representations of distributions show how the dimensions on which relevant distributional similarities are based already conflate an unexpectedly varied range of linguistic phenomena, spanning from syntax-oriented conceptual constraint to connotations, pragmatic patterns, and affectivity. Thus, the results simultaneously corroborate the distributional hypothesis and challenge its over-generalized, uncritical applicability. For the study of meaning, distributional and semantic spaces cannot be treated as analogous by default. In the second case study, a distributional operation is purposefully built to answer a research question about the historical development of Finnish social-law terminology in the period 1860–1910.
Using a method based on interlinked collocation networks, the study shows how the term vaivainen ('pauper, beggar, measly') receded from the prestigious legal and administrative registers during the studied period. Corroborating some of the findings of the previous parts of this dissertation, the case study shows that the structures found in distributional representations cannot be satisfactorily explained without relying on semantic, pragmatic, and discoursal interpretations. The analysis confirms the timeline of the studied word's use in the given register. It also shows how distributional methods based on networked patterns of co-occurrence highlight incomparable structures of very different natures and skew towards the frequent occurrence types prevalent in the data.

Modern computational methods, using statistical models built from large text corpora, perform many tasks that require understanding word meanings almost flawlessly. From the standpoint of linguistic methodology, it is therefore interesting to ask how well such methods suit the linguistic study of the meanings of linguistic structures. This dissertation approaches the question from the perspective of lexical semantics and seeks, both theoretically and empirically, to describe what kinds of meaning computational methods based purely on word sequences can capture. The dissertation consists of two case studies. The first examines three antonymous adjective pairs with a vector space model built from the Suomi24 corpus. The results show that even very narrow sequential contexts carry information not only about conceptual meanings but also, among other things, about their connotations and affectivity. However, the picture of meaning produced by a sequential context is unpredictable in its coverage, and the patterns of language use that are frequent in the research data clearly influence which features of meaning become visible. The second case study traces the history of a social-law term, vaivainen, in the late nineteenth century through the National Library of Finland's historical digital newspaper collection. Co-occurrence networks are used to investigate how the term disappeared from legal language, by identifying in the data a structure corresponding to the administrative-legal register and following the position of vaivainen within it. The co-occurrence networks used as a method do not, however, purely represent any particular register; instead, they mix features from the various categories with which language use has been described, for example, in text research. The densest networks arise from the combined effect of registers, genres, text types, and lexical cohesion. The results of the case study suggest that this is a general feature of many similar methods, including common topic models.
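The collocation-network tracking used in the second case study can be sketched on invented data: build one co-occurrence network per period (words are nodes, sentence-level co-occurrence is an edge) and measure the target word's degree in each. The tiny per-period corpora below are hypothetical stand-ins for the newspaper collection.

```python
from itertools import combinations

# One miniature corpus of tokenized sentences per time slice (invented).
periods = {
    "1860s": ["vaivainen apu laki".split(), "vaivainen hoito kunta".split()],
    "1900s": ["apu laki kunta".split()],
}

def degree(word, sents):
    """Number of distinct collocation-network edges touching `word`."""
    edges = {frozenset(pair)
             for s in sents for pair in combinations(set(s), 2)}
    return sum(1 for e in edges if word in e)
```

A word receding from a register then shows up as a falling degree across slices: here `degree("vaivainen", ...)` drops from 4 in the 1860s network to 0 in the 1900s one.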

    The Automatic Acquisition of Knowledge about Discourse Connectives

    Institute for Communicating and Collaborative Systems
    This thesis considers the automatic acquisition of knowledge about discourse connectives. It focuses in particular on their semantic properties, and on the relationships that hold between them. There is a considerable body of theoretical and empirical work on discourse connectives. For example, Knott (1996) motivates a taxonomy of discourse connectives based on relationships between them, such as HYPONYMY and EXCLUSIVE, which are defined in terms of substitution tests. Such work requires either great theoretical insight or manual analysis of large quantities of data. As a result, to date no manual classification of English discourse connectives has achieved complete coverage. For example, Knott gives relationships between only about 18% of pairs obtained from a list of 350 discourse connectives. This thesis explores the possibility of classifying discourse connectives automatically, based on their distributions in texts. This thesis demonstrates that state-of-the-art techniques in lexical acquisition can successfully be applied to acquiring information about discourse connectives. Central to this thesis is the hypothesis that distributional similarity correlates positively with semantic similarity. Support for this hypothesis has previously been found for word classes such as nouns and verbs (Miller and Charles, 1991; Resnik and Diab, 2000, for example), but there has been little exploration of the degree to which it also holds for discourse connectives. We investigate the hypothesis through a number of machine learning experiments. These experiments all use unsupervised learning techniques, in the sense that they do not require any manually annotated data, although they do make use of an automatic parser.
First, we show that a range of semantic properties of discourse connectives, such as polarity and veridicality (whether or not the semantics of a connective involves some underlying negation, and whether the connective implies the truth of its arguments, respectively), can be acquired automatically with a high degree of accuracy. Second, we consider the tasks of predicting the similarity and substitutability of pairs of discourse connectives. To assist in this, we introduce a novel information-theoretic function based on variance that, in combination with distributional similarity, is useful for learning such relationships. Third, we attempt to automatically construct taxonomies of discourse connectives capturing substitutability relationships. We introduce a probability model of taxonomies, and show that this can improve accuracy on learning substitutability relationships. Finally, we develop an algorithm for automatically constructing or extending such taxonomies which uses beam search to help find the optimal taxonomy.
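The central hypothesis, that distributional similarity tracks semantic similarity for connectives, can be sketched on toy data: represent each connective by the words immediately around it, then compare the resulting vectors. The sentences below are invented, and the two-word context window is an arbitrary choice for illustration.

```python
import math
from collections import Counter

# Toy corpus of tokenized sentences (invented).
sents = [
    "it rained but we went out".split(),
    "it rained however we went out".split(),
    "we stayed in because it rained".split(),
]

def context_vector(conn):
    """Count the words within two positions of each occurrence of `conn`."""
    v = Counter()
    for s in sents:
        if conn in s:
            i = s.index(conn)
            v.update(s[max(0, i - 2):i] + s[i + 1:i + 3])
    return v

def cosine(a, b):
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(x * x for x in a.values()))
           * math.sqrt(sum(x * x for x in b.values())))
    return num / den if den else 0.0
```

On this corpus the substitutable pair "but"/"however" shares its entire context distribution, while "but"/"because" shares only part of it, so distributional similarity ranks the pairs the way a semantic taxonomy would.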