25 research outputs found

    SÉMANTIQUE DISTRIBUTIONNELLE (Distributional Semantics)

    This special issue contains state-of-the-art papers on distributional semantics.

    Unsupervised Induction of Frame-Based Linguistic Forms

    This thesis studies the use of bulk, structured, linguistic annotations in order to perform unsupervised induction of meaning for three kinds of linguistic forms: words, sentences, and documents. The primary linguistic annotations I consider throughout this thesis are frames, which encode core linguistic, background or societal knowledge necessary to understand abstract concepts and real-world situations. I begin with an overview of linguistically-based structured meaning representation; I then analyze available large-scale natural language processing (NLP) and linguistic resources and corpora for their abilities to accommodate bulk, automatically-obtained frame annotations. I then proceed to induce meanings of the different forms, progressing from the word level, to the sentence level, and finally to the document level. I first show how to use these bulk annotations in order to better encode semantic expectations, backed by linguistics and cognitive science, within word forms. I then demonstrate a straightforward approach for learning large lexicalized and refined syntactic fragments, which encode and memoize commonly used phrases and linguistic constructions. Next, I consider two unsupervised models for document and discourse understanding; one is a purely generative approach that naturally accommodates layered annotations and is the first to capture and unify a complete frame hierarchy. The other conditions on limited amounts of external annotations, imputing missing values when necessary, and can more readily scale to large corpora. These discourse models help improve document understanding and type-level understanding.

    Distributional Semantic Models of Attribute Meaning in Adjectives and Nouns

    Attributes such as SIZE, WEIGHT or COLOR are at the core of conceptualization, i.e., the formal representation of entities or events in the real world. In natural language, formal attributes find their counterpart in attribute nouns which can be used in order to generalize over individual properties (e.g., 'big' or 'small' in case of SIZE, 'blue' or 'red' in case of COLOR). In order to ascribe such properties to entities or events, adjective-noun phrases are a very frequent linguistic pattern (e.g., 'a blue shirt', 'a big lion'). In these constructions, attribute meaning is conveyed only implicitly, i.e., without being overtly realized at the phrasal surface. This thesis is about modeling attribute meaning in adjectives and nouns in a distributional semantics framework. This implies the acquisition of meaning representations for adjectives, nouns and their phrasal combination from corpora of natural language text in an unsupervised manner, without tedious handcrafting or manual annotation efforts. These phrase representations can be used to predict implicit attribute meaning from adjective-noun phrases -- a problem which will be referred to as attribute selection throughout this thesis. The approach to attribute selection proposed in this thesis is framed in structured distributional models. We model adjective and noun meanings as distinct semantic vectors in the same semantic space spanned by attributes as dimensions of meaning. Based on these word representations, we make use of vector composition operations in order to construct a phrase representation from which the most prominent attribute(s) being expressed in the compositional semantics of the adjective-noun phrase can be selected by means of an unsupervised selection function. 
This approach not only accounts for the linguistic principle of compositionality that underlies adjective-noun phrases, but also avoids inherent sparsity issues that result from the fact that the relationship between an adjective, a noun and a particular attribute is rarely explicitly observed in corpora. The attribute models developed in this thesis aim at a reconciliation of the conflict between specificity and sparsity in distributional semantic models. For this purpose, we compare various instantiations of attribute models capitalizing on pattern-based and dependency-based distributional information as well as attribute-specific latent topics induced from a weakly supervised adaptation of Latent Dirichlet Allocation. Moreover, we propose a novel framework of distributional enrichment in order to enhance structured vector representations by incorporating additional lexical information from complementary distributional sources. In applying distributional enrichment to distributional attribute models, we follow the idea of augmenting structured representations of adjectives and nouns toward the centroids of their nearest neighbours in semantic space, while keeping the principle of meaning representation along structured, interpretable dimensions intact. We evaluate our attribute models in several experiments on the attribute selection task framed for various attribute inventories, ranging from a tightly confined set of ten core attributes up to a large-scale set of 260 attributes. Our results show that large-scale attribute selection from distributional vector representations that have been acquired in an unsupervised setting is a challenging endeavor that can be rendered more feasible by restricting the semantic space to confined subsets of attributes. Beyond quantitative evaluation, we also provide a thorough analysis of performance factors (based on linear regression) that influence the effectiveness of a distributional attribute model for attribute selection.
This investigation reflects strengths and weaknesses of the model and sheds light on the impact of a variety of linguistic factors involved in attribute selection, e.g., the relative contribution of adjective and noun meaning. In conclusion, we consider our work on attribute selection as an instructive showcase for applying methods from distributional semantics in the broader context of knowledge acquisition from text, in order to alleviate issues related to implicitness and sparsity.
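The composition-and-selection step described in this abstract can be illustrated with a small sketch. The attribute inventory, the toy weights, and the choice of pointwise multiplication as the composition operation are illustrative assumptions, not the thesis's actual data or model:

```python
# Toy sketch of attribute selection in a structured semantic space.
# Dimensions of meaning are attributes; weights are made up for illustration.
ATTRIBUTES = ["SIZE", "WEIGHT", "COLOR"]  # assumed toy inventory

# Structured vectors: how strongly each word evokes each attribute.
adjective = {"big": [0.9, 0.3, 0.0], "blue": [0.0, 0.0, 0.95]}
noun = {"lion": [0.6, 0.5, 0.2], "shirt": [0.2, 0.1, 0.7]}

def compose(adj_vec, noun_vec):
    """Pointwise multiplication, one common vector composition operation."""
    return [a * n for a, n in zip(adj_vec, noun_vec)]

def select_attribute(adj, n):
    """Unsupervised selection: pick the most prominent phrase dimension."""
    phrase = compose(adjective[adj], noun[n])
    return ATTRIBUTES[max(range(len(phrase)), key=phrase.__getitem__)]

print(select_attribute("big", "lion"))    # SIZE
print(select_attribute("blue", "shirt"))  # COLOR
```

Because both word vectors live in the same attribute-spanned space, the composed phrase vector remains interpretable and the selection function is just an argmax over its dimensions.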

    Distributional Semantic Models of Attribute Meaning in Adjectives and Nouns

    Hartung M. Distributional Semantic Models of Attribute Meaning in Adjectives and Nouns. Heidelberg: Universität Heidelberg; 2015.

    Transforming Graph Representations for Statistical Relational Learning

    Relational data representations have become an increasingly important topic due to the recent proliferation of network datasets (e.g., social, biological, information networks) and a corresponding increase in the application of statistical relational learning (SRL) algorithms to these domains. In this article, we examine a range of representation issues for graph-based relational data. Since the choice of relational data representation for the nodes, links, and features can dramatically affect the capabilities of SRL algorithms, we survey approaches and opportunities for relational representation transformation designed to improve the performance of these algorithms. This leads us to introduce an intuitive taxonomy for data representation transformations in relational domains that incorporates link transformation and node transformation as symmetric representation tasks. In particular, the transformation tasks for both nodes and links include (i) predicting their existence, (ii) predicting their label or type, (iii) estimating their weight or importance, and (iv) systematically constructing their relevant features. We motivate our taxonomy through detailed examples and use it to survey and compare competing approaches for each of these tasks. We also discuss general conditions for transforming links, nodes, and features. Finally, we highlight challenges that remain to be addressed.
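As a concrete instance of task (i), predicting link existence, a common-neighbours score is a minimal unsupervised baseline. The toy graph and the scoring function below are illustrative assumptions, not taken from the article:

```python
# Minimal link-prediction sketch: score candidate (non-existent) links by
# the number of neighbours the two endpoints share.
from itertools import combinations

# Toy undirected graph as an adjacency map (illustrative data).
graph = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "d"},
    "d": {"b", "c"},
}

def common_neighbours(u, v):
    """Count the neighbours shared by nodes u and v."""
    return len(graph[u] & graph[v])

# Rank all non-edges by the score; a high score suggests a missing link.
candidates = [
    (u, v) for u, v in combinations(sorted(graph), 2) if v not in graph[u]
]
ranked = sorted(candidates, key=lambda e: -common_neighbours(*e))
print(ranked[0])  # ('a', 'd') -- they share neighbours b and c
```

Richer transformations in the taxonomy (label prediction, weight estimation, feature construction) follow the same pattern of deriving a new representation element from the existing graph structure.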

    Time and tense in English

    It has not been my aim to provide conclusive evidence for or against any one hypothesis regarding Time and Tense. I have simply attempted to collect together and collate much of what has been written on the topic of tense in English, in order to show what the current trends of thought are. In Chapter One I presented a brief survey of some of the more basic notions associated with time and tense, in order to provide a background for the more linguistic approach to follow. I therefore examined such issues as the difference between time and tense, the problem of the passage and directionality of time, of the present moment, time and space, tense as a universal, and various features of tense systems. I sketched Bull's system of scalars, vectors and axes as representative of our English tense system. Chapter Two dealt with time and logic, but as I am a mere layman in matters logical, I refrained from discussing any individual logical system in depth, and rather discussed various problems which appear to confront the logician in formulating a tensed or tenseless logic. This chapter aimed at providing a better understanding of the linguistic issues to follow, for time and logic are intimately connected with language. Chapter Three was more linguistically oriented, and in it I attempted to provide a broad outline of the development of thoughts about tense before the Transformationalist period (pre 1960). Because of the vast scope involved, I had, perforce, to be brief at times. I gave attention to tense in classical grammatical studies, and summarized how it was seen from about 1500 to 1800. I gave more detailed treatment to the twentieth century, focussing specifically on grammarians like Jespersen (1933), Twaddell (1960), Ota (1963), Palmer (1965) and others - all writers typical of the structuralist era.
At the end of Chapter Three I provided an overall summary of ideas on the main tenses by the end of the structuralist period - ideas which were to change radically within the next few years. In Chapter Four I discussed the ideas of tense of some of the main transformationalist/generativists - Diver (1964), Crystal (1966), Huddlestone (1968), Gallagher (1970), McCawley (1971) and Seuren (1974), in an attempt to show how theories on tense were becoming increasingly abstract, and how most data indicated that it is highly probable that tense is an abstract higher predicate of the sentence in which it appears in surface structure, closely related to temporal adverbs. Chapter Five continued in the same vein. I tried to show, using syntactic tests, that tense is a higher predicate, and used arguments involving Conjunction Reduction (based on Kiparsky (1968)), VP Constituency, Sequence of Tense, Pronominalization, and Quantification. In Chapter Six I focussed more closely on tense-time adverbials, in order to show that they have the same syntactic properties as tense, are also probably deep superordinate predicates, and are closely related to tense. My suggestion was that either tense is derived from temporal adverbs or vice versa, as this would simplify the grammar. The derivation procedures at the end of the chapter (6.8) were largely based on Hausmann (1971). I made no detailed reference in this study to extralinguistic matters which affect tenses - such factors as are discussed by G. Lakoff (1971) (presuppositions and relative well-formedness) and by R. Lakoff (1975). Tense is not a matter of pure Structuralism, just as language is not - extralinguistic factors ought to be accounted for before any study can claim to be conclusive.
For this reason I do not in any way claim to have made an exhaustive study of time and tense - I have simply attempted to summarize and coordinate thoughts on the subject, and to suggest tentatively that the most adequate grammar of English would probably derive tense from underlying temporal adverbs.

    Exploiting semantic web knowledge graphs in data mining

    Data Mining and Knowledge Discovery in Databases (KDD) is a research field concerned with deriving higher-level insights from data. The tasks performed in that field are knowledge intensive and can often benefit from using additional knowledge from various sources. Therefore, many approaches have been proposed in this area that combine Semantic Web data with the data mining and knowledge discovery process. Semantic Web knowledge graphs are a backbone of many information systems that require access to structured knowledge. Such knowledge graphs contain factual knowledge about real-world entities and the relations between them, which can be utilized in various natural language processing, information retrieval, and data mining applications. Following the principles of the Semantic Web, Semantic Web knowledge graphs are publicly available as Linked Open Data. Linked Open Data is an open, interlinked collection of datasets in machine-interpretable form, covering most real-world domains. In this thesis, we investigate the hypothesis that Semantic Web knowledge graphs can be exploited as background knowledge in different steps of the knowledge discovery process and in different data mining tasks. More precisely, we aim to show that Semantic Web knowledge graphs can be utilized for generating valuable data mining features that can be used in various data mining tasks. Identifying, collecting and integrating useful background knowledge for a given data mining application can be a tedious and time-consuming task. Furthermore, most data mining tools require features in propositional form, i.e., binary, nominal or numerical features associated with an instance, while Linked Open Data sources are usually graphs by nature. Therefore, in Part I, we evaluate unsupervised feature generation strategies from types and relations in knowledge graphs, which are used in different data mining tasks, i.e., classification, regression, and outlier detection.
As the number of generated features grows rapidly with the number of instances in the dataset, we provide a strategy for feature selection in hierarchical feature space, in order to select only the most informative and most representative features for a given dataset. Furthermore, we provide an end-to-end tool for mining the Web of Linked Data, which supports each step of the knowledge discovery process, i.e., linking local data to a Semantic Web knowledge graph, integrating features from multiple knowledge graphs, feature generation and selection, and building machine learning models. However, we show that such feature generation strategies often lead to high-dimensional feature vectors even after dimensionality reduction, and also, the reusability of such feature vectors across different datasets is limited. In Part II, we propose an approach that circumvents the shortcomings introduced with the approaches in Part I. More precisely, we develop an approach that is able to embed complete Semantic Web knowledge graphs in a low-dimensional feature space, where each entity and relation in the knowledge graph is represented as a numerical vector. Projecting such latent representations of entities into a lower-dimensional feature space shows that semantically similar entities appear closer to each other. We use several Semantic Web knowledge graphs to show that such latent representations of entities have high relevance for different data mining tasks. Furthermore, we show that such features can be easily reused for different datasets and different tasks. In Part III, we describe a number of applications that exploit Semantic Web knowledge graphs, beyond the standard data mining tasks like classification and regression. We show that the approaches developed in Part I and Part II can be used in applications in various domains.
More precisely, we show that Semantic Web graphs can be exploited for analyzing statistics, building recommender systems, entity and document modeling, and taxonomy induction. We also focus on semantic annotations in HTML pages, which are another realization of the Semantic Web vision. Semantic annotations are integrated into the code of HTML pages using markup languages, like Microformats, RDFa, and Microdata. While such data covers various domains and topics, and can be useful for developing various data mining applications, additional steps of cleaning and integrating the data need to be performed. In this thesis, we describe a set of approaches for processing long literals and images extracted from semantic annotations in HTML pages. We showcase the approaches in the e-commerce domain. Such approaches contribute to building and consuming Semantic Web knowledge graphs.
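The unsupervised feature generation from Part I of this abstract amounts to propositionalization: turning an entity's types and relations in the graph into binary features a standard data mining tool can consume. The toy triples and the "predicate=object" feature-naming scheme below are assumptions for illustration, not the thesis's actual strategy:

```python
# Sketch: propositionalize knowledge-graph triples into binary features.
# Each (entity, predicate, object) triple becomes a "predicate=object"
# indicator feature on the subject entity.
triples = [  # illustrative toy triples
    ("Berlin", "type", "City"),
    ("Berlin", "country", "Germany"),
    ("Paris", "type", "City"),
    ("Paris", "country", "France"),
]

def propositionalize(triples):
    """Return {entity: {feature_name: 1}} binary feature vectors."""
    features = {}
    for subj, pred, obj in triples:
        features.setdefault(subj, {})[f"{pred}={obj}"] = 1
    return features

vectors = propositionalize(triples)
print(vectors["Berlin"])  # {'type=City': 1, 'country=Germany': 1}
```

The feature space grows with the vocabulary of predicate-object pairs, which is exactly the high-dimensionality problem that motivates the feature selection of Part I and the low-dimensional graph embeddings of Part II.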