
    One Homonym per Translation

    The study of homonymy is vital to resolving fundamental problems in lexical semantics. In this paper, we propose four hypotheses that characterize the unique behavior of homonyms in the context of translations, discourses, collocations, and sense clusters. We present a new annotated homonym resource that allows us to test our hypotheses on existing WSD resources. The results of the experiments provide strong empirical evidence for the hypotheses. This study represents a step towards a computational method for distinguishing between homonymy and polysemy, and towards constructing a definitive inventory of coarse-grained senses.

    Polysemy and Co-predication

    Many word forms in natural language are polysemous, but only some of them allow for co-predication, that is, simultaneous predications that select for two different meanings or senses of a nominal in a sentence. In this paper, we try to explain (i) why some groups of senses allow co-predication and others do not, and (ii) how we interpret co-predicative sentences. The paper focuses on those groups of senses that allow co-predication in an especially robust and stable way. We argue, using these cases, but focusing particularly on the multiply polysemous word ‘school’, that the senses involved in co-predication form especially robust activation packages, which allow hearers and readers to access all the different senses in interpretation.

    Polysemy and word meaning: an account of lexical meaning for different kinds of content words

    There is an ongoing debate about the meaning of lexical words, i.e., words that contribute content to the meaning of sentences. This debate has coincided with a renewal in the study of polysemy, which has taken place mainly in psycholinguistics. There is already fruitful cross-fertilization between two lines of research: the theoretical study of lexical word meaning, on the one hand, and the models of polysemy that psycholinguists present, on the other. In this paper I aim to deepen this ongoing exchange: I examine what is said about polysemy, particularly in the psycholinguistics literature, and then show how what we seem to know about the representation and storage of polysemous senses affects our models of lexical word meaning.

    On link predictions in complex networks with an application to ontologies and semantics

    It is assumed that ontologies can be represented and treated as networks, and that these networks show properties of so-called complex networks. Just like ontologies, “our current pictures of many networks are substantially incomplete” (Clauset et al., 2008, p. 3ff.). For this reason, networks have been analyzed and methods for identifying missing edges have been proposed. The goal of this thesis is to show how treating and understanding an ontology as a network can be used to extend and improve existing ontologies, and how measures from graph theory and techniques developed in recent years for social network analysis and other complex networks can be applied to semantic networks in the form of ontologies. Given a large enough amount of data (here, data organized according to an ontology) and the relations defined in the ontology, the goal is to find patterns that help reveal implicitly given information in the ontology. Unlike reasoning and methods of inference, the approach does not rely on predefined patterns of relations; instead, it identifies patterns of relations, or of other structural information taken from the ontology graph, in order to calculate probabilities of yet unknown relations between entities. The methods adopted from network theory and the social sciences presented in this thesis are expected to reduce considerably the work and time necessary to build an ontology by automating it. They are believed to be applicable to any ontology and can be used in either supervised or unsupervised fashion to automatically identify missing relations, add new information, and thereby enlarge the data set and increase the information explicitly available in an ontology. As seen in the IBM Watson example, different knowledge bases are applied in NLP tasks. An ontology like WordNet contains lexical and semantic knowledge on lexemes, while general knowledge ontologies like Freebase and DBpedia contain information on entities of the non-linguistic world.
In this thesis, examples from both kinds of ontologies are used: WordNet and DBpedia. WordNet is a manually crafted resource that establishes a network of representations of word senses, connects them to the word forms used to express them, and links these senses and forms with lexical and semantic relations in a machine-readable form. As will be shown, although a lot of work has been put into WordNet, it can still be improved. While it already contains many lexical and semantic relations, it is not possible to distinguish between polysemous and homonymous words. As will be explained later, this distinction can be useful for NLP problems regarding word sense disambiguation and hence QA. Using graph- and network-based centrality and path measures, the goal is to train a machine learning model that is able to identify new, missing relations in the ontology and add these relations to the whole data set (i.e., WordNet). The approach presented here will be based on a deep analysis of the ontology and the network structure it exposes. Using different measures from graph theory as features, together with a set of manually created examples (a so-called training set), a supervised machine learning approach will be presented and evaluated that will show the benefit of interpreting an ontology as a network, compared to approaches that do not take the network structure into account. DBpedia is an ontology derived from Wikipedia: the structured information given in Wikipedia infoboxes is parsed, and relations according to an underlying ontology are extracted. Unlike Wikipedia, it contains only the small amount of structured information (e.g., the infoboxes of each page) and not the large amount of unstructured information (i.e., the free text) of Wikipedia pages. Hence DBpedia is missing a large number of possible relations that are described in Wikipedia. Compared to Freebase, an ontology used and maintained by Google, DBpedia is also quite incomplete.
This, together with the fact that Wikipedia can be used to compare possible results against, makes DBpedia a good subject of investigation. The approach used in this thesis to extend DBpedia will be based on a thorough analysis of the network structure and the assumed evolution of the network, which will point to the locations in the network where information is most likely to be missing. Since the structure of the ontology and the resulting network is assumed to reveal patterns that are connected to certain relations defined in the ontology, these patterns can be used to identify what kind of relation is missing between two entities of the ontology. This will be done using unsupervised methods from the fields of data mining and machine learning.
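The feature-based link prediction the abstract describes can be illustrated with a minimal sketch. The toy graph, node names, and the choice of the Jaccard coefficient here are illustrative assumptions, not the thesis's actual data or feature set; in the full approach, scores like this one would serve as features for a supervised classifier trained on manually labeled examples.

```python
# Minimal link-prediction sketch on a toy "ontology" graph (hypothetical
# data): score unconnected node pairs with the Jaccard coefficient, one
# graph-theoretic measure usable as a feature for predicting missing edges.
from itertools import combinations

# Toy undirected network; nodes stand in for synsets/entities.
edges = [("dog", "canine"), ("dog", "pet"), ("cat", "pet"),
         ("cat", "feline"), ("canine", "mammal"), ("feline", "mammal")]

adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def jaccard(u, v):
    """Neighborhood overlap: |N(u) & N(v)| / |N(u) | N(v)|."""
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0

# Rank candidate (currently missing) edges by score; high-scoring pairs
# are the most plausible locations for a missing relation.
candidates = [(u, v) for u, v in combinations(sorted(adj), 2)
              if v not in adj[u]]
ranked = sorted(candidates, key=lambda p: jaccard(*p), reverse=True)
for u, v in ranked:
    print(f"{u} -- {v}: {jaccard(u, v):.2f}")
```

In practice one would compute several such measures (common neighbors, centrality, path lengths) per node pair and feed them as a feature vector into a trained model rather than ranking on a single score.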

    The Study of Thesaural Relationships from a Semantic Point of View

    The thesaurus is one of many valuable tools in information technology by which information specialists can optimize the storage and retrieval of documents in scientific databases and on the web. In recent years, there has been a shift from thesauri to ontologies, downgrading the thesaurus in favor of the ontology. The claim is that the thesaurus cannot meet the needs of information management because it cannot create a rich knowledge-based description of documents, and that thesaural relationships are restricted and insufficient. The authors of this paper show that thesaural relationships are not as inadequate and restricted as they are said to be; quite the opposite, they cover all semantic relations and can increase the possibility of successful storage and retrieval of documents. This study shows that thesauri are semantically optimal and cover all lexical relations; therefore, thesauri can continue to serve as suitable tools for knowledge management.

    Decoding of metaphoric form of homonymous scientific term by a linguist and an expert

    The article considers the problem of distinguishing terminological homonymy as a semantic category, and an attempt is made to model the process of decoding (understanding) metaphorical homonymous scientific terms. Integrating the conceptual provisions of term theory, the theory of metaphor, and the category of homonymy, the author offers a scheme of logical (categorial) and semantic analysis of the dictionary definitions of genetic homonymic terms, proceeding from the metaphorical form to the special concept. The process of deciphering homonymous terms is limited in this research to two steps: 1) establishing the motivation of the linguistic form of the term, and 2) determining the denotation of the special concepts designated by one form. As a result of the analysis of dictionary definitions of genetic terms, four types of homonymy have been identified: intralingual, lexical, interscientific, and mixed. The article also describes the features of the associative chains formed by the linguist-terminologist and by the expert in the process of distinguishing the content of homonymous terms.

    Polysemy in Advertising

    The article reviews the conceptual foundations of advertising polysemy – the occurrence of different interpretations for the same advertising message. We discuss how disciplines as diverse as psychology, semiotics, and literary theory have dealt with the issue of polysemy, and provide translations and integration among these multiple perspectives. From this review we draw recurrent themes to foster future research in the area and to show how seemingly opposed methodological and theoretical perspectives complement and extend each other. Implications for advertising research and practice are discussed.

    On the nature of the lexicon: the status of rich lexical meanings

    The main goal of this paper is to show that many phenomena pertaining to the construction of truth-conditional compounds follow characteristic patterns, and that explaining them requires appealing to knowledge structures organized in specific ways. We review a number of phenomena, ranging from non-homogeneous modification and privative modification to polysemy and co-predication, that indicate that knowledge structures do play a role in obtaining truth-conditions. After that, we show that several extant accounts that invoke rich lexical meanings to explain such phenomena face problems related to inflexibility and lack of predictive power. We review different ways in which one might react to such problems as regards lexical meanings: go richer, go moderately richer, go thinner, or go moderately thinner. On the face of it, moderate positions look unstable, given the apparent lack of a clear cutoff point between the semantic and the conceptual, while a very thin view and a very rich view may turn out to be indistinguishable in the long run. As far as we can see, the most pressing open questions concern this last issue: can there be a principled semantic/world-knowledge distinction? Where could it be drawn: at some upper level (e.g. enriched qualia structures) or at some basic level (e.g. constraints)? How do parsimony considerations affect these two different approaches? A thin-meanings approach postulates intermediate representations whose role in the interpretive process is not clear, while a rich-meanings approach seems to duplicate representations: the same representations that are stored in the lexicon would form part of conceptual representations. Both types of parsimony problems would be solved by assuming a direct relation between word forms and (parts of) conceptual or world knowledge, leading to a view that has been attributed to Chomsky (e.g. by Katz 1980) in which there is just syntax and encyclopedic knowledge.

    Ontologies across disciplines
