
    Wiktionary: The Metalexicographic and the Natural Language Processing Perspective

    Dictionaries are the main reference works for our understanding of language. They are used by humans and likewise by computational methods. So far, the compilation of dictionaries has almost exclusively been the profession of expert lexicographers. The ease of collaboration on the Web and the rising initiatives for collecting openly licensed knowledge, such as Wikipedia, have given rise to a new type of dictionary that is voluntarily created by large communities of Web users. This collaborative construction approach presents a new paradigm for lexicography that poses new research questions to dictionary research on the one hand and provides a very valuable knowledge source for natural language processing applications on the other. The subject of our research is Wiktionary, currently the largest collaboratively constructed dictionary project.

    In the first part of this thesis, we study Wiktionary from the metalexicographic perspective. Metalexicography is the scientific study of lexicography, including the analysis and criticism of dictionaries and lexicographic processes. We make three contributions to this area of research: (i) We provide a detailed analysis of Wiktionary and its various language editions and dictionary structures. (ii) We analyze the collaborative construction process of Wiktionary. Our results show that the traditional phases of the lexicographic process do not apply well to Wiktionary, which is why we propose a novel process description based on the frequent and continual revision and discussion of the dictionary articles and the lexicographic instructions. (iii) We perform a large-scale quantitative comparison of Wiktionary and a number of other dictionaries regarding the covered languages, lexical entries, word senses, pragmatic labels, lexical relations, and translations. We conclude the metalexicographic perspective by finding that the collaborative Wiktionary is not an appropriate replacement for expert-built dictionaries, due to its inconsistencies, quality flaws, one-size-fits-all approach, and strong dependence on expert-built dictionaries. However, Wiktionary's rapid and continual growth, its high coverage of languages, newly coined words, domain-specific vocabulary, and non-standard language varieties, as well as the kind of evidence it offers, based on the authors' intuition, provide promising opportunities for both lexicography and natural language processing. In particular, we find that Wiktionary and expert-built wordnets and thesauri contain largely complementary entries.

    In the second part of the thesis, we study Wiktionary from the natural language processing perspective with the aim of making its linguistic knowledge available to computational applications. Such applications require vast amounts of structured, high-quality data. Expert-built resources have been found to suffer from insufficient coverage and high construction and maintenance cost, whereas fully automatic extraction from corpora or the Web often yields resources of limited quality. Collaboratively built encyclopedias present a viable solution, but do not adequately cover the linguistically oriented knowledge found in dictionaries. We therefore propose extracting linguistic knowledge from Wiktionary, which we achieve through three main contributions: (i) We propose the novel multilingual ontology OntoWiktionary, created by extracting and harmonizing the weakly structured dictionary articles in Wiktionary. A particular challenge in this process is the ambiguity of semantic relations and translations, which we resolve by automatic word sense disambiguation methods. (ii) We automatically align Wiktionary with WordNet 3.0 at the word sense level. The largely complementary information from the two dictionaries yields an aligned resource with higher coverage and an enriched representation of word senses. (iii) We represent Wiktionary according to the ISO standard Lexical Markup Framework, which we adapt to the peculiarities of collaborative dictionaries. This standardized representation is of great importance for fostering the interoperability of resources and hence the dissemination of Wiktionary-based research. Our work thus presents a foundational step towards the large-scale integrated resource UBY, which facilitates unified access to a number of standardized dictionaries by means of a shared web interface for human users and an application programming interface for natural language processing applications. A user can, in particular, switch between and combine information from Wiktionary and other dictionaries without completely changing the software. Our final resource and the accompanying datasets and software are publicly available and can be employed for multiple natural language processing applications. In particular, the resource fills the gap between the small expert-built wordnets and the large amount of encyclopedic knowledge in Wikipedia. We provide a survey of previous work utilizing Wiktionary, and we exemplify the usefulness of our work in two case studies on measuring verb similarity and detecting cross-lingual marketing blunders, both of which make use of our Wiktionary-based resource and the results of our metalexicographic study. We conclude the thesis by emphasizing the usefulness of collaborative dictionaries when combined with expert-built resources, which bears much untapped potential.
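    The word sense alignment step in contribution (ii) can be illustrated with a minimal sketch: match a Wiktionary gloss against WordNet 3.0 synsets by simple gloss-token overlap. The thesis uses more sophisticated similarity and disambiguation methods; the example gloss below is hypothetical.

```python
# A minimal sketch of gloss-based word sense alignment between a
# Wiktionary sense and WordNet 3.0. Assumes Wiktionary senses have
# already been extracted as (word, gloss) pairs; the real alignment
# uses richer similarity measures than this token overlap.
# Requires: nltk, with the WordNet corpus (nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

STOPWORDS = {"a", "an", "the", "of", "or", "to", "in", "and", "that"}

def tokens(text):
    """Lowercased content-word tokens of a gloss."""
    return {t for t in text.lower().split() if t not in STOPWORDS}

def align_sense(word, wiktionary_gloss):
    """Return the WordNet synset whose definition overlaps most with
    the given Wiktionary gloss, or None if nothing overlaps."""
    wikt = tokens(wiktionary_gloss)
    best, best_score = None, 0
    for synset in wn.synsets(word):
        score = len(wikt & tokens(synset.definition()))
        if score > best_score:
            best, best_score = synset, score
    return best

# Hypothetical Wiktionary sense for "bank":
print(align_sense("bank", "a financial institution that accepts deposits"))
```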

    Semantic approaches to domain template construction and opinion mining from natural language

    Most text mining algorithms in use today are based on a lexical representation of input texts, for example a bag of words. A possible alternative is to first convert the text into a semantic representation, one that captures the text's content in a structured way using only a set of pre-agreed labels. This thesis explores the feasibility of such an approach for two tasks on collections of documents: identifying common structure in input documents (»domain template construction«) and helping users find differing opinions in input documents (»opinion mining«). We first discuss ways of converting natural text into a semantic representation. We propose and compare two new methods with varying degrees of target representation complexity. The first method, which shows more promise, is based on dependency parser output, which it converts into lightweight semantic frames with role fillers aligned to WordNet. The second method structures text using Semantic Role Labeling techniques and aligns the output to the Cyc ontology. Based on the first of these representations, we next propose and evaluate two methods for constructing frame-based templates for documents from a given domain (e.g. news reports on bombing attacks). A template is the set of all salient attributes (e.g. attacker, number of casualties, ...). The idea behind both methods is to construct abstract frames for which more specific instances (according to the WordNet hierarchy) can be found in the input documents. Fragments of these abstract frames represent the sought-for attributes. We achieve state-of-the-art performance and additionally provide detailed type constraints for the attributes, something not possible with competing methods. Finally, we propose a software system for exposing differing opinions in the news. For any given event, we present the user with all known articles on the topic and let them navigate these by three semantic properties simultaneously: sentiment, topical focus, and geography of origin. The result is a dynamically reranked set of relevant articles and a near-real-time focused summary of those articles. The summary, too, is computed from the semantic text representation discussed above. We conducted a user study of the whole system, with very positive results.
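    As a rough illustration of the first representation, the sketch below extracts lightweight predicate-argument frames from dependency-parser output. spaCy and its dependency labels stand in for the parser used in the thesis, and the WordNet alignment of role fillers is omitted; this is a simplified sketch, not the thesis's implementation.

```python
# A minimal sketch of turning dependency-parser output into lightweight
# semantic frames, in the spirit of the first method described above.
# Role inventory is reduced to agent/patient for brevity.
# Requires: spacy, with the 'en_core_web_sm' model installed.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_frames(text):
    """Yield one frame per verb: its lemma as the predicate, with
    subject and object dependents as agent and patient fillers."""
    doc = nlp(text)
    for token in doc:
        if token.pos_ == "VERB":
            agent = [c.text for c in token.children if c.dep_ == "nsubj"]
            patient = [c.text for c in token.children
                       if c.dep_ in ("dobj", "obj")]
            if agent or patient:
                yield {"predicate": token.lemma_,
                       "agent": agent, "patient": patient}

for frame in extract_frames("The militants attacked the convoy."):
    print(frame)  # e.g. {'predicate': 'attack', 'agent': ['militants'], ...}
```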

    Multiword expressions at length and in depth

    The annual workshop on multiword expressions has taken place since 2001 in conjunction with major computational linguistics conferences and attracts the attention of an ever-growing community working on a variety of languages, linguistic phenomena, and related computational processing issues. MWE 2017 took place in Valencia, Spain, and represented a vibrant panorama of the current research landscape on the computational treatment of multiword expressions, featuring many high-quality submissions. Furthermore, MWE 2017 included the first shared task on multilingual identification of verbal multiword expressions. The shared task, through extensive communal work, has developed important multilingual resources and mobilised several research groups in computational linguistics worldwide. This book contains extended versions of selected papers from the workshop. The authors worked hard to include detailed explanations, broader and deeper analyses, and exciting new results, all of which were thoroughly reviewed by an internationally renowned committee. We hope that this distinctly joint effort will provide a meaningful and useful snapshot of the multilingual state of the art in multiword expression modelling and processing, and will be a point of reference for future work.

    Towards a system of concepts for Family Medicine. Multilingual indexing in General Practice/ Family Medicine in the era of Semantic Web

    University of Liège, Belgium. Faculty of Medicine, Département Universitaire de Médecine Générale, Unité de recherche Soins Primaires et Santé. Doctoral thesis in biomedical sciences by Dr. Marc Jamoulle. Executive Summary.

    Introduction: This thesis is about giving visibility to the often overlooked work of family physicians and, consequently, about grey literature in General Practice and Family Medicine (GP/FM). It often seems that conference organizers do not think of GP/FM as a knowledge-producing discipline that deserves active dissemination. A conference is organized, but not much is done with the knowledge shared at these meetings; in turn, that knowledge cannot be reused or reapplied. This thesis is also about indexing. To retrieve knowledge, indexing is mandatory. We must prepare tools that will automatically index the thousands of abstracts that family doctors produce each year in various languages. Finally, this work is about semantics (for specific terms, see the Glossary, page 257 of the thesis). It is an introduction to health terminologies, ontologies, semantic data, and linked open data. All are expressions of the next step: the Semantic Web for health care data. Concepts, units of thought expressed by terms, are our target and must be expressible in multiple languages. In turn, three areas of knowledge are at stake in this study: (i) Family Medicine as a pillar of primary health care, (ii) computational linguistics, and (iii) health information systems.

    Aims:
    • To identify knowledge produced by general practitioners (GPs) by improving the annotation of grey literature in primary health care.
    • To propose an experimental indexing system, acting as a draft for a standardized table of contents of GP/FM.
    • To improve the searchability of repositories of grey literature in GP/FM.

    Methods: The first step aimed to design the taxonomy by identifying relevant concepts in a compiled corpus of GP/FM texts. We studied the concepts identified in nearly two thousand communications given by GPs at conferences. The relevant concepts belong to fields that focus on GP/FM activities (e.g. teaching, ethics, management, or environmental hazard issues). The second step was the development of an online, multilingual, terminological resource for each category of the resulting taxonomy, named Q-Codes. We designed this terminology in the form of a lightweight ontology, accessible online for readers and ready for use by computers of the Semantic Web. It is also fit for the Linked Open Data universe.

    Results: We propose 182 Q-Codes in an online multilingual database covering 10 languages (www.hetop.eu/Q), each acting as a filter for Medline. Q-Codes are also available in the form of Uniform Resource Identifiers (URIs) and are exportable in the Web Ontology Language (OWL). The International Classification of Primary Care (ICPC) is linked to the Q-Codes to form the Core Content Classification in General Practice/Family Medicine (3CGP). So far, 3CGP is in use by humans in pedagogy, in bibliographic studies, and in indexing congresses, master's theses, and other forms of grey literature in GP/FM. Use by computers is being experimented with in automatic classifiers, annotators, and natural language processing.

    Discussion: To the best of our knowledge, this is the first attempt to expand the ICPC coding system with an extension for family physicians' contextual issues, thus covering the non-clinical content of practice. It remains to be proven that our proposed terminology will help in dealing with more complex systems, such as MeSH, to support information storage and retrieval activities. However, this exercise is proposed as a first step in the creation of an ontology of GP/FM and as an opening to the complex world of Semantic Web technologies.

    Conclusion: We expect that the creation of this terminological resource for indexing abstracts and for facilitating Medline searches by general practitioners, researchers, and students in medicine will reduce the loss of knowledge in the domain of GP/FM. In addition, through better indexing of the grey literature (congress abstracts, master's and doctoral theses), we hope to enhance the accessibility of research results and give visibility to the invisible work of family physicians.
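    As a sketch of how such an OWL export could be consumed programmatically, the snippet below loads a local copy of the Q-Codes ontology with rdflib and groups multilingual labels by concept URI. The file name, the RDF/XML serialization, and the use of skos:prefLabel are assumptions about the export's shape; adjust them to the actual schema.

```python
# A minimal sketch of consuming a terminology published as OWL/linked
# data, such as the Q-Codes export described above. The file name,
# serialization, and label property are assumptions, not the actual
# published schema. Requires: rdflib.
from rdflib import Graph
from rdflib.namespace import SKOS

g = Graph()
g.parse("q-codes.owl", format="xml")  # hypothetical local copy of the export

# Group labels by concept URI, keyed by language tag.
labels = {}
for concept, _, label in g.triples((None, SKOS.prefLabel, None)):
    labels.setdefault(str(concept), {})[label.language] = str(label)

# Print the English and French labels for each concept, if present.
for uri, by_lang in sorted(labels.items()):
    print(uri, by_lang.get("en"), by_lang.get("fr"))
```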