
    Extraction de relations d'hyperonymie à partir de Wikipédia

    This work helps demonstrate the value of exploiting the structure of documents available on the Web to enrich semantic knowledge bases. Such knowledge bases play a key role in many applications: NLP, the Semantic Web, information retrieval, diagnostic support, and so on. In this context, we focus on identifying the hypernymy relations present in Wikipedia disambiguation pages. A hypernymy-relation extractor dedicated to this type of page and based on lexico-syntactic patterns was designed, implemented, and evaluated. The results show a precision of 0.68 and a recall of 0.75 for the patterns we defined, and an enrichment rate of 33% for the two semantic resources BabelNet and DBpedia.
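    To make the approach concrete, here is a minimal sketch of pattern-based hypernymy extraction. The English Hearst-style patterns and helper names below are illustrative assumptions, not the French patterns defined in the paper.

```python
import re

# Illustrative Hearst-style patterns (English, for readability); the
# paper's actual patterns target French disambiguation pages.
PATTERNS = [
    # "X is a/an Y"  ->  X is the hyponym, Y the hypernym
    re.compile(r"(?P<hypo>\w[\w ]*?) is an? (?P<hyper>\w[\w ]*)"),
    # "Y(,) such as X"  ->  X is the hyponym, Y the hypernym
    re.compile(r"(?P<hyper>\w[\w ]*?),? such as (?P<hypo>\w[\w ]*)"),
]

def extract_hypernymy(sentence: str) -> list[tuple[str, str]]:
    """Return (hyponym, hypernym) pairs matched by any pattern."""
    pairs = []
    for pattern in PATTERNS:
        for m in pattern.finditer(sentence):
            pairs.append((m.group("hypo").strip(), m.group("hyper").strip()))
    return pairs

print(extract_hypernymy("Mercury is a planet of the Solar System"))
# [('Mercury', 'planet of the Solar System')]
```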

    Extraction de relations : Exploiter des techniques complémentaires pour mieux s'adapter au type de texte

    Extracting hypernymy relations from text is one of the key steps in automatic ontology construction and knowledge-base population. Several types of methods (linguistic, statistical, combined) have been exploited in a variety of proposals in the literature. However, the respective contributions and the complementarity of these methods are still poorly understood, which makes it hard to combine them optimally. In this article, we investigate the complementarity of two methods of different natures, one based on linguistic patterns, the other on supervised learning, for identifying the hypernymy relation across its different modes of expression. We applied these methods to a sub-corpus of French Wikipedia consisting of disambiguation pages. This corpus lends itself well to both approaches: its texts are particularly rich in hypernymy relations, and they contain both well-formed sentences and syntactically poor formulations. We compared the results of the two methods taken independently to establish their respective performance, then compared them with the result of the two methods applied together. The best results were obtained in the latter case, with an F-measure of 0.68. Moreover, the Wikipedia extractor resulting from this work can enrich the French DBpedia: 55% of the relations identified by our extractor are not already present in DBpedia.
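    As a rough illustration of the combination this article studies, the sketch below unions the output of two stand-in extractors. Both placeholder functions and the example pairs are assumptions, not the paper's actual pattern set or supervised model; only the combination logic is the point.

```python
# Placeholder extractors standing in for the paper's two methods.

def pattern_based(sentence: str) -> set[tuple[str, str]]:
    # e.g. a regex-driven extractor like the one sketched earlier
    return {("Mercury", "planet")} if "is a planet" in sentence else set()

def supervised(sentence: str) -> set[tuple[str, str]]:
    # stands in for a trained classifier scoring candidate pairs
    return {("Mercury", "celestial body")} if "Mercury" in sentence else set()

def combine(sentence: str) -> set[tuple[str, str]]:
    """Union the two outputs: either method suffices, which favours
    recall; replacing '|' with '&' would favour precision instead."""
    return pattern_based(sentence) | supervised(sentence)

print(combine("Mercury is a planet"))
# {('Mercury', 'planet'), ('Mercury', 'celestial body')}  (set order varies)
```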

    The Future of Biotechnology Crime: A Parallel Delphi Study with Non-Traditional Experts

    BACKGROUND: The way science is practiced is changing, and forecasting biotechnology crime trends remains a challenge as future misuses become more sophisticated. METHODS: A parallel Delphi study was conducted to elicit future biotechnology scenarios from two groups of experts. Traditional experts, such as professionals in national security and intelligence, were interviewed and asked to forecast emerging crime trends facilitated by biotechnology and what should be done to safeguard against them. Non-traditional experts, such as “biohackers” who experiment with biotechnology in unexpected ways, were also interviewed. The study entailed three rounds to obtain consensus on (i) the anticipated misuse of biotechnology and (ii) the expected prevention strategies. RESULTS: Traditional and non-traditional experts strongly agreed that misuse is anticipated within the cyber-infrastructure of, for example, medical devices and hospitals, through breaches and corporate espionage. Preventative steps that both groups strongly advocated involved increasing public biosecurity literacy and increasing funding for biotechnology security. Both groups agreed that the responsibility for mitigation includes government bodies. Non-traditional experts generated more scenarios and a greater diversity of views. DISCUSSION: A systematic, anonymous, and independent interaction with a diverse panel of experts provided meaningful insights for anticipating emerging trends in biotechnology crime. A multi-sector intervention strategy is proposed.

    Semantic Enrichment of Ontology Mappings

    Schema and ontology matching play an important part in the fields of data integration and the Semantic Web. Given two heterogeneous data sources, metadata matching usually constitutes the first step in the data integration workflow: the analysis and comparison of two input resources such as schemas or ontologies. The result is a list of correspondences between the two schemas or ontologies, often called a mapping or alignment. Many tools and research approaches have been proposed to determine those correspondences automatically. However, most match tools do not provide any information about the relation type that holds between matching concepts, for the simple but important reason that most common match strategies are too simple and heuristic to allow any sophisticated relation-type determination. Knowing the specific type holding between two concepts, e.g., whether they are in an equality, subsumption (is-a), or part-of relation, is very important for advanced data integration tasks such as ontology merging or ontology evolution. It is also very important for mappings in the biological or biomedical domain, where is-a and part-of relations may far exceed the number of equality correspondences. Such more expressive mappings allow much better integration results but have scarcely been the focus of research so far.

    In this doctoral thesis, the focus of interest is the determination of the correspondence types in a given mapping, referred to as semantic mapping enrichment. We introduce and present the mapping enrichment tool STROMA, which takes a pre-calculated schema or ontology mapping and determines a semantic relation type for each correspondence. In contrast to previous approaches, we strongly focus on linguistic laws and linguistic insights; by and large, linguistics is the key to precise matching and to the determination of relation types. We introduce various strategies that make use of these linguistic laws and are able to calculate the semantic type between two matching concepts. The observations and insights gained from this research go far beyond the field of mapping enrichment and can also be applied to schema and ontology matching in general.

    Since generic strategies have certain limits and may not be able to determine the relation type between more complex concepts, such as a laptop and a personal computer, background knowledge also plays an important role in this research. For example, a thesaurus can help recognize that these two concepts are in an is-a relation. We show how background knowledge can be used effectively in this setting, how it is possible to draw conclusions even if a concept is not contained in it, how the relation types in complex paths can be resolved, and how time complexity can be reduced by a so-called bidirectional search. The developed techniques go far beyond the background-knowledge exploitation of previous approaches and are now part of the semantic repository SemRep, a flexible and extendable system that combines different lexicographic resources. Furthermore, we show how additional lexicographic resources can be developed automatically by parsing Wikipedia articles. The proposed Wikipedia relation extraction approach yields several million additional relations, which constitute significant additional knowledge for mapping enrichment. The extracted relations were also added to SemRep, which thus became a comprehensive background knowledge resource. To augment the quality of the repository, different techniques were used to discover and delete irrelevant semantic relations. Several experiments show that STROMA obtains very good results w.r.t. relation type detection; in a comparative evaluation, it achieved considerably better results than related applications. This corroborates the overall usefulness and strengths of the implemented strategies, which were developed with particular emphasis on the principles and laws of linguistics.
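    As a toy illustration of relation-type determination against background knowledge, the sketch below uses NLTK's WordNet as a stand-in for SemRep. The function and its heuristics are assumptions for illustration, not STROMA's implementation.

```python
from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

def relation_type(a: str, b: str) -> str:
    """Crude relation-type lookup between two noun concepts; a stand-in
    for STROMA's background-knowledge strategies, not its actual code."""
    for s1 in wn.synsets(a, pos=wn.NOUN):
        for s2 in wn.synsets(b, pos=wn.NOUN):
            if s1 == s2:
                return "equal"
            if s2 in s1.closure(lambda s: s.hypernyms()):
                return "is-a"       # a is a kind of b
            if s2 in s1.closure(lambda s: s.part_holonyms()):
                return "part-of"    # a is a part of b
    return "unknown"

print(relation_type("laptop", "computer"))  # expected: is-a
print(relation_type("finger", "hand"))      # expected: part-of
```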

    The Impact of Lexical and Cohesive Devices Knowledge on 11th Graders' Reading Comprehension

    This study aimed at identifying the impact of vocabulary and cohesive-devices knowledge, especially pronouns and conjunctions, on literary 11th graders' reading comprehension. The researcher applied pre- and post-tests to a random sample of two intact classes of sixty literary 11th-grade male students, divided into control and experimental groups. These sixty learners represented nearly 38% of the learners to whom the researcher had been teaching English. The first part of the pre-test was a vocabulary test and the second a pronouns-and-conjunctions test. This pre-test was administered to the learners of both groups to diagnose their abilities and determine whether the two groups were equal in their knowledge. After confirming that both groups were approximately equal in their prior knowledge of vocabulary, pronouns, and conjunctions, the researcher gave them a treatment over eight lessons, using three texts from the graders' syllabus, covering vocabulary and the cohesive devices in question. After that, the researcher administered a post-test to identify the effect of knowledge of vocabulary and cohesive devices on the students' reading comprehension. Both tests were carried out during the second term of 2011. The researcher found that each independent variable, whether vocabulary or pronouns, remarkably and positively affected reading comprehension, and that each independent variable can predict reading comprehension. However, vocabulary affected reading comprehension more than pronouns and conjunctions did. In conclusion, the researcher recommended further studies to identify the effect of either increasing or decreasing the pronouns in a text on reading comprehension and critical thinking.

    Constructive Ontology Engineering

    The proliferation of the Semantic Web depends on ontologies for knowledge sharing, semantic annotation, data fusion, and descriptions of data for machine interpretation. However, ontologies are difficult to create and maintain, and their structure and content may vary depending on the application and domain. Several methods described in the literature have been used to create ontologies from various data sources, such as structured data in databases or unstructured text found in text or HTML documents. Various data mining techniques, natural language processing methods, syntactic analysis, machine learning methods, and other techniques have been used to build ontologies with automated and semi-automated processes. Due to the vast amount of unstructured text and its continued proliferation, the problem of constructing ontologies from text has attracted considerable research attention. However, the constructed ontologies may be noisy, with missing and incorrect knowledge, so ontology construction continues to be a challenging research problem. The goal of this research is to investigate a new method for guiding a process of extracting and assembling candidate terms into domain-specific concepts and relationships. The process is part of an overall semi-automated system for creating ontologies from unstructured text sources and is driven by the user's goals in an incremental process. The system applies natural language processing techniques and uses a series of syntactic analysis tools to extract grammatical relations from a list of text terms representing the parts of speech of a sentence. The extraction process focuses on evaluating the subject-predicate-object sequences of the text for potential concept-relation-concept triples to be built into an ontology. Users can guide the system by selecting seed concept-relation-concept triples to assist in building concepts from the extracted domain-specific terms. As a result, the ontology building process becomes an incremental one that allows the user to interact with the system, guide the development of the ontology, and tailor it to the application's needs. The main contribution of this work is the implementation and evaluation of a new semi-automated methodology for constructing domain-specific ontologies from an unstructured text corpus.
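    A hedged sketch of the subject-predicate-object extraction step, using spaCy's dependency parser as a stand-in for the system's syntactic analysis tools; the model name and the single-verb heuristic are assumptions, not the thesis's actual pipeline.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def extract_triples(text: str):
    """Yield (subject, predicate, object) candidates, one per verb that
    has both an nsubj and a dobj child. A deliberately simple heuristic;
    real systems handle many more grammatical constructions."""
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ == "VERB":
                subjects = [c for c in token.children if c.dep_ == "nsubj"]
                objects = [c for c in token.children if c.dep_ == "dobj"]
                for s in subjects:
                    for o in objects:
                        yield (s.text, token.lemma_, o.text)

for triple in extract_triples("The enzyme catalyzes the reaction."):
    print(triple)  # expected: ('enzyme', 'catalyze', 'reaction')
```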

    Exploiting the conceptual space in hybrid recommender systems: a semantic-based approach

    Unpublished doctoral thesis. Universidad Autónoma de Madrid, Escuela Politécnica Superior, October 200