
    Ontologies and Information Extraction

    This report argues that, even in the simplest cases, IE is an ontology-driven process. It is not a mere text-filtering method based on simple pattern matching and keywords, because the extracted pieces of text are interpreted with respect to a predefined partial domain model. The report shows that, depending on the nature and depth of the interpretation required to extract the information, more or less knowledge must be involved. The argument is illustrated mainly in biology, a domain with critical needs for content-based exploration of the scientific literature and one that is becoming a major application domain for IE.
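    The distinction the abstract draws can be sketched in a few lines: keyword filtering returns raw text spans, while ontology-driven IE types the matched strings against a partial domain model and emits a structured assertion. All names, classes, and the tiny model below are invented for illustration.

```python
# Hypothetical sketch: keyword matching vs. interpretation against a
# partial domain model (the model and classes are invented).

# A tiny "partial domain model" for a biology-flavoured example.
DOMAIN_MODEL = {
    "p53": {"class": "Protein"},
    "mdm2": {"class": "Gene"},
}

def keyword_filter(text, keywords):
    """Plain pattern matching: returns sentences containing a keyword."""
    return [s for s in text.split(". ") if any(k in s.lower() for k in keywords)]

def interpret(text):
    """Ontology-driven IE: matched strings are typed by the domain model,
    so the output is a structured assertion, not a raw text span."""
    found = [(tok, DOMAIN_MODEL[tok.lower()]["class"])
             for tok in text.replace(".", "").split()
             if tok.lower() in DOMAIN_MODEL]
    if len(found) >= 2:
        (a, ca), (b, cb) = found[0], found[1]
        return {"relation": "interacts_with",
                "arg1": {"text": a, "class": ca},
                "arg2": {"text": b, "class": cb}}
    return None

sentence = "MDM2 binds p53."
print(keyword_filter(sentence, ["p53"]))  # a raw text span
print(interpret(sentence))                # a typed, model-interpreted fact
```

    The point of the sketch is that the second output only exists because of the domain model: without the class assignments, "MDM2" and "p53" are just strings.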

    From software APIs to web service ontologies: a semi-automatic extraction method

    Successful employment of semantic web services depends on the availability of high-quality ontologies to describe the domains of these services. Building such ontologies, however, is difficult and costly, thus hampering web service deployment. Our hypothesis is that, since the functionality offered by a web service is reflected by the underlying software, domain ontologies could be built by analyzing the documentation of that software. We verify this hypothesis in the domain of RDF ontology storage tools. We implemented and fine-tuned a semi-automatic method to extract domain ontologies from software documentation. The quality of the extracted ontologies was verified against a high-quality hand-built ontology of the same domain. Despite the low linguistic quality of the corpus, our method extracts a considerable amount of information for a domain ontology.
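    A first step in such semi-automatic extraction is typically to surface frequent content words from the documentation as candidate concepts for the domain ontology. The following is a minimal sketch of that step only; the toy corpus, stopword list, and frequency threshold are invented, and a real pipeline would add linguistic processing such as POS tagging and phrase chunking.

```python
# Minimal sketch of frequency-based candidate-term extraction from
# software documentation (corpus and threshold are invented).
from collections import Counter
import re

DOCS = [
    "The repository stores RDF triples. Each repository supports queries.",
    "A query returns triples from the store. The store indexes triples.",
]

STOPWORDS = {"the", "a", "an", "each", "from"}

def candidate_terms(docs, min_freq=2):
    """Count content words across the corpus; words above the frequency
    threshold become candidate ontology concepts."""
    counts = Counter(
        w for doc in docs
        for w in re.findall(r"[a-z]+", doc.lower())
        if w not in STOPWORDS
    )
    return sorted(t for t, c in counts.items() if c >= min_freq)

print(candidate_terms(DOCS))  # ['repository', 'store', 'triples']
```

    The surviving terms would then be reviewed by a human and organized into classes and relations, which is what makes the overall method semi-automatic rather than fully automatic.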

    Computational Models (of Narrative) for Literary Studies

    In recent decades, a growing body of literature in Artificial Intelligence (AI) and Cognitive Science (CS) has approached the problem of narrative understanding by means of computational systems. Narrative, in fact, is a ubiquitous element in our everyday activity, and the ability to generate and understand stories, and their structures, is a crucial marker of our intelligence. However, despite the fact that, from a historical standpoint, narrative (and narrative structures) have been an important topic of investigation in both these areas, a more comprehensive approach coupling them with narratology, digital humanities and literary studies was still lacking. To fill this gap, a multidisciplinary effort has been made in recent years to create an international meeting open to computer scientists, psychologists, digital humanists, linguists, narratologists, etc. This event has been named CMN (for Computational Models of Narrative) and was launched in 2009 by the MIT scholars Mark A. Finlayson and Patrick H. Winston.

    Intelligent machine for ontological representation of massive pedagogical knowledge based on neural networks

    Higher education is increasingly integrating free learning management systems (LMS). The main objective underlying such systems integration is the automation of online educational processes for the benefit of all the involved actors who use these systems. These processes are developed through the integration and implementation of learning scenarios similar to those of traditional learning systems. LMS produce big data traces emerging from actors' interactions in online learning. However, adequate instruments for representing the knowledge extracted from these big traces are lacking. In this context, the research at hand aims at transforming the big data produced via interactions into big knowledge that can be used in MOOCs by actors at a given learning level within a given learning domain, be it formal or informal. To achieve this objective, ontological approaches are taken, namely mapping, learning and enrichment, in addition to artificial-intelligence-based approaches relevant to our research context. In this paper, we propose three interconnected algorithms for a better ontological representation of learning actors' knowledge, relying heavily on artificial-intelligence approaches throughout the stages of this work. To validate our contribution, we implement an experiment on an example of knowledge sources.

    Investigating the use of background knowledge for assessing the relevance of statements to an ontology in ontology evolution

    The tasks of learning and enriching ontologies with new concepts and relations have attracted a lot of attention in the research community, leading to a number of tools facilitating the process of building and updating ontologies. These tools often discover new elements of information to be included in the considered ontology from external data sources such as text documents or databases, transforming these elements into ontology-compatible statements or axioms. While some techniques are used to make sure that statements to be added are compatible with the ontology (e.g. through conflict detection), such tools generally pay little attention to the relevance of the statement in question. It is either assumed that any statement extracted from a data source is relevant, or that the user will assess whether a statement adds value to the ontology. In this paper, we investigate the use of background knowledge about the context where statements appear to assess their relevance. We devise a methodology to extract such a context from ontologies available online, to map it to the considered ontology, and to visualize this mapping in a way that allows us to study the intersection and complementarity of the two sources of knowledge. By applying this methodology to several examples, we identified an initial set of patterns giving strong indications concerning the relevance of a statement, as well as interesting issues to be considered when applying such techniques.
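    The core intuition of relevance assessment via context can be reduced to a set-overlap sketch: gather the concepts that co-occur with a candidate statement in the external ontology where it was found, map them to the target ontology, and treat the overlap as a relevance signal. The ontologies, concept names, and scoring function below are all invented for illustration; the paper's actual methodology works on mapped ontology fragments and visual inspection, not a single ratio.

```python
# Hedged sketch of the relevance idea: the context (neighbouring concepts)
# of a candidate statement is compared against the target ontology.
# All concept names here are invented.

TARGET = {"Researcher", "Publication", "Conference"}

# Context of the candidate statement (Researcher, hasPet, Animal) as seen
# in some online ontology: concepts that co-occur with its terms there.
CONTEXT = {"Animal", "Veterinarian", "Clinic"}

def relevance(context, target):
    """Fraction of the statement's context already covered by the target
    ontology; a low score suggests the statement is off-topic."""
    return len(context & target) / len(context)

print(relevance(CONTEXT, TARGET))                        # 0.0 -> likely irrelevant
print(relevance({"Publication", "Conference"}, TARGET))  # 1.0 -> likely relevant
```

    Here the pet-related statement scores zero because none of its contextual neighbours appear in the bibliographic target ontology, which is the kind of pattern the abstract describes as a strong indication of (ir)relevance.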

    Improving automation standards via semantic modelling: Application to ISA88

    Standardization is essential for automation. Extensibility, scalability, and reusability are important features for automation software that rely on the efficient modelling of the addressed systems. The work presented here is part of the ongoing development of a methodology for semi-automatic ontology construction from technical documents. The main aim of this work is to systematically check the consistency of technical documents and support the improvement of technical document consistency. The formalization of conceptual models and the subsequent writing of technical standards are simultaneously analyzed, and guidelines are proposed for application to future technical standards. Three paradigms are discussed for the development of domain ontologies from technical documents, starting from the current state of the art, continuing with the intermediate method presented and used in this paper, and ending with the suggested paradigm for the future. The ISA88 Standard is taken as a representative case study. Linguistic techniques from the semi-automatic ontology construction methodology are applied to the ISA88 Standard, and different modelling and standardization aspects that are worth sharing with the automation community are addressed. This study discusses different paradigms for developing and sharing conceptual models for the subsequent development of automation software, along with presenting the systematic consistency checking method.
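    One simple, concrete instance of the consistency checking the abstract motivates is verifying that every term a standard's clauses use is actually defined in its glossary. The toy glossary, clause numbers, and stopword list below are invented and much cruder than ISA88's real vocabulary; they only illustrate the shape of such a check.

```python
# Toy sketch of a term-consistency check over a standard's text: report,
# per clause, the content words that the glossary does not define.
# Glossary, clauses, and stopwords are invented for illustration.
import re

GLOSSARY = {"batch", "recipe", "unit", "phase"}

CLAUSES = {
    "5.2": "A batch is produced by executing a recipe.",
    "5.3": "Each phase runs on a unit or an equipment module.",
}

STOPWORDS = {"a", "an", "the", "is", "by", "on", "or",
             "each", "runs", "produced", "executing"}

def undefined_terms(clauses, glossary, stopwords):
    """Map clause id -> content words missing from the glossary."""
    report = {}
    for cid, text in clauses.items():
        words = [w for w in re.findall(r"[a-z]+", text.lower())
                 if w not in stopwords and w not in glossary]
        if words:
            report[cid] = words
    return report

print(undefined_terms(CLAUSES, GLOSSARY, STOPWORDS))
# {'5.3': ['equipment', 'module']}
```

    A finding like the one above ("equipment module" used but never defined) is exactly the kind of inconsistency a semi-automatically built domain ontology can surface for the standard's editors.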

    Integration of linked open data in case-based reasoning systems

    This paper discusses the opportunities of integrating Linked Open Data (LOD) resources into Case-Based Reasoning (CBR) systems. Using travel medicine as the application domain, we exemplify how LOD can be used to fill three out of the four knowledge containers a CBR system is based on. The paper also presents the techniques applied for the realization and demonstrates the gain in knowledge-acquisition performance from the use of LOD.
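    The idea of filling CBR knowledge containers from LOD can be sketched with a handful of triples: class memberships populate the vocabulary container, and shared property values yield simple similarity knowledge. The triples, predicates, and similarity rule below are invented for illustration and are not the paper's actual travel-medicine data or method.

```python
# Illustrative sketch (invented vocabulary): LOD-style triples filling
# two CBR knowledge containers, vocabulary and similarity knowledge.

TRIPLES = [
    ("Malaria", "rdf:type", "Disease"),
    ("DengueFever", "rdf:type", "Disease"),
    ("Malaria", "transmittedBy", "Mosquito"),
    ("DengueFever", "transmittedBy", "Mosquito"),
]

def vocabulary(triples):
    """Vocabulary container: class -> set of its instances."""
    vocab = {}
    for s, p, o in triples:
        if p == "rdf:type":
            vocab.setdefault(o, set()).add(s)
    return vocab

def similar(a, b, triples):
    """Similarity knowledge: two concepts sharing a property value
    (here, the same transmission vector) are treated as similar."""
    props = lambda x: {(p, o) for s, p, o in triples
                       if s == x and p != "rdf:type"}
    return bool(props(a) & props(b))

print(vocabulary(TRIPLES))
print(similar("Malaria", "DengueFever", TRIPLES))  # True
```

    In a real system these triples would be retrieved from public LOD endpoints rather than hard-coded, which is where the knowledge-acquisition gain the abstract reports comes from.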