
    Ontology of core data mining entities

    In this article, we present OntoDM-core, an ontology of core data mining entities. OntoDM-core defines the most essential data mining entities in a three-layered ontological structure comprising a specification, an implementation and an application layer. It provides a representational framework for describing the mining of structured data and, in addition, provides taxonomies of datasets, data mining tasks, generalizations, data mining algorithms and constraints, based on the type of data. OntoDM-core is designed to support a wide range of applications/use cases, such as semantic annotation of data mining algorithms, datasets and results; annotation of QSAR studies in the context of drug discovery investigations; and disambiguation of terms in text mining. The ontology has been thoroughly assessed following established practices in ontology engineering, is fully interoperable with many domain resources and is easy to extend.
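    As a rough illustration of the kind of semantic annotation the abstract mentions, the sketch below records a dataset and an algorithm execution as RDF with rdflib. The class and property IRIs are placeholders, not OntoDM-core's actual identifiers.

    ```python
    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    ONTODM = Namespace("http://example.org/OntoDM-core#")  # placeholder, not the real ontology IRI
    EX = Namespace("http://example.org/experiment/")

    g = Graph()
    g.bind("ontodm", ONTODM)
    g.bind("ex", EX)

    # Application-layer record of an algorithm execution on a dataset,
    # pointing to specification-layer entities (task, dataset).
    g.add((EX.run42, RDF.type, ONTODM.DataMiningAlgorithmExecution))
    g.add((EX.run42, ONTODM.realizesTask, ONTODM.PredictiveModelingTask))
    g.add((EX.run42, ONTODM.hasInputDataset, EX.qsarDataset))
    g.add((EX.qsarDataset, RDF.type, ONTODM.Dataset))
    g.add((EX.qsarDataset, RDFS.label, Literal("QSAR training set")))

    print(g.serialize(format="turtle"))
    ```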

    Evaluating the quality of linked open data in digital libraries

    Cultural heritage institutions have recently started to share their metadata as Linked Open Data (LOD) in order to disseminate and enrich them. The publication of large bibliographic data sets as LOD is a challenge that requires the design and implementation of custom methods for the transformation, management, querying and enrichment of the data. In this report, the methodology defined by previous research for the evaluation of the quality of LOD is analysed and adapted to the specific case of Resource Description Framework (RDF) triples containing standard bibliographic information. The specified quality measures are reported for four highly relevant libraries. This work has been partially supported by ECLIPSE-UA RTI2018-094283-B-C32 (Spanish Ministry of Education and Science).
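    For a concrete sense of what a quality measure over bibliographic RDF might look like, the sketch below computes a simple completeness ratio (the share of subjects that carry a dcterms:title) with rdflib. The measures actually applied in the report may be defined differently, and the input file name is hypothetical.

    ```python
    from rdflib import Graph
    from rdflib.namespace import DCTERMS

    g = Graph()
    g.parse("records.ttl", format="turtle")  # hypothetical local dump of a library's RDF

    subjects = set(g.subjects())
    with_title = {s for s in subjects if (s, DCTERMS.title, None) in g}

    completeness = len(with_title) / len(subjects) if subjects else 0.0
    print(f"dcterms:title completeness: {completeness:.2%}")
    ```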

    Hypermedia-based discovery for source selection using low-cost linked data interfaces

    Evaluating federated Linked Data queries requires consulting multiple sources on the Web. Before a client can execute queries, it must discover data sources and determine which ones are relevant. Federated query execution research focuses on the actual execution, while data source discovery is often only marginally discussed, even though it has a strong impact on selecting sources that contribute to the query results. Therefore, the authors introduce a discovery approach for Linked Data interfaces based on hypermedia links and controls, and apply it to federated query execution with Triple Pattern Fragments. In addition, the authors identify quantitative metrics to evaluate this discovery approach. This article describes generic evaluation measures and results for their concrete approach. With low-cost data summaries as seed, interfaces to eight large real-world datasets can discover each other within 7 minutes. Hypermedia-based client-side querying shows a promising gain of up to 50% in execution time, but demands algorithms that visit a higher number of interfaces to improve result completeness.
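    The sketch below illustrates the general idea of hypermedia-driven discovery: starting from a seed interface, parse the fragment metadata and follow outgoing links to further interfaces. The predicates treated as links (owl:sameAs, rdfs:seeAlso) and the seed URL are illustrative assumptions, not the actual hypermedia controls used by the authors.

    ```python
    from collections import deque
    from rdflib import Graph
    from rdflib.namespace import OWL, RDFS

    def discover(seed_url, max_interfaces=8):
        """Breadth-first crawl of interfaces reachable via links in fragment metadata."""
        seen, queue = set(), deque([seed_url])
        while queue and len(seen) < max_interfaces:
            url = queue.popleft()
            if url in seen:
                continue
            seen.add(url)
            g = Graph()
            try:
                g.parse(url)  # fetch the fragment and parse its RDF metadata
            except Exception:
                continue  # unreachable or non-RDF response
            for link_pred in (OWL.sameAs, RDFS.seeAlso):  # illustrative link predicates
                for obj in g.objects(None, link_pred):
                    queue.append(str(obj))
        return seen

    # discover("https://fragments.example.org/dataset")  # hypothetical seed interface
    ```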

    Semantic Systems. The Power of AI and Knowledge Graphs

    This open access book constitutes the refereed proceedings of the 15th International Conference on Semantic Systems, SEMANTiCS 2019, held in Karlsruhe, Germany, in September 2019. The 20 full papers and 8 short papers presented in this volume were carefully reviewed and selected from 88 submissions. They cover topics such as: web semantics and linked (open) data; machine learning and deep learning techniques; semantic information management and knowledge integration; terminology, thesaurus and ontology management; data mining and knowledge discovery; semantics in blockchain and distributed ledger technologies.

    A Creative Data Ontology for the Moving Image Industry

    The moving image industry produces an extremely large amount of data and associated metadata for each media creation project, often in the range of terabytes. The current methods used to organise, track, and retrieve the metadata are inadequate, with metadata often being hard to find. The aim of this thesis is to explore whether there is a practical use case for using ontologies to manage metadata in the moving image industry, and to determine whether an ontology can be designed for such a purpose and used to manage metadata more efficiently and improve workflows. It presents a domain ontology, hereafter referred to as the Creative Data Ontology, engineered around a set of metadata fields provided by Evolutions, Double Negative (DNEG), and Pinewood Studios, and four use cases. The Creative Data Ontology is then evaluated using both quantitative methods and qualitative methods (via interviews) with domain and ontology experts. Our findings suggest that there is a practical use case for an ontology-based metadata management solution in the moving image industry. However, it would need to be presented carefully to non-technical users, such as domain experts, as they are likely to experience a steep learning curve. The Creative Data Ontology itself meets the criteria for a high-quality ontology for the sub-sectors of the moving image industry that it covers (i.e. scripted film and television, visual effects, and unscripted television) and provides a good foundation for expanding into other sub-sectors of the industry, although it cannot yet be considered a "standard" ontology. Finally, the thesis presents the methodological process taken to develop the Creative Data Ontology and the lessons learned during the ontology engineering process, which can be valuable guidance for designers and developers of future metadata ontologies. We believe such guidance could be transferable to many domains unrelated to the moving image industry where an ontology of metadata is required. Future research may focus on assisting non-technical users to overcome the learning curve, which may also be applicable to other domains that choose to use ontologies in the future.

    MSLE: An ontology for Materials Science Laboratory Equipment. Large-Scale Devices for Materials Characterization

    This paper introduces a new ontology for Materials Science Laboratory Equipment, termed MSLE. A fundamental issue with materials science laboratory (hereafter lab) equipment in the real world is that scientists work with various types of equipment with multiple specifications. For example, there are many electron microscopes with different parameters in chemical and physical labs. A critical step towards unifying their description is to build an equipment domain ontology as basic semantic knowledge and to guide users in working with the equipment appropriately. Here, we propose to develop a consistent ontology for equipment, the MSLE ontology. In MSLE, two main existing ontologies, the Semantic Sensor Network (SSN) and the Material Vocabulary (MatVoc), have been integrated into the MSLE core to build a coherent ontology. Since various acronyms and terms have been used for equipment, this paper proposes an approach that uses the Simple Knowledge Organization System (SKOS) to represent the hierarchical structure of equipment terms. Equipment terms were collected in various languages and abbreviations and coded into MSLE using the SKOS model. The ontology development was conducted in close collaboration with domain experts and focused on the large-scale devices for materials characterization available in our research group. Competency questions are expected to be addressed through the MSLE ontology. Constraints are modeled in the Shapes Constraint Language (SHACL); a prototype is shown and validated to demonstrate the value of the modeled constraints. Comment: Submitted to Materials Today Communications.
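    As a hedged illustration of the SKOS pattern described above, the snippet below encodes one piece of equipment as a skos:Concept with multilingual labels, an abbreviation, and a broader term, using rdflib. The namespace and concept IRIs are placeholders rather than MSLE's actual identifiers.

    ```python
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    EQ = Namespace("http://example.org/msle/")  # placeholder namespace, not MSLE's real one

    g = Graph()
    g.bind("skos", SKOS)
    g.bind("eq", EQ)

    # One equipment term with multilingual labels, an abbreviation and a broader concept.
    g.add((EQ.SEM, RDF.type, SKOS.Concept))
    g.add((EQ.SEM, SKOS.prefLabel, Literal("scanning electron microscope", lang="en")))
    g.add((EQ.SEM, SKOS.prefLabel, Literal("Rasterelektronenmikroskop", lang="de")))
    g.add((EQ.SEM, SKOS.altLabel, Literal("SEM")))
    g.add((EQ.SEM, SKOS.broader, EQ.ElectronMicroscope))

    print(g.serialize(format="turtle"))
    ```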

    Negative Statements Considered Useful

    Knowledge bases (KBs), pragmatic collections of knowledge about notable entities, are an important asset in applications such as search, question answering and dialogue. Rooted in a long tradition in knowledge representation, all popular KBs only store positive information and abstain from taking any stance towards statements not contained in them. In this paper, we make the case for explicitly stating interesting statements that are not true. Negative statements would be important for overcoming current limitations of question answering, yet due to their potential abundance, any effort towards compiling them needs a tight coupling with ranking. We introduce two approaches to compiling negative statements. (i) In peer-based statistical inference, we compare entities with highly related entities in order to derive potential negative statements, which we then rank using supervised and unsupervised features. (ii) In query-log-based text extraction, we use a pattern-based approach for harvesting search engine query logs. Experimental results show that both approaches hold promising and complementary potential. Along with this paper, we publish the first datasets on interesting negative information, containing over 1.1M statements for 100K popular Wikidata entities.
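    To make the peer-based idea concrete, the toy sketch below treats statements that are frequent among an entity's peers but absent for the entity itself as candidate negatives, ranked here by peer frequency alone. The paper's actual ranking uses richer supervised and unsupervised features, and the example data is invented.

    ```python
    from collections import Counter

    def candidate_negatives(entity_stmts, peer_stmts_list, top_k=5):
        """Rank statements held by peers but missing for the target entity."""
        counts = Counter()
        for peer_stmts in peer_stmts_list:
            counts.update(peer_stmts)
        scored = [(stmt, count / len(peer_stmts_list))
                  for stmt, count in counts.items() if stmt not in entity_stmts]
        return sorted(scored, key=lambda pair: -pair[1])[:top_k]

    # Invented example: an award held by all peers but not by the target entity.
    target = {("occupation", "physicist")}
    peers = [
        {("occupation", "physicist"), ("award", "Nobel Prize in Physics")},
        {("occupation", "physicist"), ("award", "Nobel Prize in Physics")},
    ]
    print(candidate_negatives(target, peers))
    # [(('award', 'Nobel Prize in Physics'), 1.0)]
    ```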