1,283 research outputs found

    Semantic Query Optimisation with Ontology Simulation

    The Semantic Web is, without a doubt, gaining momentum in both industry and academia. The word "semantic" refers to "meaning": a semantic web is a web of meaning. In this fast-changing, results-oriented world, gone are the days when an individual had to struggle to find information on the Internet and knowledge management was the major issue. The Semantic Web envisions linking, integrating and analysing data from various data sources to form a new information stream: a web of databases connected with each other, and machines interacting with other machines, to yield results that are accurate and user-oriented. With the emergence of the Semantic Web framework, the naïve approach of searching for information on the syntactic web has become cliché. This paper proposes optimised semantic searching of keywords, exemplified by simulating an ontology of Indian universities with a proposed algorithm that enables effective semantic retrieval of information which is easy to access and time-saving.
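    The keyword-expansion idea behind such ontology-backed search can be sketched as follows; the mini-ontology of Indian universities, the documents and all names below are illustrative assumptions, not the paper's actual algorithm or data.

    ```python
    # Hypothetical sketch: expand a query term via ontology relations
    # (subclass/synonym edges), then match documents against the expanded set.
    ONTOLOGY = {
        "university": ["IIT Delhi", "IIT Bombay", "University of Delhi"],
        "IIT Delhi": ["Indian Institute of Technology Delhi"],
    }

    DOCUMENTS = [
        "Admissions open at Indian Institute of Technology Delhi",
        "University of Delhi announces new semester dates",
        "Weather report for Mumbai",
    ]

    def expand(term, onto):
        """Transitively collect a term plus all ontology-related terms."""
        seen, stack = {term}, [term]
        while stack:
            for nxt in onto.get(stack.pop(), []):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    def semantic_search(term, docs, onto):
        """Return documents matching any expanded form of the query term."""
        terms = {t.lower() for t in expand(term, onto)}
        return [d for d in docs if any(t in d.lower() for t in terms)]
    ```

    A plain syntactic search for "university" would miss the first document; the expanded query also retrieves it via the "Indian Institute of Technology Delhi" edge.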

    Grid Metadata Lifetime Control in ActOn

    In the Semantic Grid, metadata, as first-class citizens, should be maintained up to date in a cost-effective manner. This includes maximising the automation of different aspects of the metadata lifecycle, managing the evolution and change of metadata in distributed contexts, and adequately synchronising the evolution of all these related entities. In this paper, we introduce a semantic model and its operations, designed to support dynamic metadata management in Active Ontology (ActOn), a semantic information integration approach for highly dynamic information sources. Finally, we illustrate ActOn-based metadata lifetime control with examples from EGEE.
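    One common way to keep metadata up to date cost-effectively is lifetime (TTL) control: cached metadata is served while fresh and re-extracted when stale. A minimal sketch, assuming a TTL-based design; all class and field names here are illustrative, not ActOn's actual model.

    ```python
    # Illustrative TTL-based metadata store: entries carry a lifetime and are
    # re-extracted from their source once that lifetime has elapsed.
    import time

    class MetadataEntry:
        def __init__(self, value, ttl_seconds):
            self.value = value
            self.ttl = ttl_seconds
            self.updated_at = time.monotonic()

        def is_stale(self, now=None):
            now = time.monotonic() if now is None else now
            return now - self.updated_at > self.ttl

    class MetadataStore:
        """Serves cached metadata while fresh; refreshes it when stale."""
        def __init__(self, extractor):
            self.extractor = extractor   # callable: key -> current value
            self.entries = {}

        def get(self, key, ttl_seconds=60.0):
            entry = self.entries.get(key)
            if entry is None or entry.is_stale():
                entry = MetadataEntry(self.extractor(key), ttl_seconds)
                self.entries[key] = entry
            return entry.value
    ```

    The design trades freshness for cost: a longer TTL reduces extraction load on the (possibly remote) information source at the price of potentially serving outdated metadata.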

    Assessing and refining mappings to RDF to improve dataset quality

    RDF dataset quality assessment is currently performed primarily after data is published. However, there is no systematic way to incorporate its results into the dataset, nor the assessment into the publishing workflow. Adjustments are applied manually, but rarely. Moreover, the root of the violations, which often derives from the mappings that specify how the RDF dataset will be generated, is not identified. We suggest an incremental, iterative and uniform validation workflow for RDF datasets stemming originally from (semi-)structured data (e.g., CSV, XML, JSON). In this work, we focus on assessing and improving their mappings. We (i) adopt a test-driven approach for assessing the mappings instead of the RDF dataset itself, as mappings reflect how the dataset will be formed when generated; and (ii) perform semi-automatic mapping refinements based on the results of the quality assessment. The proposed workflow is applied to diverse cases, e.g., large, crowdsourced datasets such as DBpedia, or newly generated ones, such as iLastic. Our evaluation indicates the efficiency of our workflow, as it significantly improves the overall quality of an RDF dataset in the observed cases.
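    The two steps above, assessing mapping rules before any RDF is generated and then semi-automatically refining them, can be sketched in miniature. The rule structure, the seeded datatype typo and the fallback strategy are all illustrative assumptions, not the paper's actual test suite.

    ```python
    # Toy mapping rules: source column -> RDF predicate and datatype.
    # "xsd:stringg" is a deliberately seeded violation for the sketch.
    MAPPING = [
        {"column": "name", "predicate": "foaf:name", "datatype": "xsd:string"},
        {"column": "age",  "predicate": "foaf:age",  "datatype": "xsd:stringg"},
    ]

    KNOWN_DATATYPES = {"xsd:string", "xsd:integer", "xsd:date"}

    def assess_mapping(rules):
        """Test-driven assessment: report violations in the mapping itself."""
        violations = []
        for rule in rules:
            if rule["datatype"] not in KNOWN_DATATYPES:
                violations.append(
                    (rule["column"], f"unknown datatype {rule['datatype']}"))
        return violations

    def refine_mapping(rules, violations, fallback="xsd:string"):
        """Semi-automatic refinement: swap unknown datatypes for a fallback."""
        bad = {col for col, _ in violations}
        return [dict(r, datatype=fallback) if r["column"] in bad else r
                for r in rules]
    ```

    Because one flawed rule can taint millions of generated triples, fixing the mapping once is far cheaper than cleaning the published dataset afterwards.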

    Ontology of core data mining entities

    In this article, we present OntoDM-core, an ontology of core data mining entities. OntoDM-core defines the most essential data mining entities in a three-layered ontological structure comprising a specification, an implementation and an application layer. It provides a representational framework for the description of mining structured data and, in addition, provides taxonomies of datasets, data mining tasks, generalizations, data mining algorithms and constraints, based on the type of data. OntoDM-core is designed to support a wide range of applications/use cases, such as semantic annotation of data mining algorithms, datasets and results; annotation of QSAR studies in the context of drug discovery investigations; and disambiguation of terms in text mining. The ontology has been thoroughly assessed following established practices in ontology engineering, is fully interoperable with many domain resources and is easy to extend.

    Using ontologies to synchronize change in relational database systems

    An ontology is a building block of the Semantic Web. Ontology building requires a detailed domain analysis, which in turn requires financial resources, intensive domain knowledge and time. Domain models in industry are frequently stored as relational database schemas in relational databases. An ontology base underlying such schemas can represent the concepts and relationships present in the domain of discourse. However, with ever-increasing demand for wider access and domain coverage, public databases are not static and their schemas evolve over time. Ontologies generated from these databases have to change to reflect the new situation. Once a database schema is changed, these changes should also be incorporated into any ontology generated from the database. It is not possible to generate a fresh version of the ontology from the new database schema, because the ontology itself may have undergone changes that need to be preserved. To tackle this problem, this paper presents a generic framework that helps to generate and synchronize ontologies with existing data sources. In particular, we address the translation between ontologies and database schemas, but our proposal is also sufficiently generic to be used to generate and maintain ontologies based on XML and object-oriented databases.
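    The core of such synchronization is a schema diff: compare two versions of the relational schema and emit change operations for the ontology side, leaving any independent ontology edits untouched. A minimal sketch under that assumption; table, column and operation names are illustrative, not the paper's framework.

    ```python
    # Diff two relational schema versions (table -> set of column names) and
    # emit ontology change operations a maintainer could apply incrementally.
    def diff_schemas(old, new):
        ops = []
        for table in new.keys() - old.keys():
            ops.append(("add_class", table))          # new table -> new class
        for table in old.keys() - new.keys():
            ops.append(("remove_class", table))       # dropped table
        for table in old.keys() & new.keys():
            for col in new[table] - old[table]:
                ops.append(("add_property", table, col))
            for col in old[table] - new[table]:
                ops.append(("remove_property", table, col))
        return ops
    ```

    Applying only the emitted operations, rather than regenerating the ontology wholesale, preserves manual changes made to the ontology since it was first derived from the schema.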

    TOQL: Temporal Ontology Querying Language
