9 research outputs found

    Ontop: answering SPARQL queries over relational databases

    We present Ontop, an open-source Ontology-Based Data Access (OBDA) system that allows relational data sources to be queried through a conceptual representation of the domain of interest, provided in terms of an ontology to which the data sources are mapped. Key features of Ontop are its solid theoretical foundations; a virtual approach to OBDA that avoids materializing triples and is implemented through query rewriting; extensive optimizations exploiting all elements of the OBDA architecture; its compliance with all relevant W3C recommendations (including SPARQL queries, R2RML mappings, and OWL 2 QL and RDFS ontologies); and its support for all major relational databases.
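The virtual approach described above can be illustrated with a minimal sketch: a triple pattern over the ontology is rewritten into SQL against the relational source, so no triples are ever materialized. The table schema, the mapping entries, and the property names below are hypothetical, invented for illustration only; real Ontop mappings are written in R2RML or its native mapping language.

```python
import sqlite3

# Toy relational source (hypothetical schema, not from the paper).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employee (id INTEGER, name TEXT, dept TEXT)")
db.executemany("INSERT INTO employee VALUES (?, ?, ?)",
               [(1, "Ada", "R&D"), (2, "Grace", "R&D"), (3, "Alan", "Sales")])

# A mapping in the spirit of R2RML: each ontology property is backed by a
# SQL query producing (subject, object) pairs. Nothing is materialized.
MAPPINGS = {
    ":name":   "SELECT id AS s, name AS o FROM employee",
    ":inDept": "SELECT id AS s, dept AS o FROM employee",
}

def answer(prop, obj=None):
    """Rewrite the triple pattern (?s, prop, obj) into SQL and evaluate it."""
    sql = MAPPINGS[prop]
    if obj is not None:                      # constant object: push a filter down
        return db.execute(f"SELECT s, o FROM ({sql}) WHERE o = ?", (obj,)).fetchall()
    return db.execute(sql).fetchall()

print(answer(":inDept", "R&D"))   # subjects mapped into the virtual :inDept triples
```

The point of the sketch is that query answering touches only the database: the "graph" exists solely through the rewriting.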

    Virtual Knowledge Graphs: An Overview of Systems and Use Cases

    In this paper, we present the virtual knowledge graph (VKG) paradigm for data integration and access, also known in the literature as Ontology-Based Data Access. Instead of structuring the integration layer as a collection of relational tables, the VKG paradigm replaces the rigid structure of tables with the flexibility of graphs that are kept virtual and embed domain knowledge. We explain the main notions of this paradigm, its tooling ecosystem, and significant use cases in a wide range of applications. Finally, we discuss future research directions.

    Ontology-based data integration in EPNet: Production and distribution of food during the Roman Empire

    Semantic technologies are rapidly changing historical research. Over the last decades, an immense amount of new quantifiable data has been accumulated, and made available in interchangeable formats, in the social sciences and humanities, opening up new possibilities for solving old questions and posing new ones. This paper introduces a framework that eases scholars' access to historical and cultural data about food production and the commercial trade system during the Roman Empire, distributed across different data sources. The proposed approach relies on the Ontology-Based Data Access (OBDA) paradigm, in which the different datasets are virtually integrated by a conceptual layer (an ontology) that provides the user with a clear point of access and a unified, unambiguous conceptual view.

    Probabilistic techniques for bridging the semantic gap in schema alignment

    Connecting pieces of information from heterogeneous sources sharing the same domain is an open challenge in the Semantic Web, Big Data, and business communities. The main problem in this research area is bridging the expressiveness gap between relational databases and ontologies. In general, an ontology is more expressive and captures more of the semantics behind data than a relational database does. On the other hand, databases are the most commonly used persistent storage systems and grant benefits such as security and data integrity, but they need to be managed by expert users. The problem is especially significant when enterprise or corporate ontologies are used to share information coming from different databases, and where more efficient data management is desirable for interoperability purposes. The main motivations of this thesis relate to database access via ontology, as in the OBDA (Ontology-Based Data Access) scenario, which provides a formal specification of the domain close to the human view while hiding the technical details of the database from the end user, and also to the persistent storage of ontologies in databases to facilitate search and retrieval while keeping the benefits of database management systems. In these cases the assertional component (A-Box) is usually stored in a database, while the terminological one (T-Box) is maintained in an ontology, so it is more necessary to align schemas than to match instances. The term alignment can be used to denote the whole process comprising the mapping between two existing heterogeneous sources, such as an ontology and a relational database, and the transformation from one representation to the other, such as ontology-to-database and database-to-ontology. Defining mappings manually is a hard task, especially for large and complex data representations, and existing methodologies fall short, losing some content and leaving several elements unaligned.
    This thesis discusses various aspects of alignment in all these senses. The presented techniques are based on a probabilistic approach that fits the inherently uncertain alignment process well, since two representations with different levels of expressiveness are involved. In the methodology, ontologies and databases are described in terms of Web Ontology Language (OWL) and Entity-Relationship Diagram (ERD) lexical descriptions: ontologies are represented by a set of OWL axioms, while a properly defined Context-Free Grammar (CFG) is used to represent ERDs as a set of sentences. Both the OWL → ERD transformation and the mapping rely on HMMs (Hidden Markov Models) to estimate the most likely sequence of ERD symbols given the observed OWL symbols. In the model definition, OWL constructs are the observable states, while the ERD symbols are the hidden states. The tools developed, one for the OWL → ERD transformation, called OMEGA (Ontology → Markov → ERD Generator Application), and one for mapping OWL to ERD, called HOwErd (HMM OWL-ERD), each provide their own GUI for showing the alignment results. Finally, HOwErd is compared with the most widespread tools in the reference literature.
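The HMM decoding step described above can be sketched with a toy Viterbi decoder: ERD symbols are the hidden states, OWL constructs are the observations, and decoding recovers the most likely ERD sequence. The tiny symbol sets and all probabilities below are illustrative assumptions, not values from the thesis.

```python
# Hidden ERD symbols and observable OWL constructs (toy sets, for illustration).
STATES = ["ENTITY", "ATTRIBUTE", "RELATIONSHIP"]

# Hypothetical model parameters: start, transition, and emission probabilities.
start = {"ENTITY": 0.6, "ATTRIBUTE": 0.2, "RELATIONSHIP": 0.2}
trans = {
    "ENTITY":       {"ENTITY": 0.2, "ATTRIBUTE": 0.5, "RELATIONSHIP": 0.3},
    "ATTRIBUTE":    {"ENTITY": 0.4, "ATTRIBUTE": 0.4, "RELATIONSHIP": 0.2},
    "RELATIONSHIP": {"ENTITY": 0.7, "ATTRIBUTE": 0.2, "RELATIONSHIP": 0.1},
}
emit = {
    "ENTITY":       {"owl:Class": 0.8, "owl:DatatypeProperty": 0.1, "owl:ObjectProperty": 0.1},
    "ATTRIBUTE":    {"owl:Class": 0.1, "owl:DatatypeProperty": 0.8, "owl:ObjectProperty": 0.1},
    "RELATIONSHIP": {"owl:Class": 0.1, "owl:DatatypeProperty": 0.1, "owl:ObjectProperty": 0.8},
}

def viterbi(observations):
    """Most likely hidden ERD-symbol sequence for a sequence of OWL constructs."""
    # V[t][s]: probability of the best path ending in state s after t+1 observations
    V = [{s: start[s] * emit[s][observations[0]] for s in STATES}]
    back = []
    for o in observations[1:]:
        prev, col, ptr = V[-1], {}, {}
        for s in STATES:
            best = max(STATES, key=lambda p: prev[p] * trans[p][s])
            col[s] = prev[best] * trans[best][s] * emit[s][o]
            ptr[s] = best
        V.append(col)
        back.append(ptr)
    # Trace back from the best final state.
    path = [max(STATES, key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi(["owl:Class", "owl:DatatypeProperty", "owl:ObjectProperty"]))
# → ['ENTITY', 'ATTRIBUTE', 'RELATIONSHIP']
```

With these toy parameters a class followed by a datatype property and an object property decodes, as one would hope, to an entity with an attribute and a relationship.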

    Automatic Geospatial Data Conflation Using Semantic Web Technologies

    Duplicate geospatial data collection and maintenance is an extensive problem across Australian government organisations. This research examines how Semantic Web technologies can be used to automate the geospatial data conflation process. The research presents a new approach in which OWL ontologies generated from output data models and geospatial data represented as RDF triples serve as the basis of the solution, with SWRL rules at its core to automate the geospatial data conflation processes.

    The Roman economy. New perspectives

    This volume brings together a series of papers, some devoted to presenting aspects of current research on the study of the instrumentum domesticum and the Roman economy. Others are the first results of the new approaches developed within the ERC Advanced Grant project Production and Distribution of Food during the Roman Empire: Economic and Political Dynamics (EPNet) (ERC-2013-ADG 340828). Until now, the application of formal methods, born outside the field of historical research, has been little developed within our speciality. The "ominous question" of Ancient History studies is the lack of data. Interpretative models of the ancient economy have always started from deductive analyses, which depend on the researcher's degree of knowledge and on their preconceptions. Over these years we have managed to gather a large quantity of data, much of which can be presented as serial data thanks to the information obtained at Monte Testaccio. It is this circumstance, the abundance of data and the ability to order much of it chronologically, that makes the proposed new approaches possible. Ultimately, the aim is to confront the models and explanations offered so far within the historical field with formal models born in the mathematical sciences and in network science. In addition, we are migrating our CEIPAC database, available on the Internet since 1995, to an ontology-based database system in which, thanks to a metadata system, we can interrelate various databases that broaden our knowledge and our capacity to relate multiple aspects of the research.
    Since the papers presented come from scientific fields with differing citation conventions, the systems proposed by each author have been respected. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013). ERC grant agreement nº ERC-2013-ADG340828.

    Dealing with Inconsistencies and Updates in Description Logic Knowledge Bases

    The main purpose of an "Ontology-based Information System" (OIS) is to provide an explicit description of the domain of interest, called an ontology, and to base all the functions of the system on this representation, thus freeing users from any knowledge of the physical repositories where the real data reside. The functionalities that an OIS should provide to the user include both query answering, whose goal is to extract information from the system, and update, whose goal is to modify the information content of the system in order to reflect changes in the domain of interest. The ontology is a formal, high-quality intensional representation of the domain, designed so as to avoid inconsistencies in the modeling of concepts and relationships. By contrast, the extensional level of the system, constituted by a set of autonomous, heterogeneous data sources, is built independently from the conceptualization represented by the ontology, and may therefore contain information that is incoherent with the ontology itself. This dissertation presents a detailed study of the problem of dealing with inconsistencies in OISs, both in query answering and in performing updates. We concentrate on the case where the knowledge base in the OIS is expressed in Description Logics, especially the logics of the DL-Lite family. As for query answering, we propose both semantic frameworks that are inconsistency-tolerant and techniques for answering unions of conjunctive queries posed to OISs under such inconsistency-tolerant semantics. As for updates, we present an approach to computing the result of updating a possibly inconsistent OIS with both insertion and deletion of extensional knowledge.
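The flavor of inconsistency-tolerant query answering can be sketched with a toy example in the spirit of the IAR semantics, where answers must be supported by facts lying in the intersection of all repairs. With only disjointness axioms, a fact survives every repair exactly when it takes part in no clash. The ABox, the axiom, and the concept names below are invented for illustration and are not taken from the dissertation.

```python
from itertools import combinations

# Hypothetical ABox (concept assertions) and one disjointness axiom: Dog ⊑ ¬Cat.
abox = {("rex", "Dog"), ("rex", "Cat"), ("felix", "Cat")}
disjoint = {frozenset({"Dog", "Cat"})}

def conflicts(facts):
    """Pairs of assertions about the same individual that violate a disjointness axiom."""
    return [
        (a, b) for a, b in combinations(sorted(facts), 2)
        if a[0] == b[0] and frozenset({a[1], b[1]}) in disjoint
    ]

def iar_answers(concept):
    """Individuals asserted to belong to `concept` by clash-free facts.
    For disjointness-only conflicts these are exactly the facts that
    survive in every repair (the IAR semantics)."""
    clashing = {f for pair in conflicts(abox) for f in pair}
    return sorted(ind for ind, c in abox - clashing if c == concept)

print(iar_answers("Cat"))   # only 'felix'; 'rex' is involved in a Dog/Cat clash
```

Under classical semantics the inconsistent assertions about 'rex' would make every answer trivially entailed; the inconsistency-tolerant semantics instead discards only the clashing facts and keeps answering meaningfully from the rest.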

    The Mastro Protégé plug-in for OBDA

    Ontology-based data access (OBDA) is a recent approach for accessing data in which an ontology is connected to autonomous, and generally pre-existing, data repositories through mappings, so as to provide a high-level, conceptual view over such data. Mastro is a Java tool for OBDA, developed at Sapienza University of Rome and at the startup OBDA Systems, that is able to manage an OBDA specification where the ontology is specified in DL-Lite. In this work, we present the Mastro plug-in for the popular ontology editor Protégé. By means of this plug-in, users can specify and manage full OBDA specifications and execute SPARQL queries posed over the ontology level to access data stored in the underlying data sources.