    Composition and Inversion of Schema Mappings

    In recent years, a lot of attention has been paid to the development of solid foundations for the composition and inversion of schema mappings. In this paper, we review the proposals for the semantics of these crucial operators. For each of these proposals, we concentrate on the following three problems: the definition of the semantics of the operator, the language needed to express the operator, and the algorithmic issues associated with the problem of computing the operator. It should be pointed out that we primarily consider the formalization of schema mappings introduced in the work on data exchange. In particular, when studying the problem of computing the composition and inverse of a schema mapping, we will be mostly interested in computing these operators for mappings specified by source-to-target tuple-generating dependencies.
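
    As a small illustration of that setting (the relation names below are invented for this sketch and are not taken from the paper), consider two mappings specified by source-to-target tuple-generating dependencies (st-tgds). In this simple case their composition is again definable by an st-tgd; in general, however, composing st-tgd mappings is known to require the more expressive language of second-order tgds.

        % Mappings M12 and M23, each given by one st-tgd (relation names invented):
        \Sigma_{12} = \{\, \forall s\, \forall c\; (\mathrm{Takes}(s,c) \rightarrow \mathrm{Takes}'(s,c)) \,\}
        \Sigma_{23} = \{\, \forall s\, \forall c\; (\mathrm{Takes}'(s,c) \rightarrow \mathrm{Enrolled}(s)) \,\}
        % Their composition is definable here by a single st-tgd:
        \Sigma_{13} = \{\, \forall s\, \forall c\; (\mathrm{Takes}(s,c) \rightarrow \mathrm{Enrolled}(s)) \,\}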

    Enhanced Inversion of Schema Evolution with Provenance

    Long-term data-driven studies have become indispensable in many areas of science. Often, the formats, structures, and semantics of the data change over time: the data sets evolve. Studies spanning several decades in particular therefore have to cope with changing database schemas. The evolution of these databases eventually leads to a large number of schemas that have to be stored and managed, which is costly and time-consuming. For the reproducibility of research data, however, each database version must be reconstructable with little effort, so that a previously published result can be validated and reproduced at any time. Nevertheless, in many cases such an evolution cannot be fully reconstructed. This article classifies the 15 most frequently used schema modification operators and defines the associated inverse for each operation. To avoid information loss, it furthermore defines which additional provenance information has to be stored. We define four classes dealing with dangling tuples, duplicates, and provenance-invariant operators, and present each class through one representative. By using and extending the theory of schema mappings and their inverses for queries, data analysis, why-provenance, and schema evolution, we are able to combine data analysis applications with provenance under evolving database structures, in order to enable the reproducibility of scientific results over longer periods of time. While most of the inverses of schema mappings used for analysis or evolution are not exact but only quasi-inverses, adding provenance information enables us to reconstruct a sub-database of the research data that is sufficient to guarantee reproducibility.
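
    A minimal Python sketch of the underlying idea, using an invented table and a single operator (it is not the paper's operator catalogue): a drop-column step loses information, but keeping the dropped values as provenance makes the operation exactly invertible.

        # Hypothetical example: a table represented as {row_id: {column: value}}.
        def drop_column(table, column):
            """Schema modification operator: remove `column` and return the
            provenance needed to invert the operation later."""
            return {row_id: row.pop(column) for row_id, row in table.items()}

        def undo_drop_column(table, column, provenance):
            """Inverse operator: restore `column` from the stored provenance."""
            for row_id, value in provenance.items():
                table[row_id][column] = value

        employees = {1: {"name": "Ada", "dept": "R&D"},
                     2: {"name": "Tim", "dept": "HR"}}
        prov = drop_column(employees, "dept")       # schema evolves, provenance is kept
        undo_drop_column(employees, "dept", prov)   # exact reconstruction of the old version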

    PowerAqua: fishing the semantic web

    The Semantic Web (SW) offers an opportunity to develop novel, sophisticated forms of question answering (QA). Specifically, the availability of distributed semantic markup on a large scale opens the way to QA systems which can make use of such semantic information to provide precise, formally derived answers to questions. At the same time, the distributed, heterogeneous, large-scale nature of the semantic information introduces significant challenges. In this paper we describe PowerAqua, a QA system designed to exploit semantic markup on the web to provide answers to questions posed in natural language. PowerAqua does not assume that the user has any prior information about the semantic resources. The system takes as input a natural language query and translates it into a set of logical queries, which are then answered by consulting and aggregating information derived from multiple heterogeneous semantic sources.
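
    A minimal sketch of the general pipeline described above (hard-coded toy data, not PowerAqua's actual implementation): a natural-language question is translated into a triple-like logical query, which is then answered against several semantic sources and the results aggregated.

        # Hypothetical sources and question; the translation step is hard-coded here,
        # whereas a real system derives it through linguistic and semantic analysis.
        sources = {
            "source_a": {("Alice", "worksFor", "CNN"), ("Bob", "worksFor", "BBC")},
            "source_b": {("Carol", "worksFor", "CNN")},
        }

        def translate(question):
            # "Who works for CNN?" -> one logical query with a single variable.
            return [("?person", "worksFor", "CNN")]

        def evaluate(pattern):
            _, pred, obj = pattern
            # Collect bindings for ?person from every source and merge them.
            return {s for triples in sources.values()
                    for (s, p, o) in triples if p == pred and o == obj}

        answers = set()
        for pattern in translate("Who works for CNN?"):
            answers |= evaluate(pattern)
        print(answers)   # {'Alice', 'Carol'}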

    Overcoming database heterogeneity to facilitate social networks: the Colombian displaced population as a case study

    In this paper we describe a two-step approach for the publication of data about displaced people in Colombia, whose lack of homogeneity represents a major barrier for the application of adequate policies. This data is available in heterogeneous data sources, mainly relational, and is not connected to social networking sites. Our approach consists of a first step, in which ontologies are automatically derived from existing relational databases by exploiting the semantics underlying the SQL-DDL schema description, and a second step, in which these ontologies are aligned with existing ontologies (FOAF in our example), facilitating better integration of data coming from multiple sources.
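
    A minimal sketch of the two steps (the namespace, table, and alignment targets are invented for illustration and are not the tool used in the paper): a relational table description is turned into ontology triples, which are then aligned with FOAF.

        # Step 1: derive an ontology fragment from a (simplified) SQL-DDL description.
        ddl_table = {"name": "person", "columns": ["id", "full_name", "email"]}
        ns = "http://example.org/onto#"              # hypothetical local namespace

        triples = [(ns + "Person", "rdf:type", "owl:Class")]
        for col in ddl_table["columns"]:
            if col != "id":                          # keys stay implicit in this toy version
                triples.append((ns + col, "rdf:type", "owl:DatatypeProperty"))
                triples.append((ns + col, "rdfs:domain", ns + "Person"))

        # Step 2: align the derived ontology with an existing one (FOAF).
        triples.append((ns + "Person", "owl:equivalentClass", "foaf:Person"))
        triples.append((ns + "full_name", "rdfs:subPropertyOf", "foaf:name"))

        for t in triples:
            print(t)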

    Reconciling Equational Heterogeneity within a Data Federation

    Mappings in most federated databases are conceptualized and implemented as black-box transformations between source schemas and a federated schema. This approach does not allow specific mappings to be declared once and reused in other situations. We present an alternative approach, in which data-level mappings are represented independently of source and federated schemas as a network between “contexts”. This compendious representation expedites the data federation process via mapping reuse and automated mapping composition from simpler mappings. We illustrate the benefits of mapping reuse and composition by using an example that incorporates equational mappings and the application of symbolic equation solving techniques.
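
    A minimal sketch of composing equational mappings automatically (the contexts, tax rate, and exchange rate are invented), using symbolic equation solving to eliminate the intermediate context:

        from sympy import Eq, solve, symbols

        eur_net, eur_gross, usd_gross = symbols("eur_net eur_gross usd_gross")

        # Mapping 1: net price in EUR -> gross price in EUR (hypothetical 19% VAT).
        m1 = Eq(eur_gross, eur_net * 1.19)
        # Mapping 2: gross price in EUR -> gross price in USD (hypothetical rate 1.08).
        m2 = Eq(usd_gross, eur_gross * 1.08)

        # Composition: solve the combined system, eliminating the intermediate context.
        composed = solve([m1, m2], [eur_gross, usd_gross], dict=True)[0]
        print(composed[usd_gross])   # 1.2852*eur_net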

    A semantic data dictionary method for database schema integration in CIESIN

    CIESIN (Consortium for International Earth Science Information Network) is funded by NASA to investigate the technology necessary to integrate and facilitate the interdisciplinary use of Global Change information. A clear goal of this mission is to provide a link between the various global change data sets, in particular between the physical sciences and the human (social) sciences. The typical scientist using the CIESIN system will want to know how phenomena in an outside field affect his or her work. For example, a medical researcher might ask: how does air quality affect emphysema? This and many similar questions will require sophisticated semantic data integration. The researcher who raised the question may be familiar with medical data sets containing emphysema occurrences. But this same investigator may know little, if anything, about the existence or location of air-quality data. It is easy to envision a system that would allow that investigator to locate and perform a “join” on two data sets, one containing emphysema cases and the other containing air-quality levels. No such system exists today. One major obstacle to providing such a system will be overcoming heterogeneity, which falls into two broad categories. “Database system” heterogeneity involves differences in data models and packages. “Data semantic” heterogeneity involves differences in terminology between disciplines, which translate into data semantic issues, and varying levels of data refinement, from raw to summary. Our work investigates a global data dictionary mechanism to facilitate a merged data service. Specifically, we propose using a semantic tree during schema definition to aid in locating and integrating heterogeneous databases.
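
    A minimal sketch of the envisioned integration (the data values, terms, and join key are invented): a small semantic dictionary records, for each data set, the concept it belongs to and a shared key, which is enough to locate both sources and perform the “join” described above.

        # Hypothetical dictionary entries; the concept paths stand in for the semantic tree.
        data_dictionary = {
            "emphysema_cases": {"concept": "health/respiratory/emphysema", "key": "county"},
            "aqi_readings":    {"concept": "environment/air_quality/index", "key": "county"},
        }

        emphysema_cases = {"Wayne": 120, "Oakland": 80}   # cases per county (invented)
        aqi_readings    = {"Wayne": 54,  "Oakland": 41}   # air-quality index (invented)

        # Join the two data sets on the shared key identified in the dictionary.
        joined = {county: (emphysema_cases[county], aqi_readings[county])
                  for county in emphysema_cases if county in aqi_readings}
        print(joined)   # {'Wayne': (120, 54), 'Oakland': (80, 41)}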

    Integrated data model and DSL modifications

    Companies are increasingly dependent on distributed web-based software systems to support their businesses, which increases the need to maintain and extend those systems with new features. The development process to introduce new features therefore usually needs to be swift and agile, and the supporting software evolution process needs to be safe, fast, and efficient. However, this is usually a difficult and challenging task for a developer due to the lack of support offered by programming environments, frameworks, and database management systems. Changes needed at the code level, in the database model, and in the actual data contained in the database must be planned and developed together and executed in a synchronized way. Even under a careful development discipline, the impact of changing an application data model is hard to predict. The lifetime of an application comprises changes and updates designed and tested using data that is usually far from the real production data. Coding DDL and DML SQL scripts to update the database schema and data is therefore the usual (and hard) approach taken by developers. Such a manual approach is error-prone and disconnected from the real data in production, because developers may not know the exact impact of their changes. This work aims to improve the maintenance process in the context of the Agile Platform by OutSystems. Our goal is to design and implement new data-model evolution features that ensure safe support for change and a sound migration process. Our solution includes impact analysis mechanisms targeting the data model and the data itself, providing developers with a safe, simple, and guided evolution process.
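
    A minimal sketch of the kind of synchronized schema-and-data migration the abstract refers to (the schema is invented and SQLite stands in for the production database; this is not the OutSystems tooling): the DDL and DML steps are planned together and executed in a single transaction.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
        con.execute("INSERT INTO customer (id, name) VALUES (1, 'Ada Lovelace')")

        # Evolve the data model: split `name` into `first_name` / `last_name`.
        with con:   # one transaction: schema change (DDL) and data migration (DML) together
            con.execute("ALTER TABLE customer ADD COLUMN first_name TEXT")
            con.execute("ALTER TABLE customer ADD COLUMN last_name TEXT")
            con.execute("""
                UPDATE customer
                SET first_name = substr(name, 1, instr(name, ' ') - 1),
                    last_name  = substr(name, instr(name, ' ') + 1)
            """)

        print(con.execute("SELECT first_name, last_name FROM customer").fetchall())
        # [('Ada', 'Lovelace')]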