
    Structuring the process of integrity maintenance (extended version)

    Two different approaches have traditionally been considered for dealing with the process of integrity constraint enforcement: integrity checking and integrity maintenance. However, while previous research in the first approach has mainly addressed efficiency issues, research in the second approach has mainly concentrated on generating all possible repairs that falsify an integrity constraint violation. In this paper we address efficiency issues during the process of integrity maintenance. In this sense, we propose a technique that improves the efficiency of existing methods by defining the order in which the maintenance of integrity constraints should be performed. Moreover, we also use this technique to handle both integrity constraint enforcement approaches in an integrated way.
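
    As one possible illustration of the two enforcement approaches contrasted above, the following Python sketch (not from the paper; the relations and the constraint are invented) shows integrity checking rejecting a violating insertion, while integrity maintenance repairs the violation with an additional update:

    # Hypothetical database: employees reference departments (a foreign-key constraint).
    employees = {"ann": "sales"}
    departments = {"sales"}

    def violates_foreign_key(dept):
        # Constraint: every employee's department must exist in `departments`.
        return dept not in departments

    def insert_with_checking(name, dept):
        # Integrity checking: detect the violation and reject the update.
        if violates_foreign_key(dept):
            raise ValueError("insertion rejected: unknown department " + dept)
        employees[name] = dept

    def insert_with_maintenance(name, dept):
        # Integrity maintenance: repair the violation with an extra update.
        if violates_foreign_key(dept):
            departments.add(dept)
        employees[name] = dept

    insert_with_maintenance("bob", "rnd")    # repairs by also inserting department "rnd"
    # insert_with_checking("eve", "hr")      # would raise ValueError instead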

    A Review of integrity constraint maintenance and view updating techniques

    Two interrelated problems may arise when updating a database. On the one hand, when an update is applied to the database, integrity constraints may become violated. In that case, the integrity constraint maintenance approach tries to obtain additional updates to keep the integrity constraints satisfied. On the other hand, when updates of derived or view facts are requested, a view updating mechanism must be applied to translate the update request into correct updates of the underlying base facts. This survey reviews the research performed on integrity constraint maintenance and view updating. We propose a general framework to classify and compare methods that tackle integrity constraint maintenance and/or view updating. We then analyze some of these methods in more detail to identify their actual contributions and the main limitations they may present.
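
    To make the view-updating side of the problem concrete, here is a minimal Python sketch (an illustration only, not one of the surveyed methods; the relation names and the balance threshold are invented) in which a requested insertion into a derived view is translated into an insertion on the underlying base facts:

    # Hypothetical base relation and a view derived from it:
    # rich_customer(c) holds when customer c has a balance of at least 1000.
    customer = {("ann", 2500), ("bob", 300)}

    def rich_customer():
        return {c for (c, balance) in customer if balance >= 1000}

    def insert_rich_customer(name, assumed_balance=1000):
        # View update translation: to make rich_customer(name) true,
        # insert a base fact whose balance satisfies the view definition.
        customer.add((name, assumed_balance))

    insert_rich_customer("carol")
    assert "carol" in rich_customer()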

    Data quality evaluation through data quality rules and data provenance

    The application and exploitation of large amounts of data play an ever-increasing role in today’s research, government, and economy. Data understanding and decision making heavily rely on high-quality data; therefore, in many different contexts, it is important to assess the quality of a dataset in order to determine whether it is suitable to be used for a specific purpose. Moreover, as access to and exchange of datasets have become easier and more frequent, and as scientists increasingly use the World Wide Web to share scientific data, there is a growing need to know the provenance of a dataset (i.e., information about the processes and data sources that led to its creation) in order to evaluate its trustworthiness. In this work, data quality rules and data provenance are used to evaluate the quality of datasets. Concerning the first topic, the applied solution consists in identifying types of data constraints that can be useful as data quality rules and in developing a software tool to evaluate a dataset on the basis of a set of rules expressed in the XML markup language. We selected some of the data constraints and dependencies already considered in the data quality field, but we also used order dependencies and existence constraints as quality rules. In addition, we developed algorithms to discover the types of dependencies used in the tool. To deal with the provenance of data, the Open Provenance Model (OPM) was adopted, an experimental query language for querying OPM graphs stored in a relational database was implemented, and an approach to designing OPM graphs was proposed.
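
    The order dependencies and existence constraints used as quality rules can be illustrated with a small Python check (a sketch only; the column names are invented and the tool's XML rule syntax is not reproduced):

    # Hypothetical rows of a dataset to be evaluated.
    rows = [
        {"order_id": 1, "ship_date": "2021-03-01", "delivery_date": "2021-03-04", "email": "a@x.org"},
        {"order_id": 2, "ship_date": "2021-03-02", "delivery_date": "2021-03-01", "email": None},
    ]

    def order_dependency_holds(rows, lhs, rhs):
        # Simplified order dependency: ordering the rows by lhs must also order rhs.
        ordered = sorted(rows, key=lambda r: r[lhs])
        values = [r[rhs] for r in ordered]
        return all(a <= b for a, b in zip(values, values[1:]))

    def existence_constraint_holds(rows, column):
        # Existence constraint: the column must have a value in every row.
        return all(r[column] is not None for r in rows)

    print(order_dependency_holds(rows, "ship_date", "delivery_date"))  # False: row 2 violates it
    print(existence_constraint_holds(rows, "email"))                   # False: a missing email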

    Prioritized Repairing and Consistent Query Answering in Relational Databases

    A consistent query answer in an inconsistent database is an answer obtained in every (minimal) repair. The repairs are obtained by resolving all conflicts in all possible ways. Often, however, the user is able to provide a preference on how conflicts should be resolved. We investigate here the framework of preferred consistent query answers, in which user preferences are used to narrow down the set of repairs to a set of preferred repairs. We axiomatize desirable properties of preferred repairs. We present three different families of preferred repairs and study their mutual relationships. Finally, we investigate the complexity of preferred repairing and of computing preferred consistent query answers.
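
    A minimal Python sketch of the semantics described above (an illustration only, not the authors' algorithms; the data and the preference relation are invented): repairs resolve every conflict in all possible ways, a user preference narrows them down, and a consistent answer is whatever the query returns in every preferred repair.

    from itertools import product

    # Each conflict is a pair of key-violating facts; a repair keeps exactly one of them.
    conflicts = [
        [("ann", "sales"), ("ann", "rnd")],      # two tuples with the same key "ann"
        [("bob", "hr"), ("bob", "finance")],
    ]

    def repairs():
        # Each repair picks one tuple from every conflict set.
        return [set(choice) for choice in product(*conflicts)]

    def preferred(all_repairs, prefer):
        # Keep only repairs that agree with the user's preferred value where one exists.
        return [r for r in all_repairs
                if all(prefer.get(key, value) == value for (key, value) in r)]

    prefer = {"ann": "sales"}                    # user preference: trust "sales" for ann
    kept = preferred(repairs(), prefer)

    def consistent_answer(query):
        # An answer is consistent iff the query returns it in every preferred repair.
        results = [query(r) for r in kept]
        return set.intersection(*results) if results else set()

    print(consistent_answer(lambda r: {name for (name, dept) in r}))   # {'ann', 'bob'}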

    Effect preservation in transaction processing in rule triggering systems

    Rules provide an expressive means for implementing database behavior: they cope with changes and their ramifications. Rules are commonly used for integrity enforcement, i.e., for repairing database actions in a way that keeps the integrity constraints satisfied. Yet, rule triggering systems fall short in enforcing effect preservation, i.e., in guaranteeing that repairing events do not undo each other and, in particular, do not undo the original triggering event. A method for enforcing effect preservation on updates in general rule triggering systems is suggested. The method derives transactions from rules and then splits the work between compile time and run time. At compile time, a data structure is constructed that analyzes the execution sequences of a transaction and computes minimal conditions for effect preservation. The transaction code is augmented with instructions that navigate along this data structure and test the computed minimal conditions. The method produces minimal effect-preserving transactions and, under certain conditions, provides a meaningful improvement over the quadratic overhead of pure run-time procedures. For transactions without loops, the run-time overhead is linear in the size of the transaction; for general transactions, the run-time overhead depends linearly on the length of the execution sequence and the number of loop repetitions. The method is currently being implemented within a traditional database system.
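
    A small Python sketch of the effect-preservation requirement (an illustration only; the paper's compile-time data structure and minimal conditions are not reproduced): a repairing action is allowed only if it does not undo the net effect of an earlier update in the same transaction.

    # Hypothetical log of effects in the current transaction: (operation, fact).
    effects = []

    def apply(op, fact, db):
        if op == "insert":
            db.add(fact)
        else:
            db.discard(fact)
        effects.append((op, fact))

    def undoes_previous_effect(op, fact):
        # A delete undoes an earlier insert of the same fact, and vice versa.
        inverse = "delete" if op == "insert" else "insert"
        return (inverse, fact) in effects

    def fire_repair_rule(op, fact, db):
        # Effect preservation: reject repairs that would cancel earlier effects.
        if undoes_previous_effect(op, fact):
            raise RuntimeError("repair rejected: it would undo a previous effect")
        apply(op, fact, db)

    db = set()
    apply("insert", ("emp", "ann"), db)               # original triggering update
    # fire_repair_rule("delete", ("emp", "ann"), db)  # would be rejected: undoes the insert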

    Dynamic Action Scheduling in a Parallel Database System

    This paper describes a scheduling technique for parallel database systems to obtain high performance, both in terms of response time and throughput. The technique enables both intra- and inter-transaction parallelism while controlling concurrency between transactions correctly. Scheduling is performed dynamically at transaction execution time, taking into account dynamic aspects of the execution and allowing parallelism between the scheduling and transaction execution processes. The technique has a solid conceptual background, based on a simple graph-based approach. The usability and effectiveness of the technique are demonstrated by implementation in and measurements on the parallel PRISMA database system
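
    The graph-based scheduling idea can be illustrated with a small Python sketch (an illustration only, not the PRISMA implementation; the action names and conflict edges are invented): actions are nodes, conflicts impose precedence edges, and every action whose predecessors have completed can be dispatched, so non-conflicting actions may run in parallel.

    from collections import defaultdict

    # Hypothetical actions and conflict edges (a must complete before b).
    actions = ["t1.read_x", "t2.write_x", "t1.write_y", "t2.read_y"]
    edges = [("t1.read_x", "t2.write_x"), ("t1.write_y", "t2.read_y")]

    def schedule(actions, edges):
        preds = defaultdict(set)
        for a, b in edges:
            preds[b].add(a)
        done, waves = set(), []
        ready = [a for a in actions if not preds[a]]
        while ready:
            # All actions in a wave are mutually non-conflicting and could run in parallel.
            waves.append(ready)
            done.update(ready)
            ready = [a for a in actions if a not in done and preds[a] <= done]
        return waves

    print(schedule(actions, edges))
    # [['t1.read_x', 't1.write_y'], ['t2.write_x', 't2.read_y']]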

    Analysing the process of enforcing integrity constraints

    Two different approaches have traditionally been considered for dealing with the process of integrity constraint enforcement: integrity constraint checking and integrity constraint maintenance. However, while previous research in the first approach has mainly addressed efficiency issues, research in the second approach has mainly concentrated on generating all possible repairs that falsify an integrity constraint violation. Moreover, the methods proposed to date handle only one of the approaches in isolation, without taking into account the strong relationship between the problems to be solved in both cases. In this paper we address efficiency issues during the process of integrity constraint maintenance. In this sense, we propose a technique that improves the efficiency of existing methods by defining the order in which the maintenance of integrity constraints should be performed. Moreover, we also use this technique to handle the integrity constraint enforcement approaches mentioned above in an integrated way.
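
    The ordering idea can be sketched in Python (an illustration only, not the paper's method; the constraint names and the "repairing A may violate B" relation are invented): when repairing one constraint can violate another, maintenance is performed in an order compatible with that relation, so each constraint needs to be repaired at most once.

    from graphlib import TopologicalSorter

    # Hypothetical relation: repairing the key constraint may violate these others.
    may_violate = {
        "emp_references_dept": {"dept_has_manager"},
        "dept_has_manager": {"manager_salary_in_range"},
        "manager_salary_in_range": set(),
    }

    def maintenance_order(may_violate):
        # A constraint is maintained only after every constraint whose repair
        # may re-violate it, i.e. topologically sort the "may violate" graph.
        predecessors = {c: set() for c in may_violate}
        for c, affected in may_violate.items():
            for other in affected:
                predecessors[other].add(c)
        return list(TopologicalSorter(predecessors).static_order())

    print(maintenance_order(may_violate))
    # ['emp_references_dept', 'dept_has_manager', 'manager_salary_in_range']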

    A constraint specification approach to building flexible workflows

    Process support systems, such as workflows, are being used in a variety of domains. However, most areas of application have focused on traditional production-style processes, which are characterised by predictability and repetitiveness. Application in non-traditional domains with highly flexible processes is still largely unexplored. Such flexible processes are characterised by the inability to completely predefine them and/or an explosive number of alternatives. Accordingly, we define flexibility as the ability of a process to execute on the basis of a partially defined model, where the full specification is made at runtime and may be unique to each instance. In this paper, we will present an approach to building workflow models for such processes. We will present our approach in the context of a non-traditional domain for workflow deployment: degree programs in tertiary institutes. The primary motivation behind our approach is to provide the ability to model flexible processes without introducing non-standard modelling constructs. This ensures that the correctness and verification properties of the language are preserved. We propose to build workflow schemas from a standard set of modelling constructs and given process constraints. We identify the fundamental requirements for constraint specification and classify them into selection, termination and build constraints. We will detail the specification of these constraints in a relational model. Finally, we will demonstrate the dynamic building of instance-specific workflow models on the basis of these constraints.
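
    A small Python sketch of the general idea (an illustration only; the constraint names, courses and relational schema are invented, not the paper's): process constraints are stored as rows of a relation, and a candidate instance-specific selection is validated against the selection, termination and build constraints before the workflow is built.

    # Hypothetical constraint relation: (kind, subject, detail).
    constraints = [
        ("selection",   "ADV_DB", "requires:INTRO_DB"),   # ADV_DB may be chosen only with INTRO_DB
        ("termination", "DEGREE", "min_units:3"),         # at least 3 units before terminating
        ("build",       "THESIS", "excludes:PROJECT"),    # THESIS and PROJECT cannot co-occur
    ]

    def valid_instance(selected):
        for kind, subject, detail in constraints:
            key, _, value = detail.partition(":")
            if kind == "selection" and subject in selected and key == "requires":
                if value not in selected:
                    return False
            if kind == "termination" and key == "min_units":
                if len(selected) < int(value):
                    return False
            if kind == "build" and subject in selected and key == "excludes":
                if value in selected:
                    return False
        return True

    print(valid_instance({"INTRO_DB", "ADV_DB", "THESIS"}))    # True
    print(valid_instance({"ADV_DB", "THESIS", "PROJECT"}))     # False: missing INTRO_DB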

    FICCS: A Fact Integrity Constraint Checking System
