154,277 research outputs found

    Schemata as Building Blocks: Does Size Matter?

    We analyze the schema theorem and the building block hypothesis using a recently derived, exact schemata evolution equation. We derive a new schema theorem based on the concept of effective fitness, showing that schemata of higher than average effective fitness receive an exponentially increasing number of trials over time. The building block hypothesis is a natural consequence in that the equation shows how fit schemata are constructed from fit sub-schemata. However, we show that generically there is no preference for short, low-order schemata. In the case where schema reconstruction is favoured over schema destruction, large schemata tend to be favoured. As a corollary of the evolution equation we prove Geiringer's theorem. We give supporting numerical evidence for our claims in both non-epistatic and epistatic landscapes. Comment: 17 pages, 10 postscript figures.
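
    For orientation, the classical schema theorem gives only a lower bound of the kind reproduced below (Holland's standard form and notation; this is background, not the exact effective-fitness equation derived in the paper):

        % Holland's schema theorem (standard form, shown for context only; the
        % paper derives an exact, effective-fitness equation instead of this bound).
        % m(H,t): number of instances of schema H at generation t
        % f(H): observed mean fitness of H;  \bar{f}: population mean fitness
        % \delta(H): defining length;  o(H): order;  l: string length
        % p_c, p_m: crossover and mutation probabilities
        \[
          \mathbb{E}\bigl[m(H,\,t+1)\bigr] \;\ge\;
          m(H,t)\,\frac{f(H)}{\bar{f}}
          \left[\,1 - p_c\,\frac{\delta(H)}{l-1} - o(H)\,p_m\right]
        \]
        % The effective-fitness view folds the destruction/reconstruction terms
        % into a single quantity f_{\mathrm{eff}}(H), so that schemata with
        % f_{\mathrm{eff}}(H) > \bar{f} receive exponentially increasing trials.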

    Parallel Processing For Schema Evolution in Database Systems

    A thesis submitted to the University of London in partial fulfillment of the requirements of the degree of Doctor of Philosophy.

    Online Schema Evolution is (Almost) Free for Snapshot Databases

    Modern database applications often change their schemas to keep up with changing requirements. However, support for online and transactional schema evolution remains challenging in existing database systems. Specifically, prior work often takes ad hoc approaches to schema evolution, with 'patches' applied to existing systems, leading to many corner cases and often incomplete functionality. Applications therefore often have to carefully schedule downtime for schema changes, sacrificing availability. This paper presents Tesseract, a new approach to online and transactional schema evolution without the aforementioned drawbacks. We design Tesseract based on a key observation: in widely used multi-versioned database systems, schema evolution can be modeled as data modification operations that change the entire table, i.e., data-definition-as-modification (DDaM). This allows us to support schema evolution almost 'for free' by leveraging the concurrency control protocol. With simple tweaks to existing snapshot isolation protocols, on a 40-core server we show that under a variety of workloads, Tesseract is able to provide online, transactional schema evolution without service downtime, and retains high application performance while schema evolution is in progress. Comment: To appear in Proceedings of the 2023 International Conference on Very Large Data Bases (VLDB 2023).
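
    The DDaM observation can be illustrated with a toy multi-versioned store: a schema change commits as one transaction that installs a new version of the schema record and of every affected row, so readers on older snapshots are unaffected. The sketch below is a minimal, hypothetical Python illustration of that idea; none of the names correspond to Tesseract's actual implementation.

        # Minimal sketch of data-definition-as-modification (DDaM) under snapshot
        # isolation. All names are hypothetical illustrations, not Tesseract's API.
        # Idea: a schema change is just a transaction that writes a new version of
        # the table's schema and rows, so readers on older snapshots are unaffected.

        import itertools

        class MVStore:
            """A toy multi-versioned store: each key maps to [(commit_ts, value), ...]."""
            def __init__(self):
                self.versions = {}           # key -> list of (commit_ts, value)
                self.clock = itertools.count(1)

            def begin(self):
                return next(self.clock)      # snapshot timestamp

            def read(self, key, snapshot_ts):
                # Return the newest version visible at snapshot_ts.
                visible = [v for ts, v in self.versions.get(key, []) if ts <= snapshot_ts]
                return visible[-1] if visible else None

            def commit(self, writes):
                commit_ts = next(self.clock)
                for key, value in writes.items():
                    self.versions.setdefault(key, []).append((commit_ts, value))
                return commit_ts

        def evolve_schema(store, table, add_column, default):
            """Schema evolution as one big data-modification transaction (DDaM sketch).
            Installs a new schema version plus rewritten rows at a single commit_ts."""
            snapshot = store.begin()
            writes = {}
            schema = store.read(("schema", table), snapshot) or []
            writes[("schema", table)] = schema + [add_column]
            # Rewrite every visible row with the new column filled by the default.
            for key in list(store.versions):
                if key[0] == table:
                    row = store.read(key, snapshot)
                    if row is not None:
                        writes[key] = {**row, add_column: default}
            return store.commit(writes)

        if __name__ == "__main__":
            db = MVStore()
            db.commit({("schema", "users"): ["id", "name"]})
            db.commit({("users", 1): {"id": 1, "name": "alice"}})
            old_snapshot = db.begin()                  # reader starts before the change
            evolve_schema(db, "users", "email", None)  # online schema change commits
            new_snapshot = db.begin()
            print(db.read(("users", 1), old_snapshot))  # old snapshot: no 'email'
            print(db.read(("users", 1), new_snapshot))  # new snapshot: 'email' present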

    Towards a flexible and transparent database evolution

    Application refactorings that imply schema evolution are common activities in programming practice. Although modern object-oriented databases provide transparent schema evolution mechanisms, such refactorings continue to be time-consuming tasks for programmers. In this paper we address this problem with a novel approach based on the aspect-oriented programming and orthogonal persistence paradigms, as well as our meta-model. An overview of our framework is presented. This framework, a prototype based on that approach, provides applications with persistence and database evolution aspects. It also provides a new pointcut/advice language that enables the modularization of the instance adaptation crosscutting concern of classes that were subject to a schema evolution. We also present an application that relies on our framework. This application was developed without any concern regarding persistence and database evolution. Nevertheless, its data is recovered in each execution, and objects in previous schema versions remain transparently available by means of our framework.
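
    To make the instance-adaptation concern concrete, the sketch below registers version-upgrade "advice" outside the domain class and applies it when an object stored under an old schema version is loaded. It is a rough Python analogue of the idea, with hypothetical names; the paper's framework uses its own pointcut/advice language, not decorators.

        # Illustrative sketch of instance adaptation kept outside the domain class,
        # loosely analogous to the pointcut/advice modularization described above.
        # All names are hypothetical; this is not the paper's framework.

        ADAPTERS = {}  # (class_name, from_version) -> adaptation function

        def adapt(cls_name, from_version):
            """Register 'advice' that upgrades an object stored under an old schema version."""
            def register(fn):
                ADAPTERS[(cls_name, from_version)] = fn
                return fn
            return register

        class Person:
            SCHEMA_VERSION = 2
            def __init__(self, first_name, last_name):
                self.first_name = first_name
                self.last_name = last_name

        # Advice module: the class itself stays free of evolution logic.
        @adapt("Person", from_version=1)
        def split_full_name(stored):
            # Schema version 1 stored a single 'name' field; version 2 splits it.
            first, _, last = stored["name"].partition(" ")
            return Person(first, last)

        def load(cls_name, stored):
            """Simulated persistence layer: adapt old-version instances on load."""
            version = stored.get("schema_version", 1)
            if version != Person.SCHEMA_VERSION:
                return ADAPTERS[(cls_name, version)](stored)
            return Person(stored["first_name"], stored["last_name"])

        if __name__ == "__main__":
            old_record = {"schema_version": 1, "name": "Ada Lovelace"}
            person = load("Person", old_record)
            print(person.first_name, person.last_name)  # transparently adapted on load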

    Combining Provenance Management and Schema Evolution

    The combination of provenance management and schema evolution using the CHASE algorithm is the focus of our research in the area of research data management. The aim is to combine the construction of a CHASE inverse mapping to calculate the minimal part of the original database (the minimal sub-database) with a CHASE-based schema mapping for schema evolution.
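
    A toy illustration of the direction of this combination: a forward chase materializes the evolved schema while recording which source tuples produced each target tuple, and the inverse direction projects a chosen set of target tuples back to the minimal sub-database that explains them. The mapping, names, and provenance bookkeeping below are illustrative assumptions, not the construction used in the cited work.

        # Toy sketch: one forward CHASE step for a source-to-target dependency plus a
        # CHASE-inverse style projection back to the minimal sub-database that
        # produced a chosen target tuple. Mapping and names are illustrative only.
        #
        # Schema evolution mapping (s-t tgd, informally):
        #   Employee(name, dept, city) -> Works(name, dept), Located(dept, city)

        def chase(source_rows):
            """Forward chase: materialize the evolved (target) schema and record,
            per target tuple, which source tuples it came from (why-provenance)."""
            works, located, provenance = set(), set(), {}
            for row in source_rows:
                name, dept, city = row
                works.add((name, dept))
                located.add((dept, city))
                provenance.setdefault(("Works", (name, dept)), set()).add(row)
                provenance.setdefault(("Located", (dept, city)), set()).add(row)
            return works, located, provenance

        def minimal_subdatabase(target_tuples, provenance):
            """Inverse direction: the source tuples needed to reproduce the given
            target tuples under the mapping above (the minimal sub-database)."""
            needed = set()
            for t in target_tuples:
                needed |= provenance.get(t, set())
            return needed

        if __name__ == "__main__":
            employee = [("alice", "R&D", "Rostock"), ("bob", "Sales", "Berlin")]
            works, located, prov = chase(employee)
            # Which part of the original database explains ('Works', ('alice', 'R&D'))?
            print(minimal_subdatabase({("Works", ("alice", "R&D"))}, prov))
            # -> {('alice', 'R&D', 'Rostock')}: the minimal sub-database for that tuple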