
    Dynamic Physiological Partitioning on a Shared-nothing Database Cluster

    Traditional DBMS servers are usually over-provisioned for most of their daily workloads and, because they do not show good-enough energy proportionality, waste a lot of energy while underutilized. A cluster of small (wimpy) servers, whose size can be dynamically adjusted to the current workload, offers better energy characteristics for these workloads. Yet, data migration, which is necessary to balance utilization among the nodes, is a non-trivial and time-consuming task that may consume the energy saved. For this reason, a sophisticated and easily adjustable partitioning scheme that fosters dynamic reorganization is needed. In this paper, we adapt a technique originally created for SMP systems, called physiological partitioning, to distribute data among nodes; it allows data to be repartitioned easily without interrupting transactions. We dynamically partition DB tables based on the nodes' utilization and given energy constraints and compare our approach with physical and logical partitioning methods. To quantify the possible energy savings and their conceivable drawback on query runtimes, we evaluate our implementation on an experimental cluster and compare the results w.r.t. performance and energy consumption. Depending on the workload, we can save substantial energy without sacrificing too much performance.
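    The abstract does not spell out the repartitioning policy, but a minimal sketch helps to picture a utilization-driven scheme of this kind. The node model, the utilization band, and the function plan_repartition below are illustrative assumptions, not the paper's algorithm.

    ```python
    # Hypothetical sketch of a utilization-driven repartitioning decision:
    # nodes report their utilization, and partitions are shifted from overloaded
    # nodes to underutilized ones until each node falls back into a target band.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        utilization: float                      # fraction of capacity in use
        partitions: list = field(default_factory=list)

    def plan_repartition(nodes, low=0.3, high=0.8):
        """Return (partition, source, target) moves that push nodes toward
        the [low, high] utilization band; a crude illustration only."""
        moves = []
        donors = sorted((n for n in nodes if n.utilization > high),
                        key=lambda n: n.utilization, reverse=True)
        receivers = sorted((n for n in nodes if n.utilization < low),
                           key=lambda n: n.utilization)
        for donor in donors:
            for receiver in receivers:
                while (donor.utilization > high and receiver.utilization < low
                       and donor.partitions):
                    part = donor.partitions.pop()     # pick any partition to move
                    receiver.partitions.append(part)
                    # crude model: every partition carries an equal share of load
                    shift = donor.utilization / (len(donor.partitions) + 1)
                    receiver.utilization += shift
                    donor.utilization -= shift
                    moves.append((part, donor.name, receiver.name))
        return moves

    if __name__ == "__main__":
        cluster = [Node("n1", 0.95, ["t1_p1", "t1_p2", "t2_p1"]),
                   Node("n2", 0.15, ["t2_p2"])]
        print(plan_repartition(cluster))
    ```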

    Access path support for referential integrity

    The relational model of data incorporates fundamental assertions for entity integrity and referential integrity. Recently, these so-called relational invariants were specified more precisely by the new SQL2 standard. Accordingly, they have to be guaranteed by a relational DBMS to its users and, therefore, all issues of semantics and implementation have become very important. The specification of referential integrity embodies quite a number of complications, including the MATCH clause and a collection of referential actions. In particular, MATCH PARTIAL turns out to be hard to understand and, if applied, difficult and expensive to maintain. In this paper, we identify the functional requirements for preserving referential integrity. At a level free of implementational considerations, the number and kinds of searches necessary for referential integrity maintenance are derived. Based on these findings, our investigation focuses on the question of how the functional requirements can be supported efficiently by implementation concepts. We determine the search cost of referential integrity maintenance (in terms of page references) for various possible access path structures. Our main result is that a combined access path structure is the most appropriate for checking the regular MATCH option, whereas MATCH PARTIAL requires very expensive and complicated check procedures. If its use cannot be avoided, the best support is achieved by a combination of multiple B*-trees.
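    To see why the MATCH options differ so much in checking cost, the following sketch restates their SQL2 semantics as plain predicates over composite foreign keys; it is only an illustration of the standard's rules, not the access-path structures studied in the paper.

    ```python
    # Illustrative check semantics for the SQL2 MATCH options on a composite
    # foreign key. The point: MATCH (simple) and MATCH FULL need at most one
    # full-key lookup, while MATCH PARTIAL must search on whichever subset of
    # columns happens to be non-NULL, i.e. on arbitrary partial keys.

    def match_simple(fk, parent_keys):
        # Satisfied if any component is NULL; otherwise one full-key lookup.
        if any(v is None for v in fk):
            return True
        return fk in parent_keys

    def match_full(fk, parent_keys):
        # All components NULL, or all non-NULL plus one full-key lookup.
        if all(v is None for v in fk):
            return True
        if any(v is None for v in fk):
            return False
        return fk in parent_keys

    def match_partial(fk, parent_keys):
        # Some parent key must agree on every non-NULL component.
        return any(all(f is None or f == p for f, p in zip(fk, pk))
                   for pk in parent_keys)

    parents = {(1, "a"), (2, "b")}
    print(match_simple((1, None), parents),   # True: NULL short-circuits
          match_full((1, None), parents),     # False: mixed NULL/non-NULL
          match_partial((3, None), parents))  # False: no parent with first col 3
    ```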

    Use of inherent parallelism in database operations

    Non-standard applications of database systems (e.g. CAD) are characterized by complex objects and powerful user operations. Units of work decomposed from a single user operation are said to allow for inherent semantic parallelism when they do not conflict with each other at the level of decomposition. Hence, they can be scheduled concurrently. In order to support this processing scheme, it is necessary to organize parallel execution by adequate control units. Therefore, client-server processes and nested transactions are applied to structure the DBS operations hierarchically. On the other hand, the DBS code itself has to be mapped onto a multiprocessor system to take advantage of multiple processing units.
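    As a rough illustration of this processing scheme, the sketch below decomposes a hypothetical user operation into units of work, groups non-conflicting units into batches, and runs each batch concurrently; the decomposition, conflict test, and scheduler are invented for this example and are not the paper's control-unit design.

    ```python
    # Exploiting inherent semantic parallelism: units of work that do not
    # conflict at the decomposition level run concurrently as subtasks of one
    # parent operation (in the paper, as nested transactions).

    from concurrent.futures import ThreadPoolExecutor

    def decompose(operation):
        # e.g. expanding a complex CAD object into independent part lookups
        return [("fetch_part", part_id) for part_id in operation["parts"]]

    def conflicts(unit_a, unit_b):
        # Units touching the same part are assumed to conflict.
        return unit_a[1] == unit_b[1]

    def schedule(units):
        """Greedily group units into batches whose members do not conflict."""
        batches = []
        for u in units:
            for batch in batches:
                if not any(conflicts(u, v) for v in batch):
                    batch.append(u)
                    break
            else:
                batches.append([u])
        return batches

    def run_operation(operation):
        results = []
        with ThreadPoolExecutor() as pool:
            for batch in schedule(decompose(operation)):
                # members of a batch run concurrently; batches run in sequence
                results += list(pool.map(lambda u: ("done", u), batch))
        return results

    print(run_operation({"parts": [101, 102, 103]}))
    ```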

    Mapping Frames onto Newer Data Models

    We investigate the mapping of frames, with their modeling concepts and characteristic operations, onto object-oriented data models in order to support knowledge representation in so-called non-standard database systems, for example for expert-system applications. After comparing the properties of the relational model, the NF2 model, and the MAD model for this task, the different approaches are evaluated in order to bring out their suitability for frame modeling more clearly.
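    A small, assumption-based sketch of the frame concepts such a mapping has to preserve, namely slots, defaults, and inheritance along an IS-A hierarchy; it is not tied to any of the three data models compared in the paper.

    ```python
    # Frames reduced to their essentials: named slots, default values, and
    # slot lookup along the IS-A chain so that defaults are inherited.

    class Frame:
        def __init__(self, name, parent=None, **slots):
            self.name, self.parent, self.slots = name, parent, slots

        def get(self, slot):
            # Walk the IS-A chain until a frame defines the slot.
            if slot in self.slots:
                return self.slots[slot]
            if self.parent is not None:
                return self.parent.get(slot)
            raise KeyError(slot)

    vehicle = Frame("vehicle", wheels=4, powered=True)
    car = Frame("car", parent=vehicle, doors=5)
    print(car.get("doors"), car.get("wheels"))   # 5 4 (wheels inherited)
    ```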

    Rules for query rewrite in native XML databases

    In recent years, the database community has seen many sophisticated Structural Join and Holistic Twig Join algorithms as well as several index structures supporting the evaluation of twig query patterns. Even though almost all XML query evaluation proposals in the literature use one of those evaluation methods, we believe that (1) there is no internal representation that enables a smooth transition between the XQuery language level and physical algebra operators, and (2) there is still no approach that considers the combination of content-and-structure indexes, Structural Join, and Holistic Twig Join algorithms to speed up the evaluation of twig queries. To overcome this deficit, we propose an enhancement to Starburst's Query Graph Model as an internal representation for XML query languages such as XQuery. This representation permits the usage of simple (binary) join operators, such as Structural Joins, and complex (n-way) join operators, such as Holistic Twig Joins, as part of the logical algebra. For twig queries, we define a set of rewrite rules which initiate query graph transformations towards improved processability, e.g., to fuse adjacent binary join operators to a complex join operator. To enhance the evaluation flexibility of twig queries, we come up with further rewrite rules to prepare query graphs, even before query transformation, for making the most of existing joins and indexes.
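    The following sketch illustrates the kind of fusion rule mentioned above: a left-deep chain of binary structural joins over a path pattern is collapsed into one n-way twig join operator. The operator classes and the graph encoding are invented for illustration and are not the authors' Query Graph Model extension.

    ```python
    # Rewrite-rule sketch: fuse adjacent binary structural joins into a single
    # n-way holistic twig join over the same pattern.

    from dataclasses import dataclass

    @dataclass
    class StructuralJoin:            # binary: parent/child or ancestor/descendant
        left: object                 # previous operator or the pattern root
        right: str                   # node test attached by this join
        axis: str                    # "child" or "descendant"

    @dataclass
    class TwigJoin:                  # n-way holistic operator
        root: str
        edges: list                  # list of (parent, axis, child)

    def fuse(op):
        """Collapse a left-deep chain of StructuralJoins into one TwigJoin.
        The chain is assumed to encode a simple path pattern, each join
        attaching its right operand below the previously attached node."""
        steps = []
        while isinstance(op, StructuralJoin):
            steps.append((op.axis, op.right))
            op = op.left
        steps.reverse()
        root, parent, edges = op, op, []
        for axis, node in steps:
            edges.append((parent, axis, node))
            parent = node
        return TwigJoin(root=root, edges=edges)

    plan = StructuralJoin(StructuralJoin("doc", "chapter", "child"),
                          "section", "descendant")
    print(fuse(plan))
    # TwigJoin(root='doc', edges=[('doc', 'child', 'chapter'),
    #                             ('chapter', 'descendant', 'section')])
    ```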

    Supporting Flow Control in Design Environments through Versioning and Configuration

    An essential task of design environments is the integration of individual, self-contained design tools with respect to both a common data management and a common flow control. On the one hand, this requires the integrated management of all design data as well as the provision of the data relevant to tool-specific design work. Versioning and configuration are the central concepts for a comprehensive description of the design data. On the other hand, the individual tool applications have to be embedded into the overall design flow, which calls for a suitable design flow control. Its tasks consist above all in supporting controlled cooperation between collaborating designers, in coordinating possibly preplanned sequences of tool applications, and in ensuring correct interaction of the tools with the design data management. In this article, we describe the central characteristics of the three fundamental concepts: versioning, configuration, and flow control. Furthermore, we discuss the interplay of these concepts within a design environment. It becomes clearly apparent that the data description aspects on the one hand, the flow aspects on the other hand, and their interplay decisively determine the properties of a concrete design environment, such as aspects of concurrent design or error handling.
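    A compact sketch of the two data-description concepts, under assumptions of this example rather than the article's model: each design object keeps a version graph, and a configuration binds every object to exactly one version so that a design state can be reproduced.

    ```python
    # Versioning and configuration in miniature: versions record their
    # predecessors, configurations pick one version per design object.

    class VersionedObject:
        def __init__(self, name):
            self.name = name
            self.versions = {}                     # version id -> (data, parents)

        def create_version(self, vid, data, parents=()):
            self.versions[vid] = (data, tuple(parents))
            return vid

    class Configuration:
        """A consistent snapshot: one chosen version per design object."""
        def __init__(self, bindings):
            self.bindings = dict(bindings)         # object name -> version id

        def resolve(self, objects):
            return {name: objects[name].versions[vid][0]
                    for name, vid in self.bindings.items()}

    layout = VersionedObject("chip_layout")
    layout.create_version("v1", "initial floorplan")
    layout.create_version("v2", "optimized floorplan", parents=["v1"])

    release = Configuration({"chip_layout": "v2"})
    print(release.resolve({"chip_layout": layout}))
    ```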

    Processing and transaction concepts for cooperation of engineering workstations and a database server

    A DBMS kernel architecture is proposed for improved DB support of engineering applications running on a cluster of workstations. Using such an approach, part of the DBMS code - an application-specific layer - is allocated close to the corresponding application on a workstation, while the kernel code is executed on a central server. Empirical performance results from DB-based engineering applications are reported to justify the chosen DBMS architecture. The paper focuses on design issues of the application layer, including server coupling, processing model, and application interface. Moreover, a transaction model for long-term database work in a coupled workstation-server environment is investigated in detail.
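    The abstract does not detail the transaction model, but a checkout/checkin pattern is one plausible reading of long-term database work in such a workstation-server split; the sketch below is an assumption-labeled illustration, not the paper's design.

    ```python
    # Long-term design work split between workstation and server: the
    # application layer checks a complex object out of the server kernel,
    # works on a private copy for an extended session, then checks it back in.

    class ServerKernel:
        def __init__(self):
            self.objects = {"gear_assembly": {"rev": 1, "data": "..."}}
            self.locks = {}

        def checkout(self, name, workstation):
            if self.locks.get(name) not in (None, workstation):
                raise RuntimeError(f"{name} is checked out elsewhere")
            self.locks[name] = workstation            # long-duration lock
            return dict(self.objects[name])           # private workstation copy

        def checkin(self, name, workstation, obj):
            assert self.locks.get(name) == workstation
            obj["rev"] += 1
            self.objects[name] = obj
            del self.locks[name]

    server = ServerKernel()
    copy = server.checkout("gear_assembly", "ws1")    # start of design session
    copy["data"] = "modified geometry"                # local, long-running work
    server.checkin("gear_assembly", "ws1", copy)      # make result visible again
    print(server.objects["gear_assembly"])
    ```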