
    Data Management with Flexible and Extensible Data Schema in CLANS

    Data management plays an essential role in both research and industry, especially in fields that require text processing, such as the business domain. The Corporate Leaders Analytics and Network System (CLANS) is a system designed to identify and analyze social networks among corporations and business elites. It tackles difficult problems such as natural language processing, network construction, and relationship mining, all of which require high-quality data management. For data management, we propose a novel approach that integrates the essential XML files with auxiliary databases under a flexible and extensible data schema. This data schema is the kernel of our data management and offers several advantages: separability, scalability, traceability, distinguishability, version control, and maintainability. In this paper, we specifically illustrate the data schema as well as the management approach in CLANS.
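
    The abstract does not spell out the schema's structure, but the idea of a fixed XML core plus an extensible part can be illustrated with a minimal Python sketch. All element and field names below (record, ext, sector) are hypothetical and not taken from CLANS.

        # Hypothetical sketch of an extensible XML record: core fields are
        # fixed elements, while arbitrary extension fields live under <ext>,
        # so new attributes can be added without changing the core schema.
        import xml.etree.ElementTree as ET

        def make_record(record_id, name, version, extensions):
            rec = ET.Element("record", id=record_id, version=str(version))
            ET.SubElement(rec, "name").text = name
            ext = ET.SubElement(rec, "ext")
            for key, value in extensions.items():
                ET.SubElement(ext, "field", name=key).text = str(value)
            return rec

        rec = make_record("c001", "Acme Corp", 2, {"sector": "retail"})
        print(ET.tostring(rec, encoding="unicode"))

    The version attribute hints at how version control, one of the advantages the abstract lists, can ride along with each record.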

    Relational Cloud: The Case for a Database Service

    In this paper, we make the case for "databases as a service" (DaaS), with two target scenarios in mind: (i) consolidation of data management functionality for large organizations and (ii) outsourcing data management to a cloud-based service provider for small/medium organizations. We analyze the many challenges to be faced, and discuss the design of a database service we are building, called Relational Cloud. The system has been designed from scratch and combines many recent advances and novel solutions. The prototype we present exploits multiple dedicated storage engines, provides high availability via transparent replication, supports automatic workload partitioning and live data migration, and provides serializable distributed transactions. While the system is still under active development, we are able to present promising initial results that showcase the key features of our system. The tests are based on TPC benchmarks and real-world data from epinions.com, and show our partitioning, scalability and balancing capabilities.
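
    To make the routing side of workload partitioning concrete, here is a minimal sketch of a front-end router that maps each tuple key to a backend node. It uses plain hash partitioning for illustration only; Relational Cloud's actual partitioner is workload-driven, and the node names are invented.

        # Minimal hash-partitioning router: every request for the same
        # (table, key) pair is sent to the same backend storage node.
        import hashlib

        NODES = ["node-a", "node-b", "node-c"]  # hypothetical backends

        def route(table, key):
            digest = hashlib.sha1(f"{table}:{key}".encode()).hexdigest()
            return NODES[int(digest, 16) % len(NODES)]

        print(route("users", 42))  # deterministic node choice per key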

    Towards Online and Transactional Relational Schema Transformations

    In this paper, we want to draw the attention of the database community to the problem of online schema changes: changing the schema of a database without blocking concurrent transactions. We have identified important classes of relational schema transformations that we want to perform online, and we have identified general requirements for the mechanisms that execute these transformations. Using these requirements, we have developed an experiment based on the standard TPC-C benchmark to assess the behaviour of existing systems. We look at PostgreSQL, which does not support online schema changes; MySQL, which supports basic online schema changes; and pt-online-schema-change, which is a tool for MySQL that uses triggers to implement online schema changes. We found that none of the existing systems fulfill our requirements. In particular, existing non-blocking solutions cannot maintain the ACID guarantees when composing schema transformations. This leads to intermediate states being exposed to database programs, which are non-trivial to handle correctly. As a solution direction, we propose lazy schema transformations, which can naturally be composed into complex schema transformations that properly guarantee the ACID properties, and which have minimal impact on concurrent transactions.
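
    For context, the trigger-based approach used by pt-online-schema-change follows a well-known pattern: build a shadow table with the new schema, mirror concurrent writes into it with triggers, backfill the old rows, then swap the tables atomically. The sketch below lists the statements in simplified form; the table and column names are invented, and the real tool chunks the backfill and also installs UPDATE and DELETE triggers.

        # Simplified trigger-based online schema change (MySQL-style SQL).
        STEPS = [
            "CREATE TABLE orders_new LIKE orders",
            "ALTER TABLE orders_new ADD COLUMN status VARCHAR(16)",
            # Mirror concurrent inserts into the shadow table:
            "CREATE TRIGGER orders_ins AFTER INSERT ON orders FOR EACH ROW "
            "REPLACE INTO orders_new (id, total) VALUES (NEW.id, NEW.total)",
            # Backfill existing rows (chunked in practice):
            "INSERT IGNORE INTO orders_new (id, total) SELECT id, total FROM orders",
            # Atomic swap:
            "RENAME TABLE orders TO orders_old, orders_new TO orders",
        ]

    The shadow table and triggers are exactly the kind of intermediate state the paper warns about when such transformations are composed.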

    A benchmark for online non-blocking schema transformations

    This paper presents a benchmark for measuring the blocking behavior of schema transformations in relational database systems. As a basis for our benchmark, we have developed criteria for the functionality and performance of schema transformation mechanisms based on the characteristics of state-of-the-art approaches. To address limitations of existing approaches, we assert that schema transformations must be composable while satisfying the same ACID guarantees as regular database transactions. Additionally, we have identified important classes of basic and complex relational schema transformations that a schema transformation mechanism should be able to perform. Based on these transformations and our criteria, we have developed a benchmark that extends the standard TPC-C benchmark with schema transformations, which can be used to analyze the blocking behavior of schema transformations in database systems. The goal of the benchmark is not only to evaluate existing solutions for non-blocking schema transformations, but also to challenge the database community to find solutions that allow more complex transactional schema transformations.
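
    The blocking behavior such a benchmark measures can be observed with a simple harness: run a stream of short transactions, record per-transaction latency, and issue the schema change midway; a stall in the stream bounds the blocking time. In this sketch, run_transaction and run_ddl are placeholders for real driver calls and are not part of the benchmark itself.

        import threading, time

        latencies = []

        def stream(stop, run_transaction):
            while not stop.is_set():
                t0 = time.perf_counter()
                run_transaction()          # e.g. one TPC-C New-Order
                latencies.append(time.perf_counter() - t0)

        def measure(run_transaction, run_ddl):
            stop = threading.Event()
            w = threading.Thread(target=stream, args=(stop, run_transaction))
            w.start()
            time.sleep(2)                  # reach a steady state first
            run_ddl()                      # e.g. ALTER TABLE ... ADD COLUMN
            time.sleep(2)                  # observe the aftermath
            stop.set()
            w.join()
            return max(latencies)          # worst stall seen by the stream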

    A Framework for the Automatic Physical Configuration and Tuning of a MySQL Community Server

    Manual physical configuration and tuning of database servers is a complicated task requiring a high level of expertise. Database administrators must consider numerous possibilities to determine a candidate configuration for implementation. In recent times, database vendors have responded to this problem by providing solutions that can automatically configure and tune their products. Poor configuration choices, which cause the performance degradation commonplace in manual configurations, are significantly reduced by these solutions. However, no such solution exists for MySQL Community Server. This thesis proposes a novel framework for automatically tuning a MySQL Community Server. A first iteration of the framework has been built and is presented in this paper together with its performance measurements.
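
    The abstract does not describe the framework's internals, but one common shape for such a tuner is a measure-and-adjust loop over candidate settings. The sketch below is purely illustrative and assumed rather than taken from the thesis; apply_setting and benchmark stand in for a SET GLOBAL call and a throughput measurement.

        # Generic search over candidate values for one server parameter.
        def tune(parameter, candidates, apply_setting, benchmark):
            best_value, best_score = None, float("-inf")
            for value in candidates:
                apply_setting(parameter, value)   # e.g. innodb_buffer_pool_size
                score = benchmark()               # e.g. transactions per minute
                if score > best_score:
                    best_value, best_score = value, score
            apply_setting(parameter, best_value)
            return best_value, best_score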

    Open meta-modelling frameworks via meta-object protocols

    Meta-modelling is central to Model-Driven Engineering. Many meta-modelling notations, approaches and tools have been proposed over the years, which vary widely in the modelling features they support. However, current approaches tend to be closed and rigid with respect to the supported concepts and semantics. Moreover, extending the environment with features beyond those natively supported requires highly technical knowledge. This situation hampers the flexibility and interoperability of meta-modelling environments. In order to alleviate this situation, we propose open meta-modelling frameworks, which can be extended and configured via meta-object protocols (MOPs). Such environments offer extension points on events like element instantiation, model loading or property access, and enable selecting the particular model elements over which the extensions are to be executed. We show how MOP-based mechanisms permit extending meta-modelling frameworks in a flexible way, and allow describing a wide range of meta-modelling concepts. As a proof of concept, we show and compare an implementation in the MetaDepth tool and an aspect-based implementation atop the Eclipse Modelling Framework (EMF). We have evaluated our approach by extending EMF and MetaDepth with modelling services not foreseen when they were created. The evaluation shows that MOP-based mechanisms permit extending meta-modelling frameworks in a flexible way, and are powerful enough to support the specification of a broad variety of meta-modelling features. Work partially funded by projects RECOM and FLEXOR (Spanish MINECO, TIN2015-73968-JIN (AEI/FEDER/UE) and TIN2014-52129-R) and the R&D programme of the Madrid Region (S2013/ICE-3006).
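
    The core MOP idea, extension points on events combined with element selection, can be shown in a small sketch. This is not MetaDepth's or EMF's API; the class, event names and element representation are all invented for illustration.

        # A toy meta-object protocol: extensions register (selector, hook)
        # pairs on named events; hooks fire only for elements the selector
        # accepts, so behaviour is added without changing the framework.
        class OpenFramework:
            def __init__(self):
                self.hooks = {"instantiate": [], "get_property": []}

            def register(self, event, selector, hook):
                self.hooks[event].append((selector, hook))

            def fire(self, event, element, **kw):
                for selector, hook in self.hooks[event]:
                    if selector(element):
                        hook(element, **kw)

        fw = OpenFramework()
        # Extension: react whenever a "Person" element is instantiated.
        fw.register("instantiate", lambda e: e.get("type") == "Person",
                    lambda e, **kw: print("created", e["name"]))
        fw.fire("instantiate", {"type": "Person", "name": "alice"})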

    Density-Aware Linear Algebra in a Column-Oriented In-Memory Database System

    Get PDF
    Linear algebra operations appear in nearly every application in advanced analytics, machine learning, and various scientific domains. To this day, many data analysts and scientists tend to use statistics software packages or hand-crafted solutions for their analysis. In the era of data deluge, however, external statistics packages and custom analysis programs that often run on single workstations are incapable of keeping up with the vast increase in data volume and size. In particular, there is an increasing demand from scientists for large-scale data manipulation, orchestration, and advanced data management capabilities. These are among the key features of a mature relational database management system (DBMS). With the rise of main-memory database systems, it has now become feasible to also consider applications that build on linear algebra. This thesis presents a deep integration of linear algebra functionality into an in-memory column-oriented database system. In particular, this work shows that it has become feasible to execute linear algebra queries on large data sets directly in a DBMS-integrated engine (LAPEG), without the need to transfer data or be restricted by hard disk latencies. From various application examples cited in this work, we deduce a number of requirements that are relevant for a database system that includes linear algebra functionality. Besides the deep integration of matrices and numerical algorithms, these include optimization of expressions, transparent matrix handling, scalability and data-parallelism, and data manipulation capabilities. These requirements are addressed by our linear algebra engine. In particular, the core contributions of this thesis are: firstly, we show that the columnar storage layer of an in-memory DBMS lends itself to an easy adoption of efficient sparse matrix data types and algorithms. Furthermore, we show that the execution of linear algebra expressions significantly benefits from different techniques inspired by database technology. In a novel way, we implemented several of these optimization strategies in LAPEG's optimizer (SpMachO), which uses an advanced density estimation method (SpProdest) to predict the matrix density of intermediate results. Moreover, we present an adaptive matrix data type, AT Matrix, to obviate the need for scientists to select appropriate matrix representations. The tiled substructure of AT Matrix is exploited by our matrix multiplication to saturate the different sockets of a multicore main-memory platform, reaching a speed-up of up to 6x compared to alternative approaches. Finally, a major part of this thesis is devoted to the topic of data manipulation, where we propose a matrix manipulation API and present different mutable matrix types to enable fast insertions and deletions. We conclude that our linear algebra engine is well suited to processing dynamic, large matrix workloads in an optimized way. In particular, the DBMS-integrated LAPEG fills the linear algebra gap and makes columnar in-memory DBMSs attractive as an efficient, scalable ad-hoc analysis platform for scientists.
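
    Why density estimation of intermediates matters can be seen with a standard back-of-the-envelope formula: under an independence assumption on nonzero positions, the expected output density of a sparse matrix product is predictable from the input densities alone. The sketch below uses this textbook estimate for illustration; SpProdest itself is a more advanced estimator.

        # Expected density of C = A (m x k) @ B (k x n), assuming nonzeros
        # are placed independently: P(C[i,j] != 0) = 1 - (1 - dA*dB)**k.
        def product_density(d_a, d_b, k):
            return 1.0 - (1.0 - d_a * d_b) ** k

        d = product_density(0.01, 0.01, 10_000)     # two 1%-dense inputs
        print(f"expected output density: {d:.3f}")  # ~0.632: it densifies

    An optimizer can use such a prediction to choose sparse or dense kernels and storage for intermediate results before executing an expression.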