65 research outputs found

    Just-in-time Analytics Over Heterogeneous Data and Hardware

    Industry and academia are continuously becoming more data-driven and data-intensive, relying on the analysis of a wide variety of datasets to gain insights. At the same time, data variety increases continuously across multiple axes. First, data comes in multiple formats, such as the binary tabular data of a DBMS, raw textual files, and domain-specific formats. Second, different datasets follow different data models, such as the relational and the hierarchical one. Data location also varies: Some datasets reside in a central "data lake", whereas others lie in remote data sources. In addition, users execute widely different analysis tasks over all these data types. Finally, the process of gathering and integrating diverse datasets introduces several inconsistencies and redundancies in the data, such as duplicate entries for the same real-world concept. In summary, heterogeneity significantly affects the way data analysis is performed. In this thesis, we aim for data virtualization: Abstracting data out of its original form and manipulating it regardless of the way it is stored or structured, without a performance penalty. To achieve data virtualization, we design and implement systems that i) mask heterogeneity through the use of heterogeneity-aware, high-level building blocks and ii) offer fast responses through on-demand adaptation techniques. Regarding the high-level building blocks, we use a query language and algebra to handle multiple collection types, such as relations and hierarchies, express transformations between these collection types, as well as express complex data cleaning tasks over them. In addition, we design a location-aware compiler and optimizer that masks away the complexity of accessing multiple remote data sources. Regarding on-demand adaptation, we present a design to produce a new system per query. The design uses customization mechanisms that trigger runtime code generation to mimic the system most appropriate to answer a query fast: Query operators are thus created based on the query workload and the underlying data models; the data access layer is created based on the underlying data formats. In addition, we exploit emerging hardware by customizing the system implementation based on the available heterogeneous processors (CPUs and GPGPUs). We thus pair each workload with its ideal processor type. The end result is a just-in-time database system that is specific to the query, data, workload, and hardware instance. This thesis redesigns the data management stack to natively cater for data heterogeneity and exploit hardware heterogeneity. Instead of centralizing all relevant datasets, converting them to a single representation, and loading them in a monolithic, static, suboptimal system, our design embraces heterogeneity. Overall, our design decouples the type of performed analysis from the original data layout; users can perform their analysis across data stores, data models, and data formats, but at the same time experience the performance offered by a custom system that has been built on demand to serve their specific use case.
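
    A minimal sketch of the per-query specialization idea described above, assuming nothing about the thesis' actual implementation: a scan function is generated and compiled at runtime for whichever data format a query touches, so the same projection can run over heterogeneous raw files without a load step. The formats, field names, and data below are hypothetical examples.

```python
# Hypothetical illustration of runtime code generation for format-specific access.
import csv
import io
import json

def generate_scan(fmt, fields):
    """Emit and compile Python source for a scanner specialized to one format."""
    if fmt == "csv":
        body = (
            "def scan(stream):\n"
            "    for row in csv.DictReader(stream):\n"
            f"        yield tuple(row[f] for f in {fields!r})\n"
        )
    elif fmt == "json":
        body = (
            "def scan(stream):\n"
            "    for line in stream:\n"
            "        obj = json.loads(line)\n"
            f"        yield tuple(obj[f] for f in {fields!r})\n"
        )
    else:
        raise ValueError(f"unsupported format: {fmt}")
    namespace = {"csv": csv, "json": json}
    exec(compile(body, f"<scan_{fmt}>", "exec"), namespace)
    return namespace["scan"]

# The same projection runs over two heterogeneous sources.
csv_scan = generate_scan("csv", ["name", "price"])
json_scan = generate_scan("json", ["name", "price"])
csv_data = io.StringIO("name,price\nwidget,3\ngadget,5\n")
json_data = io.StringIO('{"name": "bolt", "price": 1}\n{"name": "nut", "price": 2}\n')
print(list(csv_scan(csv_data)) + list(json_scan(json_data)))
```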

    Database Principles and Technologies – Based on Huawei GaussDB

    This open access book contains eight chapters that deal with database technologies, including the development history of databases, database fundamentals, introduction to SQL syntax, classification of SQL syntax, database security fundamentals, database development environment, database design fundamentals, and the application of Huawei’s cloud database product GaussDB. This book can be used as a textbook for database courses in colleges and universities, and is also suitable as a reference book for the HCIA-GaussDB V1.5 certification examination. The Huawei GaussDB (for MySQL) used in the book is a Huawei cloud-based high-performance, highly applicable relational database that fully supports the syntax and functionality of the open source database MySQL. All the experiments in this book can be run on this database platform. Huawei, the world’s leading provider of ICT (information and communication technology) infrastructure and smart terminals, offers products ranging from digital data communication, cyber security, wireless technology, data storage, cloud computing, and smart computing to artificial intelligence.

    A Monte Carlo tree search algorithm for optimization of load scalability in database systems

    A thesis submitted in partial fulfilment of the requirements for the Degree of Doctor of Philosophy in Information Technology at Strathmore University. Variable environmental conditions and runtime phenomena require developers of complex business information systems to expose configuration parameters to system administrators. This allows system administrators to intervene by tuning the bottleneck configuration parameters in response to current changes, or in anticipation of future changes, in order to maintain the system’s performance at an optimum level. However, these manual performance tuning interventions are prone to error and lack standardization, owing to varying levels of expertise and over-reliance on inaccurate predictions of future states of a business information system. The purpose of this research was therefore to investigate how to design an algorithm that proactively reconfigures bottleneck parameters without over-relying on an accurate model of a stochastic environment. This was done using a comparative experimental research design that involved quantitative data collection through simulations of different algorithm variants. The research built on the theoretical concepts of control theory and decision theory, coupled with the estimation of unknown quantities using principles of simulation-based inferential statistics. Subsequently, Monte Carlo Tree Search, with a variant of the selection stage, was used as the foundation of the designed algorithm. The selection stage was varied by applying a “lean Last Good Reply with Forgetting” (lean-LGRF) strategy and first tested in the context of the strategy board game Reversi. Over 1,000 playouts against the baseline Upper Confidence Bound applied to Trees (UCT) selection strategy, the lean-LGRF selection strategy recorded the highest number of wins. By comparison, the Progressive Bias selection strategy had a win-rate of 45.8% against the UCT selection strategy, and, as expected, the UCT selection strategy had a win-rate of 49.7% (an almost 50-50 win-rate) against itself. The results were then subjected to a Chi-square (χ²) test, which provided evidence that the variation applied in the selection stage of the algorithm had a significantly positive impact on its performance. The superior selection variant was then applied in the context of a distributed database system. This also produced compelling results, indicating that applying the algorithm in a distributed database system yielded response-time latency 27% lower than the average response-time latency and transaction throughput 17% higher than the average transaction throughput.
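
    A minimal sketch of the selection machinery described above, not the thesis implementation: a plain UCT score alongside a last-good-reply table that remembers the move that answered an opponent move in a won playout and forgets it after a loss. The exact lean-LGRF rules from the thesis are not reproduced, and all class and function names are hypothetical. In the database-tuning setting, a move would correspond to a candidate reconfiguration of a bottleneck parameter rather than a board position.

```python
# Hypothetical illustration of UCT selection combined with a reply table.
import math

class Node:
    def __init__(self, move=None, parent=None):
        self.move, self.parent = move, parent
        self.children = []   # expanded child nodes
        self.visits = 0
        self.wins = 0.0

def uct_score(child, parent_visits, c=1.41):
    """Standard Upper Confidence Bound applied to Trees."""
    if child.visits == 0:
        return float("inf")
    exploit = child.wins / child.visits
    explore = c * math.sqrt(math.log(parent_visits) / child.visits)
    return exploit + explore

class ReplyTable:
    """Last Good Reply with Forgetting: remember the move that answered an
    opponent move in a won playout, drop it again after a lost playout."""
    def __init__(self):
        self.reply = {}

    def select(self, node):
        # Prefer the stored reply to the last move if it is among the children.
        hinted = self.reply.get(node.move)
        for child in node.children:
            if child.move == hinted:
                return child
        # Otherwise fall back to plain UCT selection.
        return max(node.children, key=lambda ch: uct_score(ch, node.visits))

    def update(self, playout_moves, won):
        for prev, answer in zip(playout_moves, playout_moves[1:]):
            if won:
                self.reply[prev] = answer        # remember the good reply
            elif self.reply.get(prev) == answer:
                del self.reply[prev]             # forget it after a loss
```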

    INVESTIGATING THE STAR SCHEMA BENCHMARK AS A REPLACEMENT FOR THE TPC-H DECISION SUPPORT SYSTEM

    Decision Support Systems (DSS) are at the core of business intelligence systems. Implementation costs for an enterprise-level Database Management System (DBMS) and DSS average $10,461 for installation alone. This does not include costs associated with database migrations or testing, which can double the cost, nor does this quoted price include the cost of yearly licensing or support agreements. Depending on the software vendor, there may be additional costs associated with using an application cluster, logical and virtual partitioning, data guards, and even costs per processor core. It is easy to see how the cost of operating a database server can grow rapidly. Information Technology (IT) decision makers and software architects need the ability to choose a DBMS to suit their application's needs. To choose the correct DBMS solution, a comprehensive and adaptive benchmark is needed. This benchmark must be capable of predicting how the performance of a given system will scale, as well as offer an estimation of cost. A problematic benchmark that is unable to accurately predict these values is worthless and leads to costly software decision mistakes. To continue to be successful and remain competitive in a given industry, it is important for organizations to know their customers, target and acquire new markets, and look to future trends. This is where database business intelligence and decision support systems become useful. DSS allow users to mine critical information about their workflows, sales history, and trends and have the data readily available so that they may make informed decisions and plan future growth. Business intelligence tools and decision support systems provide executive officers and members of management the tools needed to create complex ad-hoc queries and mine important data. Presently, IT decision makers and software engineers use the TPC-H decision support system benchmark as a guide to determining the optimal hardware and database vendor configurations for their decision support system. The TPC-H benchmark is a popular decision support system benchmark. In recent years, however, TPC-H has become heavily criticized for its many problems. The issues outlined within this thesis can lead IT decision makers to purchase and implement improper hardware and software solutions. This thesis examines the criticisms and issues of the TPC-H benchmark. Utilizing Amazon Web Services cloud computing power, we evaluate the Star Schema Benchmark (SSB) as an alternative to TPC-H. We successfully identify and demonstrate several previously undefined problems in the TPC-H benchmark. Our results conclude that the SSB not only resolves the issues inherent in TPC-H, but should also serve as its replacement.
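
    A minimal sketch, assuming a Python DB-API connection with SSB-style tables already loaded: repeated executions of one star-join query are timed and summarized with a geometric mean, one common way benchmark results are aggregated. The engine, table names, and query text below are placeholders, not the official SSB specification.

```python
# Hypothetical illustration of timing an SSB-style star-join query.
import sqlite3       # stand-in engine; any DB-API driver would work the same way
import statistics
import time

QUERY = """
SELECT d.d_year, SUM(lo.lo_revenue) AS revenue
FROM lineorder lo
JOIN date_dim d ON lo.lo_orderdate = d.d_datekey
WHERE d.d_year BETWEEN 1993 AND 1994
GROUP BY d.d_year
"""  # representative star-join shape, not the official SSB query text

def time_query(conn, query, runs=5):
    """Return the geometric mean of wall-clock times over several runs."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        conn.execute(query).fetchall()
        timings.append(time.perf_counter() - start)
    return statistics.geometric_mean(timings)

# Usage (assumes a database file with SSB-style tables already loaded):
# conn = sqlite3.connect("ssb.db")
# print(time_query(conn, QUERY))
```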

    Workload-Aware Database Monitoring and Consolidation

    In most enterprises, databases are deployed on dedicated database servers. Often, these servers are underutilized much of the time. For example, in traces from almost 200 production servers from different organizations, we see an average CPU utilization of less than 4%. This unused capacity can be potentially harnessed to consolidate multiple databases on fewer machines, reducing hardware and operational costs. Virtual machine (VM) technology is one popular way to approach this problem. However, as we demonstrate in this paper, VMs fail to adequately support database consolidation, because databases place a unique and challenging set of demands on hardware resources, which are not well-suited to the assumptions made by VM-based consolidation. Instead, our system for database consolidation, named Kairos, uses novel techniques to measure the hardware requirements of database workloads, as well as models to predict the combined resource utilization of those workloads. We formalize the consolidation problem as a non-linear optimization program, aiming to minimize the number of servers and balance load, while achieving near-zero performance degradation. We compare Kairos against virtual machines, showing up to a factor of 12× higher throughput on a TPC-C-like benchmark. We also tested the effectiveness of our approach on real-world data collected from production servers at Wikia.com, Wikipedia, Second Life, and MIT CSAIL, showing absolute consolidation ratios ranging between 5.5:1 and 17:1.
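
    A minimal sketch of the consolidation idea, not the Kairos models or its non-linear program: a toy first-fit-decreasing packer that combines CPU and disk demand additively and places workload profiles onto as few servers as possible. The profiles and capacities are hypothetical; Kairos uses measured workload statistics and more refined resource-combination models.

```python
# Hypothetical illustration of packing database workloads onto fewer servers.
from dataclasses import dataclass, field

@dataclass
class Server:
    cpu_capacity: float = 1.0             # fraction of one machine
    disk_capacity: float = 1.0
    cpu_used: float = 0.0
    disk_used: float = 0.0
    workloads: list = field(default_factory=list)

    def fits(self, w):
        return (self.cpu_used + w["cpu"] <= self.cpu_capacity and
                self.disk_used + w["disk"] <= self.disk_capacity)

    def place(self, w):
        self.workloads.append(w["name"])
        self.cpu_used += w["cpu"]
        self.disk_used += w["disk"]

def consolidate(workloads):
    """First-fit-decreasing on CPU demand; returns the servers actually used."""
    servers = []
    for w in sorted(workloads, key=lambda w: w["cpu"], reverse=True):
        target = next((s for s in servers if s.fits(w)), None)
        if target is None:
            target = Server()
            servers.append(target)
        target.place(w)
    return servers

# Example: 20 lightly loaded databases (hypothetical profiles) fit on 4 servers.
profiles = [{"name": f"db{i}", "cpu": 0.04, "disk": 0.2} for i in range(20)]
print(len(consolidate(profiles)), "servers for", len(profiles), "databases")
```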

    Flexibility in Data Management

    With the ongoing expansion of information technology, new fields of application requiring data management emerge virtually every day. In our knowledge culture, increasing amounts of data and a workforce organized in more creativity-oriented ways also radically change traditional fields of application and question established assumptions about data management. For instance, investigative analytics and agile software development move towards a very agile and flexible handling of data. As the primary facilitators of data management, database systems have to reflect and support these developments. However, traditional database management technology, in particular relational database systems, is built on assumptions of relatively stable application domains. The need to model all data up front in a prescriptive database schema earned relational database management systems the reputation among developers of being inflexible, dated, and cumbersome to work with. Nevertheless, relational systems still dominate the database market. They are a proven, standardized, and interoperable technology, well-known in IT departments with a workforce of experienced and trained developers and administrators. This thesis aims at resolving the growing contradiction between the popularity and omnipresence of relational systems in companies and their increasingly bad reputation among developers. It adapts relational database technology towards more agility and flexibility. We envision a descriptive schema-comes-second relational database system, which is entity-oriented instead of schema-oriented; descriptive rather than prescriptive. The thesis provides four main contributions: (1) a flexible relational data model, which frees relational data management from having a prescriptive schema; (2) autonomous physical entity domains, which partition self-descriptive data according to their schema properties for better query performance; (3) a freely adjustable storage engine, which allows adapting the physical data layout to properties of the data and of the workload; and (4) a self-managed indexing infrastructure, which autonomously collects and adapts index information under the presence of dynamic workloads and evolving schemas. The flexible relational data model is the thesis' central contribution. It describes the functional appearance of the descriptive schema-comes-second relational database system. The other three contributions improve components in the architecture of database management systems to increase the query performance and the manageability of descriptive schema-comes-second relational database systems. We are confident that these four contributions can help pave the way to a more flexible future for relational database management technology.
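
    A minimal sketch of the schema-comes-second idea, not the system built in the thesis: entities are inserted without declaring columns up front, and a descriptive schema is derived afterwards from the attributes that actually occur. All names below are hypothetical.

```python
# Hypothetical illustration of entity-oriented, descriptive-schema storage.
from collections import defaultdict

class EntityStore:
    def __init__(self):
        self.entities = []

    def insert(self, **attributes):
        """Accept arbitrary attributes; there is no prescriptive schema to violate."""
        self.entities.append(dict(attributes))

    def describe(self):
        """Derive a descriptive schema: attribute -> (observed types, coverage)."""
        counts, types = defaultdict(int), defaultdict(set)
        for entity in self.entities:
            for key, value in entity.items():
                counts[key] += 1
                types[key].add(type(value).__name__)
        total = len(self.entities)
        return {k: (sorted(types[k]), counts[k] / total) for k in counts}

store = EntityStore()
store.insert(name="Alice", dept="R&D")
store.insert(name="Bob", dept="R&D", phone="555-0100")
store.insert(name="Carol", salary=50000)
print(store.describe())
# name appears in every entity; phone and salary only in a fraction of them
```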

    Development of new data partitioning and allocation algorithms for query optimization of distributed data warehouse systems

    Distributed databases, and in particular distributed data warehousing, are becoming an increasingly important technology for information integration and data analysis. Data Warehouse (DW) systems are used by decision makers for performance measurement and decision support. However, although data warehousing and on-line analytical processing (OLAP) are essential elements of decision support, OLAP query response time is strongly affected by the volume of data that needs to be accessed from storage disks. Data partitioning is one of the physical design techniques that may be used to optimize query processing cost in DWs. It is a non-redundant optimization technique because it does not replicate data, contrary to redundant techniques like materialized views and indexes. The warehouse partitioning problem is concerned with determining the set of dimension tables to be partitioned and using them to generate the fact table fragments. In this work, an enhanced grouping algorithm that avoids the limitations of some existing vertical partitioning algorithms is proposed. Furthermore, a static partitioning algorithm that allows fragmentation at early stages of schema design is presented. The thesis also investigates the performance of the data warehouse after implementing a combination of Genetic Algorithm (GA) and Simulated Annealing (SA) techniques to horizontally partition the data warehouse star schema, and then presents the experimentation and implementation results of the proposed algorithm. This research also presents different approaches to optimizing the cost of data fragment allocation, using a greedy mathematical model and a combination of simulated annealing and genetic algorithm to determine the site-by-site allocation leading to optimal solutions for fragment distribution. Throughout this thesis, the terms fragmentation and partitioning are used interchangeably.
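
    A minimal sketch of one ingredient, not the thesis' combined GA/SA algorithm: simulated annealing over a fragment-to-site allocation under a toy cost model that charges each site for fragments it accesses but does not store. The access-frequency matrix, fragment sizes, and annealing parameters below are hypothetical; in the thesis, a genetic algorithm and a richer cost model are combined with this kind of annealing.

```python
# Hypothetical illustration of simulated annealing for fragment allocation.
import math
import random

def allocation_cost(assign, access_freq, frag_size):
    """Remote-access cost: each site pays frequency * size for fragments stored elsewhere."""
    cost = 0.0
    for site, freqs in enumerate(access_freq):
        for frag, freq in enumerate(freqs):
            if assign[frag] != site:
                cost += freq * frag_size[frag]
    return cost

def anneal(num_sites, access_freq, frag_size, temp=100.0, cooling=0.995, steps=2000):
    num_frags = len(frag_size)
    assign = [random.randrange(num_sites) for _ in range(num_frags)]
    best, best_cost = list(assign), allocation_cost(assign, access_freq, frag_size)
    cur_cost = best_cost
    for _ in range(steps):
        frag = random.randrange(num_frags)
        old_site = assign[frag]
        assign[frag] = random.randrange(num_sites)
        new_cost = allocation_cost(assign, access_freq, frag_size)
        # Always accept improvements; accept worse moves with temperature-dependent probability.
        if new_cost <= cur_cost or random.random() < math.exp((cur_cost - new_cost) / temp):
            cur_cost = new_cost
            if new_cost < best_cost:
                best, best_cost = list(assign), new_cost
        else:
            assign[frag] = old_site
        temp *= cooling
    return best, best_cost

# Example: 8 fragments, 3 sites, random access frequencies (hypothetical data).
random.seed(0)
freq = [[random.randint(0, 10) for _ in range(8)] for _ in range(3)]
size = [random.randint(1, 5) for _ in range(8)]
print(anneal(3, freq, size))
```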