
    A survey of big data research

    Big data creates value for business and research, but poses significant challenges in terms of networking, storage, management, analytics, and ethics. Multidisciplinary collaboration among engineers, computer scientists, statisticians, and social scientists is needed to tackle, explore, and understand big data. This survey presents an overview of big data initiatives, technologies, and research in industry and academia, and discusses challenges and potential solutions.

    Building Efficient Query Engines in a High-Level Language

    Abstraction without regret refers to the vision of using high-level programming languages for systems development without a negative impact on performance. A database system designed according to this vision offers both increased productivity and high performance, instead of sacrificing the former for the latter as existing, monolithic implementations do, which are hard to maintain and extend. In this article, we realize this vision in the domain of analytical query processing. We present LegoBase, a query engine written in the high-level language Scala. The key technique to regain efficiency is generative programming: LegoBase performs source-to-source compilation and optimizes the entire query engine by converting the high-level Scala code to specialized, low-level C code. We show how generative programming makes it easy to implement a wide spectrum of optimizations, such as introducing data partitioning or switching from a row-oriented to a column-oriented data layout, which are difficult to achieve with existing low-level query compilers that handle only queries. We demonstrate that sufficiently powerful abstractions are essential for dealing with the complexity of the optimization effort, shielding developers from compiler internals and decoupling individual optimizations from each other. We evaluate our approach with the TPC-H benchmark and show that: (a) with all optimizations enabled, LegoBase significantly outperforms a commercial database and an existing query compiler; (b) programmers need to provide just a few hundred lines of high-level code to implement the optimizations, instead of the complicated low-level code required by existing query compilation approaches; and (c) the compilation overhead is low compared to the overall execution time, making our approach practical for compiling query engines.
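
    To make the generative-programming technique concrete, here is a minimal sketch in Scala of how a high-level predicate can be compiled down to specialized C code. The tiny IR and emitter below are illustrative inventions, not LegoBase's actual API:

        // Minimal staged-compilation sketch: a high-level predicate IR is
        // lowered to a specialized C code string (all names are illustrative).
        sealed trait Expr
        case class Col(name: String)    extends Expr // column reference
        case class IntLit(v: Int)       extends Expr // integer constant
        case class Lt(l: Expr, r: Expr) extends Expr // comparison

        // Emit C source for an expression over a row struct.
        def emitC(e: Expr): String = e match {
          case Col(n)    => s"row->$n"
          case IntLit(v) => v.toString
          case Lt(l, r)  => s"(${emitC(l)} < ${emitC(r)})"
        }

        // Emit a specialized C scan-filter loop for a given predicate.
        def emitFilter(pred: Expr): String =
          s"""for (int i = 0; i < n; i++) {
             |  struct row *row = &table[i];
             |  if ${emitC(pred)} emit_row(row);
             |}""".stripMargin

        // emitFilter(Lt(Col("quantity"), IntLit(24))) produces a C loop that
        // tests row->quantity < 24 with no interpretation overhead at runtime.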

    Adaptive Query Processing on RAW Data

    Database systems deliver impressive performance for large classes of workloads, the result of decades of research into optimizing database engines. High performance, however, is achieved at the cost of versatility. In particular, database systems operate efficiently only over loaded data, i.e., data converted from its original raw format into the system's internal format. At the same time, data volumes continue to grow exponentially and data is increasingly varied, with an escalating number of new formats. The consequence is a growing impedance mismatch between the structures holding the data in the raw files and the structures used by query engines for efficient processing. In an ideal scenario, the query engine would seamlessly adapt itself to the data and ensure efficient query processing regardless of the input data formats, optimizing itself for each instance of a file and of a query by leveraging information available at query time. Today's systems, however, force data to adapt to the query engine during data loading. This paper proposes adapting the query engine to the formats of raw data instead. It presents RAW, a prototype query engine that enables querying heterogeneous data sources transparently. RAW employs Just-In-Time access paths, which efficiently couple heterogeneous raw files to the query engine and reduce the overhead of traditional general-purpose scan operators. Some overheads of accessing raw data directly cannot be eliminated, however, such as converting the raw values. RAW therefore also uses column shreds, ensuring that these costs are paid only for the subsets of raw data strictly needed by a query. We apply RAW to a real-world scenario and achieve a two-orders-of-magnitude speedup over the existing hand-written solution.
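
    As a toy illustration of the column-shreds idea, the following Scala sketch evaluates a simple aggregate directly over a raw CSV file, converting only the columns the query touches and only for the rows that need them; the file layout and names are hypothetical:

        import scala.io.Source

        // Query: SELECT sum(price) FROM raw WHERE qty > 10, over a CSV whose
        // column 2 is qty and column 3 is price (header in the first line).
        def sumPriceOfLargeOrders(path: String): Double =
          Source.fromFile(path).getLines().drop(1).foldLeft(0.0) { (acc, line) =>
            val fields = line.split(',')            // cheap positional access
            val qty = fields(2).toInt               // shred: convert qty on demand
            if (qty > 10) acc + fields(3).toDouble  // convert price only if needed
            else acc                                // all other columns stay raw
          }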

    How to Architect a Query Compiler

    This paper studies how to architect query compilers. The state of the art in query compiler construction lags behind that of the compilers field. We attempt to remedy this by exploring the key causes of technical challenges in need of well-founded solutions, and by gathering the most relevant ideas and approaches from the PL and compilers communities for easy digestion by database researchers. All query compilers known to us are more or less monolithic template expanders that do the bulk of the compilation task in one large leap. Such systems are hard to build and maintain. We propose instead a stack of multiple DSLs at different levels of abstraction, with lowering performed in multiple steps, to make query compilers easier to build and extend, ultimately allowing us to create more convincing and sustainable compiler-based data management systems. We derive our advice for creating such DSL stacks from widely accepted principles. We have also re-created a well-known query compiler following these ideas and report on this effort.
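
    The following Scala sketch illustrates what a small DSL stack with multi-step lowering might look like: a relational level is lowered to a loop-level IR, which could in turn be lowered further (e.g., to C). The two IRs are deliberately simplistic and purely illustrative:

        // Level 1: relational algebra.
        sealed trait Rel
        case class Scan(table: String)              extends Rel
        case class Select(pred: String, child: Rel) extends Rel

        // Level 2: imperative loop IR.
        sealed trait Loop
        case class ForEach(table: String, body: Loop) extends Loop
        case class IfThen(cond: String, body: Loop)   extends Loop
        case object EmitRow                           extends Loop

        // One lowering step: each relational operator maps to loop constructs.
        def lower(r: Rel): Loop = r match {
          case Scan(t)          => ForEach(t, EmitRow)
          case Select(p, child) => lower(child) match {
            case ForEach(t, body) => ForEach(t, IfThen(p, body))
            case other            => IfThen(p, other) // nested plans
          }
        }

        // lower(Select("qty > 10", Scan("orders")))
        //   ==> ForEach("orders", IfThen("qty > 10", EmitRow))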

    Building Efficient Query Engines using High-Level Languages

    We are currently witnessing a shift towards the use of high-level programming languages for systems development. This collides with the traditional wisdom that calls for using low-level languages to build efficient software systems. The shift is nevertheless necessary, as billions of dollars are spent annually on the maintenance and debugging of performance-critical software. High-level languages promise faster development of higher-quality software; by offering advanced software features, they help to reduce the number of software errors in these systems and facilitate their verification. Despite these benefits, database systems development seems to be lagging behind, as DBMSes are still written in low-level languages. The reason is that the increased productivity offered by high-level languages comes at the cost of a pronounced negative performance impact. In this thesis, we argue that it is time for a radical rethinking of how database systems are designed. We show that, by using high-level languages, it is indeed possible to build databases that offer both productivity and high performance. More concretely, we follow this abstraction without regret vision and use high-level languages to address the following two problems of database development.

    First, the introduction of a new storage or memory technology typically requires the development of new versions of most out-of-core algorithms employed by the database system. Given the increasing popularity of hardware specialization, this leads to an arms race for developers. To make things worse, there is no clear methodology for creating such algorithms, so significant creative effort is needed each time a new out-of-core algorithm is required. To address this issue, we present the OCAS framework for the automatic synthesis of efficient out-of-core algorithms. The developer provides two independent inputs: 1) a memory-hierarchy-oblivious algorithm, expressed in a high-level specification language; and 2) a description of the target memory hierarchy. From these specifications, our system automatically synthesizes memory-hierarchy-aware and storage-device-aware algorithms for tasks such as joins and sorting. The framework is extensible and quickly synthesizes custom out-of-core algorithms as new storage technologies become available.

    Second, from a software engineering point of view, years of performance-driven DBMS development have led to complicated, monolithic, low-level code bases that are hard to maintain and extend. In particular, introducing new, innovative approaches can be very time-consuming. To overcome these limitations, we present LegoBase, a query engine written in the high-level language Scala. LegoBase realizes the abstraction without regret vision in the domain of analytical query processing. We show how, by offering sufficiently powerful abstractions, our system makes it easy to implement a broad spectrum of optimizations that are difficult to achieve with existing approaches. The key technique to regain efficiency is generative programming: the entire high-level Scala code is source-to-source compiled to specialized, low-level C code. Our architecture significantly outperforms a commercial in-memory database system and an existing query compiler. LegoBase is a first step towards a full DBMS written in a high-level language.
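
    To give a feel for the two OCAS inputs described above, here is a hypothetical Scala sketch; the types and the specification style are invented for illustration and are not the actual OCAS language:

        // Input 2: a description of the target memory hierarchy.
        case class Level(name: String, capacityBytes: Long, blockBytes: Long)
        val hierarchy = List(
          Level("L2 cache", 1L << 20,   64),
          Level("DRAM",     8L << 30,   64),
          Level("SSD",      1L << 40, 4096)
        )

        // Input 1: a memory-hierarchy-oblivious sort specification. A
        // synthesizer could pick run sizes and merge fan-in per level from
        // `hierarchy` to derive an out-of-core variant (e.g., external
        // multiway merge sort).
        def sortSpec[A: Ordering](xs: Vector[A]): Vector[A] =
          if (xs.length <= 1) xs
          else {
            val (l, r) = xs.splitAt(xs.length / 2)
            (sortSpec(l) ++ sortSpec(r)).sorted // declarative merge step
          }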

    Search-based Model-driven Loop Optimizations for Tensor Contractions

    Complex tensor contraction expressions arise in accurate electronic structure models in quantum chemistry, such as the coupled cluster method. The Tensor Contraction Engine (TCE) is a high-level program synthesis system that facilitates the generation of high-performance parallel programs from tensor contraction equations. We are developing a new software infrastructure for the TCE that is designed to allow experimentation with optimization algorithms for modern computing platforms, including heterogeneous architectures employing general-purpose graphics processing units (GPGPUs). In this dissertation, we present improvements and extensions to the loop fusion optimization algorithm, which can be used with various cost models, e.g., for minimizing memory usage or for minimizing data movement costs under a memory constraint. We show that our data structure and pruning improvements to the loop fusion algorithm yield significant performance gains, enabling the use of complex cost models for large input equations. We also present an algorithm for optimizing the fused loop structure of handwritten code: it determines the regions of handwritten code that are safe to optimize and then runs the loop fusion algorithm on the code's dependency graph. Finally, we develop an optimization framework for generating GPGPU code, consisting of loop fusion with a novel cost model, tiling optimization, and layout optimization. Depending on the memory available on the GPGPU and on the sizes of the tensors, our framework decides which processor (CPU or GPGPU) should perform an operation and where the result should be placed. We present extensive measurements for tuning the loop fusion algorithm, validating our optimization framework, and characterizing the performance of GPGPUs. These measurements demonstrate that our optimization framework outperforms existing general-purpose optimization approaches both on multi-core CPUs and on GPGPUs.
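
    A small Scala example makes the memory benefit of loop fusion concrete: to compute r[i] = sum_j (sum_k A[i][k]*B[k][j]) * v[j], the unfused version materializes the full intermediate T = A*B, while the fused version keeps each T(i,j) in a scalar. Sizes and names are illustrative:

        type Mat = Array[Array[Double]]

        def contractUnfused(a: Mat, b: Mat, v: Array[Double]): Array[Double] = {
          val n = a.length; val m = b(0).length; val k = b.length
          val t = Array.ofDim[Double](n, m)          // O(n*m) intermediate
          for (i <- 0 until n; j <- 0 until m; kk <- 0 until k)
            t(i)(j) += a(i)(kk) * b(kk)(j)
          Array.tabulate(n)(i => (0 until m).map(j => t(i)(j) * v(j)).sum)
        }

        def contractFused(a: Mat, b: Mat, v: Array[Double]): Array[Double] = {
          val n = a.length; val m = b(0).length; val k = b.length
          Array.tabulate(n) { i =>
            var acc = 0.0
            for (j <- 0 until m) {                   // fused loops over j:
              var tij = 0.0                          // T(i,j) lives in a scalar
              for (kk <- 0 until k) tij += a(i)(kk) * b(kk)(j)
              acc += tij * v(j)
            }
            acc                                      // no O(n*m) buffer needed
          }
        }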

    Deductive Synthesis and Repair

    In this thesis, we explore techniques for the development of recursive functional programs over unbounded domains that are proved correct with respect to their high-level specifications. We present algorithms for automatically synthesizing executable code starting from the specification alone, and we implement these algorithms in the Leon system. We augment relational specifications with a concise notation for symbolic tests, which are helpful for characterizing fragments of a function's behavior. We build on our synthesis procedure to automatically repair invalid functions by generating alternative implementations. Our approach thus formulates program repair in the framework of deductive synthesis and uses the existing program structure as a hint to guide synthesis. We rely on user-specified tests as well as automatically generated ones to localize the fault. This localization enables our procedure to repair functions that would otherwise be out of reach of our synthesizer, and it ensures that most of the original behavior is preserved. We also investigate ways of enabling Leon programs to interact with external, untrusted code. For that purpose, we introduce a precise inter-procedural effect analysis for arbitrary Scala programs with mutable state, dynamic object allocation, and dynamic dispatch. We analyzed the Scala standard library, containing 58,000 methods, and classified them into several categories according to their effects. Our analysis proves that over half of all methods are pure, identifies a number of conditionally pure methods, and computes summary graphs and regular expressions describing the side effects of non-pure methods. We implement the synthesis and repair algorithms within the Leon system and deploy them as part of a novel interactive development environment available as a web interface. Our implementation is able to synthesize, within seconds, a number of useful recursive functions that manipulate unbounded numbers and data structures. Our repair procedure automatically locates various kinds of errors in recursive functions and fixes them by synthesizing alternative implementations.
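
    For readers unfamiliar with Leon, the sketch below shows the contract style it works with: the specification is the require/ensuring pair (both are plain Scala Predef methods), and a deductive synthesizer searches for a body satisfying the ensuring clause. The body shown is one hand-written candidate, not Leon's output:

        // Insert into a sorted list while preserving sortedness.
        def insert(x: BigInt, xs: List[BigInt]): List[BigInt] = {
          require(xs == xs.sorted)              // precondition: input is sorted
          xs match {
            case Nil              => List(x)
            case h :: _ if x <= h => x :: xs
            case h :: t           => h :: insert(x, t)
          }
        } ensuring { res =>
          res == res.sorted &&                  // output stays sorted
          res.length == xs.length + 1 &&        // exactly one element was added
          res.contains(x)                       // and it is the new one
        }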

    Compilation and Code Optimization for Data Analytics

    The trade-offs between modern high-level and low-level programming languages in constructing complex software artifacts are well known. High-level languages allow for greater programmer productivity: abstraction and genericity allow the same functionality to be implemented with significantly less code than in low-level languages. Modularity, object orientation, functional programming, and powerful type systems allow programmers not only to create clean abstractions and protect them from leaking, but also to define code units that are reusable and easily composable, and software architectures that are adaptable and extensible. The abstraction, succinctness, and modularity of high-level code help to avoid software bugs and facilitate debugging and maintenance.

    However, the use of high-level languages comes at a performance cost: increased indirection due to abstraction, virtualization, and interpretation, and superfluous work, particularly in the form of temporary memory allocation and deallocation to support objects and encapsulation. As a result, the cost of high-level languages may seem prohibitive for performance-critical systems. The vision of abstraction without regret argues that it is possible to use high-level languages to build performance-critical systems that achieve both productivity and high performance, instead of trading off the former for the latter. In this thesis, we realize this vision for building several types of data analytics systems. Our means of achieving it is compilation: the goal is to compile away expensive language features, turning high-level code into efficient low-level code.
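
    The following Scala sketch illustrates what compiling away abstraction means in practice: the generic version pays for boxing and an indirect call per element, while a specializing compiler would inline and devirtualize it into the tight loop shown second. Both compute the same sum:

        // High-level: generic container, boxed values, a closure call per step.
        def sumGeneric[A](xs: Seq[A], zero: A, plus: (A, A) => A): A =
          xs.foldLeft(zero)(plus)

        // The residual program after specialization: no boxing, no virtual
        // dispatch, just a counted loop over a primitive array.
        def sumSpecialized(xs: Array[Int]): Int = {
          var acc = 0
          var i = 0
          while (i < xs.length) {
            acc += xs(i)
            i += 1
          }
          acc
        }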