172 research outputs found

    Building Efficient Query Engines in a High-Level Language

    Abstraction without regret refers to the vision of using high-level programming languages for systems development without paying a performance penalty. A database system designed according to this vision offers both increased productivity and high performance, instead of sacrificing the former for the latter, as is the case with existing monolithic implementations that are hard to maintain and extend. In this article, we realize this vision in the domain of analytical query processing. We present LegoBase, a query engine written in the high-level language Scala. The key technique for regaining efficiency is generative programming: LegoBase performs source-to-source compilation and optimizes the entire query engine by converting the high-level Scala code to specialized, low-level C code. We show how generative programming makes it easy to implement a wide spectrum of optimizations, such as introducing data partitioning or switching from a row to a column data layout, which are difficult to achieve with existing low-level query compilers that handle only queries. We demonstrate that sufficiently powerful abstractions are essential for dealing with the complexity of the optimization effort, shielding developers from compiler internals and decoupling individual optimizations from each other. We evaluate our approach with the TPC-H benchmark and show that: (a) with all optimizations enabled, LegoBase significantly outperforms a commercial database and an existing query compiler; (b) programmers need to provide just a few hundred lines of high-level code to implement the optimizations, instead of the complicated low-level code required by existing query compilation approaches; and (c) the compilation overhead is low compared to the overall execution time, making our approach practical for compiling query engines.
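    A minimal Scala sketch of the generative-programming idea: the engine is written once against a small expression IR, and lowering emits the specialized C that a hand-written engine would contain. The IR and names here are illustrative assumptions, not LegoBase's actual API.

        // Illustrative staging sketch; not LegoBase's actual API.
        sealed trait Exp
        case class Field(name: String)  extends Exp
        case class Const(v: Int)        extends Exp
        case class Lt(l: Exp, r: Exp)   extends Exp
        case class And(l: Exp, r: Exp)  extends Exp

        // Lowering: emit specialized C for a predicate known at query
        // compile time, instead of interpreting the AST per tuple.
        def emitC(e: Exp): String = e match {
          case Field(n)  => s"row->$n"
          case Const(v)  => v.toString
          case Lt(l, r)  => s"(${emitC(l)} < ${emitC(r)})"
          case And(l, r) => s"(${emitC(l)} && ${emitC(r)})"
        }

        // A scan operator written once in high-level code specializes,
        // per query, to the tight C loop an expert would write by hand.
        def scanLoop(pred: Exp): String =
          s"""for (int i = 0; i < n; i++) {
             |  struct row *row = &table[i];
             |  if (${emitC(pred)}) emit(row);
             |}""".stripMargin

        // scanLoop(And(Lt(Field("quantity"), Const(24)), Lt(Const(0), Field("price"))))
        // ==> for (int i = 0; i < n; i++) {
        //       struct row *row = &table[i];
        //       if (((row->quantity < 24) && (0 < row->price))) emit(row);
        //     }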

    How to Architect a Query Compiler

    This paper studies how to architect query compilers. The state of the art in query compiler construction lags behind that in the compilers field. We attempt to remedy this by identifying the key technical challenges in need of well-founded solutions, and by gathering the most relevant ideas and approaches from the PL and compilers communities for easy digestion by database researchers. All query compilers known to us are more or less monolithic template expanders that do the bulk of the compilation task in one large leap. Such systems are hard to build and maintain. We propose instead a stack of multiple DSLs at different levels of abstraction, with lowering in multiple steps, to make query compilers easier to build and extend, ultimately allowing us to create more convincing and sustainable compiler-based data management systems. We attempt to derive our advice for creating such DSL stacks from widely accepted principles. We have also re-created a well-known query compiler following these ideas and report on this effort.
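    A rough Scala sketch of multi-step lowering through a DSL stack, assuming two invented intermediate representations (the paper's actual DSLs differ): each pass is small and independently testable, in contrast to a monolithic template expander that jumps from SQL to C in one leap.

        // Illustrative three-level stack; IRs and passes are assumptions.
        sealed trait Rel                                   // level 1: relational algebra
        case class Scan(table: String)            extends Rel
        case class Select(pred: String, in: Rel)  extends Rel

        sealed trait Coll                                  // level 2: collection operations
        case class Source(name: String)            extends Coll
        case class Filter(pred: String, in: Coll)  extends Coll

        // First lowering step: relational algebra -> collection operations.
        def lowerRel(r: Rel): Coll = r match {
          case Scan(t)       => Source(t)
          case Select(p, in) => Filter(p, lowerRel(in))
        }

        // Second lowering step: collection operations -> imperative loop code.
        def lowerColl(c: Coll): String = c match {
          case Source(n)     => s"for (row in $n) { BODY }"
          case Filter(p, in) => lowerColl(in).replace("BODY", s"if ($p) { BODY }")
        }

        // lowerColl(lowerRel(Select("row.price > 10", Scan("lineitem"))))
        //   .replace("BODY", "emit(row);")
        // ==> for (row in lineitem) { if (row.price > 10) { emit(row); } }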

    Parallel programming paradigms and frameworks in big data era

    With Cloud Computing emerging as a promising new approach for ad-hoc parallel data processing, major companies have started to integrate frameworks for parallel data processing into their product portfolios, making it easy for customers to access these services and to deploy their programs. We have entered the era of Big Data. The explosion and profusion of available data in a wide range of application domains raise new challenges and opportunities in a plethora of disciplines, ranging from science and engineering to biology and business. One major challenge is how to take advantage of the unprecedented scale of data, typically of heterogeneous nature, in order to acquire further insights and knowledge for improving the quality of the offered services. To exploit this new resource, we need to scale up and scale out both our infrastructures and standard techniques. Our society is already data-rich, but the question remains whether or not we have the conceptual tools to handle it. In this paper we discuss and analyze opportunities and challenges for efficient parallel data processing. Big Data is the next frontier for innovation, competition, and productivity, and many solutions continue to appear, partly supported by the considerable enthusiasm around the MapReduce paradigm for large-scale data analysis. We review various parallel and distributed programming paradigms, analyzing how they fit into the Big Data era, and present modern emerging paradigms and frameworks. To better support practitioners interested in this domain, we end with an analysis of ongoing research challenges towards a true fourth generation of data-intensive science.
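    The MapReduce contract at the center of this survey can be stated in a few lines of Scala; local collections stand in here for the distributed shuffle that a real framework such as Hadoop would perform.

        // Canonical word count via the MapReduce contract: a mapper emits
        // (key, value) pairs, pairs are grouped by key, and a reducer
        // folds each group.
        def mapper(line: String): Seq[(String, Int)] =
          line.split("\\s+").filter(_.nonEmpty).map(w => (w.toLowerCase, 1)).toSeq

        def reducer(key: String, counts: Seq[Int]): (String, Int) =
          (key, counts.sum)

        def mapReduce(lines: Seq[String]): Map[String, Int] =
          lines.flatMap(mapper)                                   // map phase
            .groupBy(_._1)                                        // shuffle: group by key
            .map { case (k, kvs) => reducer(k, kvs.map(_._2)) }   // reduce phase

        // mapReduce(Seq("big data big ideas"))
        // ==> Map("big" -> 2, "data" -> 1, "ideas" -> 1)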

    Efficient query processing in managed runtimes

    This thesis presents strategies to improve query evaluation performance over huge volumes of relational-like data stored in the memory space of managed applications. Storing and processing application data in the memory space of managed applications is motivated by the convergence of two recent trends in data management. First, dropping DRAM prices have led to memory capacities that allow the entire working set of an application to fit into main memory, and to the emergence of in-memory database systems (IMDBs). Second, language-integrated query transparently integrates query processing syntax into programming languages and therefore allows complex queries to be composed in the application. IMDBs typically serve as data stores for applications written in an object-oriented language running on a managed runtime. In this thesis, we propose a deeper integration of the two by storing all application data in the memory space of the application and using language-integrated query, combined with query compilation techniques, to provide fast query processing. As a starting point, we look into storing data as runtime-managed objects in collection types provided by the programming language. Queries are formulated using language-integrated query and dynamically compiled to specialized functions that produce the result of the query more efficiently by leveraging query compilation techniques similar to those used in modern database systems. We show that the generated query functions significantly improve query processing performance compared to the default execution model for language-integrated query. However, we also identify additional inefficiencies that can only be addressed by processing queries using low-level techniques, which cannot be applied to runtime-managed objects. To address this, we introduce a staging phase in the generated code that makes query-relevant managed data accessible to low-level query code. Our experiments in .NET show an improvement in query evaluation performance of up to an order of magnitude over the default language-integrated query implementation. Motivated by additional inefficiencies caused by automatic garbage collection, we introduce a new collection type, the black-box collection. Black-box collections integrate the in-memory storage layer of a relational database system to store data and hide the internal storage layout from the application by employing existing object-relational mapping techniques (hence the name black-box). Our experiments show that black-box collections provide better query performance than runtime-managed collections by allowing the generated query code to directly access the underlying relational in-memory data store using low-level techniques. Black-box collections also outperform a modern commercial database system. By removing huge volumes of collection data from the managed heap, black-box collections further improve the overall performance, response time, and scalability of the application. To enable a deeper integration of the data store with the application, we introduce self-managed collections. Self-managed collections are a new type of collection for managed applications that, in contrast to black-box collections, store objects. As the data elements stored in the collection are objects, they are directly accessible from the application using references, which allows for better integration of the data store with the application. Self-managed collections manually manage the memory of objects stored within them in a private heap that is excluded from garbage collection. We introduce a special collection syntax and a novel type-safe manual memory management system for this purpose. As was the case for black-box collections, self-managed collections improve query performance by utilizing a database-inspired data layout and allowing the use of low-level techniques. By also supporting references between collection objects, they outperform black-box collections.
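    The core contrast, sketched in Scala as a stand-in for the thesis's .NET setting (all names illustrative): the declarative query is what the application writes, while the specialized function below it approximates what a generated query function looks like.

        // Hypothetical application type for the sketch.
        final case class Order(customer: Int, amount: Double)

        // Declarative, interpreter-style execution: each combinator
        // allocates intermediate collections and goes through generic
        // iterator machinery.
        def totalDeclarative(orders: Seq[Order], cust: Int): Double =
          orders.filter(_.customer == cust).map(_.amount).sum

        // Shape of a compiled, specialized query function: one fused
        // loop, predicate inlined, no intermediate allocations.
        def totalCompiled(orders: Array[Order], cust: Int): Double = {
          var total = 0.0
          var i = 0
          while (i < orders.length) {
            val o = orders(i)
            if (o.customer == cust) total += o.amount
            i += 1
          }
          total
        }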

    Abstraction without regret in database systems building: a manifesto

    It has been said that all problems in computer science can be solved by adding another level of indirection, except for performance problems, which are solved by removing levels of indirection. Compilers are our tools for removing levels of indirection automatically. However, we do not trust them when it comes to systems building. Most performance-critical systems are built in low-level programming languages such as C. Some of the downsides of this compared to using modern high-level programming languages are very well known: bugs, poor programmer productivity, a talent bottleneck, and cruelty to programming language researchers. In the future we might even add suboptimal performance to this list. In this article, I argue that compilers can be competitive with and outperform human experts at low-level database systems programming. Performance-critical database systems are a limited-enough domain for us to encode systems programming skills as compiler optimizations. In a large system, a human expert's occasional stroke of creativity producing an original and very specific coding trick is outweighed by a compiler's superior stamina, optimizing code at a level of consistency that is absent even in very mature codebases. However, mainstream compilers cannot do this: we need to work on optimizing compilers specialized for the systems programming domain. Recent progress makes their creation eminently feasible.

    Functional Collection Programming with Semi-Ring Dictionaries

    This paper introduces semi-ring dictionaries, a powerful class of compositional and purely functional collections that subsume other collection types such as sets, multisets, arrays, vectors, and matrices. We develop SDQL, a statically typed language that can express relational algebra with aggregations, linear algebra, and functional collections over data such as relations and matrices using semi-ring dictionaries. Furthermore, thanks to the algebraic structure behind these dictionaries, SDQL unifies a wide range of optimizations commonly used in databases (DB) and linear algebra (LA). As a result, SDQL enables efficient processing of hybrid DB and LA workloads by putting together optimizations that are otherwise confined to either DB systems or LA frameworks. We show experimentally that a range of DB and LA workloads can take advantage of the SDQL language and optimizations. Overall, we observe that SDQL achieves competitive performance relative to Typer and Tectorwise, which are state-of-the-art in-memory DB systems for (flat, not nested) relational data, and achieves an average 2x speedup over SciPy for LA workloads. For hybrid workloads involving LA processing, SDQL achieves up to one order of magnitude speedup over Trance, a state-of-the-art nested relational engine for nested biomedical data, and gives an average 40% speedup over LMFAO, a state-of-the-art in-DB machine learning engine, on two (flat) real-world retail datasets.
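    A minimal sketch of a semi-ring dictionary in Scala (not SDQL's actual surface syntax): a map whose values are merged with the semi-ring's addition on key collisions, which lets the same structure express both a relational GROUP BY aggregation and sparse linear algebra.

        // Merge two dictionaries, combining colliding values with the
        // semi-ring's "+".
        def merge[K, V](a: Map[K, V], b: Map[K, V])(plus: (V, V) => V): Map[K, V] =
          b.foldLeft(a) { case (acc, (k, v)) =>
            acc.updated(k, acc.get(k).map(plus(_, v)).getOrElse(v))
          }

        // Relational SUM ... GROUP BY as a dictionary fold over (+, *) on Double:
        val sales = Seq(("fr", 10.0), ("de", 20.0), ("fr", 5.0))
        val byCountry = sales.foldLeft(Map.empty[String, Double]) {
          case (acc, (c, v)) => merge(acc, Map(c -> v))(_ + _)
        } // Map("fr" -> 15.0, "de" -> 20.0)

        // Sparse matrix-vector product over the same structure: a matrix
        // is a dictionary of row -> (dictionary of column -> value).
        type Vec = Map[Int, Double]
        type Mat = Map[Int, Vec]
        def matVec(m: Mat, v: Vec): Vec =
          m.foldLeft(Map.empty[Int, Double]) { case (acc, (row, cols)) =>
            val dot = cols.foldLeft(0.0) { case (s, (c, x)) => s + x * v.getOrElse(c, 0.0) }
            if (dot == 0.0) acc else acc.updated(row, dot)
          }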

    A Map-algebra-inspired Approach for Interacting With Wireless Sensor Networks, Cyber-physical Systems or Internet of Things

    The typical approach for consuming data from wireless sensor networks (WSN) and the Internet of Things (IoT) has been to send data back to central servers for processing and analysis. This thesis develops an alternative strategy for processing and acting on data directly in the environment, referred to as Active embedded Map Algebra (AeMA). Active refers to the near-real-time production of data, and embedded refers to the architecture of distributed embedded sensor nodes. Network macroprogramming, a style of programming adopted for wireless sensor networks and IoT, addresses the challenges of coordinating the behavior of multiple connected devices through a high-level programming model. Several macroprogramming models have been proposed, but none to date has adopted a comprehensive spatial model. This thesis takes the unique approach of adapting the well-known Map Algebra model from Geographic Information Science to extend both the functionality of WSN/IoT and the opportunities for user interaction with WSN/IoT. As an inherently spatial model, the Map Algebra-inspired metaphor supports the types of computation desired from a network of geographically dispersed WSN nodes. The AeMA data model aligns with the conceptual model of GIS layers and specific layer operations from Map Algebra. A declarative query and network tasking language, based on Map Algebra operations, provides the basis for operations and interactions. The model extends traditional Map Algebra by calculating and storing time series and specific temporal summary-type composite objects. AeMA encodes Map Algebra-inspired operations into a virtual machine runtime system, called MARS (Map Algebra Runtime System), that supports Map Algebra in an efficient and extensible way. Map Algebra-like operations are performed in a distributed manner: data do not leave the network but are analyzed and consumed in place. As a consequence, collected information is available in situ to drive local actions. The conceptual model and tasking language are designed to direct nodes as active entities, able to perform actions on their environment. This Map Algebra-inspired network macroprogramming model has many potential applications for spatially deployed WSN/IoT networks; in particular, the thesis notes its utility for precision agriculture applications.
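    A Scala sketch of the Map Algebra "local" operation that the AeMA metaphor builds on: two layers combined cell by cell. In a centralized GIS a layer is a raster grid; in AeMA each node would hold the cells for its own location and evaluate the operation in place. The example is illustrative, not the thesis's tasking language.

        // A layer as a raster grid of cell values.
        type Layer = Vector[Vector[Double]]

        // Local operation: apply f cell-wise across two aligned layers.
        def local(a: Layer, b: Layer)(f: (Double, Double) => Double): Layer =
          a.zip(b).map { case (ra, rb) => ra.zip(rb).map(f.tupled) }

        // E.g. a hypothetical frost-risk layer derived from temperature
        // and humidity readings, computable node-by-node in the field:
        val temperature: Layer = Vector(Vector(1.5, 3.0), Vector(-0.5, 2.0))
        val humidity:    Layer = Vector(Vector(0.8, 0.4), Vector(0.9, 0.7))
        val frostRisk =
          local(temperature, humidity)((t, h) => if (t < 2.0 && h > 0.6) 1.0 else 0.0)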

    Agnostic cloud services with Kubernetes

    The vendor lock-in concept represents a customer's dependency on a particular supplier or vendor, eventually becoming unable to easily migrate to a different provider. Cloud computing is frequently associated with vendor lock-in restrictions, motivated by the proprietary technological arrangements of each provider. This work proposes an agnostic cloud provider model that addresses such challenges, focusing on the establishment of a model for deploying and managing computational services in cloud environments. Concretely, it aims to enable informatics systems to be executed agnostically on multiple cloud platforms and infrastructures, thereby decoupling them from any cloud provider. Moreover, this model intends to automate service deployment by defining and generating the running configurations for the services. Within this context, container technology is deemed an efficient and standard strategy for deploying computational services across cloud providers, promoting the migration of informatics systems between vendors. Additionally, container orchestration platforms, which are increasingly adopted by organizations, are essential to effectively manage the life-cycle of multi-container informatics systems by monitoring their performance and dynamically controlling their behavior. In particular, the Kubernetes platform, an emerging open standard for cloud services, is proving to be a valuable contribution toward achieving service-agnostic deployment, namely with its Cloud Controller Manager mechanism, which helps abstract specific cloud providers. As validation for the proposed approach, it is intended to prove the model's adaptability to different services and technologies supplied by heterogeneous organizations through the deployment of containerized applications (informatics systems) on multiple cloud service providers, public or on-premises. For this purpose, the Informatics System of Systems framework is adopted as a validator for structuring and organizing heterogeneous technology artifacts from different suppliers.
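    A sketch in Scala of the kind of deployment-file generation the model automates: rendering a provider-neutral Kubernetes Deployment manifest from a few service parameters. The manifest fields are standard Kubernetes; the generator and its names are illustrative, not the dissertation's actual tool.

        // Hypothetical service description supplied by the application.
        final case class Service(name: String, image: String, replicas: Int, port: Int)

        // Generate a standard Kubernetes Deployment manifest; the same
        // manifest runs unchanged on any conformant cluster, public or
        // on-premises, which is what decouples the system from the provider.
        def deploymentYaml(s: Service): String =
          s"""apiVersion: apps/v1
             |kind: Deployment
             |metadata:
             |  name: ${s.name}
             |spec:
             |  replicas: ${s.replicas}
             |  selector:
             |    matchLabels:
             |      app: ${s.name}
             |  template:
             |    metadata:
             |      labels:
             |        app: ${s.name}
             |    spec:
             |      containers:
             |        - name: ${s.name}
             |          image: ${s.image}
             |          ports:
             |            - containerPort: ${s.port}
             |""".stripMargin

        // deploymentYaml(Service("web-api", "registry.example/web-api:1.0", 3, 8080))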