
    Physical Plan Instrumentation in Databases: Mechanisms and Applications

    Database management systems (DBMSs) are designed to compile SQL queries into physical plans that, when executed, produce the queries' results. Building on this functionality, an ever-increasing number of application domains (e.g., provenance management, online query optimization, physical database design, interactive data profiling, monitoring, and interactive data visualization) need to operate on how queries are executed by the DBMS, for purposes ranging from debugging and data explanation to optimization and monitoring. Unfortunately, DBMSs provide little, if any, support for developing this class of important applications. As a result, database application developers and database system architects either rewrite the database internals in ad hoc ways; work around the SQL interface, where possible, with inevitable performance penalties; or build new databases from scratch just to express and optimize their domain-specific application logic over how queries are executed. To address this problem in a principled manner, this dissertation introduces a prototype DBMS, Smoke, that exposes instrumentation mechanisms in the form of a framework that lets external applications manipulate physical plans. Intuitively, a physical plan is the underlying representation a DBMS uses to encode how a SQL query will be executed, and providing instrumentation mechanisms at this representation level allows applications to express and optimize their logic over query execution. With such an instrumentation-enabled DBMS in place, we then consider how to express and optimize applications whose logic depends on how queries are executed. To demonstrate the expressive and optimization power of instrumentation-enabled DBMSs, we express and optimize applications across several important domains, including provenance management, interactive data visualization, interactive data profiling, physical database design, online query optimization, and query discovery. On expressivity, we show that Smoke can express known techniques, introduce novel semantics for known techniques, and introduce new techniques across domains. On performance, we show case by case that Smoke is on par with, or up to several orders of magnitude faster than, state-of-the-art imperative and declarative implementations of important applications across domains. We believe these contributions provide evidence for, and form the basis of, a class of instrumentation-enabled DBMSs aimed at expressing and optimizing applications whose core logic operates over how queries are executed.
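
    To make the abstract's central idea concrete, the following is a minimal sketch, in Python, of what it means to instrument a physical plan: the plan is a chain of operators, and an external application injects a hook into an operator so that lineage is captured as a side effect of normal execution. The operator and hook names here are illustrative assumptions, not Smoke's actual API.

```python
# Hypothetical sketch of physical plan instrumentation; the operator and
# hook interfaces are illustrative, not Smoke's real ones.

class Scan:
    """Leaf operator: emits (row_id, row) pairs from a base table."""
    def __init__(self, rows):
        self.rows = rows

    def execute(self):
        for rid, row in enumerate(self.rows):
            yield rid, row

class Filter:
    """Filter operator with application-injected instrumentation hooks."""
    def __init__(self, child, predicate, hooks=()):
        self.child = child
        self.predicate = predicate
        self.hooks = list(hooks)        # e.g. lineage capture for provenance

    def execute(self):
        for rid, row in self.child.execute():
            if self.predicate(row):
                for hook in self.hooks:   # runs inline with execution,
                    hook(rid, row)        # so no second pass is needed
                yield rid, row

# A provenance application instruments the plan to record which input rows
# produced each output row (backward lineage).
lineage = []
plan = Filter(Scan([{"x": 1}, {"x": 5}, {"x": 9}]),
              predicate=lambda r: r["x"] > 3,
              hooks=[lambda rid, row: lineage.append(rid)])

print([row for _, row in plan.execute()])   # [{'x': 5}, {'x': 9}]
print(lineage)                              # [1, 2]
```

    Capturing lineage inline with execution, rather than re-running the query to reconstruct it afterwards, is the kind of optimization opportunity that instrumentation at the physical plan level opens up.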

    An investigation of computer based nominal data record linkage

    The Internet now provides access to vast volumes of nominal data (data associated with names, e.g. birth/death records, parish records, text articles, multimedia) collected for a range of different purposes. This research focuses on parish registers containing baptism, marriage, and burial records. Mining these resources involves linkage: investigating how two records are related with regard to attributes such as surname, spatio-temporal location, legal association, and inter-relationships. As well as handling the implicit constraints of nominal data, such a system must also automatically handle a range of temporal and spatial rules and constraints. The research examines the linkage rules that apply and how such rules interact. It reports on current practices in several disciplines (e.g. history, demography, genealogy, and epidemiology) and how these are implemented in current computer and database systems. The practical aspects of the study, and the workbench approach proposed, are centred on the extensive Lancashire & Cheshire Parish Register archive held on the MIMAS database computer at Manchester University. The research also proposes how the findings can have wider applications. This thesis describes initial research into the problem. It describes three prototypes of a nominal data workbench that allow the specification and examination of several linkage types, and discusses the merits of alternative name matching methods, name grouping techniques, and method comparisons. The conclusion is that, in the cases examined so far, effective nominal data linkage is essentially a query optimisation process. The process is made more efficient if linkage-specific indexes exist, and the work suggests that query re-organisation based on these indexes, though complex, is entirely feasible. To facilitate the use of indexes and to guide the optimisation process, the work suggests the use of formal ontologies.
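
    As an illustration of the name grouping techniques whose merits the thesis discusses, below is a simplified Soundex bucketing sketch in Python. Soundex is a classic phonetic grouping method; this simplified variant omits some details of the official algorithm (e.g. the h/w adjacency rule), and the code is not drawn from the thesis.

```python
from collections import defaultdict

def soundex(name: str) -> str:
    """Simplified Soundex: first letter plus up to three consonant codes."""
    groups = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3",
              "l": "4", "mn": "5", "r": "6"}
    def code(ch):
        for letters, digit in groups.items():
            if ch in letters:
                return digit
        return ""                       # vowels and h, w, y carry no code
    name = name.lower()
    out, prev = name[0].upper(), code(name[0])
    for ch in name[1:]:
        d = code(ch)
        if d and d != prev:             # collapse adjacent identical codes
            out += d
        prev = d
    return (out + "000")[:4]            # pad or truncate to four characters

# Bucket surnames so spelling variants land together; pairwise linkage
# comparisons are then restricted to candidates within a bucket -- the kind
# of linkage-specific index the thesis's conclusion argues for.
buckets = defaultdict(list)
for surname in ["Smith", "Smyth", "Smythe", "Taylor", "Tailor"]:
    buckets[soundex(surname)].append(surname)
print(dict(buckets))
# {'S530': ['Smith', 'Smyth', 'Smythe'], 'T460': ['Taylor', 'Tailor']}
```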

    16th SC@RUG 2019 proceedings 2018-2019


    Provenance, Incremental Evaluation, and Debugging in Datalog

    The Datalog programming language has recently found increasing traction in research and industry. Driven by its clean declarative semantics, along with its conciseness and ease of use, Datalog has been adopted for a wide range of important applications, such as program analysis, graph problems, and networking. To enable this adoption, modern Datalog engines have implemented advanced language features and high-performance evaluation of Datalog programs. Unfortunately, critical infrastructure and tooling to support Datalog users and developers are still missing. For example, there are only limited tools addressing the crucial debugging problem, where developers can spend up to 30% of their time finding and fixing bugs. This thesis addresses Datalog's tooling gaps, with the ultimate goal of improving the productivity of Datalog programmers. The first contribution centers on the critical problem of debugging: we develop a new debugging approach that explains the execution steps taken to produce a faulty output. Crucially, our debugging method can be applied to large-scale applications without substantially sacrificing performance. The second contribution addresses the problem of incremental evaluation, which is needed when program inputs change slightly and results must be recomputed. Incremental evaluation allows this recomputation to happen efficiently, without discarding the previous results and recomputing from scratch. Finally, the last contribution provides a new incremental debugging approach that identifies the root causes of faulty outputs arising after an incremental evaluation. Incremental debugging focuses on the relationship between input and output and can provide suggestions to amend the inputs so that faults no longer occur. In combination, these techniques form a corpus of critical infrastructure and tooling for Datalog, allowing developers and users to work with Datalog more productively.
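
    As a concrete illustration of the incremental evaluation idea, here is a minimal Python sketch of standard semi-naive evaluation for one recursive rule, transitive closure (the thesis's engine handles general Datalog and is not reproduced here): each round joins only the delta of newly derived facts against the base relation, and an insertion is handled by seeding the delta with the new tuples instead of recomputing from scratch.

```python
def transitive_closure(edges):
    """Semi-naive evaluation of tc(x,y) :- edge(x,y); tc(x,y) :- tc(x,z), edge(z,y)."""
    tc, delta = set(edges), set(edges)
    while delta:
        # Only facts derived in the previous round can yield new facts.
        delta = {(x, w) for (x, z) in delta for (z2, w) in edges if z == z2} - tc
        tc |= delta
    return tc

def insert(edges, tc, new_edges):
    """Incrementally maintain tc after inserting new_edges (insertions only)."""
    edges |= new_edges
    # Seed the delta: new edges themselves, plus old paths extended by a new edge.
    delta = (new_edges |
             {(x, w) for (x, z) in tc for (z2, w) in new_edges if z == z2}) - tc
    while delta:
        tc |= delta
        delta = {(x, w) for (x, z) in delta for (z2, w) in edges if z == z2} - tc
    return tc

edges = {("a", "b"), ("b", "c")}
tc = transitive_closure(edges)
print(sorted(tc))                  # [('a', 'b'), ('a', 'c'), ('b', 'c')]
tc = insert(edges, tc, {("c", "d")})
print(sorted(tc))                  # now also ('a','d'), ('b','d'), ('c','d')
```

    Deletions are substantially harder to maintain incrementally, since a derived fact may lose some but not all of its derivations; that dependency on knowing how facts were derived is one reason provenance and incremental evaluation are natural companions.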