
    Query Optimization Techniques for OLAP Applications: An ORACLE versus MS-SQL Server Comparative Study

    Query optimization in OLAP applications is a relatively novel problem: a great deal of research has addressed query performance, but most of it has focused on OLTP rather than OLAP applications. OLAP queries interrogate the database extensively to produce their output, and inefficient processing of those queries degrades performance and may render the results useless. Techniques for optimizing queries include memory caching, indexing, hardware solutions, and physical database storage. Oracle and MS SQL Server both offer OLAP optimization techniques; the paper reviews both packages' approaches and then proposes a query optimization strategy for OLAP applications. The proposed strategy is based on four ingredients: 1) intermediate queries; 2) indexes, both B-trees and bitmaps; 3) a memory cache (for the syntax of the query) and a secondary storage cache (for the result data set); and 4) the physical database storage (i.e., a binary storage model) accompanied by its hardware solution.
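    A minimal sketch of the caching ingredient described above, assuming a Python setting; the class name, file layout, and normalisation rule are illustrative assumptions, not the paper's implementation. It pairs an in-memory cache keyed by the normalised query text with a disk-backed cache for the result data set.

    # Hypothetical two-level cache: memory cache for the query "syntax" key,
    # secondary-storage cache for the result data set. Names are illustrative.
    import hashlib
    import pickle
    from pathlib import Path

    class TwoLevelQueryCache:
        def __init__(self, cache_dir="olap_cache"):
            self.memory = {}                      # query-text hash -> result set
            self.disk = Path(cache_dir)           # secondary-storage cache
            self.disk.mkdir(exist_ok=True)

        def _key(self, query_text):
            # Normalise whitespace and case so syntactically equal queries share a key.
            normalised = " ".join(query_text.lower().split())
            return hashlib.sha256(normalised.encode()).hexdigest()

        def get(self, query_text):
            key = self._key(query_text)
            if key in self.memory:                # fast path: memory cache hit
                return self.memory[key]
            path = self.disk / key
            if path.exists():                     # slower path: disk cache hit
                result = pickle.loads(path.read_bytes())
                self.memory[key] = result
                return result
            return None                           # miss: caller runs the OLAP query

        def put(self, query_text, result):
            key = self._key(query_text)
            self.memory[key] = result
            (self.disk / key).write_bytes(pickle.dumps(result))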

    Incremental file reorganization schemes

    Issued as Final project report, Project no. G-36-66

    Use of a weighted matching algorithm to sequence clusters in spatial join processing

    One of the most expensive operations in a spatial database is spatial join processing. This study focuses on how to improve the performance of such processing. The main objective is to reduce the Input/Output (I/O) cost of the spatial join process by using a technique called cluster-scheduling. Generally, the spatial join is processed in two steps, namely filtering and refinement. The cluster-scheduling technique is performed after the filtering step and before the refinement step, and is part of the housekeeping phase. The key point of this technique is to realise an order wherein two consecutive clusters in the sequence have maximal overlapping objects. However, finding the maximal overlapping order has been shown to be Nondeterministic Polynomial-time (NP)-complete. This study proposes an algorithm to provide an approximate maximal overlapping (AMO) order in a Cluster Overlapping (CO) graph. The study proposes the use of an efficient maximum weighted matching algorithm to solve the problem of finding the AMO order. As a result, the I/O cost in spatial join processing can be minimised.
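    The following sketch illustrates the general idea of ordering clusters via weighted matching on a CO graph; it is not the paper's algorithm. It assumes overlap counts between cluster pairs are already known and uses networkx's maximum weighted matching to place heavily overlapping clusters next to each other.

    # Illustrative cluster-scheduling sketch: edge weights count objects shared
    # by two clusters; matched pairs are processed consecutively.
    import networkx as nx

    def schedule_clusters(overlap_counts):
        """overlap_counts: dict mapping (cluster_a, cluster_b) -> number of
        overlapping objects. Returns an approximate maximal-overlap order."""
        graph = nx.Graph()
        for (a, b), weight in overlap_counts.items():
            graph.add_edge(a, b, weight=weight)

        # Maximum weighted matching: each matched pair shares many objects, so
        # refining them back to back keeps those objects in the buffer.
        matching = nx.max_weight_matching(graph)

        order, seen = [], set()
        for a, b in matching:              # place matched pairs consecutively
            for cluster in (a, b):
                if cluster not in seen:
                    order.append(cluster)
                    seen.add(cluster)
        for cluster in graph.nodes:        # append any unmatched clusters
            if cluster not in seen:
                order.append(cluster)
        return order

    # Example: clusters 1 and 2 overlap heavily, so they end up adjacent.
    print(schedule_clusters({(1, 2): 8, (2, 3): 1, (1, 3): 2}))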

    Scalable Logic Defined Static Analysis

    Logic languages such as Datalog have been proposed as a method for specifying flexible and customisable static analysers. Using Datalog, various classes of static analyses can be expressed precisely and succinctly, requiring fewer lines of code than hand-crafted analysers. In this paradigm, a static analysis specification is encoded by a set of declarative logic rules and an off-the-shelf solver is used to compute the result of the static analysis. Unfortunately, when large-scale analyses are employed, Datalog-based tools currently fail to scale in comparison to hand-crafted static analysers. As a result, Datalog-based analysers have largely remained an academic curiosity rather than industrially respected tools. This thesis outlines our efforts in understanding the sources of performance limitations in Datalog-based tools. We propose a novel evaluation technique that is predicated on the fact that in the case of static analysis, the logical specification is a design-time artefact and hence does not change during evaluation. Thus, instead of directly evaluating Datalog rules, our approach leverages partial evaluation to synthesise a specialised static analyser from these rules. This approach enables a novel indexing optimisation that automatically selects an optimal set of indexes to speed up the Datalog computation and minimise its memory usage. Lastly, we explore the case of more expressive logics, namely constrained Horn clauses, and their use in proving the correctness of programs. We identify a bottleneck in various symbolic evaluation algorithms that centre around Craig interpolation. We propose a method of improving these evaluation algorithms by guiding theorem provers to discover relevant interpolants with respect to the input logic specification. The culmination of our work is implemented in a general-purpose and high-performance tool called Soufflé. We describe Soufflé and evaluate its performance experimentally, showing significant improvement over alternative techniques and its scalability in real-world industrial use cases.
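    To make the indexing point concrete, here is a toy illustration (not Soufflé itself, and written in Python rather than a synthesised analyser) of evaluating a transitive-closure rule such as path(x, z) :- path(x, y), edge(y, z) by fixpoint iteration. Indexing the edge facts on their first column turns the inner lookup from a scan of all edges into a hash probe, which is the kind of access pattern an automatically selected index serves.

    # Toy fixpoint evaluation of a Datalog-style rule with one index.
    from collections import defaultdict

    def transitive_closure(edges):
        edge_by_src = defaultdict(set)          # index: first column -> second column
        for y, z in edges:
            edge_by_src[y].add(z)

        path = set(edges)
        changed = True
        while changed:                          # naive fixpoint iteration
            changed = False
            for x, y in list(path):
                for z in edge_by_src[y]:        # indexed probe instead of a full scan
                    if (x, z) not in path:
                        path.add((x, z))
                        changed = True
        return path

    print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))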

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing, custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed one. This includes a general approach, consisting of four distinct steps, as well as specific action items to be performed at every step. The discussion also covers language and tool support and challenges arising from the transformation.