    Staircase Join: Teach a Relational DBMS to Watch its (Axis) Steps

    Relational query processors derive much of their effectiveness from awareness of specific table properties such as sort order, size, or the absence of duplicate tuples. This work applies (and adapts) this successful principle to database-supported XML and XPath processing: the relational system is made tree aware, i.e., tree properties such as subtree size, intersection of paths, and inclusion or disjointness of subtrees are made explicit. We propose a local change to the database kernel, the staircase join, which encapsulates the tree knowledge needed to improve XPath performance. The staircase join operates on an XML encoding that makes this knowledge available at the cost of simple integer operations (e.g., +, <=). We finally report on quite promising experiments with a staircase-join-enhanced main-memory database kernel.
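
    For intuition, here is a minimal Python sketch (not taken from the paper; the node fields and function names are illustrative) of a pre/post-order region encoding in which XPath axis membership reduces to the kind of simple integer comparisons a tree-aware join can exploit:

        from dataclasses import dataclass

        @dataclass
        class Node:
            pre: int   # preorder rank of the node
            post: int  # postorder rank of the node

        def is_descendant(v: Node, ctx: Node) -> bool:
            # v lies in ctx's subtree iff it starts after ctx in preorder
            # and finishes before ctx in postorder
            return ctx.pre < v.pre and v.post < ctx.post

        def is_ancestor(v: Node, ctx: Node) -> bool:
            return v.pre < ctx.pre and ctx.post < v.post

        # toy document a(b(c), d): preorder a,b,c,d; postorder c,b,d,a
        a, b, c, d = Node(0, 3), Node(1, 1), Node(2, 0), Node(3, 2)
        assert is_descendant(c, a) and not is_descendant(d, b)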

    Durability of Wireless Charging Systems Embedded Into Concrete Pavements for Electric Vehicles

    Point clouds are widely used in applications such as 3D modeling, geospatial analysis, and robotics. One key advantage of 3D point cloud data is that, unlike other data formats such as textures, it is independent of viewing angle, surface type, and parameterization. Since each point in a point cloud is independent of the others, point clouds are a highly suitable data source for tasks such as object recognition, scene segmentation, and reconstruction. Point clouds are complex and verbose because of the numerous attributes they contain, many of which are not always necessary for rendering, which makes retrieving and parsing them a heavy task. As sensors become more precise and more widespread, effectively streaming, processing, and rendering the data is also becoming more challenging. In a hierarchical continuous LOD system, data that was previously fetched and rendered for a region may no longer be available when that region is revisited. To address this, we use a non-persistent cache based on a hash map that stores the parsed point attributes. This still has limitations: the dataset must be refetched and reprocessed if the tab or browser is closed and reopened, which persistent caching can address. On the web, persistent caching typically means storing data in server memory or in an intermediate caching server such as Redis; this is not suitable for point cloud data, where large volumes of parsed and processed points must be stored, so point cloud visualization has relied only on non-persistent caching. This thesis aims to improve the performance and suitability of point cloud rendering on the web by reducing the number of read requests to the remote file. We achieve this with a client-side LRU cache combined with Private File Open Space, i.e., a combination of persistent and non-persistent caching. We use a cloud-optimized data format that is better suited for the web and for streaming hierarchical data structures. Our focus is to improve rendering performance with WebGPU by reducing access time and minimizing the amount of data loaded onto the GPU. Preliminary results indicate that our approach significantly improves rendering performance and reduces network requests compared to traditional caching methods using WebGPU.
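
    As a rough illustration of the non-persistent half of such a scheme (this is an assumption for exposition, not the thesis implementation; class and method names are invented), a client-side LRU cache keyed by hierarchy-node ID can serve parsed point data without refetching, with a persistent file store layered underneath to survive page reloads:

        from collections import OrderedDict

        class NodeCache:
            """In-memory LRU cache of parsed point data, keyed by node ID."""

            def __init__(self, capacity: int):
                self.capacity = capacity
                self._items = OrderedDict()   # node_id -> parsed payload

            def get(self, node_id: str):
                if node_id not in self._items:
                    return None               # caller fetches/parses remote data
                self._items.move_to_end(node_id)   # mark as most recently used
                return self._items[node_id]

            def put(self, node_id: str, parsed) -> None:
                self._items[node_id] = parsed
                self._items.move_to_end(node_id)
                if len(self._items) > self.capacity:
                    self._items.popitem(last=False)  # evict least recently used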

    Improving the Performance and Endurance of Persistent Memory with Loose-Ordering Consistency

    Persistent memory provides high-performance data persistence at main memory. Memory writes need to be performed in strict order to satisfy storage consistency requirements and to enable correct recovery from system crashes. Unfortunately, adhering to such a strict order significantly degrades system performance and persistent memory endurance. This paper introduces a new mechanism, Loose-Ordering Consistency (LOC), that satisfies the ordering requirements at significantly lower performance and endurance cost. LOC consists of two key techniques. First, Eager Commit eliminates the need to perform a persistent commit record write within a transaction: the status of all committed transactions can be determined during recovery because the necessary metadata is stored statically with the blocks of data written to memory. Second, Speculative Persistence relaxes the write ordering between transactions by allowing writes to be speculatively written to persistent memory; a speculative write is made visible to software only after its associated transaction commits. To enable this, our mechanism supports the tracking of committed transaction IDs and multi-versioning in the CPU cache. Our evaluations show that LOC reduces the average performance overhead of memory persistence from 66.9% to 34.9% and the memory write traffic overhead from 17.1% to 3.4% on a variety of workloads. This paper has been accepted by IEEE Transactions on Parallel and Distributed Systems.
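
    To make the Eager Commit idea concrete, here is a simplified software model (an assumption for illustration only, not the hardware design described in the paper): if each block written to persistent memory carries its transaction ID and the number of blocks in that transaction, recovery can decide which transactions committed by checking whether all of their blocks reached memory, without any separate commit record:

        from collections import Counter

        def committed_transactions(persisted_blocks):
            """persisted_blocks: iterable of (tx_id, expected_block_count, data)."""
            seen = Counter()
            expected = {}
            for tx_id, count, _data in persisted_blocks:
                seen[tx_id] += 1
                expected[tx_id] = count
            # a transaction is treated as committed iff all its blocks are present
            return {tx for tx, n in seen.items() if n == expected[tx]}

        # transaction 1 (2 blocks) fully persisted; transaction 2 (2 blocks) is not
        blocks = [(1, 2, b"a"), (1, 2, b"b"), (2, 2, b"c")]
        assert committed_transactions(blocks) == {1}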

    SPM management using markov chain based data access prediction

    Leveraging the power of the scratchpad memories (SPMs) available in most embedded systems today is crucial for extracting maximum performance from application programs. While regular accesses such as scalar values and array expressions with affine subscript functions have been tractable for compiler analysis (so that they can be prefetched into the SPM), irregular accesses such as pointer accesses and indexed array accesses have not been easily amenable to compiler analysis. This paper presents an SPM management technique that uses Markov chain based data access prediction for such irregular accesses. Our approach takes advantage of the inherent but hidden reuse in data accesses made by irregular references. We have implemented our proposed approach using an optimizing compiler. In this paper, we also present a thorough comparison of our different dynamic prediction schemes with other SPM management schemes. SPM management using our approaches produces 12.7% to 28.5% improvements in performance across a range of applications with both regular and irregular access patterns, with an average improvement of 20.8%.
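
    The core prediction idea can be sketched as a first-order Markov model over observed data-block accesses (a minimal illustration, not the paper's compiler-based implementation; the class and method names are invented): count block-to-block transitions and prefetch the most likely successor into the SPM.

        from collections import defaultdict

        class MarkovPrefetcher:
            """First-order Markov predictor over data-block accesses."""

            def __init__(self):
                self.transitions = defaultdict(lambda: defaultdict(int))
                self.prev = None

            def record(self, block: int) -> None:
                # count the transition from the previously accessed block
                if self.prev is not None:
                    self.transitions[self.prev][block] += 1
                self.prev = block

            def predict_next(self, block: int):
                successors = self.transitions.get(block)
                if not successors:
                    return None
                # prefetch candidate: most frequently observed successor block
                return max(successors, key=successors.get)

        # feed an observed access trace, then query a prefetch candidate
        p = MarkovPrefetcher()
        for blk in [1, 2, 3, 1, 2, 4, 1, 2, 3]:
            p.record(blk)
        assert p.predict_next(2) == 3   # 2->3 seen twice, 2->4 only once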