24 research outputs found

    Performance Optimization for Distributed Database Based on Cache Investment

    As technology plays an important role in every aspect of life, especially in industry, vast amounts of data are generated by control and monitoring tools and used to support system development. The resulting growth in data size affects the speed and performance of applications. This paper proposes a design strategy that improves application speed and performance by improving the performance of database queries. Key factors that determine computer performance are processor speed, RAM size, and the processor's cache memory strategy. We introduce a solution that is shown to increase query performance and enhance the responsiveness of database queries irrespective of database size. The proposed policy is built on a caching concept called cache investment, a method that combines query optimization with data placement. By looking beyond the performance of a single query, cache investment helps achieve a better hit ratio over the long term in large database systems. The paper discusses and explains the design, architecture, and operation of the proposed policy. The results show how the policy improves database performance, which is especially relevant in today's "big data" environment.
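    Since the abstract describes cache investment only at a high level, the following is a minimal sketch of the underlying idea, not the paper's policy: plan selection subtracts an estimated future caching benefit from each candidate plan's immediate cost, so a plan that populates the cache with frequently referenced tables can win even when it is slower right now. Every name here (Plan, investmentBenefit, choosePlan) and every parameter (savingPerHit, horizonQueries) is a hypothetical stand-in.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

struct Plan {
    std::string id;
    double execCost;                        // estimated cost of running this plan now
    std::vector<std::string> cachedTables;  // tables the plan would pull into the local cache
};

// Hypothetical benefit model: expected future hits on the tables this plan
// caches, times an estimated saving per hit, over a fixed query horizon.
double investmentBenefit(const Plan& p,
                         const std::unordered_map<std::string, double>& accessFreq,
                         double savingPerHit, double horizonQueries) {
    double benefit = 0.0;
    for (const auto& table : p.cachedTables) {
        auto it = accessFreq.find(table);
        if (it != accessFreq.end())
            benefit += it->second * horizonQueries * savingPerHit;
    }
    return benefit;
}

// Choose the plan with the lowest "invested" cost: immediate execution cost
// minus the value of the cache contents the plan leaves behind.
const Plan& choosePlan(const std::vector<Plan>& plans,
                       const std::unordered_map<std::string, double>& accessFreq) {
    return *std::min_element(plans.begin(), plans.end(),
        [&](const Plan& a, const Plan& b) {
            return a.execCost - investmentBenefit(a, accessFreq, 0.4, 100)
                 < b.execCost - investmentBenefit(b, accessFreq, 0.4, 100);
        });
}

int main() {
    // "orders" is hot in the workload history, so the plan that caches it
    // wins despite a higher immediate cost (88 effective vs. 100).
    std::unordered_map<std::string, double> freq{{"orders", 0.8}, {"archive", 0.05}};
    std::vector<Plan> plans{{"scan-remote", 100.0, {}},
                            {"ship-and-cache", 120.0, {"orders"}}};
    std::cout << "chosen: " << choosePlan(plans, freq).id << "\n";
}
```

    The design point this illustrates is the one the abstract emphasizes: the optimizer values a plan by its long-term effect on the hit ratio, not by the cost of a single query.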

    Research of an Entity-component-system architectural pattern designed using the Data-oriented design technique

    The purpose of this article is to present and evaluate an Entity-component-system architecture designed around data. The solution improves the application development process and increases application efficiency. A test application using custom solutions was prepared for the research. The article compares the evaluated techniques with object-oriented programming.
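    As the abstract contains no code, here is a minimal illustrative sketch (in C++, under assumed type names) of the data-oriented flavor of ECS the article evaluates: entities are plain indices, components live in contiguous parallel arrays (structure-of-arrays), and a system is a tight loop over that data rather than a traversal of an object graph.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

using Entity = std::size_t;  // an entity is just an index into component arrays

// Components are stored structure-of-arrays: each field is its own
// contiguous vector, so systems stream through memory linearly.
struct Positions  { std::vector<float> x, y; };
struct Velocities { std::vector<float> dx, dy; };

Entity createEntity(Positions& p, Velocities& v,
                    float x, float y, float dx, float dy) {
    p.x.push_back(x);  p.y.push_back(y);
    v.dx.push_back(dx); v.dy.push_back(dy);
    return p.x.size() - 1;
}

// A "system" is a plain loop over tightly packed component data; no virtual
// dispatch and no pointer chasing between heap-allocated objects.
void movementSystem(Positions& p, const Velocities& v, float dt) {
    for (std::size_t i = 0; i < p.x.size(); ++i) {
        p.x[i] += v.dx[i] * dt;
        p.y[i] += v.dy[i] * dt;
    }
}

int main() {
    Positions pos; Velocities vel;
    createEntity(pos, vel, 0.f, 0.f, 1.f, 2.f);
    createEntity(pos, vel, 5.f, 5.f, -1.f, 0.f);
    movementSystem(pos, vel, 0.016f);
    std::cout << pos.x[0] << ", " << pos.y[0] << "\n";  // 0.016, 0.032
}
```

    The efficiency argument is memory locality: iterating packed x, y, dx, dy arrays keeps the CPU cache and prefetcher effective, which is what the comparison against object-oriented programming measures.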

    Improving Cache Hits On Replacement Blocks Using Weighted LRU-LFU Combinations

    Block replacement is the process of selecting a block of data, or a cache line, to be evicted when a new block must be brought into a cache or memory hierarchy. In computer systems, block replacement policies are used in caching mechanisms, such as CPU caches or disk caches, to determine which blocks are evicted when the cache is full and new data must be fetched. A weighted combination of LRU (Least Recently Used) and LFU (Least Frequently Used) is known as the "LFU2" algorithm. LFU2 is an enhanced caching algorithm that aims to leverage the benefits of both LRU and LFU by considering both the recency and the frequency of item access. In LFU2, each item in the cache is associated with two counters: a usage counter tracking how frequently the item is accessed, and a recency counter tracking how recently it was accessed. These counters are used to compute a combined weight for each item. In the experiments, the weighted LRU-LFU combination increased cache hits to 96.6%, from 94.8% with LRU and 95.5% with LFU.
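    The two counters and the combined weight described above map naturally onto a small eviction policy. The sketch below is illustrative only: the weighting factor alpha, the normalization of the counters, and the class layout are assumptions made here for clarity, not the paper's implementation.

```cpp
#include <cstdint>
#include <iostream>
#include <limits>
#include <string>
#include <unordered_map>

class WeightedCache {
    struct Entry { std::string value; uint64_t freq = 0; uint64_t lastUse = 0; };
    std::unordered_map<std::string, Entry> items;
    std::size_t capacity;
    uint64_t clock = 0;   // logical time, advanced on every operation
    double alpha = 0.5;   // balance between recency (LRU) and frequency (LFU)

public:
    explicit WeightedCache(std::size_t cap) : capacity(cap) {}

    bool get(const std::string& key, std::string& out) {
        ++clock;
        auto it = items.find(key);
        if (it == items.end()) return false;  // miss
        it->second.freq++;                    // LFU usage counter
        it->second.lastUse = clock;           // LRU recency counter
        out = it->second.value;
        return true;
    }

    void put(const std::string& key, const std::string& value) {
        ++clock;
        if (items.size() >= capacity && items.find(key) == items.end()) evictOne();
        Entry& e = items[key];
        e.value = value;
        e.freq++;
        e.lastUse = clock;
    }

private:
    // Evict the item with the lowest combined weight; both terms are scaled
    // to [0, 1] so neither dimension dominates by magnitude alone.
    void evictOne() {
        double worst = std::numeric_limits<double>::max();
        std::string victim;
        for (const auto& [key, e] : items) {
            double recency   = 1.0 - double(clock - e.lastUse) / double(clock);
            double frequency = double(e.freq) / double(clock);
            double weight = alpha * recency + (1.0 - alpha) * frequency;
            if (weight < worst) { worst = weight; victim = key; }
        }
        items.erase(victim);
    }
};

int main() {
    WeightedCache cache(2);
    cache.put("a", "1");
    cache.put("b", "2");
    std::string v;
    cache.get("a", v);    // "a" is now both recent and frequent
    cache.put("c", "3");  // evicts "b", the weakest on both counters
    std::cout << (cache.get("b", v) ? "b hit" : "b evicted") << "\n";
}
```

    An item survives eviction by being strong on either dimension: repeated hits raise its frequency term, while an untouched item's recency term decays until it becomes the victim.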

    Improving GPU Shared Memory Access Efficiency

    Graphics Processing Units (GPUs) often employ shared memory to provide efficient storage for the threads within a computational block. This shared memory is divided into multiple banks to improve performance by enabling concurrent accesses across banks. Conflicts occur when multiple memory accesses target the same bank simultaneously, resulting in serialized access and a corresponding performance reduction. Identifying and eliminating these bank conflicts is therefore critical for achieving high performance on GPUs; however, even for common 1D and 2D access patterns, reasoning about potential bank conflicts can prove difficult. Current GPUs support memory bank accesses with configurable bit-widths; optimizing these bit-widths can yield data layouts with fewer conflicts and better performance. This dissertation presents a framework for bank conflict analysis and automatic optimization. Given static access pattern information for a kernel, the tool computes the number of conflicts for each pattern and then searches for an optimized layout for all shared memory buffers, parameterized by inter-padding, intra-padding, and the bank access bit-width. The experimental results show that static bank conflict analysis is practical and independent of the workload size of a given access pattern. For 13 kernels from 6 benchmarks in the RODINIA and NVIDIA CUDA SDK suites that exhibit shared memory bank conflicts, this approach achieved a 5%-35% improvement in runtime.
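    The core of the approach, statically counting conflicts for a given access pattern under a chosen bank configuration, can be sketched compactly. The function below is a simplified model under assumed parameters (32 banks, a configurable bytes-per-bank width standing in for the bank access bit-width); it ignores the broadcast case where several threads read the same word, and it is not the dissertation's actual framework.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <vector>

// For one warp-wide access, the conflict degree is the maximum number of
// accesses that map to the same bank: 1 means conflict-free, k means the
// hardware serializes the access k ways.
int conflictDegree(const std::vector<uint64_t>& byteAddrs,
                   int numBanks = 32, int bytesPerBank = 4) {
    std::unordered_map<uint64_t, int> perBank;  // bank id -> access count
    for (uint64_t addr : byteAddrs)
        perBank[(addr / bytesPerBank) % numBanks]++;
    int worst = 1;
    for (const auto& kv : perBank) worst = std::max(worst, kv.second);
    return worst;
}

int main() {
    // Column access of a 32x32 float array: thread t reads element [t][0].
    std::vector<uint64_t> column, padded;
    for (uint64_t t = 0; t < 32; ++t) {
        column.push_back(t * 32 * 4);  // stride of 32 words: every address hits bank 0
        padded.push_back(t * 33 * 4);  // intra-padding of one word per row
    }
    std::cout << conflictDegree(column) << "\n";  // 32 (fully serialized)
    std::cout << conflictDegree(padded) << "\n";  // 1  (conflict-free)
}
```

    The example in main reproduces the classic case the padding parameters target: a stride-32 column access serializes 32 ways, while padding each row by one word spreads the same accesses across all banks.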