2 research outputs found

    Load balancing by using machine learning in CPU-GPU heterogeneous database management system

    No full text
    Conventional OLTP systems perform poorly on analytical queries. Among existing OLAP database management systems built on heterogeneous architectures, none distributes work using machine learning. This study describes the DOLAP architecture, a high-performance column-oriented database management system developed for shared-memory architectures. In addition, job distribution algorithms based on heuristic and machine learning methods have been developed for the computing hardware with different performance characteristics, such as the CPU and GPU of the server on which the database runs, and their performance has been analyzed.
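    A minimal sketch of how a heuristic CPU/GPU job-distribution rule of the kind described above might look. The query attributes and thresholds here are hypothetical placeholders, not the study's actual algorithm:

```python
# Hypothetical heuristic dispatcher: route large, scan-heavy queries to the GPU
# and small or highly selective queries to the CPU. Thresholds are illustrative.
from dataclasses import dataclass


@dataclass
class Query:
    rows_scanned: int    # estimated number of rows the query touches
    selectivity: float   # fraction of rows surviving filters (0..1)


def choose_device(q: Query, gpu_row_threshold: int = 1_000_000) -> str:
    """Return 'GPU' for large, low-selectivity scans, otherwise 'CPU'."""
    if q.rows_scanned >= gpu_row_threshold and q.selectivity < 0.5:
        return "GPU"
    return "CPU"


print(choose_device(Query(rows_scanned=5_000_000, selectivity=0.1)))  # GPU
print(choose_device(Query(rows_scanned=10_000, selectivity=0.9)))     # CPU
```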

    Machine learning-based load distribution and balancing in heterogeneous database management systems

    No full text
    For dynamic and continuous data analysis, conventional OLTP systems are slow in performance. Today's cutting-edge high-performance computing hardware, such as GPUs, has been used to accelerate data analysis tasks that traditionally run on CPUs in classical database management systems (DBMS). When CPUs and GPUs are used together, the architectural heterogeneity, that is, jointly leveraging hardware with different performance characteristics, creates complex problems that need careful treatment for performance optimization. Load distribution and balancing are crucial problems for DBMSs running on heterogeneous architectures. In this work, focusing on a hybrid CPU-GPU database management system that processes users' queries, we propose heuristic and machine-learning-based (ML-based) load distribution and balancing models. In more detail, we employ multiple linear regression (MLR), random forest (RF), and AdaBoost (Ada) models to dynamically decide the processing unit for each incoming query based on response time predictions for both the CPU and the GPU. The ML-based models outperformed the other algorithms, as well as the CPU-only and GPU-only running modes, by up to 27%, 29%, and 40%, respectively, in overall performance (response time) under intensive real-life workload scenarios. Finally, we propose a hybrid load-balancing model that would be more efficient than the models tested in this work.
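    A minimal sketch of the ML-based dispatch idea described above: train one response-time regressor per device and send each incoming query to the device with the lower predicted response time. The feature set and training data below are synthetic placeholders, and random forests stand in for whichever of the MLR/RF/Ada models is used; the actual implementation may differ:

```python
# Sketch of ML-based load distribution: one response-time regressor per device
# (CPU, GPU); each query is routed to the device with the lower predicted time.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical query features: [rows_scanned, num_joins, num_aggregations]
X = rng.uniform(size=(500, 3))
# Synthetic response times: GPU favoured for scan-heavy work, CPU for join-heavy work.
y_cpu = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.05, size=500)
y_gpu = 0.5 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(scale=0.05, size=500)

cpu_model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y_cpu)
gpu_model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y_gpu)


def dispatch(query_features):
    """Predict the response time on both devices and pick the faster one."""
    f = np.asarray(query_features).reshape(1, -1)
    t_cpu = cpu_model.predict(f)[0]
    t_gpu = gpu_model.predict(f)[0]
    return ("GPU", t_gpu) if t_gpu < t_cpu else ("CPU", t_cpu)


print(dispatch([0.9, 0.1, 0.2]))  # scan-heavy query -> likely GPU
print(dispatch([0.1, 0.9, 0.2]))  # join-heavy query -> likely CPU
```

    In a running system, the training targets would be measured per-device response times for previously executed queries rather than synthetic values, and the models could be retrained periodically as the workload drifts.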