4 research outputs found

    A Low Cost Two-Tier Architecture Model For High Availability Clusters Application Load Balancing

    This article proposes the design and implementation of a low-cost two-tier architecture model for a high-availability cluster that combines load balancing with shared-storage technology to achieve the scalability of a three-tier architecture for application load balancing, e.g. web servers. The proposed design physically omits dedicated Network File System (NFS) server nodes and instead implements NFS server functionality within the cluster nodes themselves, using Red Hat Cluster Suite (RHCS) together with High Availability (HA) proxy load-balancing technology. The proposed architecture is beneficial wherever a low-cost implementation, in terms of investment in hardware and computing solutions, is required. The system is intended to provide steady service even when individual components, such as the network, storage, or applications, fail unexpectedly. Comment: Load balancing, high availability cluster, web server cluster
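
    The abstract does not include code; the following is only an illustrative sketch of the kind of proxy-level failover such a design relies on: requests go to the first cluster node that passed the most recent health check. The node addresses, ports, and the /health path are placeholders, not taken from the article.

```typescript
// Minimal health-checked front-end proxy for a two-node cluster (illustrative only).
import * as http from "http";

const nodes = [
  { url: "http://10.0.0.11:8080", healthy: true }, // hypothetical cluster node
  { url: "http://10.0.0.12:8080", healthy: true }, // hypothetical cluster node
];

// Periodic health check: mark a node unhealthy if it stops answering.
setInterval(() => {
  for (const node of nodes) {
    http.get(node.url + "/health", (res) => { node.healthy = res.statusCode === 200; })
        .on("error", () => { node.healthy = false; });
  }
}, 5000);

http.createServer((req, res) => {
  const node = nodes.find((n) => n.healthy);
  if (!node) { res.writeHead(503); res.end("no healthy node"); return; }
  const target = new URL(node.url);
  const proxyReq = http.request(
    { hostname: target.hostname, port: target.port, path: req.url, method: req.method, headers: req.headers },
    (upstream) => { res.writeHead(upstream.statusCode ?? 502, upstream.headers); upstream.pipe(res); }
  );
  // A real HA proxy would retry another node here instead of failing the request.
  proxyReq.on("error", () => { res.writeHead(502); res.end(); });
  req.pipe(proxyReq);
}).listen(8000); // front-end port is a placeholder
```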

    APPLIED MACHINE LEARNING IN LOAD BALANCING

    A common way to maintain quality of service on rapidly growing systems is to increase server specifications or to add servers. Server utilisation can be balanced by a load balancer that manages the load across servers. In this paper, we propose a machine learning approach that uses server resource metrics, CPU and memory, to forecast future server load. We identify that the forecasting timespan should be long enough that the dispatcher does not lack information about server load distribution at runtime. Additionally, server profile pulling, resource forecasting, and dispatching should run asynchronously from the load balancer's request listener to minimise response delay. For production use, we recommend that the load balancer provide a friendly user interface so that it is easy to configure, for example by adding server resources as parameter criteria. We also recommend logging server resource data from the beginning, since the more data there is to process, the more accurate the server load prediction will be.
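
    The paper's actual model is not given in the abstract; below is a minimal sketch of forecast-driven dispatch in the spirit described, assuming a simple moving-average predictor over CPU and memory samples. The interface fields, window size, and scoring are illustrative assumptions.

```typescript
// Sketch: forecast each server's CPU/memory load and dispatch to the lowest prediction.
interface ServerProfile {
  id: string;
  cpuHistory: number[]; // recent CPU utilisation samples (0..1)
  memHistory: number[]; // recent memory utilisation samples (0..1)
}

// Forecast the next value as the mean of the last `window` samples (assumed predictor).
function forecast(history: number[], window = 5): number {
  const recent = history.slice(-window);
  return recent.reduce((sum, v) => sum + v, 0) / Math.max(recent.length, 1);
}

// Pick the server whose combined predicted CPU + memory load is lowest.
function pickServer(profiles: ServerProfile[]): ServerProfile {
  return profiles.reduce((best, p) => {
    const score = forecast(p.cpuHistory) + forecast(p.memHistory);
    const bestScore = forecast(best.cpuHistory) + forecast(best.memHistory);
    return score < bestScore ? p : best;
  });
}

// Profile pulling and forecasting would run on a timer, asynchronously from the
// request listener, so each dispatch only reads the latest cached prediction.
const target = pickServer([
  { id: "s1", cpuHistory: [0.7, 0.8, 0.75], memHistory: [0.6, 0.6, 0.65] },
  { id: "s2", cpuHistory: [0.3, 0.35, 0.4], memHistory: [0.5, 0.45, 0.5] },
]);
console.log(target.id); // "s2"
```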

    Design and Implementation of a Load Balancing System on a Server Cluster with Content-Based Priority and Service Availability Control

    Web-based applications are increasingly popular for many of the business processes around us; one prominent example is online student enrollment. In this kind of application, users face two types of pages: pages for filling in registration data and pages that only display information. With an ordinary setup, a single server serves both page types simultaneously, which leads to bottlenecks and a backlog of requests under heavy access. A common remedy is to distribute the workload to other machines so that every incoming request can be served, typically by using Nginx as the load balancer. However, with diverse request types, the time needed to serve each request becomes unstable. An alternative idea is to distribute the workload based on the content the user accesses, i.e. the two page types above. NodeJS acts as the balancer, reads each incoming request, and routes it to the worker appropriate for the URL being accessed. With MongoDB as the database, NodeJS handles data entering the system more consistently. The results show that with NodeJS and content-based priority, response time rises as the number of incoming requests grows, but because each worker is dedicated to serving one type of page, every request is served through to completion.
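
    As a rough illustration of the content-based routing the thesis describes, the sketch below routes requests to different worker pools depending on the URL accessed. The path prefix, worker addresses, and ports are placeholders, not taken from the thesis.

```typescript
// Content-based routing sketch: registration (write) pages and information (read) pages
// are served by separate worker pools, each selected round-robin.
import * as http from "http";

const registrationWorkers = ["http://127.0.0.1:9001"]; // pages that write to the database
const infoWorkers = ["http://127.0.0.1:9002", "http://127.0.0.1:9003"]; // read-only pages
const counters = { reg: 0, info: 0 };

function chooseWorker(url: string): string {
  if (url.startsWith("/register")) {
    return registrationWorkers[counters.reg++ % registrationWorkers.length];
  }
  return infoWorkers[counters.info++ % infoWorkers.length];
}

http
  .createServer((req, res) => {
    const target = new URL(chooseWorker(req.url ?? "/"));
    const proxyReq = http.request(
      { hostname: target.hostname, port: target.port, path: req.url, method: req.method, headers: req.headers },
      (workerRes) => {
        res.writeHead(workerRes.statusCode ?? 502, workerRes.headers);
        workerRes.pipe(res);
      }
    );
    proxyReq.on("error", () => { res.writeHead(502); res.end(); });
    req.pipe(proxyReq);
  })
  .listen(8080); // balancer port is a placeholder
```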

    An adaptive admission control and load balancing algorithm for a QoS-aware Web system

    The main objective of this thesis is the design of an adaptive algorithm for admission control and content-aware load balancing of Web traffic. To set the context of this work, several reviews introduce the reader to the background concepts of Web load balancing, admission control, and the Internet traffic characteristics that may affect the performance of a Web site. The admission control and load balancing algorithm described in this thesis manages the distribution of traffic to a Web cluster based on QoS requirements. The goal of the proposed scheduling algorithm is to avoid situations in which the system provides lower performance than desired due to server congestion. This is achieved through forecasting calculations. Naturally, the increased computational cost of the algorithm results in some overhead, which is why we design an adaptive time-slot scheduling that sets the execution times of the algorithm depending on the burstiness of the traffic arriving at the system. The proposed predictive scheduling algorithm therefore includes adaptive overhead control. Once the scheduling of the algorithm is defined, we design the admission control module based on throughput predictions: the results obtained by several throughput predictors are compared, and one of them is selected for inclusion in our algorithm. The utilisation level that the Web servers will reach in the near future is also forecast and reserved for each service according to its Service Level Agreement (SLA). Our load balancing strategy is based on a classical policy, so a comparison of several classical load balancing policies is also included to determine which of them best fits our algorithm. A simulation model has been designed to obtain the results presented in this thesis.
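
    The thesis's predictor and thresholds are not stated in the abstract; the following is a hedged sketch of SLA-driven admission control in the spirit described: a request is admitted only if the predicted utilisation of its service class stays within the share reserved by its SLA. The moving-average predictor, field names, and numbers are illustrative assumptions.

```typescript
// Sketch: admit a request only if forecast utilisation + request cost fits the SLA reservation.
interface ServiceClass {
  name: string;
  slaShare: number;            // fraction of cluster capacity reserved by the SLA (0..1)
  recentUtilisation: number[]; // observed utilisation samples for this class (0..1)
}

// Forecast near-future utilisation as the mean of recent samples (assumed predictor).
function predictUtilisation(cls: ServiceClass, window = 10): number {
  const recent = cls.recentUtilisation.slice(-window);
  return recent.reduce((s, v) => s + v, 0) / Math.max(recent.length, 1);
}

// Admit only if the prediction plus the request's estimated cost stays within the reservation.
function admit(cls: ServiceClass, estimatedCost: number): boolean {
  return predictUtilisation(cls) + estimatedCost <= cls.slaShare;
}

// Example: a hypothetical "premium" class with 60% reserved capacity and moderate load.
const premium: ServiceClass = { name: "premium", slaShare: 0.6, recentUtilisation: [0.4, 0.45, 0.5] };
console.log(admit(premium, 0.05)); // true: predicted 0.45 + 0.05 <= 0.6
```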