
    AWSQ: an approximated web server queuing algorithm for heterogeneous web server cluster

    With the rising popularity of web-based applications, cluster-based web servers have become the primary and most consistent resource in the infrastructure of the World Wide Web. Managing cluster performance is a serious task, particularly for dynamic content and database-driven applications under heavy load. Without efficient mechanisms, an overloaded web server cannot deliver good performance. In clusters, this overload condition can be avoided by load balancing mechanisms that share the load among the available web servers. Existing load balancing mechanisms designed for static content suffer substantial performance degradation under database-driven and dynamic content. The most serviceable load balancing approaches, each providing better results under specific conditions, are Web Server Queuing (WSQ), Server Content based Queue (QSC) and Remaining Capacity (RC). Considering this, we propose an approximated Web Server Queuing mechanism for web server clusters, together with an analytical model for calculating the load of a web server. Requests are classified by their service time, and the number of outstanding requests at each web server is tracked to achieve better performance. The approximated load of each web server is then used for load balancing. The experimental results illustrate the effectiveness of the proposed mechanism, improving the mean response time, throughput and drop rate of the server cluster.
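
    As a rough illustration of the idea, the sketch below keeps a per-class count of outstanding requests at each server, weights those counts by an assumed per-class service time, and dispatches to the server with the lowest approximated load. The two request classes and their weights are illustrative assumptions, not the paper's analytical model.

```python
# Illustrative service-time weights per request class (seconds); the paper's
# actual classification and analytical load model are not reproduced here.
SERVICE_WEIGHT = {"static": 0.01, "dynamic": 0.25}

class ApproxLoadDispatcher:
    """Pick the server whose approximated load (outstanding requests
    weighted by their estimated service time) is currently the lowest."""

    def __init__(self, servers):
        # outstanding[server][request_class] -> number of in-flight requests
        self.outstanding = {s: {c: 0 for c in SERVICE_WEIGHT} for s in servers}

    def _approx_load(self, server):
        return sum(n * SERVICE_WEIGHT[c] for c, n in self.outstanding[server].items())

    def dispatch(self, request_class):
        # Choose the least-loaded server and record the new outstanding request.
        server = min(self.outstanding, key=self._approx_load)
        self.outstanding[server][request_class] += 1
        return server

    def complete(self, server, request_class):
        # Called when the server reports that the request has finished.
        self.outstanding[server][request_class] -= 1
```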

    APPLIED MACHINE LEARNING IN LOAD BALANCING

    A common way to maintain quality of service on rapidly growing systems is to increase server specifications or to add servers. Server utilization can be balanced by a load balancer that manages server loads. In this paper, we propose a machine learning approach that uses server CPU and memory metrics to forecast future server loads. We find that the forecasting horizon should be long enough that the dispatcher is not left without information about the server load distribution at runtime. Additionally, server profile pulling, server resource forecasting, and dispatching should run asynchronously with the load balancer's request listener to minimize response delay. For production use, we recommend that the load balancer provide a user-friendly interface for configuration, for example adding server resources as parameter criteria. We also recommend logging server resource data from the beginning, because the more data there is to process, the more accurate the prediction of server load will be.
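
    The sketch below illustrates the general shape of such a balancer: a background thread asynchronously pulls CPU/memory profiles from each server, a simple linear extrapolation stands in for the forecasting model, and the dispatcher picks the server with the lowest forecast load. The probe interface, horizon and the extrapolation itself are assumptions for illustration, not the model proposed in the paper.

```python
import threading
import time
from collections import deque

class ForecastingBalancer:
    """Asynchronously samples CPU/memory usage per server and forecasts the
    near-future load with a simple linear extrapolation over recent samples."""

    def __init__(self, servers, probe, horizon=30, window=20, interval=5):
        self.servers = servers
        self.probe = probe            # probe(server) -> (cpu_percent, mem_percent)
        self.horizon = horizon        # seconds ahead to forecast
        self.interval = interval      # sampling period in seconds
        self.history = {s: deque(maxlen=window) for s in servers}
        threading.Thread(target=self._poll_loop, daemon=True).start()

    def _poll_loop(self):
        # Runs independently of the request listener, as recommended above.
        while True:
            for s in self.servers:
                cpu, mem = self.probe(s)
                self.history[s].append((time.time(), max(cpu, mem)))
            time.sleep(self.interval)

    def _forecast(self, server):
        pts = list(self.history[server])
        if len(pts) < 2:
            return 0.0
        (t0, y0), (t1, y1) = pts[0], pts[-1]
        slope = (y1 - y0) / (t1 - t0) if t1 > t0 else 0.0
        return y1 + slope * self.horizon   # extrapolated load at t1 + horizon

    def pick_server(self):
        # Dispatch to the server with the lowest forecast resource usage.
        return min(self.servers, key=self._forecast)
```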

    Performance Analysis of IO Intensive Task Allocation Strategies for Heterogeneous Web Servers

    The current rate of growth of the World Wide Web has led to an explosion in internet traffic for many popular websites. To overcome the resulting fall in quality of service for their customers, an efficient approach is to use a heterogeneous cluster of nodes, each replicating the entire site data. In a centralized system, a master node load balances the user requests and allocates them to the appropriate node. A web application that mainly provides file sharing services presents a workload whose tasks are largely retrieval based and hence IO intensive. To address the allocation problem for these tasks, several IO-aware policies have been designed and compared with respect to standard performance metrics. The study shows that considering the IO nature of tasks yields significantly better results than other existing algorithms.
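
    A minimal sketch of one such IO-aware policy is given below: each node's pending IO volume is divided by its (heterogeneous) IO bandwidth, and a task is allocated to the node with the smallest expected completion time. The bandwidth figures and task-size estimates are illustrative; the paper compares several policies of this kind.

```python
class IOAwareAllocator:
    """Allocate an incoming retrieval task to the node expected to finish its
    pending IO soonest. Node IO bandwidths and task-size estimates are
    illustrative inputs, not the paper's exact policy set."""

    def __init__(self, io_bandwidth_mb_s):
        # io_bandwidth_mb_s: {node: IO bandwidth in MB/s} for a heterogeneous cluster
        self.bandwidth = io_bandwidth_mb_s
        self.pending_mb = {node: 0.0 for node in io_bandwidth_mb_s}

    def allocate(self, task_size_mb):
        # Expected completion time = (queued IO + this task) / node IO bandwidth.
        def eta(node):
            return (self.pending_mb[node] + task_size_mb) / self.bandwidth[node]
        node = min(self.bandwidth, key=eta)
        self.pending_mb[node] += task_size_mb
        return node

    def finished(self, node, task_size_mb):
        # Called when the node reports the retrieval task has completed.
        self.pending_mb[node] -= task_size_mb
```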

    Internet of Things-aided Smart Grid: Technologies, Architectures, Applications, Prototypes, and Future Research Directions

    Traditional power grids are being transformed into Smart Grids (SGs) to address the issues in the existing power system caused by uni-directional information flow, energy wastage, growing energy demand, and reliability and security concerns. SGs offer bi-directional energy flow between service providers and consumers, involving power generation, transmission, distribution and utilization systems. SGs employ a very large number of devices for monitoring, analysis and control of the grid, deployed at power plants, distribution centers and in consumers' premises. Hence, an SG requires connectivity, automation and tracking of such devices, which is achieved with the help of the Internet of Things (IoT). IoT helps SG systems support various network functions throughout the generation, transmission, distribution and consumption of energy by incorporating IoT devices (such as sensors, actuators and smart meters), and by providing the connectivity, automation and tracking for such devices. In this paper, we provide a comprehensive survey of IoT-aided SG systems, covering existing architectures, applications and prototypes. The survey also highlights the open issues, challenges and future research directions for IoT-aided SG systems.

    Agent Based Test and Repair of Distributed Systems

    This article demonstrates how to use intelligent agents for testing and repairing a distributed system whose elements may or may not have embedded BIST (Built-In Self-Test) and BISR (Built-In Self-Repair) facilities. Agents are software modules that perform monitoring, diagnosis and repair of faults. Together they form a society whose members communicate, set goals and solve tasks. An experimental solution is presented, and future developments of the proposed approach are explored.
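
    The skeleton below sketches what one such agent might look like: a monitor/diagnose/repair cycle that prefers the element's BIST/BISR facilities when present, falls back to software testing and repair otherwise, and broadcasts unresolved diagnoses to the rest of the agent society. The hook names are assumptions, not the article's actual interfaces.

```python
class MaintenanceAgent:
    """Skeleton of a monitoring/diagnosis/repair agent for one system element.
    The hooks below (has_bist, run_bist, software_test, ...) are assumed names
    standing in for whatever interfaces the real elements expose."""

    def __init__(self, element, peers):
        self.element = element
        self.peers = peers            # other agents in the society

    def cycle(self):
        fault = self.monitor()
        if fault is None:
            return
        diagnosis = self.diagnose(fault)
        if not self.repair(diagnosis):
            self.broadcast(diagnosis)   # ask the agent society for help

    def monitor(self):
        # Prefer the element's built-in self-test when it exists.
        if self.element.has_bist():
            return self.element.run_bist()
        return self.element.software_test()

    def diagnose(self, fault):
        return self.element.locate(fault)

    def repair(self, diagnosis):
        # Use built-in self-repair when available, otherwise repair in software.
        if self.element.has_bisr():
            return self.element.run_bisr(diagnosis)
        return self.element.software_repair(diagnosis)

    def broadcast(self, diagnosis):
        for peer in self.peers:
            peer.receive(self.element, diagnosis)
```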

    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

    Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services so as to achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of the users consuming their services; hence the load coordination must happen automatically, and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demand. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, offering significant performance gains in response time and cost savings under dynamic workload scenarios. Comment: 20 pages, 4 figures, 3 tables, conference paper
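
    A toy sketch of the kind of load coordination being advocated is shown below: when a data center crosses an overload threshold, the coordinator leases capacity from the federated peer that has spare capacity at the lowest cost. The thresholds, costs and placement granularity are illustrative assumptions; the paper's own evaluation was carried out with the CloudSim toolkit.

```python
class FederatedCoordinator:
    """Toy load-coordination policy for a federation of data centers: when a
    site's utilization crosses a threshold, lease capacity from the peer with
    the cheapest spare capacity. All figures here are illustrative."""

    def __init__(self, sites, overload=0.85):
        # sites: {name: {"capacity": total VMs, "used": VMs in use, "cost": $ per VM-hour}}
        self.sites = sites
        self.overload = overload

    def _utilization(self, name):
        s = self.sites[name]
        return s["used"] / s["capacity"]

    def place(self, home, vms_needed):
        """Place vms_needed VMs for a request arriving at the `home` site."""
        if self._utilization(home) < self.overload:
            self.sites[home]["used"] += vms_needed
            return home
        # Overloaded: pick the cheapest federated peer with enough spare capacity.
        peers = [n for n in self.sites if n != home and
                 self.sites[n]["capacity"] - self.sites[n]["used"] >= vms_needed]
        if not peers:
            return None                      # federation-wide saturation
        target = min(peers, key=lambda n: self.sites[n]["cost"])
        self.sites[target]["used"] += vms_needed
        return target
```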