1,463 research outputs found

    LOAD BALANCING - SERVER AVAILABILITY ISSUE

    ABSTRACT Recent research on load balancing techniques for web servers has shown that the load balancing mechanisms used by most popular websites and web applications, which are constantly overloaded with huge volumes of user requests, fail to provide fast and reliable service to their users. There can be many reasons for this, but the most common is server unavailability. Load balancing mechanisms guarantee that load is distributed so that multiple servers always receive roughly equal amounts, but they cannot guarantee the availability of the server to which a request is redirected. A server with minimal load may go down due to routine maintenance or a technical fault, yet the load balancing mechanism may keep forwarding user requests to it because of that minimal load, without checking its availability. This type of problem can lead to downtime of the entire application. In this paper, we evaluate the performance of the proposed method for mitigating the availability issue and compare the results with the existing system.
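    The failure mode described above, forwarding to the least-loaded server without checking whether it is up, can be avoided by filtering the candidate set through a health check before applying the load metric. Below is a minimal Python sketch of that idea; the `Server` structure, load values, and `is_available` probe are illustrative assumptions, not the paper's actual method.

```python
# Availability-aware least-load selection: a server is only a
# candidate if a health probe says it is reachable. All names here
# (Server, is_available) are illustrative.

from dataclasses import dataclass

@dataclass
class Server:
    name: str
    load: int    # current number of active connections
    up: bool     # result of the most recent health probe

def is_available(s: Server) -> bool:
    # In a real balancer this would be a periodic TCP/HTTP probe;
    # here we just read a flag set by a (simulated) prober.
    return s.up

def pick_server(servers):
    candidates = [s for s in servers if is_available(s)]
    if not candidates:
        raise RuntimeError("no available back-end servers")
    return min(candidates, key=lambda s: s.load)

servers = [Server("a", load=2, up=True),
           Server("b", load=0, up=False),   # least loaded, but down
           Server("c", load=5, up=True)]
print(pick_server(servers).name)  # "a": "b" is skipped despite lower load
```

    A plain least-load policy would have chosen server "b" and caused the exact outage scenario the abstract describes; the availability filter costs one list comprehension per dispatch.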

    HadoopT - breaking the scalability limits of Hadoop

    The increasing use of computing resources in our daily lives leads to data generation at an astonishing rate. The computing industry is being repeatedly questioned for its ability to accommodate the unpredictable growth rate of data, which has encouraged the development of cluster-based storage systems. Hadoop is a popular open source framework known for its massive cluster-based storage. Hadoop is widely used in the computer industry because of its scalability, reliability and low cost of implementation. The data storage of a Hadoop cluster is managed by a user-level distributed file system. To provide scalable storage on the cluster, the file system metadata is decoupled and managed by a centralized namespace server known as the NameNode. Compute nodes are primarily responsible for data storage and processing. In this work, we analyze limitations of Hadoop such as the single point of access to the file system and the fault tolerance of the cluster. The entire namespace of the Hadoop cluster is stored on a single centralized server, which restricts growth and data storage capacity. The efficiency and scalability of the cluster depend heavily on the performance of the single NameNode. Based on a thorough investigation of Hadoop's limitations, this thesis proposes a new architecture based on distributed metadata storage. The solution involves a three-layered architecture for Hadoop: the first two layers for metadata storage and a third layer storing the actual data. The solution allows the Hadoop cluster to scale up further through the use of multiple NameNodes. The evaluation demonstrates the effectiveness of the design by comparing its performance with the default Hadoop implementation.
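    The core of any multiple-NameNode design is deciding which metadata server owns which part of the namespace. One generic way to do this is to hash a path's parent directory, so that all entries of one directory live on one server. This is a sketch of that partitioning idea under assumed server names and hash choice, not the thesis's actual three-layer protocol.

```python
# Sketch: route filesystem metadata operations to one of several
# NameNodes by hashing the parent directory of the path, so a
# directory listing touches only a single metadata server.
import hashlib
import posixpath

NAMENODES = ["nn0", "nn1", "nn2"]   # hypothetical metadata servers

def namenode_for(path: str) -> str:
    parent = posixpath.dirname(path) or "/"
    digest = hashlib.md5(parent.encode()).digest()
    return NAMENODES[digest[0] % len(NAMENODES)]

# All files in the same directory map to the same NameNode:
print(namenode_for("/logs/2024/app.log"))
print(namenode_for("/logs/2024/db.log"))   # same server as above
```

    Hash partitioning spreads metadata load without a lookup table, at the cost of making cross-directory operations (e.g. renames) span servers, which is one reason real designs add a coordination layer on top.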

    Building high-performance web-caching servers


    ACCUMULATING SOURCE EXPLOITATION OF VIRTUAL MACHINE FOR LOAD BALANCING IN CLOUD COMPUTING

    Load balancing has assumed a pioneering role in improving the efficiency of cloud computing. Over the past ten years there has been rapid growth in the use of the web and its applications. Cloud computing is also called web-based computing, where computing resources are rented over the web. It is a pay-per-use model in which you pay for the amount of services rented, and it offers several advantages over traditional computing. With cloud computing gaining such huge momentum nowadays, office culture is also changing, as many people now prefer to work from home rather than commute to an office every day. The cloud provides three essential service models: SaaS, IaaS and PaaS. Load balancing is a major problem in the cloud environment and must be addressed so that resources are utilized efficiently. Many load balancing algorithms are available to balance the load of client requests. In this paper we propose an approach that combines the Honeybee Foraging algorithm, the Active Clustering algorithm and Ant Colony Optimization.
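    Of the three algorithms named above, the honeybee-foraging idea is the simplest to illustrate: tasks leaving overloaded VMs act like scout bees, advertising underloaded VMs to subsequent work. The sketch below shows only that rebalancing step with made-up loads and a made-up imbalance threshold; it is not the paper's combined algorithm.

```python
# Honeybee-foraging-style rebalancing sketch: repeatedly move one
# task from the most loaded VM to the least loaded VM until the
# imbalance falls below a threshold. Loads and threshold are
# illustrative assumptions.

def rebalance(vms, max_imbalance=1):
    """vms: dict mapping VM name -> list of tasks (unit cost each)."""
    def load(v):
        return len(vms[v])
    while True:
        hi = max(vms, key=load)
        lo = min(vms, key=load)
        if load(hi) - load(lo) <= max_imbalance:
            break
        vms[lo].append(vms[hi].pop())  # "forage": migrate one task
    return vms

vms = {"vm1": list(range(6)), "vm2": [], "vm3": [0]}
rebalance(vms)
print({v: len(t) for v, t in vms.items()})  # loads now differ by at most 1
```

    In the full honeybee analogy, migrated tasks would also leave behind load information that later requests consult, rather than recomputing the global minimum each step.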

    Predicting performance and scaling behaviour in a data center with multiple application servers

    As web pages become more user-friendly and interactive, we see that objects such as pictures, media files, CGI scripts and databases are used more frequently. This development puts increased stress on the servers due to intensified CPU usage and a growing need for bandwidth to serve the content. At the same time, users expect low latency and high availability. This dilemma can be solved by implementing load balancing between the servers serving content to the clients. Load balancing can provide high availability through redundant server solutions, and reduce latency by dividing the load. This paper describes a comparative study of different load balancing algorithms used to distribute packets among a set of equal web servers serving HTTP content. For packet redirection, a Nortel Application Switch 2208 will be used, and the servers will be hosted on 6 IBM blade servers. We will compare three different algorithms: Round Robin, Least Connected and Response Time. We will look at properties such as response time, traffic intensity and traffic type, and examine how these algorithms perform when these variables change over time. If we can find correlations between traffic intensity and the efficiency of the algorithms, we might be able to deduce a theoretical suggestion for an adaptive load balancing scheme that uses current traffic intensity to select the appropriate algorithm. We will also see how classical queueing models can be used to calculate expected response times, and whether these numbers conform to the experimental results. Our results indicate that there are measurable differences between load balancing algorithms. We also found that the performance of our servers exceeded the predictions of the queueing models in most of the scenarios.
    Master's thesis in Network and System Administration
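    The three algorithms compared in the study can be stated compactly as selection rules over per-server state. The Python sketch below is a generic restatement of each rule; the actual switch firmware's implementations, and the server statistics shown, are assumptions.

```python
# Sketch of the three dispatch policies compared in the study:
# Round Robin, Least Connected, and Response Time.
import itertools

class Dispatcher:
    def __init__(self, servers):
        self.servers = servers                 # list of server dicts
        self._rr = itertools.cycle(servers)    # round-robin rotation

    def round_robin(self):
        # each call advances the rotation by one server
        return next(self._rr)

    def least_connected(self):
        return min(self.servers, key=lambda s: s["conns"])

    def response_time(self):
        # pick the server with the lowest recent mean response time
        return min(self.servers, key=lambda s: s["rt_ms"])

servers = [{"name": "s1", "conns": 3, "rt_ms": 40.0},
           {"name": "s2", "conns": 1, "rt_ms": 55.0},
           {"name": "s3", "conns": 2, "rt_ms": 25.0}]
d = Dispatcher(servers)
print(d.round_robin()["name"])      # s1 (first in rotation)
print(d.least_connected()["name"])  # s2 (fewest connections)
print(d.response_time()["name"])    # s3 (lowest response time)
```

    The example also shows why the policies can disagree: the server with the fewest connections (s2) is not the one with the best response time (s3), which is exactly the gap an adaptive scheme would try to exploit.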

    An optimized Load Balancing Technique for Virtual Machine Migration in Cloud Computing

    Cloud computing (CC) is a subscription-based service providing storage and computing power. Load balancing is one of the most critical components of distributed systems. CC has been a very interesting and important area of research because it is one of the best systems for storing data at reduced cost while keeping it accessible over the internet at all times. Load balancing helps maintain high user retention and resource utilization by ensuring that each computing resource is correctly and properly utilized. This paper describes cloud-based load balancing systems. CC virtualizes hardware such as storage, computing, and security through virtual machines (VMs). Live migration of these machines provides many advantages, including high availability, hardware repair, fault tolerance, and workload balancing. Alongside its various benefits, VM migration is subject to significant security risks during the migration process, which the industry hesitates to accept. In this paper we discuss CC, survey various existing load balancing algorithms and their advantages, and describe the PSO optimization technique.
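    The PSO technique mentioned at the end can be applied to VM load balancing by encoding a task-to-VM assignment as a particle position and minimizing the makespan (finish time of the busiest VM). The sketch below is a self-contained textbook PSO under assumed task lengths, VM speeds, and PSO constants; it is not the paper's specific formulation.

```python
# Sketch: particle swarm optimization over task-to-VM assignments,
# minimizing makespan. Task lengths, VM speeds, and PSO constants
# are illustrative assumptions.
import random
random.seed(42)

TASKS = [8, 3, 5, 7, 2, 6]     # task lengths (assumed units of work)
SPEEDS = [2.0, 1.0, 1.5]       # VM processing speeds (assumed)
N_VM = len(SPEEDS)

def makespan(assign):
    # completion time of the busiest VM under this assignment
    loads = [0.0] * N_VM
    for t, v in zip(TASKS, assign):
        loads[v] += t / SPEEDS[v]
    return max(loads)

def decode(pos):
    # continuous particle position -> discrete VM index per task
    return [int(x) % N_VM for x in pos]

def pso(n_particles=20, iters=100, w=0.7, c1=1.4, c2=1.4):
    dim = len(TASKS)
    pos = [[random.uniform(0, N_VM) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [makespan(decode(p)) for p in pos]
    best_i = min(range(n_particles), key=lambda i: pbest_f[i])
    g, g_f = pbest[best_i][:], pbest_f[best_i]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = makespan(decode(pos[i]))
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < g_f:
                    g, g_f = pos[i][:], f
    return decode(g), g_f

assign, span = pso()
print(assign, round(span, 2))
```

    Rounding a continuous position to a VM index is the standard trick for using continuous PSO on a discrete assignment problem; discrete PSO variants exist but are more involved.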

    Scalable and Reliable File Transfer for Clusters Using Multicast.

    A cluster is a group of computing resources that are connected by a single computer network and are managed as a single system. Clusters potentially have three key advantages over workstations operated in isolation: fault tolerance, load balancing and support for distributed computing. Information sharing among the cluster's resources affects all phases of cluster administration. This thesis describes a new tool for distributing files within clusters. This tool, the Scalable and Reliable File Transfer Tool (SRFTT), uses Forward Error Correction (FEC) and multiple multicast channels to achieve efficient, reliable file transfer on heterogeneous clusters. SRFTT achieves scalability by avoiding feedback from the receivers. Tests show that, for large files, retransmitting recovery information on multiple multicast channels gives significant performance gains when compared to a single retransmission channel.
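    The FEC principle behind SRFTT can be illustrated with the simplest erasure code: one XOR parity block per group of data blocks, which lets a receiver rebuild any single lost block locally instead of sending feedback. SRFTT itself uses a more capable FEC scheme plus multiple multicast channels; this single-parity sketch only demonstrates the feedback-free recovery idea.

```python
# Minimal erasure-coding sketch: k data blocks + 1 XOR parity block.
# A receiver that loses any ONE block of the group can rebuild it
# locally, which is why FEC avoids per-receiver retransmit requests.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def encode(data: bytes, k: int = 4):
    size = -(-len(data) // k)   # ceiling division
    blocks = [data[i * size:(i + 1) * size].ljust(size, b"\0")
              for i in range(k)]
    return blocks + [xor_blocks(blocks)]   # k data blocks + 1 parity

def recover(received, lost_index):
    # XOR of all surviving blocks (including parity) equals the lost one
    return xor_blocks([b for i, b in enumerate(received) if i != lost_index])

blocks = encode(b"scalable reliable multicast!")
lost = 2                                  # pretend block 2 was dropped
rebuilt = recover(blocks, lost)
print(rebuilt == blocks[lost])            # True: recovered without feedback
```

    Real FEC schemes (e.g. Reed-Solomon) generalize this to recover multiple lost blocks per group, which matters when receivers on a heterogeneous cluster drop different packets.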

    Load balancing techniques for I/O intensive tasks on heterogeneous clusters

    Load balancing schemes in a cluster system play a critically important role in developing a high-performance cluster computing platform. Existing load balancing approaches are concerned with the effective usage of CPU and memory resources. I/O-intensive tasks running on a heterogeneous cluster need highly effective usage of global I/O resources; previous CPU- or memory-centric load balancing schemes suffer a significant performance drop under I/O-intensive workloads due to the imbalance of I/O load. To solve this problem, Zhang et al. developed two I/O-aware load-balancing schemes, which consider system heterogeneity and migrate more I/O-intensive tasks from a node with high I/O utilization to those with low I/O utilization. If the workload is memory-intensive in nature, the new method applies a memory-based load balancing policy to assign the tasks. Likewise, when the workload becomes CPU-intensive, their scheme leverages a CPU-based policy as an efficient means to balance the system load. In doing so, the proposed approach maintains the same level of performance as the existing schemes when I/O load is low or well balanced. Results from a trace-driven simulation study show that, when a workload is I/O-intensive, the proposed schemes improve performance with respect to mean slowdown over the existing schemes by up to a factor of 8. In addition, the slowdowns of almost all the policies increase consistently with system heterogeneity.
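    The scheme's central decision, picking the balancing metric that matches the dominant resource demand of a task and then placing it on the least-loaded node under that metric, can be sketched as follows. The node statistics, the demand values, and the simple argmax classification rule are assumptions for illustration, not Zhang et al.'s exact formulas.

```python
# Sketch: choose the load metric (I/O, memory, or CPU utilization)
# according to a task's dominant resource demand, then place the
# task on the node least loaded under that metric.

def classify(task):
    # crude dominant-resource rule; real schemes use measured
    # access rates and migration-cost estimates
    return max(("io", "mem", "cpu"), key=lambda r: task[r])

def place(task, nodes):
    metric = classify(task)
    return min(nodes, key=lambda n: n[metric])

nodes = [{"name": "n1", "io": 0.9, "mem": 0.2, "cpu": 0.3},
         {"name": "n2", "io": 0.1, "mem": 0.8, "cpu": 0.4}]

io_task = {"io": 0.7, "mem": 0.1, "cpu": 0.1}
print(place(io_task, nodes)["name"])  # "n2": lowest I/O utilization
```

    A CPU-centric policy would have sent the I/O-bound task to whichever node had the idlest CPU, regardless of its disk queue, which is precisely the imbalance the I/O-aware scheme corrects.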

    Locality-Aware Request Distribution in Cluster-Based Network Servers

    We consider cluster-based network servers in which a front-end directs incoming requests to one of a number of back-ends. Specifically, we consider content-based request distribution: the front-end uses the content requested, in addition to information about the load on the back-end nodes, to choose which back-end will handle a request. Content-based request distribution can improve locality in the back-ends' main memory caches, increase secondary storage scalability by partitioning the server's database, and provide the ability to employ back-end nodes that are specialized for certain types of requests. As a specific policy for content-based request distribution, we introduce a simple, practical strategy for locality-aware request distribution (LARD). With LARD, the front-end distributes incoming requests in a manner that achieves high locality in the back-ends' main memory caches as well as load balancing. Locality is increased by dynamically subdividing the server's working set over the back-ends. Trace-based simulation results and measurements on a prototype implementation demonstrate substantial performance improvements over state-of-the-art approaches that use only load information to distribute requests. On workloads with working sets that do not fit in a single server node's main memory cache, the achieved throughput exceeds that of the state-of-the-art approach by a factor of two to four. With content-based distribution, incoming requests must be handed off to a back-end in a manner transparent to the client, after the front-end has inspected the content of the request. To this end, we introduce an efficient TCP handoff protocol that can hand off an established TCP connection in a client-transparent manner.
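    The basic LARD policy admits a compact statement: keep a mapping from request target to the back-end currently serving it, and break the mapping only when that back-end is overloaded relative to the others. The sketch below follows the structure of the published algorithm, but the threshold values and data structures are illustrative.

```python
# Sketch of basic LARD (locality-aware request distribution):
# requests for the same target stick to one back-end, preserving
# cache locality, unless that back-end becomes overloaded.

T_LOW, T_HIGH = 2, 4   # illustrative load thresholds

class Lard:
    def __init__(self, n_backends):
        self.load = [0] * n_backends   # active requests per back-end
        self.site = {}                 # target -> assigned back-end

    def _least_loaded(self):
        return min(range(len(self.load)), key=self.load.__getitem__)

    def dispatch(self, target):
        n = self.site.get(target)
        if n is None:
            # first request for this target: any lightly loaded node
            n = self._least_loaded()
        elif (self.load[n] > T_HIGH and min(self.load) < T_LOW) \
                or self.load[n] >= 2 * T_HIGH:
            # assigned node is overloaded: sacrifice locality
            n = self._least_loaded()
        self.site[target] = n
        self.load[n] += 1
        return n

    def done(self, backend):
        # called when a back-end finishes a request
        self.load[backend] -= 1

lard = Lard(2)
print(lard.dispatch("/a.html"))  # assigns a back-end
print(lard.dispatch("/a.html"))  # same back-end: cache hit likely
```

    The two-threshold test is what balances the locality/load trade-off: a target only moves when its node is both loaded above T_HIGH and another node sits below T_LOW (or the node is severely overloaded), so small load differences never thrash the cache assignment.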