
    Synchronized Multi-Load Balancer with Fault Tolerance in Cloud

    In this method, the service of one load balancer can be borrowed or shared among other load balancers when any correction is needed in the estimation of the load. Comment: 8 pages, 10 figures
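    The abstract gives only the core idea, so the following is a rough Python sketch of that idea under stated assumptions: a hypothetical pool of balancers in which a balancer whose load estimate proves too low hands the overflow to peers with spare capacity. The names, capacities and dispatch rule are illustrative, not taken from the paper.

```python
# Illustrative sketch only: peer balancers lend spare capacity when one
# balancer's load estimate proves too low. Not the paper's algorithm.

class Balancer:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # requests/sec this balancer can handle
        self.assigned = 0          # requests/sec currently routed to it

    def spare(self):
        return self.capacity - self.assigned

def dispatch(balancers, incoming_rate):
    """Route an incoming request rate, borrowing peer capacity on overflow."""
    for b in balancers:
        take = min(b.spare(), incoming_rate)
        b.assigned += take
        incoming_rate -= take
        if incoming_rate <= 0:
            return True
    return False  # the whole pool is saturated

pool = [Balancer("lb-1", 100), Balancer("lb-2", 80), Balancer("lb-3", 120)]
print(dispatch(pool, 150))  # lb-1 takes 100, lb-2 covers ("lends") the rest
```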

    Factors Influencing Job Rejections in Cloud Environment

    IT organizations invest heavy capital in large-scale infrastructure and advanced operating platforms. Advances in technology have resulted in the emergence of cloud computing, a promising technology for achieving the aforementioned objective. At peak hours, the number of jobs arriving at the cloud system is normally high, demanding efficient execution and dispatch. An observation carried out in this paper, by capturing a job arrival pattern from a monitoring system, shows that most jobs are rejected because of a lack of efficient technology. Job rejections can be controlled by factors such as job scheduling and load balancing. Therefore, this paper analyzes how the Round Robin (RR) strategy used for job scheduling and the Shortest Job First Scheduling (SJFS) technique used for load balancing reduce job rejections. Further, a proposal for an effective load balancing approach to avoid deadlocks is discussed. Comment: 6 pages, 5 figures, 8 tables
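    As a concrete reference for the two strategies the paper analyzes, below is a minimal Python sketch of Round Robin dispatch and Shortest-Job-First ordering over a queue of jobs. The job model (id, length) and the VM list are illustrative assumptions, not the paper's simulation setup.

```python
from itertools import cycle

# Jobs are (job_id, length); VMs are plain ids. Purely illustrative models.
jobs = [("j1", 40), ("j2", 5), ("j3", 25), ("j4", 10)]
vms = ["vm1", "vm2"]

def round_robin(jobs, vms):
    """Assign jobs to VMs in arrival order, cycling through the VMs."""
    assignment, vm_cycle = {}, cycle(vms)
    for job_id, _ in jobs:
        assignment[job_id] = next(vm_cycle)
    return assignment

def shortest_job_first(jobs):
    """Order jobs by length so short jobs are serviced (not rejected) first."""
    return sorted(jobs, key=lambda job: job[1])

print(round_robin(jobs, vms))    # {'j1': 'vm1', 'j2': 'vm2', 'j3': 'vm1', 'j4': 'vm2'}
print(shortest_job_first(jobs))  # [('j2', 5), ('j4', 10), ('j3', 25), ('j1', 40)]
```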

    Enhanced Load Balancing Approach to Avoid Deadlocks in Cloud

    The state of the art of the technology focuses on data processing to deal with massive amounts of data. Cloud computing is an emerging technology that enables one to accomplish the aforementioned objective, leading towards improved business performance. It comprises users requesting the services of diverse applications from various distributed virtual servers. The cloud should provide resources on demand to its clients with high availability, scalability and reduced cost. Load balancing is one of the essential factors for enhancing the working performance of the cloud service provider. Since the cloud has inherited the characteristics of distributed computing and virtualization, there is a possibility of deadlock occurrence. Hence, in this paper, a load balancing algorithm is proposed that avoids deadlocks among the Virtual Machines (VMs) while processing the requests received from users, by means of VM migration. Further, this paper also provides the anticipated results of implementing the proposed algorithm. Deadlock avoidance increases the number of jobs that the cloud service provider can service, thereby improving its working performance and business. Comment: 5 pages, 4 figures, 5 tables
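    A minimal sketch of the general idea, assuming a simple threshold trigger: when a VM holds more pending requests than it can process, the excess is migrated to the least-loaded VM so that no VM stalls waiting on an overloaded one. The load model and thresholds are assumptions for illustration, not the paper's algorithm.

```python
# Illustrative sketch: when a VM's queue exceeds its capacity (a state that
# could otherwise stall requests), migrate the excess to the least-loaded VM.
# The thresholds and load model are assumptions, not the paper's algorithm.

def rebalance(queues, capacity):
    """queues: {vm: pending requests}. Returns the list of migrations made."""
    migrations = []
    for vm, pending in list(queues.items()):
        while pending > capacity[vm]:
            target = min(queues, key=lambda v: queues[v] / capacity[v])
            if target == vm or queues[target] >= capacity[target]:
                break  # nowhere to migrate; the request must wait
            queues[vm] -= 1
            queues[target] += 1
            pending -= 1
            migrations.append((vm, target))
    return migrations

queues = {"vm1": 7, "vm2": 2, "vm3": 1}
capacity = {"vm1": 4, "vm2": 4, "vm3": 4}
print(rebalance(queues, capacity))  # moves vm1's overflow onto vm2 and vm3
```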

    Load Balancing and Virtual Machine Allocation in Cloud-based Data Centers

    As cloud services see an exponential increase in consumers, the demand for faster processing of data and reliable delivery of services becomes a pressing concern. This puts a lot of pressure on the cloud-based data centers where the consumers’ data is stored, processed and serviced. The rising demand for high-quality services and the constrained environment make load balancing within cloud data centers a vital concern. This project aims to achieve load balancing within the data centers by implementing a Virtual Machine allocation policy based on a consensus algorithm technique. The cloud-based data center system, consisting of Virtual Machines, has been simulated on CloudSim, a Java-based cloud simulator.
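    As a toy illustration of a consensus-style allocation decision (the flavour of consensus and the noise model are assumptions, not the project's actual CloudSim policy), each data-center node votes for the host it believes is least loaded based on its own, possibly stale, view, and the majority choice receives the new Virtual Machine.

```python
from collections import Counter
import random

# Toy consensus-style placement: each node votes for the host it believes is
# least loaded, based on a noisy local view, and the majority choice wins.
# This only illustrates the flavour of the approach, not the project's policy.

hosts = {"host1": 0.62, "host2": 0.35, "host3": 0.41}  # true utilisation

def node_vote(true_load, noise=0.1):
    """One node's vote, computed from a noisy local view of host load."""
    view = {h: u + random.uniform(-noise, noise) for h, u in true_load.items()}
    return min(view, key=view.get)

def allocate_vm(true_load, num_nodes=5):
    votes = Counter(node_vote(true_load) for _ in range(num_nodes))
    host, _ = votes.most_common(1)[0]
    return host

print(allocate_vm(hosts))  # most often 'host2', the genuinely least-loaded host
```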

    Binary PSOGSA for Load Balancing Task Scheduling in Cloud Environment

    In cloud environments, load balancing task scheduling is an important issue that directly affects resource utilization. Load balancing scheduling must be considered in cloud research because of its significant impact on both the back end and the front end. Whenever an effective load balance is achieved in the cloud, good resource utilization follows: an effective load balance means distributing the submitted workload over the cloud VMs in a balanced way, leading to high resource utilization and high user satisfaction. In this paper, we propose a load balancing algorithm, Binary Load Balancing-Hybrid Particle Swarm Optimization and Gravitational Search Algorithm (Bin-LB-PSOGSA), a bio-inspired load balancing scheduling algorithm that efficiently enables the scheduling process to improve the load balance level on VMs. The proposed algorithm finds the best task-to-VM mapping, which is influenced by the length of the submitted workload and the VM processing speed. Results show that the proposed Bin-LB-PSOGSA achieves a better VM load average than the pure Bin-LB-PSO and other benchmark algorithms in terms of load balance level.
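    The hybrid PSOGSA search itself is beyond a short example, but the quantity such a search can optimize is easy to illustrate: a load-balance fitness for a candidate task-to-VM mapping, driven by task lengths and VM processing speeds as the abstract describes. The fitness below (spread of per-VM completion times, lower is better) is an illustrative assumption, not necessarily the paper's exact objective.

```python
from statistics import pstdev

# Candidate mapping produced by a (binary) search such as PSOGSA:
# mapping[i] = index of the VM that task i is assigned to.
task_lengths = [4000, 12000, 6000, 9000, 3000]   # e.g. million instructions
vm_mips = [1000, 2000]                            # VM processing speeds

def load_balance_fitness(mapping, task_lengths, vm_mips):
    """Lower is better: spread of per-VM completion times for this mapping."""
    finish = [0.0] * len(vm_mips)
    for task, vm in enumerate(mapping):
        finish[vm] += task_lengths[task] / vm_mips[vm]
    return pstdev(finish)

print(load_balance_fitness([0, 1, 0, 1, 0], task_lengths, vm_mips))  # 1.25
print(load_balance_fitness([0, 1, 1, 1, 0], task_lengths, vm_mips))  # 3.25, less balanced
```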

    Open-Source Simulators for Cloud Computing: Comparative Study and Challenging Issues

    Resource scheduling in infrastructure as a service (IaaS) is one of the keys to large-scale Cloud applications. Extensive research on all issues in a real environment is extremely difficult because it requires developers to consider the network infrastructure and an environment that may be beyond their control. In addition, network conditions cannot be controlled or predicted. Evaluating the performance of workload models and Cloud provisioning algorithms in a repeatable manner under different configurations is therefore difficult, which is why simulators are developed. To better understand, apply and improve the state of the art of cloud computing simulators, we study four well-known open-source simulators. They are compared in terms of architecture, modeling elements, simulation process, performance metrics and scalability in performance. Finally, a few challenging issues are outlined as future research trends. Comment: 15 pages, 11 figures, accepted for publication in Simulation Modelling Practice and Theory

    Decentralized Edge-to-Cloud Load-balancing: Service Placement for the Internet of Things

    The Internet of Things (IoT) requires a new processing paradigm that inherits the scalability of the cloud while minimizing network latency by using resources closer to the network edge. Building up such flexibility within the edge-to-cloud continuum, a distributed networked ecosystem of heterogeneous computing resources, is challenging. Load-balancing for fog computing becomes a cornerstone for cost-effective system management and operations. This paper studies two optimization objectives and formulates a decentralized load-balancing problem for IoT service placement: (global) IoT workload balance and (local) quality of service, in terms of minimizing the cost of deadline violations, service deployment, and unhosted services. The proposed solution, EPOS Fog, introduces a decentralized multiagent system for collective learning that utilizes edge-to-cloud nodes to jointly balance the input workload across the network and minimize the costs involved in service execution. The agents locally generate possible assignments of requests to resources and then cooperatively select an assignment such that their combination maximizes edge utilization while minimizing service execution cost. Extensive experimental evaluation with realistic Google cluster workloads on various networks demonstrates the superior performance of EPOS Fog in terms of workload balance and quality of service, compared to approaches such as First Fit and exclusively Cloud-based placement. The findings demonstrate how distributed computational resources at the edge can be utilized more cost-effectively by harvesting collective intelligence. Comment: 16 pages and 15 figures
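    A minimal sketch of the two-objective trade-off described above: each agent scores its candidate placements by a weighted sum of global workload imbalance and local execution cost and keeps the best one. The weights, cost model and candidate plans are illustrative assumptions, not the EPOS Fog implementation.

```python
from statistics import pstdev

# Illustrative two-objective plan selection: each candidate plan maps an
# agent's requests onto nodes; score = alpha * global workload imbalance
#                                    + (1 - alpha) * local execution cost.
# Weights, cost model and plans are assumptions, not the EPOS Fog code.

node_load = {"edge1": 3.0, "edge2": 1.0, "cloud": 5.0}   # current node load
exec_cost = {"edge1": 1.0, "edge2": 1.2, "cloud": 3.0}   # per-request cost

candidate_plans = [
    {"edge1": 2, "edge2": 0, "cloud": 0},   # put both requests on edge1
    {"edge1": 1, "edge2": 1, "cloud": 0},   # spread over the two edge nodes
    {"edge1": 0, "edge2": 0, "cloud": 2},   # fall back to the cloud
]

def score(plan, alpha=0.5):
    loads = [node_load[n] + plan[n] for n in node_load]   # global objective
    cost = sum(plan[n] * exec_cost[n] for n in plan)      # local objective
    return alpha * pstdev(loads) + (1 - alpha) * cost

best = min(candidate_plans, key=score)
print(best)   # here the plan spreading load over the two edge nodes wins
```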

    New Trends in Parallel and Distributed Simulation: from Many-Cores to Cloud Computing

    Recent advances in computing architectures and networking are bringing parallel computing systems to the masses, thus increasing the number of potential users of these kinds of systems. In particular, two important technological evolutions are happening at the two ends of the computing spectrum: at the "small" scale, processors now include an increasing number of independent execution units (cores), to the point that a mere CPU can be considered a parallel shared-memory computer; at the "large" scale, the Cloud Computing paradigm allows applications to scale by offering resources from a large pool on a pay-as-you-go model. Multi-core processors and Clouds both require applications to be suitably modified to take advantage of the features they provide. In this paper, we analyze the state of the art of parallel and distributed simulation techniques and assess their applicability to multi-core architectures or Clouds. It turns out that most current approaches exhibit limitations in terms of usability and adaptivity which may hinder their application to these new computing architectures. We propose an adaptive simulation mechanism, based on the multi-agent system paradigm, to partially address some of those limitations. While it is unlikely that a single approach will work well in both settings, we argue that the proposed adaptive mechanism has useful features which make it attractive both on a multi-core processor and in a Cloud system. These features include the ability to reduce communication costs by migrating simulation components, and the support for adding (or removing) nodes to the execution architecture at runtime. We also show that, with the help of an additional support layer, parallel and distributed simulations can be executed on top of unreliable resources. Comment: Simulation Modelling Practice and Theory (SIMPAT), Elsevier, vol. 49 (December 2014)
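    One of the features mentioned, migrating simulation components to reduce communication costs, can be illustrated with a simple heuristic sketch: move an entity to the node it exchanges the most messages with whenever remote traffic clearly dominates local traffic. The 2x threshold and the message-count model are assumptions, not the mechanism proposed in the paper.

```python
from collections import Counter

# Simple heuristic sketch: migrate a simulation entity to the node it talks to
# most, when remote traffic clearly dominates local traffic. The 2x threshold
# and the message-count model are illustrative assumptions.

def migration_target(entity_node, msg_counts, threshold=2.0):
    """msg_counts: Counter mapping peer node -> messages exchanged."""
    local = msg_counts.get(entity_node, 0)
    remote_node, remote = max(
        ((n, c) for n, c in msg_counts.items() if n != entity_node),
        key=lambda item: item[1],
        default=(None, 0),
    )
    if remote_node is not None and remote > threshold * max(local, 1):
        return remote_node      # migrating should cut communication cost
    return entity_node          # stay put

traffic = Counter({"node-A": 10, "node-B": 90, "node-C": 15})
print(migration_target("node-A", traffic))  # 'node-B'
```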

    FlexCloud: A Flexible and Extendible Simulator for Performance Evaluation of Virtual Machine Allocation

    Cloud data centers aim to provide reliable, sustainable and scalable services for all kinds of applications. Resource scheduling is one of the keys to cloud services. To model and evaluate different scheduling policies and algorithms, we propose FlexCloud, a flexible and scalable simulator that enables users to simulate the process of initializing cloud data centers, allocating virtual machine requests and evaluating the performance of various scheduling algorithms. FlexCloud can run on a single computer with a JVM to simulate large-scale cloud environments, with a focus on infrastructure as a service; it adopts agile design patterns to ensure flexibility and extensibility; it models virtual machine migration, which is lacking in existing tools; and it provides user-friendly interfaces for customized configurations and replaying. Compared to existing simulators, FlexCloud combines features for supporting public cloud providers, load-balancing and energy-efficiency scheduling, and has an advantage in computing time and memory consumption that supports large-scale simulations. The detailed design of FlexCloud is introduced and a performance evaluation is provided.
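    The flexibility and extensibility emphasized above usually come from a pluggable allocation-policy interface; the sketch below shows one such strategy-pattern design as a general illustration (in Python, for brevity), not FlexCloud's actual Java API.

```python
from abc import ABC, abstractmethod

# General strategy-pattern illustration of a pluggable VM allocation policy,
# in the spirit of an extensible simulator. Not FlexCloud's actual Java API.

class AllocationPolicy(ABC):
    @abstractmethod
    def select_host(self, hosts, vm_demand):
        """Return the host that should receive a VM with the given demand."""

class LeastLoadedPolicy(AllocationPolicy):
    def select_host(self, hosts, vm_demand):
        feasible = [h for h in hosts if h["free"] >= vm_demand]
        return max(feasible, key=lambda h: h["free"], default=None)

class FirstFitPolicy(AllocationPolicy):
    def select_host(self, hosts, vm_demand):
        return next((h for h in hosts if h["free"] >= vm_demand), None)

hosts = [{"id": 1, "free": 8}, {"id": 2, "free": 16}, {"id": 3, "free": 4}]
for policy in (LeastLoadedPolicy(), FirstFitPolicy()):
    print(type(policy).__name__, policy.select_host(hosts, vm_demand=6)["id"])
```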

    Adaptive Event Dispatching in Serverless Computing Infrastructures

    Serverless computing is an emerging Cloud service model. It is currently gaining momentum as the next step in the evolution of hosted computing, from capacitated machine virtualisation and microservices towards utility computing. The term "serverless" has become a synonym for the entirely resource-transparent deployment model of cloud-based event-driven distributed applications. This work investigates how adaptive event dispatching can improve serverless platform resource efficiency, and contributes a novel approach that allows for better scaling and fitting of the platform's resource consumption to actual demand.
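    A minimal sketch of what fitting resource consumption to actual demand can look like: a dispatcher that scales a pool of warm function instances to the observed event backlog and spreads events over them. The scaling rule and its parameters are illustrative assumptions, not the approach contributed by the paper.

```python
import math
from collections import deque

# Illustrative sketch: scale the pool of warm function instances to match the
# observed event backlog, then dispatch round-robin. The target of roughly
# 10 queued events per instance is an assumed parameter, not the paper's rule.

class Dispatcher:
    def __init__(self, events_per_instance=10, max_instances=50):
        self.queue = deque()
        self.instances = 1
        self.events_per_instance = events_per_instance
        self.max_instances = max_instances

    def submit(self, event):
        self.queue.append(event)

    def autoscale(self):
        target = math.ceil(len(self.queue) / self.events_per_instance) or 1
        self.instances = min(target, self.max_instances)

    def dispatch(self):
        self.autoscale()
        assigned = {i: 0 for i in range(self.instances)}
        for i, _event in enumerate(self.queue):
            assigned[i % self.instances] += 1   # round-robin over warm instances
        self.queue.clear()
        return assigned

d = Dispatcher()
for e in range(25):
    d.submit(e)
print(d.dispatch())   # scales to 3 instances for 25 events: {0: 9, 1: 8, 2: 8}
```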