
    An Energy Aware Resource Utilization Framework to Control Traffic in Cloud Network and Overloads

    Energy consumption in cloud computing arises largely from the unreasonable way in which tasks are scheduled, so energy-aware task scheduling is a major concern: wasted energy reduces the profit margin and increases carbon emissions, which is not environmentally sustainable. Energy-efficient task scheduling solutions are therefore required to support variable resource management, live migration, minimal virtual machine design, overall system efficiency, reduced operating costs, increased system reliability, and environmental protection, all with minimal performance overhead. This paper provides a comprehensive overview of energy-efficient techniques and approaches and proposes an energy-aware resource utilization framework to control traffic and overloads in cloud networks.

    A hybrid algorithm to reduce energy consumption management in cloud data centers

    Cloud environments comprise several physical data centers, each with hundreds or thousands of computers. Virtualization is the key technology that makes cloud computing feasible: it isolates virtual machines so that each can be configured on a number of hosts according to the type of user application, and the resources allocated to a virtual machine can be altered dynamically. Energy-saving methods for data centers fall into three general categories: 1) methods based on load balancing of resources; 2) methods that use hardware facilities for scheduling; and 3) methods that consider the thermal characteristics of the environment. This paper focuses on load balancing methods, which act dynamically because they depend on the current behavior of the system. Building on a detailed review of previous methods, we provide a hybrid method that saves energy by finding a suitable configuration for virtual machine placement and by exploiting features of virtual environments to schedule and balance dynamic loads through live migration.
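
    As a rough illustration of the load-balancing category discussed above, the following sketch shows a threshold-driven rebalancing step that migrates VMs away from overloaded hosts and collects idle hosts as power-off candidates. It is not the paper's algorithm; the `Host` class, the 0.8 utilization threshold, and the heaviest-VM-first policy are illustrative assumptions.

```python
# Illustrative sketch (not the paper's method): threshold-driven live migration
# for load balancing, assuming each host exposes its CPU capacity and the CPU
# demands of the VMs placed on it.
from dataclasses import dataclass, field

@dataclass
class Host:
    capacity: float                             # total CPU capacity (e.g. MIPS)
    vms: list = field(default_factory=list)     # CPU demands of placed VMs

    @property
    def load(self) -> float:
        return sum(self.vms) / self.capacity

def rebalance(hosts, upper=0.8):
    """Migrate VMs off overloaded hosts; report idle hosts as power-off candidates."""
    migrations = []
    for src in hosts:
        while src.load > upper and src.vms:
            vm = max(src.vms)                   # move the heaviest VM first
            candidates = [h for h in hosts
                          if h is not src and (sum(h.vms) + vm) / h.capacity <= upper]
            if not candidates:
                break
            dst = min(candidates, key=lambda h: h.load)   # least-loaded feasible host
            src.vms.remove(vm)
            dst.vms.append(vm)
            migrations.append((vm, src, dst))
    idle = [h for h in hosts if not h.vms]      # candidates for switching off
    return migrations, idle
```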

    A Literature Survey on Resource Management Techniques, Issues and Challenges in Cloud Computing

    Cloud computing is a large-scale form of distributed computing that provides on-demand services for clients. Cloud clients use web browsers, mobile apps, thin clients, or terminal emulators to request and control their cloud resources anytime and anywhere over the network. As more companies shift their data to the cloud and more people become aware of the advantages of cloud storage, the growing amount of infrastructure and data makes management increasingly complex for cloud providers. We survey the state-of-the-art resource management techniques for IaaS (Infrastructure as a Service) in cloud computing, and then set out the major issues in deploying cloud infrastructure that must be addressed to avoid poor service delivery.

    Multi-capacity combinatorial ordering GA in application to cloud resources allocation and efficient virtual machines consolidation

    This paper describes a novel approach that uses genetic algorithms to find optimal solutions to multi-dimensional vector bin packing problems, with the goal of improving cloud resource allocation and Virtual Machine (VM) consolidation. Two algorithms, the Combinatorial Ordering First-Fit Genetic Algorithm (COFFGA) and the Combinatorial Ordering Next-Fit Genetic Algorithm (CONFGA), have been developed and combined. The proposed hybrid algorithm aims to minimise the total number of running servers and the resource wastage per server. The solutions obtained by the new algorithms are compared with the latest solutions from the literature. The results show that COFFGA outperforms previous multi-dimensional vector bin packing heuristics such as Permutation Pack (PP), First Fit (FF) and First Fit Decreasing (FFD) by 4%, 34%, and 39%, respectively. It also outperforms the existing genetic algorithm for multi-capacity resource virtual machine consolidation (RGGA) in both solution quality and robustness. A thorough explanation of the improved performance of the newly proposed algorithm is given.
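
    For orientation only, here is a minimal permutation-encoded genetic algorithm in the spirit of combinatorial-ordering bin packing: each chromosome orders the VMs and a first-fit decoder packs them onto identical servers, with fitness taken as the number of servers opened. It is not the published COFFGA/CONFGA; the crossover, mutation, population sizes and fitness (which in the paper also accounts for per-server wastage) are simplified assumptions.

```python
# Illustrative sketch (not the authors' COFFGA/CONFGA): permutation-encoded GA
# with a first-fit decoder for multi-dimensional vector bin packing.
import random

def first_fit(order, demands, capacity):
    """Decode a VM ordering into per-server load vectors using first fit."""
    servers = []
    for vm in order:
        d = demands[vm]
        for s in servers:
            if all(s[i] + d[i] <= capacity[i] for i in range(len(capacity))):
                for i in range(len(capacity)):
                    s[i] += d[i]
                break
        else:
            servers.append(list(d))     # open a new server
    return servers

def evolve(demands, capacity, pop_size=30, generations=200):
    n = len(demands)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    fitness = lambda ind: len(first_fit(ind, demands, capacity))
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + [g for g in b if g not in a[:cut]]   # order crossover
            i, j = random.sample(range(n), 2)
            child[i], child[j] = child[j], child[i]                # swap mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

# Example: pack 2-dimensional (CPU, RAM) demands onto servers of capacity (1.0, 1.0)
demands = [(0.4, 0.2), (0.3, 0.5), (0.6, 0.3), (0.2, 0.2), (0.5, 0.6)]
best = evolve(demands, capacity=(1.0, 1.0))
print(len(first_fit(best, demands, (1.0, 1.0))), "servers used")
```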

    Flow Scheduling in Data Center Networks with Time and Energy Constraints: A Software-Defined Network Approach

    Flow scheduling in Data Center Networks (DCN) is a hot topic as cloud computing and virtualization become the dominant paradigm for meeting the increasing demand for digital services. Within the cost of a DCN, the energy demand of the network infrastructure represents an important portion. When flows have temporal restrictions, scheduling with path selection to reduce the number of active switching devices is an NP-hard problem, as proven in the literature. In this paper, a heuristic approach to scheduling real-time flows in data centers is proposed, meeting the temporal requirements while reducing the energy consumption of the network infrastructure through a proper selection of paths. The experiments show that the solutions found perform well compared with near-exact solutions obtained from an integer linear programming model. The programmability of the network switches allows flow paths to be scheduled dynamically under software-defined network management. Authors: Martin Fraga, Matías Javier Micheletto, Andres Llinas, Rodrigo Martin Santos, and Paula Lorena Zabala (Universidad de Buenos Aires, Universidad Nacional del Sur, and CONICET, Argentina).
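
    A minimal sketch of the kind of heuristic described, under assumed inputs: flows are taken in earliest-deadline-first order and each is assigned to the candidate path that activates the fewest new switches while still meeting its deadline. The function names, the hop-count delay model and the data in the usage example are illustrative, not the paper's formulation.

```python
# Illustrative sketch (not the paper's heuristic): greedy energy-aware path
# selection for deadline-constrained flows in a data center network.
def schedule_flows(flows, candidate_paths, path_delay):
    """
    flows: list of (flow_id, deadline) tuples
    candidate_paths: flow_id -> list of paths, each path a set of switch ids
    path_delay: function estimating the traversal delay of a path
    """
    active = set()                  # switches already powered on
    plan, rejected = {}, []
    for fid, deadline in sorted(flows, key=lambda f: f[1]):     # EDF order
        feasible = [p for p in candidate_paths[fid] if path_delay(p) <= deadline]
        if not feasible:
            rejected.append(fid)
            continue
        best = min(feasible, key=lambda p: len(p - active))     # fewest new switches
        active |= best
        plan[fid] = best
    return plan, active, rejected

# Usage example with a hop-count delay model (illustrative values only)
plan, active, rejected = schedule_flows(
    flows=[("f1", 3), ("f2", 2)],
    candidate_paths={"f1": [{"s1", "s2"}, {"s3", "s4", "s5"}],
                     "f2": [{"s1", "s3"}, {"s2", "s6"}]},
    path_delay=lambda p: len(p),
)
```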

    Energy-efficient resource allocation scheme based on enhanced flower pollination algorithm for cloud computing data center

    Cloud Computing (CC) has rapidly emerged as a successful paradigm for providing ICT infrastructure. Efficient and environmentally friendly resource allocation mechanisms, responsible for allocating Cloud data center resources to execute user applications submitted as requests, are undoubtedly required. One promising nature-inspired technique for addressing virtualization, consolidation and energy-aware problems is the Flower Pollination Algorithm (FPA). However, FPA suffers from entrapment, and its static control parameters cannot maintain a balance between local and global search, which can lead to high energy consumption and inadequate resource utilization. This research developed an enhanced FPA-based, energy-efficient resource allocation scheme for Cloud data centers that provides efficient resource utilization and energy efficiency with fewer Service Level Agreement (SLA) violations. Firstly, an Enhanced Flower Pollination Algorithm for Energy-Efficient Virtual Machine Placement (EFPA-EEVMP) was developed, in which a Dynamic Switching Probability (DSP) strategy balances local and global search in FPA to minimize energy consumption and maximize resource utilization. Secondly, a Multi-Objective Hybrid Flower Pollination Resource Consolidation (MOH-FPRC) algorithm was developed, in which Local Neighborhood Search (LNS) and Pareto optimisation strategies are combined with a clustering algorithm to avoid local trapping and to address Cloud service providers' conflicting objectives such as energy consumption and SLA violation. Lastly, an Energy-Aware Multi-Cloud Flower Pollination Optimization (EAM-FPO) scheme was developed for distributed multi-Cloud data center environments, in which Power Usage Effectiveness (PUE) and a migration controller are used to obtain the optimal solution in the larger search space of the CC environment. The scheme was tested on the MultiRecCloudSim simulator and the results were compared with OEMACS, ACS-VMC, and EA-DP. The scheme improved data center energy consumption by 20.5%, resource utilization by 23.9%, and SLA violation by 13.5%. The combined algorithms reduced entrapment and maintained the balance between local and global search. The findings therefore show that the developed scheme is effective at minimizing energy consumption while improving data center resource allocation with minimal SLA violation.
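
    To make the Dynamic Switching Probability idea concrete, the sketch below shows a generic flower-pollination-style search in which the switching probability decays over the iterations, shifting effort from global to local pollination. It is not the thesis implementation: the linear decay schedule, the Gaussian step standing in for a Lévy flight, and the toy objective are all assumptions.

```python
# Illustrative sketch (not EFPA-EEVMP): flower-pollination search with a
# dynamic switching probability between global and local pollination.
import random

def fpa_minimize(cost, dim, bounds, n_flowers=20, iterations=200):
    lo, hi = bounds
    flowers = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_flowers)]
    best = min(flowers, key=cost)
    for t in range(iterations):
        p = 0.9 - 0.5 * t / iterations          # dynamic switching probability
        for i, x in enumerate(flowers):
            if random.random() < p:             # global pollination toward the best
                step = [random.gauss(0, 1) * (best[j] - x[j]) for j in range(dim)]
            else:                               # local pollination between two peers
                a, b = random.sample(flowers, 2)
                eps = random.random()
                step = [eps * (a[j] - b[j]) for j in range(dim)]
            cand = [min(hi, max(lo, x[j] + step[j])) for j in range(dim)]
            if cost(cand) < cost(x):            # greedy acceptance
                flowers[i] = cand
        best = min(flowers + [best], key=cost)
    return best

# Toy objective standing in for an energy/utilization cost model
best = fpa_minimize(cost=lambda v: sum(x * x for x in v), dim=5, bounds=(-5, 5))
```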

    Sharing with Live Migration Energy Optimization Task Scheduler for Cloud Computing Datacentres

    The use of cloud computing is expanding and is becoming the driver for innovation in companies serving their customers around the world. Much attention has recently been drawn to the huge amount of energy consumed within datacentres, while the energy consumption of the remaining cloud components has been neglected. Energy consumption should therefore be reduced in order to minimize performance losses, achieve the target battery lifetime, satisfy performance requirements, minimize power consumption, minimize CO2 emissions, maximize profit, and maximize resource utilization. Power consumption in cloud computing datacentres can be reduced in many ways, such as managing or utilizing the resources, controlling redundancy, relocating datacentres, improving applications, or dynamic voltage and frequency scaling. One of the most efficient ways to reduce power is to use a scheduling technique that finds the best task execution order based on user demands, with minimum execution time and minimal use of cloud resources.

    Designing an effective and efficient task scheduling technique driven by user requirements is a considerable challenge in a cloud environment. Scheduling is not easy because the datacentre contains dissimilar hardware with different capacities; to improve resource utilization, an efficient scheduling algorithm must be applied to the incoming tasks to achieve efficient allocation of computing resources and power optimization. The scheduler must maintain the balance between Quality of Service and fairness among jobs so that efficiency can be increased. The aim of this project is to propose a novel method for optimizing energy usage in cloud computing environments that satisfies the Quality of Service (QoS) requirements and the regulations of the Service Level Agreement (SLA). Applying a power- and resource-optimised scheduling algorithm helps control and improve the mapping between datacentre servers and incoming tasks, achieving optimal deployment of datacentre resources, good computing efficiency, network load minimization, and reduced energy consumption in the datacentre.

    This thesis explores energy-aware cloud computing datacentre structures with diverse scheduling heuristics and proposes a novel job scheduling technique with sharing and live migration based on file locality (SLM), aiming to maximize efficiency and save the power consumed in the datacentre through better bandwidth utilization, while minimizing the processing time and the total system makespan. The proposed SLM energy-efficient scheduling strategy has four basic algorithms: 1) job classifier, 2) SLM job scheduler, 3) dual-fold VM virtualization, and 4) VM threshold margins and consolidation. The SLM job classifier categorises the incoming set of user requests to the datacentre into two queues based on the request type and the source file needed to process them; the processing time of each job fluctuates with the job type and its number of instructions. The second algorithm, the SLM scheduler, dispatches jobs from both queues according to job arrival time and allocates each job to the most appropriate available VM based on job similarity, according to a predefined synchronized job characteristic table (SJC). The SLM scheduler uses a replicated-host infrastructure to avoid wasting energy on idle hosts: it maximizes the utilization of the basic hosts for as long as the system can handle the workflow, while keeping the replicated hosts switched off. The third algorithm, the dual-fold VM algorithm, divides the active VMs into top- and low-level slots so that similar jobs can be allocated concurrently, which maximizes host utilization under high workload and reduces the total makespan. The VM threshold margins and consolidation algorithm sets upper and lower threshold margins as triggers for VM consolidation and load balancing among running VMs, and deploys a continuous detection scheme for overloaded and underutilized VMs to maintain and control the system's workload balance. Consolidation and load balancing are achieved through a series of dynamic live migrations, which provide auto-scaling for the servers within the datacentres.

    The thesis begins with an overview of cloud computing, then reviews conceptual cloud resource management strategies together with a classification of scheduling heuristics. A competitive analysis of energy-efficient scheduling algorithms and related work follows. The novel SLM algorithm is proposed and evaluated using the CloudSim toolkit under a number of scenarios; compared with Particle Swarm Optimization (PSO) and the Ant Colony Algorithm (ACO), the results show a significant improvement in energy usage and in the total makespan, the total time needed to finish processing all the tasks.
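
    As a simplified illustration of the first SLM stage, the sketch below classifies incoming jobs into two queues while preserving arrival order; it uses only the required source file as the routing criterion, whereas the thesis combines request type and source file. The `Job` fields and the `primary_files` parameter are hypothetical, not the thesis code.

```python
# Illustrative sketch (not the SLM implementation): split incoming requests
# into two queues by file locality so that similar jobs can be dispatched to
# the same VMs later.
from collections import deque
from dataclasses import dataclass

@dataclass
class Job:
    job_id: str
    job_type: str        # e.g. "compute" or "data"
    source_file: str     # file required to process the job
    arrival_time: float

def classify(jobs, primary_files):
    """Route jobs whose source file is locally replicated to queue_a,
    everything else to queue_b; both queues keep arrival order."""
    queue_a, queue_b = deque(), deque()
    for job in sorted(jobs, key=lambda j: j.arrival_time):
        if job.source_file in primary_files:
            queue_a.append(job)
        else:
            queue_b.append(job)
    return queue_a, queue_b
```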