
    Cloud computing resource scheduling and a survey of its evolutionary approaches

    A disruptive technology fundamentally transforming the way computing services are delivered, cloud computing offers information and communication technology users a new level of convenience: resources as services via the Internet. Because the cloud provides a finite pool of virtualized on-demand resources, scheduling them optimally has become an essential and rewarding research topic, and a trend of applying Evolutionary Computation (EC) algorithms to it is emerging rapidly. By analyzing the cloud computing architecture, this survey first presents a two-level taxonomy of cloud resource scheduling. It then paints a landscape of the scheduling problem and its solutions. Following the taxonomy, a comprehensive survey of state-of-the-art approaches is presented systematically. Looking forward, challenges and potential future research directions are examined, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is still in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.
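
    To make the EC angle concrete, the following is a minimal sketch (not taken from the survey) of a toy genetic algorithm that evolves task-to-VM assignments to minimize makespan; the ETC matrix, population size, and operators are illustrative assumptions.

```python
# Toy EC scheduler: a genetic algorithm over task-to-VM assignments.
# All names and parameters are illustrative, not from the surveyed paper.
import random

def makespan(assignment, exec_time):
    # exec_time[t][v] = runtime of task t on VM v (hypothetical ETC matrix)
    loads = [0.0] * len(exec_time[0])
    for task, vm in enumerate(assignment):
        loads[vm] += exec_time[task][vm]
    return max(loads)

def evolve(exec_time, pop_size=30, generations=200, mutation_rate=0.1):
    n_tasks, n_vms = len(exec_time), len(exec_time[0])
    pop = [[random.randrange(n_vms) for _ in range(n_tasks)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: makespan(ind, exec_time))
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_tasks)         # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:        # mutate one gene
                child[random.randrange(n_tasks)] = random.randrange(n_vms)
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda ind: makespan(ind, exec_time))

if __name__ == "__main__":
    random.seed(0)
    etc = [[random.uniform(5, 50) for _ in range(4)] for _ in range(20)]  # 20 tasks, 4 VMs
    best = evolve(etc)
    print("best makespan:", round(makespan(best, etc), 2))
```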

    Computing server power modeling in a data center: survey, taxonomy and performance evaluation

    Data centers are large-scale, energy-hungry infrastructures serving the increasing computational demands of an ever more connected world of smart cities. The emergence of advanced technologies such as cloud-based services, the Internet of Things (IoT), and big data analytics has accelerated the growth of global data centers, leading to high energy consumption. This upsurge in the energy consumption of data centers not only drives up operational and maintenance costs but also has an adverse effect on the environment. Dynamic power management in a data center environment requires knowledge of the correlation between system- and hardware-level performance counters and power consumption. Power consumption modeling captures this correlation and is crucial for designing energy-efficient optimization strategies based on resource utilization. Several power models have been proposed and used in the literature; however, they have been evaluated using different benchmarking applications, power measurement techniques, and error calculation formulas on different machines. In this work, we present a taxonomy and evaluation of 24 software-based power models using a unified environment, benchmarking applications, power measurement technique, and error formula, with the aim of achieving an objective comparison. We use different server architectures to assess the impact of heterogeneity on the models' comparison. The performance analysis of these models is elaborated in the paper.
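
    As an illustration of the kind of software-based model such an evaluation covers, here is a minimal sketch, assuming a simple linear utilization-to-power relationship, of fitting P(u) = P_idle + b·u by least squares and scoring it with a MAPE-style error formula; the measurements below are fabricated for the example.

```python
# Utilization-based linear power model fitted with ordinary least squares.
# Sample data is fabricated purely for illustration.

def fit_linear_power_model(util, power):
    # Least-squares fit of power = a + b * util (a ~ idle power, b ~ dynamic range).
    n = len(util)
    mean_u = sum(util) / n
    mean_p = sum(power) / n
    b = sum((u - mean_u) * (p - mean_p) for u, p in zip(util, power)) / \
        sum((u - mean_u) ** 2 for u in util)
    a = mean_p - b * mean_u
    return a, b

def mape(actual, predicted):
    # Mean absolute percentage error between measured and predicted power.
    return 100.0 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical measurements: CPU utilization (0..1) vs. wall power in watts.
u = [0.05, 0.20, 0.40, 0.60, 0.80, 0.95]
p = [98.0, 120.0, 151.0, 178.0, 207.0, 231.0]

a, b = fit_linear_power_model(u, p)
pred = [a + b * x for x in u]
print(f"P(u) ~= {a:.1f} + {b:.1f}*u  (MAPE = {mape(p, pred):.2f}%)")
```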

    Software Defined Networks based Smart Grid Communication: A Comprehensive Survey

    The current power grid is no longer a feasible solution due to ever-increasing user demand for electricity, aging infrastructure, and reliability issues, and thus requires transformation to a better grid, a.k.a. the smart grid (SG). The key features that distinguish the SG from the conventional electrical power grid are its capability to perform two-way communication, demand-side management, and real-time pricing. Despite all the advantages the SG will bring, there are certain issues specific to the SG communication system. For instance, network management in current SG systems is complex, time consuming, and done manually. Moreover, the SG communication (SGC) system is built on vendor-specific devices and protocols; current SG systems are therefore not protocol independent, which leads to interoperability issues. Software-defined networking (SDN) has been proposed to monitor and manage communication networks globally. This article serves as a comprehensive survey of SDN-based SGC. We first discuss a taxonomy of the advantages of SDN-based SGC. We then discuss SDN-based SGC architectures, along with case studies. The article provides an in-depth discussion of routing schemes for SDN-based SGC, as well as a detailed survey of security and privacy schemes applied to it. We furthermore present challenges, open issues, and future research directions related to SDN-based SGC.
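
    As a purely conceptual sketch, and without using any real SDN controller API, the toy "controller" below installs priority flow rules so that latency-critical grid control traffic is matched ahead of bulk metering data; the traffic classes, priorities, and forwarding actions are illustrative assumptions.

```python
# Toy centralized flow table: higher-priority rules are matched first,
# and a table miss is punted to the "controller". Class names (GOOSE
# protection messages, AMI metering) and actions are illustrative.

FLOW_TABLE = []  # each rule: (match_fn, priority, action)

def install_rule(match_fn, priority, action):
    FLOW_TABLE.append((match_fn, priority, action))
    FLOW_TABLE.sort(key=lambda rule: -rule[1])   # highest priority first

def handle_packet(pkt):
    for match, _prio, action in FLOW_TABLE:
        if match(pkt):
            return action(pkt)
    return "send_to_controller"                  # table miss

# Hypothetical traffic classes for a smart-grid communication network.
install_rule(lambda p: p.get("type") == "goose", priority=100,
             action=lambda p: "forward_port_1_expedited")
install_rule(lambda p: p.get("type") == "ami",   priority=10,
             action=lambda p: "forward_port_2_best_effort")

print(handle_packet({"type": "goose"}))   # -> forward_port_1_expedited
print(handle_packet({"type": "scada"}))   # -> send_to_controller
```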

    Investigation of Cloud Scheduling Algorithms for Resource Utilization Using CloudSim

    A compute Cloud comprises a distributed set of High-Performance Computing (HPC) machines that provide on-demand computing services to remote users over the Internet. Clouds are capable of providing an optimal solution to the ever-increasing computation and storage demands of large scientific HPC applications. To attain good computing performance, the mapping of Cloud jobs to compute resources is a crucial process. Several efficient Cloud scheduling heuristics are available; however, selecting an appropriate scheduler for a given environment (i.e., job and machine heterogeneity) and scheduling objectives (such as minimized makespan, higher throughput, increased resource utilization, and load-balanced mapping) is still a difficult task. In this paper, we consider ten important scheduling heuristics (i.e., opportunistic load balancing, proactive simulation-based scheduling and load balancing, proactive simulation-based scheduling and enhanced load balancing, minimum completion time, Min-Min, load balance improved Min-Min, Max-Min, resource-aware scheduling, task-aware scheduling, and Sufferage) and perform an extensive empirical study to gain insight into their scheduling mechanisms and how well they attain the major scheduling objectives. This study assumes that the Cloud job pool consists of a collection of independent, compute-intensive tasks that are statically scheduled to minimize the total execution time of a workload. The experiments are performed with two synthetic workloads and the benchmark GoCJ workload on the well-known Cloud simulator CloudSim. The empirical study presents a detailed analysis of, and insights into, the circumstances that require a load-balanced scheduling mechanism to improve overall execution performance in terms of makespan, throughput, and resource utilization. The outcomes reveal that the Sufferage and task-aware scheduling algorithms produce the minimum makespan for Cloud jobs. However, these two heuristics are not efficient enough to exploit the full computing capabilities of Cloud virtual machines.
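
    For readers unfamiliar with these heuristics, the snippet below is a minimal sketch of the classic Min-Min heuristic, one of the ten listed above, operating on a hypothetical expected-time-to-compute (ETC) matrix; it is not the CloudSim implementation used in the study.

```python
# Min-Min: repeatedly pick the unscheduled task whose minimum completion
# time over all VMs is smallest, and assign it to that VM.

def min_min(etc):
    # etc[t][v]: expected execution time of task t on VM v.
    n_tasks, n_vms = len(etc), len(etc[0])
    ready = [0.0] * n_vms                   # time at which each VM becomes free
    unscheduled = set(range(n_tasks))
    schedule = {}
    while unscheduled:
        # Smallest completion time over all remaining (task, VM) pairs.
        t, v, ct = min(
            ((t, v, ready[v] + etc[t][v]) for t in unscheduled for v in range(n_vms)),
            key=lambda x: x[2],
        )
        schedule[t] = v
        ready[v] = ct
        unscheduled.remove(t)
    return schedule, max(ready)             # task-to-VM mapping and makespan

# Hypothetical ETC matrix: 5 tasks on 3 VMs.
etc = [[14, 16, 9], [13, 19, 18], [11, 13, 19], [16, 8, 18], [20, 14, 7]]
mapping, ms = min_min(etc)
print("mapping:", mapping, "makespan:", ms)
```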

    Power Management Techniques for Data Centers: A Survey

    With the growing use of the Internet and exponential growth in the amount of data to be stored and processed (known as 'big data'), the size of data centers has greatly increased. This, however, has resulted in a significant increase in their power consumption, so managing the power consumption of data centers has become essential. In this paper, we highlight the need to achieve energy efficiency in data centers and survey several recent architectural techniques designed for data center power management. We also present a classification of these techniques based on their characteristics. The paper aims to provide insights into techniques for improving the energy efficiency of data centers and to encourage designers to invent novel solutions for managing their large power dissipation. Keywords: Data Centers, Power Management, Low-power Design, Energy Efficiency, Green Computing, DVFS, Server Consolidation.
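
    One of the techniques such surveys commonly classify is server consolidation. The sketch below shows a first-fit-decreasing packing of VM CPU demands onto as few hosts as possible so that idle hosts can be powered down; the capacities and demands are illustrative assumptions, not a method prescribed by this particular paper.

```python
# Server consolidation as first-fit-decreasing bin packing of VM CPU demands.

def consolidate(vm_demands, host_capacity):
    hosts = []                                   # each host: list of placed VM demands
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)              # first host with enough headroom
                break
        else:
            hosts.append([demand])               # no host fits: power on a new one
    return hosts

vms = [0.6, 0.3, 0.5, 0.2, 0.4, 0.1, 0.7]        # normalized CPU demands
placement = consolidate(vms, host_capacity=1.0)
print(f"{len(placement)} active hosts:", placement)
```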

    Quality of service in cloud computing: modeling techniques and their applications

    Recent years have seen the massive migration of enterprise applications to the cloud. One of the challenges posed by cloud applications is Quality-of-Service (QoS) management, which is the problem of allocating resources to an application so as to guarantee a service level along dimensions such as performance, availability, and reliability. This paper aims to support research in this area by surveying the state of the art of QoS modeling approaches suitable for cloud systems. We also review and classify their early application to some decision-making problems arising in cloud QoS management.
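
    As one example of a classical QoS modeling approach in this family, the sketch below uses an M/M/1 queueing model to predict mean response time and size the number of VMs needed to meet a response-time SLO; the arrival rate, service rate, and SLO values are illustrative assumptions.

```python
# M/M/1 performance model used for a simple capacity-allocation decision.

def mm1_response_time(arrival_rate, service_rate):
    if arrival_rate >= service_rate:
        return float("inf")                  # unstable queue: SLO cannot be met
    return 1.0 / (service_rate - arrival_rate)

def vms_needed(total_arrival_rate, service_rate_per_vm, slo_seconds):
    # Split traffic evenly over n identical M/M/1 servers; grow n until the SLO holds.
    n = 1
    while mm1_response_time(total_arrival_rate / n, service_rate_per_vm) > slo_seconds:
        n += 1
    return n

print(vms_needed(total_arrival_rate=180.0,   # requests/s offered to the application
                 service_rate_per_vm=50.0,   # requests/s one VM can serve
                 slo_seconds=0.05))          # 50 ms mean response-time target
```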

    A survey of machine learning techniques applied to self-organizing cellular networks

    In this paper, we survey the literature of the past fifteen years on Machine Learning (ML) algorithms applied to self-organizing cellular networks. For future networks to overcome current limitations and address the issues of today's cellular systems, it is clear that more intelligence needs to be deployed so that a fully autonomous and flexible network can be enabled. This paper focuses on the learning perspective of Self Organizing Networks (SON) solutions and provides not only an overview of the most common ML techniques encountered in cellular networks, but also a classification of each surveyed paper in terms of its learning solution, together with examples. The papers are also classified by their self-organizing use case, and we discuss how each proposed solution performed. In addition, the most commonly used ML algorithms are compared in terms of selected SON metrics, and general guidelines are proposed on when to choose each ML algorithm for each SON function. Lastly, this work outlines future research directions and the new paradigms that more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular networks domain, fully enabling the concept of SON in the near future.
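
    To give a flavour of one ML technique commonly found in this literature, the sketch below applies a simple reinforcement-learning agent (single-state Q-learning, i.e. a bandit) to a toy self-optimization use case, tuning a cell's handover offset against a simulated failure rate; the environment model and reward are fabricated for illustration.

```python
# Epsilon-greedy value learning over a small set of candidate handover offsets.
import random

OFFSETS = [-2, -1, 0, 1, 2]                       # candidate handover offsets (dB)

def simulated_failure_rate(offset):
    # Hypothetical environment: failures are lowest near an offset of 1 dB.
    return abs(offset - 1) * 0.03 + random.uniform(0, 0.01)

def train(episodes=2000, alpha=0.1, epsilon=0.1):
    q = {a: 0.0 for a in OFFSETS}                 # single-state Q-table
    for _ in range(episodes):
        a = random.choice(OFFSETS) if random.random() < epsilon else max(q, key=q.get)
        reward = -simulated_failure_rate(a)       # fewer failures -> higher reward
        q[a] += alpha * (reward - q[a])           # incremental value update
    return q

random.seed(1)
q = train()
print("chosen offset:", max(q, key=q.get), "dB")
```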

    An In-Depth Empirical Investigation of State-of-the-Art Scheduling Approaches for Cloud Computing

    Cloud computing has recently emerged as one of the most widely used platforms for providing compute, storage, and analytics services to end-users and organizations on a pay-as-you-use basis, with high agility, availability, scalability, and resiliency. It gives individuals and organizations access to a large pool of high-performance resources without the need to establish their own high-performance computing (HPC) platform. Over the past few years, task scheduling in Cloud computing has become a prominent research focus; however, task scheduling is an NP-hard problem. In this work, we investigate and empirically compare some of the most prominent state-of-the-art scheduling heuristics in terms of makespan, average resource utilization (ARUR), throughput, and energy consumption. The comparison is then extended by evaluating the approaches in terms of per-VM load imbalance. Extensive simulation and comparative analysis reveal that the Task-Aware Scheduling Algorithm (TASA) and Proactive Simulation-based Scheduling and Load Balancing (PSSLB) outperform the remaining approaches and appear to be the optimal choices given the trade-off between the complexity involved and the performance achieved with respect to makespan, throughput, resource utilization, and energy consumption.
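
    For clarity on the metrics used in such comparisons, the sketch below shows one common way makespan, average resource utilization (ARUR), and throughput can be computed from per-VM busy times; the numbers are illustrative and are not results from the paper.

```python
# Schedule-level metrics computed from per-VM completion (busy) times.

def metrics(vm_completion_times, num_tasks):
    makespan = max(vm_completion_times)
    # ARUR: mean VM busy time divided by the makespan (1.0 = perfectly balanced load).
    arur = (sum(vm_completion_times) / len(vm_completion_times)) / makespan
    throughput = num_tasks / makespan            # tasks finished per unit time
    return makespan, arur, throughput

completion = [118.0, 104.5, 121.0, 97.0]         # seconds each VM stays busy (hypothetical)
m, a, t = metrics(completion, num_tasks=200)
print(f"makespan={m:.1f}s  ARUR={a:.2f}  throughput={t:.2f} tasks/s")
```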