6,576 research outputs found

    Cloud computing resource scheduling and a survey of its evolutionary approaches

    A disruptive technology fundamentally transforming the way that computing services are delivered, cloud computing offers information and communication technology users a new dimension of convenience: resources delivered as services via the Internet. Because the cloud provides a finite pool of virtualized on-demand resources, scheduling them optimally has become an essential and rewarding topic, and a trend of using Evolutionary Computation (EC) algorithms is emerging rapidly. Through analyzing the cloud computing architecture, this survey first presents a taxonomy of cloud resource scheduling at two levels. It then paints a landscape of the scheduling problem and its solutions. According to the taxonomy, a comprehensive survey of state-of-the-art approaches is presented systematically. Looking forward, challenges and potential future research directions are investigated, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.
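
    A minimal sketch of the kind of EC approach this survey covers: a simple genetic algorithm that maps tasks to virtual machines to minimise makespan. The task lengths, VM speeds, and GA parameters below are illustrative assumptions, not taken from the survey.

```python
# Toy genetic algorithm for task-to-VM scheduling (illustrative assumptions only).
import random

TASK_LEN = [8, 3, 5, 9, 2, 7, 4, 6]      # abstract task lengths
VM_SPEED = [1.0, 2.0, 1.5]               # relative VM speeds

def makespan(assignment):
    """Completion time of the slowest VM under a task-to-VM assignment."""
    load = [0.0] * len(VM_SPEED)
    for task, vm in zip(TASK_LEN, assignment):
        load[vm] += task / VM_SPEED[vm]
    return max(load)

def evolve(pop_size=30, generations=100, mutation_rate=0.1):
    pop = [[random.randrange(len(VM_SPEED)) for _ in TASK_LEN]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)                        # lower makespan is fitter
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(TASK_LEN))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:       # mutate one gene
                child[random.randrange(len(TASK_LEN))] = random.randrange(len(VM_SPEED))
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print(best, makespan(best))
```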

    Cloud engineering is search based software engineering too

    Many of the problems posed by the migration of computation to cloud platforms can be formulated and solved using techniques associated with Search Based Software Engineering (SBSE). Much of cloud software engineering involves problems of optimisation: performance, allocation, assignment, and the dynamic balancing of resources to achieve pragmatic trade-offs between many competing technical and business objectives. SBSE is concerned with the application of computational search and optimisation to solve precisely these kinds of software engineering challenges. Interest in both cloud computing and SBSE has grown rapidly in the past five years, yet there has been little work on SBSE as a means of addressing cloud computing challenges. Like many computationally demanding activities, SBSE has the potential to benefit from the cloud; ‘SBSE in the cloud’. However, this paper focuses, instead, on the ways in which SBSE can benefit cloud computing. It thus develops the theme of ‘SBSE for the cloud’, formulating cloud computing challenges in ways that can be addressed using SBSE.
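
    A hedged sketch of the ‘SBSE for the cloud’ idea: encode a cloud configuration decision (here, how many instances to rent) as a candidate solution, define a fitness that trades a technical objective against a business one, and let a simple local search explore it. The prices, latency model, and weights are invented for illustration and are not from the paper.

```python
# Toy search-based formulation of a cloud provisioning decision.
import random

HOURLY_PRICE = 0.10          # assumed cost per instance-hour
BASE_LATENCY = 2.0           # assumed latency with a single instance (seconds)

def fitness(instances):
    latency = BASE_LATENCY / instances           # crude performance model
    cost = instances * HOURLY_PRICE              # crude business-cost model
    return 0.7 * latency + 0.3 * cost            # weighted trade-off (lower is better)

def hill_climb(start=1, steps=50):
    current = start
    for _ in range(steps):
        neighbour = max(1, current + random.choice([-1, 1]))
        if fitness(neighbour) < fitness(current):
            current = neighbour
    return current

print(hill_climb())   # instance count the search settles on
```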

    Intelligent Management and Efficient Operation of Big Data

    This chapter details how Big Data can be used and implemented in networking and computing infrastructures. Specifically, it addresses three main aspects: the timely extraction of relevant knowledge from heterogeneous, and very often unstructured, large data sources; the enhancement of the performance of the processing and networking (cloud) infrastructures that are the most important foundational pillars of Big Data applications or services; and novel ways to efficiently manage network infrastructures with high-level composed policies for supporting the transmission of large amounts of data with distinct requisites (video vs. non-video). A case study involving an intelligent management solution to route data traffic with diverse requirements in a wide area Internet Exchange Point is presented, discussed in the context of Big Data, and evaluated.
    Comment: In book Handbook of Research on Trends and Future Directions in Big Data and Web Intelligence, IGI Global, 201
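
    A purely illustrative sketch of a high-level composed policy of the kind described here: flows with distinct requisites (video vs. non-video) are matched to different forwarding behaviours. The flow fields, policy table, and path names are assumptions for the example, not the chapter's actual management solution.

```python
# Toy policy table mapping flow requisites to forwarding behaviours.
POLICIES = [
    {"match": {"kind": "video"},     "action": "low-latency-path"},
    {"match": {"kind": "non-video"}, "action": "best-effort-path"},
]

def route(flow):
    """Return the action of the first policy whose match fields all agree with the flow."""
    for policy in POLICIES:
        if all(flow.get(k) == v for k, v in policy["match"].items()):
            return policy["action"]
    return "best-effort-path"   # default behaviour

print(route({"kind": "video", "src": "10.0.0.1"}))      # -> low-latency-path
print(route({"kind": "non-video", "src": "10.0.0.2"}))  # -> best-effort-path
```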

    Disaster Recovery Services in Intercloud using Genetic Algorithm Load Balancer

    The paradigm needs to shift from cloud computing to the intercloud for disaster recovery, since disasters can break out anytime and anywhere. Natural disaster response involves a radically high volume of impatient job requests demanding immediate attention. Under such disequilibrium, the intercloud is the more practical and functional option. Protocols such as quality of service, service level agreements, and disaster recovery pacts need to be discussed and clarified during the initial setup to fast-track the distress scenario. Orchestrating resources in a large-scale distributed system while jointly pursuing multi-objective optimization of resources, minimum energy consumption, maximum throughput, load balancing, and minimum carbon footprint is quite challenging. The intercloud, where resources of different clouds are aligned, plays a crucial role in resource mapping. The objective of this paper is to improve and fast-track the mapping procedures on the cloud platform and to address impatient job requests in a balanced and efficient manner. A genetic algorithm based resource allocation is proposed using Pareto-optimal mapping of resources to keep processor utilization high, throughput high, and the carbon footprint low. Decision variables include processor utilization, throughput, locality cost, and real-time deadlines. Simulation results of load balancers using first-in-first-out and the genetic algorithm are compared under similar circumstances.
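
    A hedged sketch of the Pareto-optimal mapping idea: each candidate resource mapping is scored on several of the stated decision variables, and only non-dominated candidates are kept as the Pareto front. The candidate scores below are fabricated purely to illustrate the dominance test, not results from the paper.

```python
# Pareto dominance over a few of the paper's decision variables (toy values).
from typing import NamedTuple, List

class Score(NamedTuple):
    utilization: float    # higher is better
    throughput: float     # higher is better
    locality_cost: float  # lower is better

def dominates(a: Score, b: Score) -> bool:
    no_worse = (a.utilization >= b.utilization and
                a.throughput >= b.throughput and
                a.locality_cost <= b.locality_cost)
    strictly_better = (a.utilization > b.utilization or
                       a.throughput > b.throughput or
                       a.locality_cost < b.locality_cost)
    return no_worse and strictly_better

def pareto_front(candidates: List[Score]) -> List[Score]:
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates)]

candidates = [Score(0.9, 120, 5), Score(0.8, 150, 4), Score(0.7, 100, 6)]
print(pareto_front(candidates))   # the third candidate is dominated by the first
```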

    A Multi-objective Optimization Model for Virtual Machine Mapping in Cloud Data Centres

    © 2016 IEEE. Modern cloud computing environments exploit virtualization for efficient resource management to reduce computational cost and energy budget. Virtual machine (VM) migration is a technique that enables flexible resource allocation and increases the computation power and communication capability within cloud data centers. VM migration helps cloud providers achieve various resource management objectives such as load balancing, power management, fault tolerance, and system maintenance. However, the VM migration process can affect the performance of applications unless it is supported by smart optimization methods. This paper presents a multi-objective optimization model to address this issue. The objectives are to minimize power consumption, maximize resource utilization (or minimize idle resources), and minimize VM transfer time. Fuzzy particle swarm optimization (PSO), which improves the efficiency of conventional PSO by using fuzzy logic systems, is used to solve the optimization problem. The model is implemented in a cloud simulator to investigate its performance, and the results verify the performance improvement of the proposed model.
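
    A minimal sketch of a PSO over the paper's three objectives, scalarised here with fixed weights rather than the paper's fuzzy logic system; that substitution, the toy cost models, and all constants are assumptions for illustration only.

```python
# Toy single-dimension PSO minimising a weighted combination of three objectives.
import random

def objectives(x):
    power = x ** 2                     # toy power-consumption model
    idle = (10 - x) ** 2               # toy idle-resource model
    transfer = abs(x - 3)              # toy VM transfer-time model
    return 0.5 * power + 0.3 * idle + 0.2 * transfer

def pso(particles=20, iterations=100, w=0.7, c1=1.4, c2=1.4):
    pos = [random.uniform(0, 10) for _ in range(particles)]
    vel = [0.0] * particles
    pbest = pos[:]
    gbest = min(pos, key=objectives)
    for _ in range(iterations):
        for i in range(particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            if objectives(pos[i]) < objectives(pbest[i]):
                pbest[i] = pos[i]
                if objectives(pos[i]) < objectives(gbest):
                    gbest = pos[i]
    return gbest

print(pso())   # settles near the weighted-objective minimum
```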

    Improving Structural Features Prediction in Protein Structure Modeling

    Proteins play a vital role in the biological activities of all living species. In nature, a protein folds into a specific and energetically favorable three-dimensional structure which is critical to its biological function. Hence, there has been a great effort by researchers in both experimentally determining and computationally predicting the structures of proteins. The current experimental methods of protein structure determination are complicated, time-consuming, and expensive. On the other hand, the sequencing of proteins is fast, simple, and relatively inexpensive. Thus, the gap between the number of known sequences and the number of determined structures is growing, and is expected to keep expanding. In contrast, computational approaches that can generate three-dimensional protein models with high resolution are attractive, due to their broad economic and scientific impacts. Accurately predicting protein structural features, such as secondary structures, disulfide bonds, and solvent accessibility, is a critical stepping stone toward ultimately obtaining correct three-dimensional models. In this dissertation, we report a set of approaches for improving the accuracy of structural feature prediction in protein structure modeling. First, we derive a statistical model to generate context-based scores characterizing the favorability of segments of residues adopting certain structural features. Then, together with other information such as evolutionary and sequence information, we incorporate the context-based scores into machine learning approaches to predict secondary structures, disulfide bonds, and solvent accessibility. Furthermore, we take advantage of emerging high performance computing architectures, using GPUs to accelerate the calculation of pairwise and high-order interactions in the context-based scores. Finally, we make these prediction methods available to the public via web services and software packages.
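
    A hedged sketch of the context-based scoring idea: from labelled training segments, estimate how favourable a short residue segment is for a structural state (e.g. helix) as the log-odds of its observed frequency against its background frequency. The tiny training set, the segment length of 3, and the pseudocount are illustrative assumptions, not the dissertation's statistical model.

```python
# Toy log-odds context score for residue segments (illustrative data only).
import math
from collections import Counter

train = [("AAL", "H"), ("ALV", "H"), ("GPS", "C"), ("PSG", "C"), ("AAV", "H")]

seg_counts = Counter(seg for seg, label in train if label == "H")
background = Counter(seg for seg, _ in train)
n_helix, n_total = sum(seg_counts.values()), sum(background.values())

def context_score(segment, pseudo=1.0):
    """Log-odds that `segment` favours the helix state (higher = more favourable)."""
    p_obs = (seg_counts[segment] + pseudo) / (n_helix + pseudo)
    p_bg = (background[segment] + pseudo) / (n_total + pseudo)
    return math.log(p_obs / p_bg)

print(context_score("AAL"))   # seen in helices: positive score
print(context_score("GPS"))   # seen only in coil: negative score
```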

    Power Management Techniques for Data Centers: A Survey

    With the growing use of the Internet and the exponential growth in the amount of data to be stored and processed (known as 'big data'), the size of data centers has greatly increased. This, however, has resulted in a significant increase in the power consumption of data centers. For this reason, managing the power consumption of data centers has become essential. In this paper, we highlight the need to achieve energy efficiency in data centers and survey several recent architectural techniques designed for power management of data centers. We also present a classification of these techniques based on their characteristics. This paper aims to provide insights into the techniques for improving the energy efficiency of data centers and to encourage designers to invent novel solutions for managing the large power dissipation of data centers.
    Comment: Keywords: Data Centers, Power Management, Low-power Design, Energy Efficiency, Green Computing, DVFS, Server Consolidation
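
    A back-of-the-envelope sketch of why DVFS, one technique class such surveys cover, saves power: dynamic CMOS power scales roughly as C·V²·f, so lowering voltage and frequency together cuts power super-linearly. The capacitance and operating points below are illustrative assumptions.

```python
# Dynamic power under two assumed voltage/frequency operating points.
def dynamic_power(capacitance, voltage, frequency):
    return capacitance * voltage ** 2 * frequency

C = 1e-9                                    # assumed effective switched capacitance (F)
high = dynamic_power(C, 1.2, 3.0e9)         # 1.2 V @ 3.0 GHz
low = dynamic_power(C, 0.9, 2.0e9)          # 0.9 V @ 2.0 GHz
print(f"high: {high:.2f} W, low: {low:.2f} W, saving: {1 - low / high:.0%}")
```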