20 research outputs found

    Dynamic scheduling based on particle swarm optimization for cloud-based scientific experiments

    Parameter Sweep Experiments (PSEs) allow scientists to perform simulations by running the same code with different input data, which results in many CPU-intensive jobs, and hence parallel computing environments must be used. Among these, Infrastructure as a Service (IaaS) Clouds offer custom Virtual Machines (VMs) that are launched on appropriate hosts available in a Cloud to handle such jobs. Correctly scheduling Cloud hosts is therefore very important, and efficient scheduling strategies must be developed to appropriately allocate VMs to physical resources. Scheduling is, however, challenging due to its inherent NP-completeness. We describe and evaluate a Cloud scheduler based on Particle Swarm Optimization (PSO). The main performance metrics studied are the number of Cloud users that the scheduler is able to successfully serve, and the total number of created VMs, in online (non-batch) scheduling scenarios. In addition, the number of intra-Cloud network messages sent is evaluated. Simulated experiments performed using CloudSim and job data from real scientific problems show that our scheduler achieves better performance than schedulers based on random assignment and Genetic Algorithms. We also study the performance when a qualitative indication of job length is or is not supplied to the schedulers.
    Fil: Pacini Naumovich, Elina Rocío. Universidad Nacional de Cuyo. Instituto de Tecnologías de la Información y las Comunicaciones; Argentina. Fil: Mateos Diaz, Cristian Maximiliano. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Tandil. Instituto Superior de Ingenieria del Software; Argentina. Fil: Garcia Garino, Carlos Gabriel. Universidad Nacional de Cuyo. Instituto de Tecnologías de la Información y las Comunicaciones; Argentina.
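
    The abstract gives no implementation details, so the snippet below is only a rough sketch of how a PSO-based VM-to-host scheduler of this general kind might look: each particle holds one continuous coordinate per VM, positions are rounded to host indices, and the fitness counts capacity overflow on the hosts. All instance data (NUM_VMS, NUM_HOSTS, VM_LOAD, HOST_CAPACITY) and the fitness function are illustrative assumptions, not the authors' actual scheduler or metrics.

import random

# Hypothetical instance (not from the paper): VMs with CPU demands, hosts with capacities.
random.seed(0)
NUM_VMS, NUM_HOSTS = 10, 4
VM_LOAD = [random.uniform(0.5, 2.0) for _ in range(NUM_VMS)]   # CPU demand per VM
HOST_CAPACITY = [4.0] * NUM_HOSTS                              # CPU capacity per host

def decode(position):
    """Round each continuous coordinate to a valid host index."""
    return [min(NUM_HOSTS - 1, max(0, round(x))) for x in position]

def fitness(assignment):
    """Lower is better: total capacity overflow across all hosts."""
    used = [0.0] * NUM_HOSTS
    for vm, host in enumerate(assignment):
        used[host] += VM_LOAD[vm]
    return sum(max(0.0, u - c) for u, c in zip(used, HOST_CAPACITY))

def pso_schedule(particles=20, iterations=100, w=0.7, c1=1.5, c2=1.5):
    positions = [[random.uniform(0, NUM_HOSTS - 1) for _ in range(NUM_VMS)]
                 for _ in range(particles)]
    velocities = [[0.0] * NUM_VMS for _ in range(particles)]
    pbest = [p[:] for p in positions]
    pbest_fit = [fitness(decode(p)) for p in positions]
    g = min(range(particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]

    for _ in range(iterations):
        for i in range(particles):
            for d in range(NUM_VMS):
                r1, r2 = random.random(), random.random()
                velocities[i][d] = (w * velocities[i][d]
                                    + c1 * r1 * (pbest[i][d] - positions[i][d])
                                    + c2 * r2 * (gbest[d] - positions[i][d]))
                positions[i][d] += velocities[i][d]
            f = fitness(decode(positions[i]))
            if f < pbest_fit[i]:                       # update personal best
                pbest[i], pbest_fit[i] = positions[i][:], f
                if f < gbest_fit:                      # update global best
                    gbest, gbest_fit = positions[i][:], f
    return decode(gbest), gbest_fit

assignment, overflow = pso_schedule()
print("VM -> host:", assignment, "| total overflow:", overflow)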

    Hybrid Meta-heuristic Algorithms for Static and Dynamic Job Scheduling in Grid Computing

    The term 'grid computing' is used to describe an infrastructure that connects geographically distributed computers and heterogeneous platforms owned by multiple organizations, allowing their computational power, storage capabilities and other resources to be selected and shared. Allocating jobs to computational grid resources in an efficient manner is one of the main challenges facing any grid computing system; this allocation is called job scheduling in grid computing. This thesis studies the application of hybrid meta-heuristics to the job scheduling problem in grid computing, which is recognized as one of the most important and challenging issues in grid computing environments. As with job scheduling in traditional computing systems, this allocation is known to be an NP-hard problem. Meta-heuristic approaches such as the Genetic Algorithm (GA), Variable Neighbourhood Search (VNS) and Ant Colony Optimisation (ACO) have all proven their effectiveness in solving different scheduling problems. However, hybridising two or more meta-heuristics shows better performance than applying a stand-alone approach: the new high-level meta-heuristic inherits the best features of the hybridised algorithms, increasing the chances of escaping local minima and hence enhancing overall performance. In this thesis, the application of VNS to the job scheduling problem in grid computing is introduced. Four new neighbourhood structures, together with a modified local search, are proposed. The proposed VNS is hybridised with two meta-heuristic methods, namely GA and ACO, in loosely and strongly coupled fashions, yielding four new sequential hybrid meta-heuristic algorithms for the problem of static and dynamic single-objective independent batch job scheduling in grid computing. For the static version of the problem, several experiments were carried out to analyse the performance of the proposed schedulers in terms of minimising the makespan using well-known benchmarks. The experiments show that the proposed schedulers achieved impressive results compared to other traditional, heuristic and meta-heuristic approaches selected from the literature. To model the dynamic version of the problem, a simple simulator, which uses the rescheduling technique, is designed, and new problem instances are generated using a well-known methodology to evaluate the performance of the proposed hybrid schedulers. The experimental results show that the use of rescheduling provides significant improvements in terms of the makespan compared to other non-rescheduling approaches.
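
    The thesis's four neighbourhood structures are not described in the abstract; the sketch below is only an assumed illustration of the basic VNS loop for makespan minimisation of independent jobs on heterogeneous machines, using two generic neighbourhoods (move one job, swap two jobs) and a first-improvement local search. The ETC matrix and all parameters are synthetic.

import random

# Synthetic instance (illustrative only): ETC[j][m] = execution time of job j on machine m.
random.seed(1)
NUM_JOBS, NUM_MACHINES = 30, 5
ETC = [[random.uniform(1, 20) for _ in range(NUM_MACHINES)] for _ in range(NUM_JOBS)]

def makespan(schedule):
    """schedule[j] = machine of job j; makespan = completion time of the busiest machine."""
    load = [0.0] * NUM_MACHINES
    for j, m in enumerate(schedule):
        load[m] += ETC[j][m]
    return max(load)

def shake(schedule, k):
    """Neighbourhood k=1: move one job to a random machine; k=2: swap the machines of two jobs."""
    s = schedule[:]
    if k == 1:
        s[random.randrange(NUM_JOBS)] = random.randrange(NUM_MACHINES)
    else:
        a, b = random.sample(range(NUM_JOBS), 2)
        s[a], s[b] = s[b], s[a]
    return s

def local_search(schedule):
    """First-improvement search: reassign a job whenever that lowers the makespan."""
    s, best = schedule[:], makespan(schedule)
    improved = True
    while improved:
        improved = False
        for j in range(NUM_JOBS):
            original = s[j]
            for m in range(NUM_MACHINES):
                s[j] = m
                if makespan(s) < best:
                    best, improved = makespan(s), True
                    break
            else:
                s[j] = original
    return s

def vns(iterations=100):
    current = local_search([random.randrange(NUM_MACHINES) for _ in range(NUM_JOBS)])
    for _ in range(iterations):
        k = 1
        while k <= 2:
            candidate = local_search(shake(current, k))
            if makespan(candidate) < makespan(current):
                current, k = candidate, 1   # improvement: restart from the first neighbourhood
            else:
                k += 1                      # no improvement: try the next neighbourhood
    return current, makespan(current)

schedule, ms = vns()
print("best makespan:", round(ms, 2))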

    Ant Colony Optimization Algorithm to Dynamic Energy Management in Cloud Data Center

    With the wide deployment of cloud computing data centers, the problems of power consumption have become increasingly prominent. The dynamic energy management problem, in pursuit of energy efficiency in cloud data centers, is investigated. Specifically, a dynamic energy management system model for cloud data centers is built, composed of a DVS Management Module, a Load Balancing Module, and a Task Scheduling Module. Within the Task Scheduling Module, the scheduling process is analyzed using Stochastic Petri Nets, and a task-oriented resource allocation method (LET-ACO) is proposed, which optimizes the running time of the system and the energy consumption by scheduling tasks. Simulation studies confirm the effectiveness of the proposed system model. The simulation results also show that, compared to the ACO, Min-Min, and RR scheduling strategies, the proposed LET-ACO method can save up to 28%, 31%, and 40% of energy consumption, respectively, while meeting performance constraints.
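
    LET-ACO itself is not specified in the abstract; the following is a hedged sketch of a generic ACO task-to-server allocation with an assumed energy model (server speed divided by power as the heuristic desirability), meant only to illustrate the pheromone-guided construction and update steps such a method relies on. All instance data and parameters are made up.

import random

# Illustrative instance: assign tasks to servers, minimizing an assumed energy cost.
random.seed(2)
NUM_TASKS, NUM_SERVERS = 20, 4
TASK_LEN = [random.uniform(1, 10) for _ in range(NUM_TASKS)]            # task lengths (assumed)
SERVER_POWER = [random.uniform(100, 200) for _ in range(NUM_SERVERS)]   # watts per unit load (assumed)
SERVER_SPEED = [random.uniform(1, 2) for _ in range(NUM_SERVERS)]       # processing speed (assumed)

def energy(assignment):
    """Assumed energy model: sum over tasks of server power times execution time."""
    return sum(SERVER_POWER[s] * TASK_LEN[t] / SERVER_SPEED[s]
               for t, s in enumerate(assignment))

def aco(ants=15, iterations=100, alpha=1.0, beta=2.0, rho=0.1, q=100.0):
    # pheromone[t][s]: learned desirability of placing task t on server s
    pheromone = [[1.0] * NUM_SERVERS for _ in range(NUM_TASKS)]
    heuristic = [[SERVER_SPEED[s] / SERVER_POWER[s] for s in range(NUM_SERVERS)]
                 for _ in range(NUM_TASKS)]   # greedy preference for fast, low-power servers
    best, best_cost = None, float("inf")

    for _ in range(iterations):
        for _ in range(ants):
            assignment = []
            for t in range(NUM_TASKS):
                weights = [pheromone[t][s] ** alpha * heuristic[t][s] ** beta
                           for s in range(NUM_SERVERS)]
                assignment.append(random.choices(range(NUM_SERVERS), weights)[0])
            cost = energy(assignment)
            if cost < best_cost:
                best, best_cost = assignment, cost
        # evaporation, then reinforcement along the best assignment found so far
        for t in range(NUM_TASKS):
            for s in range(NUM_SERVERS):
                pheromone[t][s] *= (1 - rho)
        for t, s in enumerate(best):
            pheromone[t][s] += q / best_cost
    return best, best_cost

assignment, cost = aco()
print("best energy cost:", round(cost, 1))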

    Responsive Multi-objective Load Balancing Transformation Using Particle Swarm Optimization in Cloud Environment

    Cloud computing is an emerging computing paradigm built on a large collection of heterogeneous autonomous systems with a flexible computational architecture, which provides customers with computing resources as a service over a network, on demand. A multi-objective nature is inherent in cloud resource scheduling, as the objectives of cloud providers, cloud users, and other stakeholders can be independent. Resource allocation among multiple clients has to be ensured as per service level agreements. Several techniques have been invented and tested by the research community for the generation of optimal schedules in cloud computing. To accomplish these goals and achieve high performance, it is important to design and develop a Responsive Multi-Objective Load Balancing Transformation algorithm (RMOLBT) based on abstraction in a multi-cloud environment. It is most challenging to schedule tasks while satisfying the users' Quality of Service requirements. This paper proposes a task scheduling and resource utilization approach using Particle Swarm Optimization (PSO) in a cloud environment. The results of RMOLBT were obtained by simulation with the open-source CloudSim toolkit configured with test case specifications. Finally, the results demonstrate the suitability of the proposed scheme, which increases throughput, reduces waiting time, considerably reduces the number of missed processes, and balances load among the physical machines in a data centre in a multi-cloud environment.
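
    The abstract does not state RMOLBT's objective function; as an assumed example of the kind of multi-objective fitness a PSO-based load balancer could optimise, the snippet below scores a task-to-VM mapping by a weighted sum of mean waiting time and load imbalance across machines. The weights, task lengths and VM rates are hypothetical, not taken from the paper.

import statistics

# Hypothetical instance: task lengths and VM processing rates.
TASK_LEN = [4.0, 2.5, 7.0, 1.0, 3.5, 6.0, 2.0, 5.5]
VM_RATE = [1.0, 1.5, 2.0]

def multi_objective_fitness(mapping, w_wait=0.6, w_balance=0.4):
    """Score a task->VM mapping (lower is better).

    Combines two assumed objectives:
      * mean task waiting time, from FCFS queues per VM
      * load imbalance, as the standard deviation of VM completion times
    """
    finish, waits = [0.0] * len(VM_RATE), []
    for task, vm in enumerate(mapping):
        waits.append(finish[vm])                      # task waits for earlier tasks on its VM
        finish[vm] += TASK_LEN[task] / VM_RATE[vm]    # then runs at the VM's rate
    mean_wait = sum(waits) / len(waits)
    imbalance = statistics.pstdev(finish)
    return w_wait * mean_wait + w_balance * imbalance

# Example: compare a skewed mapping against a more even spread.
print(multi_objective_fitness([0, 0, 0, 0, 1, 1, 2, 2]))
print(multi_objective_fitness([0, 1, 2, 0, 1, 2, 0, 1]))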

    Computational mechanics software as a service project

    Cloud computing promises great opportunities for the execution of scientific and engineering applications. However, the execution of such applications over Cloud infrastructures requires the accomplishment of many complex processes. In this paper we present a Computational Mechanics Software as a Service (SaaS) project which will allow scientists to easily configure and submit their experiments to be transparently executed on the Cloud. For this purpose, a finite element software called SOGDE is used to perform parametric studies of computational mechanics on the basis of the underlying computing resources. Moreover, a web service provides an interface for the abovementioned functionalities, allowing the remote execution of scientific applications in a simple way.
    Facultad de Informática

    Multiobjective Optimization in Cloud Brokering Systems for Connected Internet of Things

    Currently, over nine billion things are connected in the Internet of Things (IoT). This number is expected to exceed 20 billion in the near future, and the number of things is increasing quickly, meaning that enormous amounts of data will be generated. It is necessary to build an infrastructure to manage the connected things. Cloud computing (CC) has become important for the analysis and storage of IoT data. In this paper, we consider a cloud broker, an intermediary in the infrastructure that manages the connected things in CC. We study an optimization problem that maximizes the profit of the broker while minimizing the response time of requests and the energy consumption. A multiobjective particle swarm optimization (MOPSO) algorithm is proposed to solve the problem. The performance of the proposed MOPSO is compared with that of a genetic algorithm and a random search algorithm. The results show that the MOPSO outperforms a well-known genetic algorithm for multiobjective optimization.
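
    The MOPSO variant used is not detailed in the abstract; the snippet below only illustrates the Pareto-dominance test and the external-archive update that multi-objective PSO methods typically rely on, applied to the three objectives named above (broker profit, response time, energy), with profit negated so that all objectives are minimised. The candidate values are invented for illustration.

# Each candidate is scored on the three objectives mentioned in the abstract:
# profit (to maximize), response time and energy (to minimize). Profit is negated
# so a single minimization-based dominance rule can be used.
def objectives(profit, response_time, energy):
    return (-profit, response_time, energy)

def dominates(a, b):
    """True if solution a Pareto-dominates b (all objectives <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Keep only mutually non-dominated solutions (a typical MOPSO external archive)."""
    if any(dominates(member, candidate) for member in archive):
        return archive                      # candidate is dominated: discard it
    archive = [m for m in archive if not dominates(candidate, m)]
    archive.append(candidate)
    return archive

# Hypothetical broker placements evaluated on (profit, response time, energy):
candidates = [objectives(120, 0.8, 50), objectives(100, 0.5, 40),
              objectives(130, 1.2, 70), objectives(90, 0.9, 60)]
archive = []
for c in candidates:
    archive = update_archive(archive, c)
print("non-dominated set:", archive)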

    Ant colony optimization (ACO) in scheduling overlapping architectural design activities

    The increasing complexity of architectural design work calls for high-quality design solutions for overlapping activities within a shorter time period. Conventional network analysis techniques such as CPM can only represent sequential processes and are unable to handle processes that contain iterations, which leads to the unwanted omission of logic or information links between design activities. Ant Colony Optimization has emerged as an efficient metaheuristic technique for solving computational problems of finding good paths through graphs. This research aims to develop an ACO-based Design Activity Scheduling model (ACO-DAS) for the scheduling of overlapping architectural design activities and to test the workability of ACO-DAS through a hypothetical run. From the computational results of both the CPM and ACO methods, determining the critical path with the ACO-DAS model resulted in a design duration of 50, whereas that for CPM was as long as 78. The durations of architectural design activities were significantly shortened by ACO-DAS. ACO-DAS results in shorter design completion times and is thus considered more advanced than CPM.
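
    Since the abstract reports its durations against a CPM baseline, the snippet below illustrates that baseline only: a forward-pass critical-path computation over a toy acyclic activity network. The activities and durations are made up and unrelated to the paper's design data; the ACO-DAS model itself is not reproduced here.

# Toy activity network (assumed, not the paper's data): activity -> (duration, predecessors).
ACTIVITIES = {
    "brief":     (3, []),
    "concept":   (5, ["brief"]),
    "structure": (4, ["concept"]),
    "services":  (3, ["concept"]),
    "detail":    (6, ["structure", "services"]),
}

def critical_path(acts):
    """CPM forward pass: earliest finish per activity, then trace the critical path back."""
    earliest_finish, critical_pred = {}, {}
    remaining = dict(acts)
    while remaining:
        for name, (duration, preds) in list(remaining.items()):
            if all(p in earliest_finish for p in preds):           # all predecessors scheduled
                start = max((earliest_finish[p] for p in preds), default=0)
                earliest_finish[name] = start + duration
                critical_pred[name] = (max(preds, key=lambda p: earliest_finish[p])
                                       if preds else None)
                del remaining[name]
    end = max(earliest_finish, key=earliest_finish.get)            # last activity to finish
    path, node = [], end
    while node is not None:
        path.append(node)
        node = critical_pred[node]
    return list(reversed(path)), earliest_finish[end]

path, duration = critical_path(ACTIVITIES)
print("critical path:", " -> ".join(path), "| duration:", duration)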

    Service-Level-Driven Load Scheduling and Balancing in Multi-Tier Cloud Computing

    Cloud computing environments often deal with random-arrival computational workloads that vary in resource requirements and demand high Quality of Service (QoS) obligations. A Service Level Agreement (SLA) is employed to govern the QoS obligations of the cloud service provider to the client. The service provider's conundrum revolves around maintaining a balance between the limited resources available for computing and the high QoS requirements of varying, random computing demands. Any imbalance in managing these conflicting objectives may result either in dissatisfied clients, which can incur potentially significant commercial penalties, or in an over-provisioned cloud computing environment that can be significantly costly to acquire and operate. To optimize the response to such client demands, cloud service providers organize the cloud computing environment as a multi-tier architecture. Each tier executes its designated tasks and passes them to the next tier, in a fashion similar, but not identical, to traditional job-shop environments. Each tier consists of multiple computing resources, and an optimization process must take place to assign and schedule the appropriate tasks of a job on the resources of the tier so as to meet the job's QoS expectations. Thus, scheduling the clients' workloads as they arrive at the multi-tier cloud environment to ensure their timely execution has been a central issue in cloud computing. Various approaches have been presented in the literature to address this problem: Join-Shortest-Queue (JSQ), Join-Idle-Queue (JIQ), enhanced Round Robin (RR) and Least Connection (LC), as well as enhanced MinMin and MaxMin, to name a few. This thesis presents a service-level-driven load scheduling and balancing framework for multi-tier cloud computing. A model is used to quantify the penalty a cloud service provider incurs as a function of the jobs' total waiting time and QoS violations. This model facilitates penalty mitigation in situations of high demand and resource shortage. The framework accounts for multi-tier job execution dependencies in capturing QoS violation penalties as client jobs progress through subsequent tiers, thus optimizing performance at the multi-tier level. Scheduling and balancing operations are employed to distribute client jobs on resources such that the total waiting time and, hence, SLA violations of client jobs are minimized. Optimal job allocation and scheduling is an NP-hard combinatorial problem. The dynamics of random job arrival make the optimality goal even harder to achieve and maintain as new jobs arrive. Thus, the thesis proposes queue virtualization as an abstraction that allows jobs to migrate between resources within a given tier, as well as altering the sequencing of job execution within a given resource, during the optimization process. Given the computational complexity of the job allocation and scheduling problem, a genetic algorithm is proposed to seek optimal solutions, with queue virtualization serving as the basis for defining the chromosome structure and operations. As computing jobs tend to vary with respect to delay tolerance, two SLA scenarios are tackled: equal cost of time delays and differentiated cost of time delays. Experimental work is conducted to investigate the performance of the proposed approach at both the tier and system level.
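
    The thesis's queue-virtualization chromosome is not detailed in the abstract; the sketch below is an assumed, single-tier simplification in which a chromosome is simply a job-dispatch order, the fitness is total waiting time plus an SLA-violation penalty, and a standard order crossover with swap mutation drives the search. The deadlines, penalty rate and all GA parameters are illustrative only.

import random

# Illustrative jobs (not from the thesis): (processing_time, deadline).
random.seed(3)
JOBS = [(random.uniform(1, 5), random.uniform(4, 15)) for _ in range(12)]
NUM_RESOURCES = 3
PENALTY_RATE = 10.0   # assumed SLA penalty per time unit of deadline violation

def penalty(chromosome):
    """Total waiting time plus SLA-violation penalty for a permutation chromosome.

    Jobs are taken in chromosome order and dispatched to the earliest-free
    resource, mimicking a per-tier queue; lower penalty is better.
    """
    free_at = [0.0] * NUM_RESOURCES
    total = 0.0
    for j in chromosome:
        proc, deadline = JOBS[j]
        r = min(range(NUM_RESOURCES), key=lambda i: free_at[i])
        wait = free_at[r]
        finish = wait + proc
        free_at[r] = finish
        total += wait + PENALTY_RATE * max(0.0, finish - deadline)
    return total

def crossover(p1, p2):
    """Order crossover: keep a slice of p1, fill the remaining genes in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    kept = p1[a:b]
    rest = [g for g in p2 if g not in kept]
    return rest[:a] + kept + rest[a:]

def genetic_schedule(pop_size=30, generations=200, mutation_rate=0.2):
    population = [random.sample(range(len(JOBS)), len(JOBS)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=penalty)
        survivors = population[:pop_size // 2]        # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            child = crossover(*random.sample(survivors, 2))
            if random.random() < mutation_rate:       # swap mutation
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    return min(population, key=penalty)

best = genetic_schedule()
print("best order:", best, "| penalty:", round(penalty(best), 2))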