
    Meta-scheduling Issues in Interoperable HPCs, Grids and Clouds

    Over recent years, interoperability among resources has emerged as one of the most challenging research topics. At the same time, the complexity of the architectures (e.g., heterogeneity) and the targets that each computational paradigm, including HPC, grids and clouds, aims to achieve (e.g., flexibility) remain shared: to efficiently orchestrate resources in a distributed fashion by bridging the gap between local and remote participants. This is closely related to scheduling, one of the most important issues in designing a cooperative resource management system, especially in large-scale settings such as grids and clouds. Within this context, meta-scheduling offers additional functionality for interoperable resource management because of its agility in handling sudden variations and dynamic changes in user demands. Accordingly, inter-infrastructures, including the InterCloud, require a decentralised meta-scheduling scheme to overcome issues such as consolidated administration management, bottlenecks and local information exposure. In this work, we detail the fundamental issues in developing an effective interoperable meta-scheduler for e-infrastructures in general and the InterCloud in particular. Finally, we describe a simulation and experimental configuration based on real grid workload traces to demonstrate the interoperable setting, and we provide experimental results as part of a strategic plan for integrating future meta-schedulers.

    A Survey on Meta-Heuristic Scheduling Optimization Techniques in Cloud Computing Environment

    As cloud computing matures, it is becoming evident that the future of the cloud industry relies on interconnected cloud systems in which resources are likely to be provided by multiple cloud service providers. Clouds are also multifaceted: if users require only computing capacity and wish to personalize it to their own requirements, infrastructure cloud providers can offer this convenience as virtual machines. Many optimized meta-heuristic scheduling techniques have been introduced for scheduling bag-of-tasks applications in heterogeneous cloud frameworks. The overall analysis demonstrates that using different meta-heuristic techniques can offer noteworthy benefits in terms of speed and performance.

    An Inter-Cloud Meta-Scheduling (ICMS) simulation framework: architecture and evaluation

    Inter-cloud is an approach that facilitates scalable resource provisioning across multiple cloud infrastructures. In this paper, we focus on the performance optimization of Infrastructure as a Service (IaaS) using the meta-scheduling paradigm to achieve improved job scheduling across multiple clouds. We propose a novel inter-cloud job scheduling framework and implement policies to optimize the performance of participating clouds. The framework, named Inter-Cloud Meta-Scheduling (ICMS), is based on a novel message exchange mechanism that allows optimization of job scheduling metrics. The resulting system offers improved flexibility, robustness and decentralization. We implemented a toolkit named “Simulating the Inter-Cloud” (SimIC) to support the design and implementation of the different inter-cloud entities and policies in the ICMS framework. An experimental analysis of job executions in the inter-cloud is presented for a number of parameters such as job execution, makespan, and turnaround times. The results highlight that the overall performance of individual clouds for the selected parameters and configuration is improved when these clouds are brought together under the proposed ICMS framework.
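    The abstract describes the ICMS message exchange mechanism only at a high level. The following is a minimal, illustrative Python sketch (not the authors' SimIC/ICMS code) of how such an exchange could drive job placement: the meta-scheduler polls each cloud for a status message, dispatches each job to the earliest-available cloud, and reports makespan and mean turnaround time. The Cloud and MetaScheduler classes, message fields, and job runtimes are assumptions introduced for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class Cloud:
        name: str
        finish_time: float = 0.0          # time at which the cloud becomes free

        def status_message(self) -> dict:
            # The "message" a cloud returns when the meta-scheduler polls it.
            return {"cloud": self.name, "available_at": self.finish_time}

        def execute(self, submit_time: float, runtime: float) -> float:
            start = max(submit_time, self.finish_time)
            self.finish_time = start + runtime
            return self.finish_time       # completion time of this job

    @dataclass
    class MetaScheduler:
        clouds: list
        turnarounds: list = field(default_factory=list)

        def submit(self, submit_time: float, runtime: float) -> None:
            # Exchange messages with every cloud, pick the earliest-available one.
            messages = [c.status_message() for c in self.clouds]
            target = min(messages, key=lambda m: m["available_at"])["cloud"]
            cloud = next(c for c in self.clouds if c.name == target)
            completion = cloud.execute(submit_time, runtime)
            self.turnarounds.append(completion - submit_time)

        def makespan(self) -> float:
            return max(c.finish_time for c in self.clouds)

    if __name__ == "__main__":
        scheduler = MetaScheduler([Cloud("cloud-A"), Cloud("cloud-B")])
        for t, rt in [(0, 5), (1, 3), (2, 4), (2, 2)]:   # (submit time, runtime)
            scheduler.submit(t, rt)
        print("makespan:", scheduler.makespan())
        print("mean turnaround:", sum(scheduler.turnarounds) / len(scheduler.turnarounds))

    In this sketch the scheduling policy is simply "earliest available cloud"; the published ICMS policies and metrics are richer, but the message-poll-then-dispatch structure is the point being illustrated.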

    Performance evaluation of interoperable micro-clouds

    The Internet of Things (IoT) is a paradigm that transforms physical objects into smart objects interconnected via the Internet. Today, IoT objects offer embedded intelligence that can be powerful when fully integrated in a collective manner to satisfy user needs. This work is based on micro-clouds, a newly proposed paradigm that highlights the collective intelligence of IoT objects. Specifically, a micro-cloud can be seen as a pool of cooperating devices and their resources that form transient smart environments. Further to this, we anticipate that the inter-cloud model can expand micro-cloud capabilities by allowing multiple micro-clouds to communicate in order to achieve a common aim. This further pushes the boundaries for studying the interaction and synergetic collaboration between micro-cloud systems in terms of their interoperability and performance. However, as the size of the system increases, so does the complexity of its performance. This emphasizes the need for decentralization, since resources change over time without notice. The vision of this work is that micro-clouds shall be linked together to enable a full network of usable IoT objects while maintaining the required quality of service from an end-user's perspective. Specifically, the aim is to identify the criteria most relevant to optimizing performance when several micro-clouds collaborate (e.g. load balancing, throughput, turnaround times, utilization level) as well as to classify their functional requirements. The focus is therefore on performance analysis and the evaluation of results based on a simulated, specific use-case scenario.

    The Inter-cloud meta-scheduling

    Inter-cloud is a recently emerging approach that expands cloud elasticity. By facilitating an adaptable setting, it aims at the realization of scalable resource provisioning that enables a diversity of cloud user requirements to be handled efficiently. This study’s contribution lies in the inter-cloud performance optimization of job executions using meta-scheduling concepts. This includes the development of the inter-cloud meta-scheduling (ICMS) framework, the ICMS optimal schemes and the SimIC toolkit. The ICMS model is an architectural strategy for managing and scheduling user services in virtualized, dynamically interlinked clouds. This is achieved through a model that includes a set of algorithms, namely the Service-Request, Service-Distribution, Service-Availability and Service-Allocation algorithms. These, along with optimal resource management schemes, provide the novel functionalities of the ICMS, where message exchanging implements the job distribution method, VM deployment offers the VM management features, and the local resource management system handles the local cloud schedulers. The resulting system offers great flexibility by facilitating a lightweight resource management methodology while at the same time handling the heterogeneity of different clouds through advanced service level agreement coordination. Experimental results are promising, as the proposed ICMS model enhances the performance of service distribution against a variety of criteria such as service execution times, makespan, turnaround times, utilization levels and energy consumption rates for various inter-cloud entities, e.g. users, hosts and VMs. For example, ICMS improves the performance of a non-meta-brokering inter-cloud by 3%, while ICMS with full optimal schemes achieves a 9% improvement for the same configurations. The whole experimental platform is implemented in the inter-cloud simulation toolkit (SimIC), a discrete event simulation framework developed by the author.
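    The abstract names a four-phase algorithm set (Service-Request, Service-Distribution, Service-Availability, Service-Allocation) without detailing it. The sketch below expresses that flow as plain Python functions purely for orientation; the internal logic of each phase, the message formats and the allocation rule are assumptions, not the published ICMS algorithms.

    def service_request(user_id: str, cores: int, runtime: float) -> dict:
        """Wrap a user submission into a service request message (assumed format)."""
        return {"user": user_id, "cores": cores, "runtime": runtime}

    def service_distribution(request: dict, clouds: list) -> list:
        """Broadcast the request and collect each cloud's offer (assumed format)."""
        return [{"cloud": c["name"], "free_cores": c["free_cores"]} for c in clouds]

    def service_availability(request: dict, offers: list) -> list:
        """Keep only the clouds that can host the requested number of cores."""
        return [o for o in offers if o["free_cores"] >= request["cores"]]

    def service_allocation(request: dict, candidates: list):
        """Allocate to the candidate with the most free capacity, if any (assumed rule)."""
        if not candidates:
            return None
        return max(candidates, key=lambda o: o["free_cores"])["cloud"]

    if __name__ == "__main__":
        clouds = [{"name": "edge-1", "free_cores": 4}, {"name": "dc-1", "free_cores": 32}]
        req = service_request("user-7", cores=8, runtime=120.0)
        offers = service_distribution(req, clouds)
        candidates = service_availability(req, offers)
        print("allocated to:", service_allocation(req, candidates))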

    Cloud scheduling optimization: a reactive model to enable dynamic deployment of virtual machines instantiations

    This study proposes a model to support the decision-making process of the cloud policy for deploying virtual machines in cloud environments. We explore two configurations: the static case, in which virtual machines are generated according to the cloud orchestration, and the dynamic case, in which virtual machines are reactively adapted to the job submissions using migration, in order to optimize performance time metrics. We integrate both solutions in the same simulator to measure the performance of various combinations of virtual machines, jobs and hosts in terms of average execution time and total simulation time. We conclude that the dynamic configuration is advantageous, as it offers optimized job execution performance.
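    As a rough illustration of the static-versus-dynamic comparison described above, the following minimal Python sketch (not the paper's simulator) pins all jobs to one VM in the static case and reactively redirects them to the earliest-free VM, with a small migration penalty, in the dynamic case. VM counts, job runtimes, the migration cost and the use of turnaround time as a stand-in for the paper's execution-time metric are all assumptions.

    def simulate(job_runtimes, num_vms, reactive=False, migration_cost=0.5):
        vm_free_at = [0.0] * num_vms          # when each VM next becomes idle
        turnarounds = []
        for submit, runtime in enumerate(job_runtimes):   # submit time = index
            if reactive:
                # dynamic case: pick the VM that frees up earliest and pay a
                # small migration/redeployment cost when the job leaves VM 0
                vm = min(range(num_vms), key=lambda i: vm_free_at[i])
                penalty = migration_cost if vm != 0 else 0.0
            else:
                # static case: jobs stay on VM 0 as laid out at orchestration time
                vm, penalty = 0, 0.0
            start = max(submit, vm_free_at[vm]) + penalty
            vm_free_at[vm] = start + runtime
            turnarounds.append(vm_free_at[vm] - submit)
        return sum(turnarounds) / len(turnarounds), max(vm_free_at)

    if __name__ == "__main__":
        jobs = [4.0, 3.0, 5.0, 2.0, 4.0]
        for label, dyn in [("static", False), ("dynamic", True)]:
            avg, total = simulate(jobs, num_vms=3, reactive=dyn)
            print(f"{label}: avg turnaround {avg:.2f}, total simulation time {total:.2f}")

    With these invented numbers the dynamic run finishes sooner and with lower average turnaround, which mirrors the qualitative conclusion of the abstract without reproducing its actual results.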

    High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energy, and lower absolute temperature are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high accuracy analysis, experimental validation, and visualization. High performance computing support offers the possibility of running simulations at large scale and in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP) analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
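    To make the Monte Carlo estimation pattern referred to above concrete, here is a minimal Python sketch. It is not a simulation of an actual HEP process; it only shows the generic scheme of estimating an integral from random samples together with its statistical error, which shrinks as 1/sqrt(N). The test integrand and sample count are arbitrary choices.

    import math
    import random

    def monte_carlo_integral(f, a, b, n=100_000, seed=42):
        rng = random.Random(seed)
        samples = [f(a + (b - a) * rng.random()) for _ in range(n)]
        mean = sum(samples) / n
        var = sum((s - mean) ** 2 for s in samples) / (n - 1)
        estimate = (b - a) * mean
        std_error = (b - a) * math.sqrt(var / n)   # statistical error ~ 1/sqrt(n)
        return estimate, std_error

    if __name__ == "__main__":
        # Integral of sin(x) over [0, pi] is exactly 2.
        est, err = monte_carlo_integral(math.sin, 0.0, math.pi)
        print(f"estimate = {est:.4f} ± {err:.4f} (exact value: 2)")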
