
    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

    Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services to achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services; hence the load coordination must happen automatically, and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a rigorous performance evaluation study using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, offering significant performance gains in response time and cost savings under dynamic workload scenarios. Comment: 20 pages, 4 figures, 3 tables, conference paper.
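
    As a rough, hedged illustration of the load-coordination decision described above (not the paper's actual mechanism), the Python sketch below picks a hosting data center for a service request by weighing current utilization against network latency to the user's region; all names, weights and figures are hypothetical.

    # Hypothetical sketch: pick a data center for a service request in a
    # federated cloud, trading off current utilization against latency.
    # Names and weights are illustrative, not from the InterCloud paper.
    from dataclasses import dataclass

    @dataclass
    class DataCenter:
        name: str
        utilization: float   # fraction of capacity in use, 0.0..1.0
        latency_ms: dict     # region -> round-trip time in milliseconds

    def place_service(centers, user_region, w_load=0.7, w_latency=0.3):
        """Return the data center with the lowest weighted placement score."""
        def score(dc):
            # Normalise latency to 0..1 against the worst candidate.
            worst = max(c.latency_ms[user_region] for c in centers)
            lat = dc.latency_ms[user_region] / worst if worst else 0.0
            return w_load * dc.utilization + w_latency * lat
        return min(centers, key=score)

    centers = [
        DataCenter("eu-west", 0.82, {"eu": 20, "us": 120}),
        DataCenter("us-east", 0.40, {"eu": 110, "us": 25}),
    ]
    print(place_service(centers, "eu").name)   # "us-east": lightly loaded wins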

    The Inter-cloud meta-scheduling

    Inter-cloud is a recently emerging approach that expands cloud elasticity. By facilitating an adaptable setting, it aims at the realization of scalable resource provisioning that enables a diversity of cloud user requirements to be handled efficiently. This study's contribution is the inter-cloud performance optimization of job executions using meta-scheduling concepts. This includes the development of the inter-cloud meta-scheduling (ICMS) framework, the ICMS optimization schemes and the SimIC toolkit. The ICMS model is an architectural strategy for managing and scheduling user services in virtualized, dynamically inter-linked clouds. This is achieved through a model that includes a set of algorithms, namely the Service-Request, Service-Distribution, Service-Availability and Service-Allocation algorithms. These, along with resource management optimization schemes, provide the novel functionality of the ICMS: message exchanging implements the job distribution method, VM deployment provides the VM management features, and the local resource management system handles the local cloud schedulers. The resulting system offers great flexibility by facilitating a lightweight resource management methodology while at the same time handling the heterogeneity of different clouds through advanced service level agreement coordination. Experimental results are encouraging, as the proposed ICMS model improves the performance of service distribution for a variety of criteria such as service execution times, makespan, turnaround times, utilization levels and energy consumption rates for various inter-cloud entities, e.g. users, hosts and VMs. For example, ICMS improves the performance of a non-meta-brokering inter-cloud by 3%, while ICMS with full optimization schemes achieves a 9% improvement for the same configurations. The whole experimental platform is implemented in the inter-cloud Simulation toolkit (SimIC), a discrete-event simulation framework developed by the author.
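
    The following Python sketch is a hypothetical rendering of the four-stage pipeline named above (Service-Request, Service-Distribution, Service-Availability, Service-Allocation); the data structures, selection rule and parameters are illustrative assumptions, not the ICMS algorithms themselves.

    # Hypothetical sketch of a meta-scheduling pipeline in the spirit of the
    # ICMS stages. All names, fields and the selection rule are illustrative.

    def service_request(job):
        """Stage 1: wrap an incoming user job with its requirements."""
        return {"job": job, "cpu": job["cpu"], "deadline": job["deadline"]}

    def service_availability(clouds, request):
        """Stage 3 (queried during distribution): clouds able to host the job."""
        return [c for c in clouds if c["free_cpu"] >= request["cpu"]]

    def service_distribution(clouds, request):
        """Stage 2: exchange messages with clouds and pick the best candidate."""
        candidates = service_availability(clouds, request)
        if not candidates:
            return None
        # Prefer the cloud with the earliest estimated completion time.
        return min(candidates,
                   key=lambda c: c["queue_wait"] + request["cpu"] / c["speed"])

    def service_allocation(cloud, request):
        """Stage 4: hand the job to the selected cloud's local scheduler."""
        cloud["free_cpu"] -= request["cpu"]
        return f"job allocated to {cloud['name']}"

    clouds = [
        {"name": "cloud-A", "free_cpu": 8, "speed": 2.0, "queue_wait": 5.0},
        {"name": "cloud-B", "free_cpu": 4, "speed": 3.0, "queue_wait": 1.0},
    ]
    req = service_request({"cpu": 4, "deadline": 30.0})
    target = service_distribution(clouds, req)
    print(service_allocation(target, req))   # -> job allocated to cloud-B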

    QoS-aware predictive workflow scheduling

    This research lays the foundation for QoS-aware predictive workflow scheduling. Its novel contributions open up prospects for future research in handling complex, large-scale workflow applications with high uncertainty and dynamism. The results from the proposed workflow scheduling algorithm show significant improvement in the performance and reliability of the workflow applications.

    Dynamic load balancing strategies in heterogeneous distributed system

    Distributed heterogeneous computing is being widely applied to a variety of large computational problems. These computational environments consist of multiple heterogeneous computing modules that interact with each other to solve the problem. Dynamic load balancing in a distributed computing system is desirable because it is a key to establishing dependability in a Heterogeneous Distributed Computing System (HDCS). The load balancing problem is an optimization problem with an exponential solution space. The complexity of dynamic load balancing increases with the size of an HDCS and becomes difficult to solve effectively. Solutions to this intractable problem are discussed under different algorithmic paradigms. The load submitted to an HDCS is assumed to be in the form of tasks. Dynamic allocation of n independent tasks to m computing nodes in a heterogeneous distributed computing system can be achieved through centralized or decentralized control. In the centralized approach, we formulate the load balancing problem, considering task and machine heterogeneity, as a linear programming problem that minimizes the makespan, i.e. the time by which all tasks complete execution. The load balancing problem in an HDCS aims to maintain a balanced allocation of tasks while using the computational resources. The system state changes over time as tasks arrive from the users; therefore, the heterogeneous distributed system is modeled as an M/M/m queue. The task model is represented either as a consistent or an inconsistent expected time to compute (ETC) matrix. A batch-mode heuristic has been used to design dynamic load balancing algorithms for heterogeneous distributed computing systems with four different types of machine heterogeneity. A number of experiments have been conducted to study the performance of the load balancing algorithms with three different arrival rates for the tasks. Better performance of the algorithms is observed with increasing heterogeneity in the HDCS. A new codification scheme suitable for simulated annealing and genetic algorithms has been introduced to design dynamic load balancing algorithms for HDCS. These stochastic iterative load balancing algorithms use a sliding-window technique to select a batch of tasks and allocate them to the computing nodes in the HDCS. The proposed dynamic genetic-algorithm-based load balancer has been found to be effective, especially in the case of a large number of tasks.
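
    As a concrete illustration of a batch-mode heuristic over an ETC matrix, the sketch below implements the classical min-min heuristic in Python; it is a standard textbook baseline rather than the dissertation's own algorithm, and the ETC values are made up.

    # Hypothetical sketch: batch-mode load balancing over an ETC matrix using
    # the classical min-min heuristic. ETC[i][j] is the expected time to
    # compute task i on machine j.
    def min_min(etc):
        n_tasks, n_machines = len(etc), len(etc[0])
        ready = [0.0] * n_machines          # machine-available times
        unassigned = set(range(n_tasks))
        schedule = {}
        while unassigned:
            # For each unassigned task, find its earliest possible completion.
            task, machine, finish = min(
                ((t, m, ready[m] + etc[t][m])
                 for t in unassigned for m in range(n_machines)),
                key=lambda x: x[2],
            )
            schedule[task] = machine
            ready[machine] = finish
            unassigned.remove(task)
        return schedule, max(ready)         # assignment and resulting makespan

    etc = [[4.0, 8.0], [3.0, 2.0], [6.0, 5.0]]   # 3 tasks x 2 heterogeneous machines
    assignment, makespan = min_min(etc)
    print(assignment, makespan)                   # {1: 1, 0: 0, 2: 1} 7.0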

    Virtual Organization Clusters: Self-Provisioned Clouds on the Grid

    Virtual Organization Clusters (VOCs) provide a novel architecture for overlaying dedicated cluster systems on existing grid infrastructures. VOCs provide customized, homogeneous execution environments on a per-Virtual Organization basis, without the cost of physical cluster construction or the overhead of per-job containers. Administrative access and overlay network capabilities are granted to Virtual Organizations (VOs) that choose to implement VOC technology, while the system remains completely transparent to end users and non-participating VOs. Unlike alternative systems that require explicit leases, VOCs are autonomically self-provisioned according to configurable usage policies. As a grid computing architecture, VOCs are designed to be technology agnostic and are implementable by any combination of software and services that follows the Virtual Organization Cluster Model. As demonstrated through simulation testing and evaluation of an implemented prototype, VOCs are a viable mechanism for increasing end-user job compatibility on grid sites. On existing production grids, where jobs are frequently submitted to a small subset of sites and thus experience high queuing delays relative to average job length, the grid-wide addition of VOCs does not adversely affect mean job sojourn time. By load-balancing jobs among grid sites, VOCs can reduce the total amount of queuing on a grid to a level sufficient to counteract the performance overhead introduced by virtualization

    Information fusion architectures for security and resource management in cyber physical systems

    Data acquisition through sensors is crucial in determining the operability of the observed physical entity. Cyber Physical Systems (CPSs) are an example of distributed systems in which sensors embedded into the physical system are used for sensing and data acquisition. CPSs are a collaboration between physical and computational cyber components. The control decisions sent back from the computational cyber components to the actuators on the physical components close the feedback loop of the CPS. Since this feedback is based solely on the data collected through the embedded sensors, information acquisition from the data plays a vital role in determining the operational stability of the CPS. The data collection process may be hindered by disturbances such as system faults, noise and security attacks. Hence, simple data acquisition techniques will not suffice, as an accurate system representation cannot be obtained. Therefore, more powerful methods of inferring information from collected data, such as Information Fusion, have to be used. Information fusion is analogous to the cognitive process used by humans to continuously integrate data from their senses to make inferences about their environment. Data from the sensors are combined using techniques drawn from several disciplines such as Adaptive Filtering, Machine Learning and Pattern Recognition. Decisions made from such combinations of data form the crux of information fusion and differentiate it from a flat-structured data aggregation. In this dissertation, multi-layered information fusion models are used to develop automated decision-making architectures that serve security and resource management requirements in Cyber Physical Systems.
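
    As a minimal, hedged example of sensor-level fusion (a basic building block from adaptive filtering, far simpler than the multi-layered architectures developed in the dissertation), the Python sketch below combines redundant noisy readings by inverse-variance weighting; the sensor values and variances are invented.

    # Minimal illustration of sensor-level information fusion: combine
    # redundant noisy readings of the same physical quantity by
    # inverse-variance weighting. A textbook building block, not the
    # dissertation's architecture.
    def fuse(readings):
        """readings: list of (value, variance) pairs from independent sensors."""
        weights = [1.0 / var for _, var in readings]
        total = sum(weights)
        value = sum(w * v for w, (v, _) in zip(weights, readings)) / total
        variance = 1.0 / total            # fused estimate is less uncertain
        return value, variance

    # Three temperature sensors observing the same plant component.
    print(fuse([(70.2, 0.5), (69.8, 0.5), (71.5, 2.0)]))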

    Hybrid meta-heuristic algorithms for independent job scheduling in grid computing

    The term 'grid computing' describes an infrastructure that connects geographically distributed computers and heterogeneous platforms owned by multiple organizations, allowing their computational power, storage capabilities and other resources to be selected and shared. The job scheduling problem is recognized as one of the most important and challenging issues in grid computing environments. This paper proposes two strongly coupled hybrid meta-heuristic schedulers. The first combines Ant Colony Optimisation and Variable Neighbourhood Search, in which the former acts as the primary algorithm and, during its execution, calls the latter as a supporting algorithm; the second merges the Genetic Algorithm with Variable Neighbourhood Search in the same fashion. Several experiments were carried out to analyse the performance of the proposed schedulers in terms of minimizing the makespan using well-known benchmarks. The experiments show that the proposed schedulers achieved impressive results compared to other approaches selected from the literature.
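
    A hedged Python skeleton of the hybridisation pattern described above, a Genetic Algorithm that calls Variable Neighbourhood Search as a supporting local-search step on candidate schedules, is shown below; the operators, neighbourhood sizes and parameters are placeholders rather than the paper's exact design.

    # Hypothetical skeleton: GA as primary algorithm, VNS as supporting
    # local search, minimizing makespan over an ETC matrix. Placeholders only.
    import random

    def makespan(schedule, etc):
        """schedule[i] = machine assigned to task i; etc[i][m] = run time."""
        loads = [0.0] * len(etc[0])
        for task, machine in enumerate(schedule):
            loads[machine] += etc[task][machine]
        return max(loads)

    def vns(schedule, etc, k_max=2, iters=50):
        """Tiny VNS: shake within growing neighbourhoods, keep improvements."""
        best, k = schedule[:], 1
        for _ in range(iters):
            cand = best[:]
            for _ in range(k):                      # reassign k random tasks
                cand[random.randrange(len(cand))] = random.randrange(len(etc[0]))
            if makespan(cand, etc) < makespan(best, etc):
                best, k = cand, 1
            else:
                k = min(k + 1, k_max)
        return best

    def ga_vns(etc, pop_size=20, generations=30):
        n_tasks, n_machines = len(etc), len(etc[0])
        pop = [[random.randrange(n_machines) for _ in range(n_tasks)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda s: makespan(s, etc))
            parents = pop[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n_tasks)
                child = a[:cut] + b[cut:]           # one-point crossover
                children.append(vns(child, etc))    # VNS as supporting algorithm
            pop = parents + children
        return min(pop, key=lambda s: makespan(s, etc))

    etc = [[random.uniform(1, 10) for _ in range(4)] for _ in range(12)]
    print(round(makespan(ga_vns(etc), etc), 2))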

    A Framework for Approximate Optimization of BoT Application Deployment in Hybrid Cloud Environment

    We adopt a systematic approach to investigate the efficiency of near-optimal deployment of large-scale CPU-intensive Bag-of-Tasks applications running on cloud resources with non-proportional cost-to-performance ratios. Our analytical solutions apply whether the running time of the given application is known or unknown, and aim to optimize the user's utility by choosing the most desirable tradeoff between the makespan and the total incurred expense. We propose a schema that provides a near-optimal deployment of a BoT application with respect to users' preferences. Our approach is to provide the user with a set of Pareto-optimal solutions, from which she may select one of the possible scheduling points based on her internal utility function. Our framework can also cope with uncertainty in the tasks' execution times, using two methods. First, an estimation method based on Monte Carlo sampling, called the AA algorithm, is presented; it uses the minimum possible number of samples to predict the average task running time. Second, assuming access to code analysers, code profiling or other estimation tools, a hybrid method is presented that evaluates the accuracy of each estimation tool over certain time intervals in order to improve resource allocation decisions. We propose approximate deployment strategies that run on a hybrid cloud. In essence, the proposed strategies first determine either an estimated or an exact optimal schema based on the information provided from the user's side and on environmental parameters. Then, dynamic methods are exploited to assign tasks to resources so as to approach the optimal schema as closely as possible, using two methods: a fast yet simple method based on the First Fit Decreasing algorithm, and a more complex approach based on an approximate solution of the problem transformed into a subset-sum problem. Extensive experimental results conducted on a hybrid cloud platform confirm that our framework can deliver a near-optimal solution respecting the user's utility function.
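
    As a hedged sketch of the simpler of the two dynamic assignment methods mentioned above, the following Python snippet places BoT tasks onto provisioned VMs with a First Fit Decreasing policy; the per-VM time budget and task run times are invented for illustration.

    # Hypothetical sketch: First Fit Decreasing (FFD) placement of BoT tasks
    # onto provisioned VMs, each with a target time budget per billing period.
    def ffd_assign(task_times, vm_budget, n_vms):
        """Sort tasks longest-first, place each on the first VM with room."""
        loads = [0.0] * n_vms
        placement = {}
        for task, runtime in sorted(enumerate(task_times), key=lambda x: -x[1]):
            for vm in range(n_vms):
                if loads[vm] + runtime <= vm_budget:
                    loads[vm] += runtime
                    placement[task] = vm
                    break
            else:
                placement[task] = None   # would require provisioning another VM
        return placement, loads

    tasks = [55, 40, 30, 25, 20, 10]     # estimated task run times (minutes)
    placement, loads = ffd_assign(tasks, vm_budget=60.0, n_vms=3)
    print(placement)                      # task -> VM index (None = overflow)
    print(loads)                          # per-VM load vs. the 60-minute budget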
