
    EDZL Scheduling for Large-Scale Cyber Service on Real-Time Cloud

Sponsorship: IEEE Computer Society Technical Committee on Business Informatics Systems. Conference type: international. Conference dates: 2011-12-12 to 2011-12-14. Book type: electronic edition. Call for papers: yes. Conference location: Irvine, US.

    Cloud Resource Provisioning to Extend the Capacity of Local Resources in the Presence of Failures

In this paper, we investigate Cloud computing resource provisioning to extend the computing capacity of local clusters in the presence of failures. We consider three steps in resource provisioning: resource brokering, dispatch sequences, and scheduling. The proposed brokering strategy is based on the stochastic analysis of routing in distributed parallel queues and takes into account the response time of the Cloud provider and the local cluster, as well as the computing cost of both sides. Moreover, we propose dispatching with probabilistic and deterministic sequences to redirect requests to the resource providers. We also incorporate checkpointing into some well-known scheduling algorithms to provide a fault-tolerant environment. We propose two cost-aware and failure-aware provisioning policies that can be utilized by an organization that operates a cluster managed by virtual machine technology and seeks to use resources from a public Cloud provider. Simulation results demonstrate that the proposed policies improve the response time of users' requests by a factor of 4.10 under a moderate load, with a limited cost on a public Cloud.
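As a rough illustration of the probabilistic dispatching idea described in this abstract, the sketch below routes requests either to the local cluster or to a public cloud with a probability derived from assumed response times and per-request costs. The weighting scheme and all parameter names are illustrative assumptions, not the paper's actual brokering strategy.

```python
import random

# Illustrative sketch (not the paper's policy): a probabilistic dispatch
# sequence that routes incoming requests to the local cluster or to a public
# cloud, weighting expected response time against per-request cost.

def routing_probability(local_rt, cloud_rt, local_cost, cloud_cost, cost_weight=0.3):
    """Return the probability of sending a request to the public cloud (assumed weighting)."""
    # Lower score = more attractive target; cost_weight is an assumption.
    local_score = (1 - cost_weight) * local_rt + cost_weight * local_cost
    cloud_score = (1 - cost_weight) * cloud_rt + cost_weight * cloud_cost
    # The less attractive the local cluster, the more traffic goes to the cloud.
    return local_score / (local_score + cloud_score)

def dispatch(requests, local_rt, cloud_rt, local_cost, cloud_cost):
    """Split a batch of requests between 'local' and 'cloud' targets."""
    p_cloud = routing_probability(local_rt, cloud_rt, local_cost, cloud_cost)
    return [("cloud" if random.random() < p_cloud else "local", r) for r in requests]

if __name__ == "__main__":
    batch = [f"req-{i}" for i in range(5)]
    print(dispatch(batch, local_rt=120.0, cloud_rt=60.0, local_cost=0.0, cloud_cost=0.12))
```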

    A flexible simulation framework for processor scheduling algorithms in multicore systems.

In traditional uniprocessor systems, processor scheduling is the responsibility of the operating system. In high performance computing (HPC) domains that largely involve parallel processors, the responsibility for scheduling is usually left to the applications. So far, parallel computing has been confined to a small group of specialized HPC users; in this context, the hardware, operating system, and applications have mostly been designed independently, with minimal interaction. As multicore processors become the norm, parallel programming is expected to emerge as the mainstream software development approach. This new trend poses several challenges, including performance, power management, system utilization, and predictable response. Such demands are hard to meet without cooperation from the hardware, the operating system, and the applications. In particular, efficient scheduling of cores to application threads is fundamental to assuring the characteristics mentioned above. We believe the operating system needs to take a larger responsibility in ensuring efficient multicore scheduling of application threads. To study the performance of new scheduling algorithms for future multicore systems with hundreds or thousands of cores, we need a flexible scheduling simulation testbed. Designing such a multicore scheduling simulation testbed and illustrating its functionality by studying the well-known Linux and Solaris scheduling algorithms are the main contributions of this thesis. In addition to studying the Linux and Solaris scheduling algorithms, we demonstrate the power, flexibility, and use of the proposed scheduling testbed by simulating two popular gang scheduling algorithms: adaptive first-come-first-served (AFCFS) and largest gang first served (LGFS). As a result of this performance study, we designed a new gang scheduling algorithm and compared its performance with AFCFS. The proposed scheduling simulation testbed is developed in Java and is expected to be released for public use. The original print copy of this thesis may be available here: http://wizard.unbc.ca/record=b180562
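The two gang scheduling policies named in this abstract, AFCFS and LGFS, can be sketched as simple selection rules. The minimal sketch below assumes a gang is just a (job_id, thread_count) pair whose threads must be co-scheduled; it is not the thesis's Java testbed code, and LGFS is shown here in a non-blocking variant that only considers gangs that fit.

```python
from collections import deque

# Minimal sketch of the two gang-selection rules (assumed simplification:
# a gang is (job_id, threads) and all of its threads must run together).

def afcfs_pick(queue, free_cores):
    """Adaptive FCFS: earliest-arrived gang that fits in the free cores."""
    for gang in queue:
        if gang[1] <= free_cores:
            return gang
    return None

def lgfs_pick(queue, free_cores):
    """Largest Gang First Served (non-blocking variant): biggest waiting gang that fits."""
    fitting = [g for g in queue if g[1] <= free_cores]
    return max(fitting, key=lambda g: g[1]) if fitting else None

if __name__ == "__main__":
    waiting = deque([("A", 8), ("B", 2), ("C", 4)])
    print(afcfs_pick(waiting, free_cores=4))  # ('B', 2): first gang that fits
    print(lgfs_pick(waiting, free_cores=4))   # ('C', 4): largest gang that fits
```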

    Scientific Workflow Scheduling for Cloud Computing Environments

The scheduling of workflow applications consists of assigning their tasks to computing resources to fulfill a final goal such as minimizing total workflow execution time. For this reason, workflow scheduling plays a crucial role in efficiently running experiments. Workflows often have many discrete tasks, and the number of possible task distributions, and hence the time required to evaluate each configuration, quickly becomes prohibitively large. A proper solution to the scheduling problem requires the analysis of tasks and resources, production of an accurate environment model and, most importantly, the adaptation of optimization techniques. This study is a major step toward solving the scheduling problem by not only addressing these issues but also optimizing two of the most important variables: runtime and monetary cost. This study proposes three scheduling algorithms that address key issues in the scheduling problem. First, it unveils BaRRS, a scheduling solution that exploits parallelism and optimizes runtime and monetary cost. Second, it proposes GA-ETI, a scheduler capable of returning the number of resources that a given workflow requires for execution. Finally, it describes PSO-DS, a scheduler based on particle swarm optimization that efficiently schedules large workflows. To test the algorithms, five well-known benchmarks representing different scientific applications are selected. The experiments found that the proposed algorithms substantially improve efficiency, reducing makespan by 11% to 78%. The proposed frameworks open a path toward a complete system that encompasses the capabilities of a workflow manager, a scheduler, and a cloud resource broker, in order to offer scientists a single tool to run computationally intensive applications.
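As a hedged illustration of the particle-swarm approach behind a scheduler such as PSO-DS (the actual PSO-DS algorithm is not reproduced here), the sketch below encodes each particle as a task-to-VM assignment and minimizes an assumed makespan model that ignores task dependencies for brevity.

```python
import random

# Sketch of PSO-based task-to-VM scheduling. Assumptions: independent tasks,
# fitness = makespan of per-VM load, continuous positions rounded to VM indices.

def makespan(assignment, task_len, vm_speed):
    load = [0.0] * len(vm_speed)
    for task, vm in enumerate(assignment):
        load[vm] += task_len[task] / vm_speed[vm]
    return max(load)

def pso_schedule(task_len, vm_speed, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    n_tasks, n_vms = len(task_len), len(vm_speed)
    decode = lambda p: [min(n_vms - 1, max(0, round(x))) for x in p]
    fit = lambda p: makespan(decode(p), task_len, vm_speed)
    pos = [[random.uniform(0, n_vms - 1) for _ in range(n_tasks)] for _ in range(particles)]
    vel = [[0.0] * n_tasks for _ in range(particles)]
    pbest = [list(p) for p in pos]
    gbest = list(min(pbest, key=fit))
    for _ in range(iters):
        for i in range(particles):
            for d in range(n_tasks):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fit(pos[i]) < fit(pbest[i]):
                pbest[i] = list(pos[i])
                if fit(pbest[i]) < fit(gbest):
                    gbest = list(pbest[i])
    return decode(gbest)

if __name__ == "__main__":
    # Five tasks, two VMs (the second twice as fast); prints a VM index per task.
    print(pso_schedule(task_len=[4, 8, 2, 6, 5], vm_speed=[1.0, 2.0]))
```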

    Dynamic Pricing Strategy for Maximizing Cloud Revenue

The unexpected growth, flexibility, and dynamism of information technology (IT) over the last decade has radically altered everyday life, and this boom continues. Many nations have been competing to be at the forefront of this technological revolution, embracing the opportunities created by these advances in order to boost economic growth and improve everyday life. Cloud computing is one of the most promising achievements of these advances. However, like any new industry, it faces many challenges and barriers. Managing and maximizing the business revenue of such a complex system is of paramount importance. The wealth of the cloud portfolio comes from the proceeds of three main services: Infrastructure as a Service (IaaS), Software as a Service (SaaS), and Platform as a Service (PaaS). The IaaS cloud industry, which relies on leasing virtual machines (VMs), accounts for a significant portion of this business value. Therefore, many enterprises strive to capture the largest share by introducing different pricing models that satisfy not only customers' demands but also, essentially, providers' requirements. Indeed, one of the most challenging requirements is finding the dynamic equilibrium between two conflicting phenomena: underutilization and surging congestion. Spot instances have been presented as an elegant solution to these situations, aiming to gain more profit. However, previous studies of recent spot pricing schemes reveal an artificial pricing policy that does not comply with the dynamic nature of these phenomena. In this thesis, we investigate dynamic pricing of stagnant (idle) resources so as to maximize cloud revenue. To achieve this, we identify the necessities and objectives that motivate cloud providers to adopt a dynamic pricing model, analyze the dynamic pricing strategies adopted by real cloud enterprises, and create a dynamic pricing model that IaaS cloud providers can use strategically to increase marginal profit while overcoming technical barriers. First, we formulate the maximum expected reward under discrete finite-horizon Markovian decisions and characterize the model's properties under optimal control conditions. The initial approach manages one class but multiple fares of virtual machines. For this purpose, the proposed approach leverages Markov decision processes, a number of properties under optimal control conditions that characterize the model's behaviour, and approximate stochastic dynamic programming using linear programming to create a practical model. Second, this initial work leads us to explore the most sensitive factors that drive price dynamics and to mitigate the high dimensionality of such a large-scale problem through column generation; more specifically, we employ a decomposition approach. Third, we observe that most previous work tackles only one class of virtual machines, so we extend our study to cover multiple classes. Intuitively, a multi-class dynamic pricing model is more efficient but also more challenging in practice. Consequently, our dynamic pricing approach can scale prices up or down efficiently and effectively according to stagnant resources and load thresholds, aiming to maximize IaaS cloud revenue.
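To make the finite-horizon Markovian formulation mentioned above more concrete, the toy sketch below solves a much simpler pricing problem by backward induction over remaining periods and idle capacity. The price grid and the price-dependent demand curve are illustrative assumptions, not the thesis model.

```python
# Toy finite-horizon dynamic-programming sketch of spot-price selection for
# idle VM capacity. Assumptions: one VM class, a fixed price grid, and a
# linear demand curve; none of these come from the thesis itself.

PRICES = [0.05, 0.10, 0.20]          # candidate prices per period (assumed)

def sale_prob(price):
    """Assumed demand curve: cheaper prices sell with higher probability."""
    return max(0.0, 1.0 - 4.0 * price)

def optimal_prices(horizon, capacity):
    """Backward induction: V[t][c] = max expected revenue with c idle VMs and t periods left."""
    V = [[0.0] * (capacity + 1) for _ in range(horizon + 1)]
    policy = [[None] * (capacity + 1) for _ in range(horizon)]
    for t in range(1, horizon + 1):
        for c in range(capacity + 1):
            best, best_p = V[t - 1][c], None   # default: hold capacity this period
            if c > 0:
                for p in PRICES:
                    q = sale_prob(p)
                    value = q * (p + V[t - 1][c - 1]) + (1 - q) * V[t - 1][c]
                    if value > best:
                        best, best_p = value, p
            V[t][c] = best
            if c > 0:
                policy[t - 1][c] = best_p      # None means "do not offer" this period
    return V, policy

if __name__ == "__main__":
    V, policy = optimal_prices(horizon=5, capacity=3)
    print("Expected revenue:", round(V[5][3], 3))
    print("Price with 5 periods left and 3 idle VMs:", policy[4][3])
```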