    Dynamic Resource Scheduling in Cloud Data Center

    Cloud infrastructure provides a wide range of resources and services to companies and organizations, such as computation, storage, databases, and platforms. These resources and services are used to power up and scale out tenants' workloads and meet their specified service level agreements (SLAs). Given the varied kinds and characteristics of these workloads, an important problem for a cloud provider is how to allocate its resources among the requests. An efficient resource scheduling scheme should benefit both the cloud provider and the cloud users. For the cloud provider, the goal of the scheduling algorithm is to improve the throughput and the job completion rate of the cloud data center under stress conditions, or to use fewer physical machines to support all incoming jobs under overprovisioning conditions. For the cloud users, the goal of the scheduling algorithm is to guarantee the SLAs and satisfy other job-specific requirements. Furthermore, since jobs arrive and leave a cloud data center very frequently, it is critical to make scheduling decisions within a reasonable time. To improve the efficiency of the cloud provider, the scheduling algorithm needs to jointly reduce the inter-VM and intra-VM fragments, which means considering the scheduling problem with regard to both the cloud provider and the users. This thesis addresses the cloud scheduling problem from both the cloud provider and the user side. Cloud data centers typically require tenants to specify the resource demands for the virtual machines (VMs) they create using a set of pre-defined, fixed configurations, which eases the resource allocation problem. However, this approach can lead to low resource utilization of cloud data centers, as tenants are obliged to conservatively predict the maximum resource demand of their applications. In addition, users are poorly positioned to estimate VM demands without knowing the multiplexing techniques of the cloud provider. The cloud provider, on the other hand, is better placed to select the VM sets for the submitted applications. The scheduling problem is even more severe for mobile users who want to use the cloud infrastructure to extend their computation and battery capacity, where the response and scheduling time budgets are tight and the transmission channel between mobile users and the cloudlet is highly variable. This thesis investigates the resource scheduling problem for both wired and mobile users in the cloud environment. The proposed resource allocation problem is studied through problem modeling, trace analysis, algorithm design, and simulation. The first aspect this thesis addresses is the VM scheduling problem. Instead of static VM scheduling, this thesis proposes a finer-grained dynamic resource allocation and scheduling algorithm that can substantially improve the utilization of data center resources by increasing the number of jobs accommodated and, correspondingly, the cloud data center provider's revenue. The second problem this thesis addresses is the joint VM set selection and scheduling problem. The basic idea is that there may exist multiple VM sets that can support an application's resource demand, and by carefully selecting an appropriate VM set, the utilization of the data center can be improved without violating the application's SLA.
The third problem addressed by the thesis is the mobile cloud resource scheduling problem, where the key issue is to find the most energy- and time-efficient way of allocating components of the target application given the current network condition and cloud resource usage status. The main contributions of this thesis are the following. For the dynamic real-time scheduling problem, a constraint programming solution is proposed to schedule the long jobs, and simple heuristics are used to quickly, yet quite accurately, schedule the short jobs. Trace-driven simulations show that the overall revenue for the cloud provider can be improved by 30% over traditional static VM resource allocation based on coarse-granularity specifications. For the joint VM selection and scheduling problem, this thesis proposes an optimal online VM set selection scheme that satisfies the user resource demand and minimizes the number of activated physical machines. Trace-driven simulations show around an 18% improvement in the overall utility of the provider compared to the Bazaar-I approach, and more than 25% compared to best-fit and first-fit. For the mobile cloud scheduling problem, a reservation-based joint code partition and resource scheduling algorithm is proposed by conservatively estimating the minimal resource demand, and a polynomial-time code partition algorithm is proposed to obtain the corresponding partition.
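
    To make the baselines mentioned above more concrete, the sketch below shows first-fit and best-fit VM placement, the two packing heuristics the thesis compares against. It is a minimal illustration only; the class and function names are assumptions for this sketch and are not taken from the thesis.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class PhysicalMachine:
    """A host with fixed CPU/RAM capacity and a running tally of placed VM demand."""
    cpu: float
    ram: float
    used_cpu: float = 0.0
    used_ram: float = 0.0

    def fits(self, cpu: float, ram: float) -> bool:
        return self.used_cpu + cpu <= self.cpu and self.used_ram + ram <= self.ram

    def place(self, cpu: float, ram: float) -> None:
        self.used_cpu += cpu
        self.used_ram += ram

    def slack(self) -> float:
        # Remaining normalized capacity; smaller slack means a tighter fit.
        return (self.cpu - self.used_cpu) / self.cpu + (self.ram - self.used_ram) / self.ram


def first_fit(hosts: List[PhysicalMachine], cpu: float, ram: float) -> Optional[PhysicalMachine]:
    """Place the VM on the first host that can hold it."""
    for h in hosts:
        if h.fits(cpu, ram):
            h.place(cpu, ram)
            return h
    return None


def best_fit(hosts: List[PhysicalMachine], cpu: float, ram: float) -> Optional[PhysicalMachine]:
    """Place the VM on the feasible host with the least remaining slack."""
    candidates = [h for h in hosts if h.fits(cpu, ram)]
    if not candidates:
        return None
    tightest = min(candidates, key=PhysicalMachine.slack)
    tightest.place(cpu, ram)
    return tightest


if __name__ == "__main__":
    hosts = [PhysicalMachine(cpu=16, ram=64) for _ in range(4)]
    for cpu, ram in [(4, 8), (8, 32), (2, 4), (12, 48)]:
        host = best_fit(hosts, cpu, ram)
        print(f"VM({cpu} vCPU, {ram} GB) -> {'placed' if host else 'rejected'}")
```

    First-fit is faster but tends to leave capacity fragmented across hosts; best-fit packs hosts tighter at the cost of scanning all feasible hosts. Both are one-shot decisions, which is the limitation the thesis's dynamic, finer-grained scheduling aims to address.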

    SAMI: Service-Based Arbitrated Multi-Tier Infrastructure for Mobile Cloud Computing

    Mobile Cloud Computing (MCC) is a state-of-the-art mobile computing technology that aims to alleviate the resource poverty of mobile devices. Recently, several approaches and techniques have been proposed to augment mobile devices by leveraging cloud computing. However, long WAN latency and trust are still two major issues in MCC that hinder its vision. In this paper, we analyze MCC and discuss its issues. We leverage Service Oriented Architecture (SOA) to propose an arbitrated multi-tier infrastructure model named SAMI for MCC. Our architecture consists of three major layers, namely SOA, arbitrator, and infrastructure. The main strength of this architecture lies in its multi-tier infrastructure layer, which leverages infrastructures from three main sources: Clouds, Mobile Network Operators (MNOs), and MNOs' authorized dealers. On top of the infrastructure layer, an arbitrator layer is designed to classify Services and allocate them suitable resources based on several metrics such as resource requirement, latency, and security. Utilizing SAMI facilitates the development and deployment of service-based, platform-neutral mobile applications. Comment: 6 full pages, accepted for publication in MobiCC 2012: IEEE Workshop on Mobile Cloud Computing, Beijing, China
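
    To illustrate the arbitrator's role, here is a minimal, hypothetical sketch of matching a service to one of the three infrastructure tiers (cloud, MNO, MNO-authorized dealer) using the metrics the paper names. The tier attributes, thresholds, and function names are assumptions made for this sketch, not part of SAMI itself.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Tier:
    """One infrastructure tier with coarse latency/capacity/security attributes."""
    name: str
    latency_ms: float    # typical round-trip latency to the mobile user
    capacity: float      # available compute, in arbitrary units
    security_level: int  # 1 (lowest) .. 3 (highest)


@dataclass
class Service:
    """A service request with its resource and policy requirements."""
    name: str
    demand: float
    max_latency_ms: float
    min_security: int


def arbitrate(service: Service, tiers: List[Tier]) -> Optional[Tier]:
    """Pick the feasible tier with the lowest latency (illustrative policy only)."""
    feasible = [
        t for t in tiers
        if t.capacity >= service.demand
        and t.latency_ms <= service.max_latency_ms
        and t.security_level >= service.min_security
    ]
    return min(feasible, key=lambda t: t.latency_ms) if feasible else None


if __name__ == "__main__":
    tiers = [
        Tier("cloud", latency_ms=120, capacity=1000, security_level=2),
        Tier("mno", latency_ms=40, capacity=100, security_level=3),
        Tier("dealer", latency_ms=15, capacity=10, security_level=1),
    ]
    svc = Service("face-recognition", demand=8, max_latency_ms=60, min_security=2)
    chosen = arbitrate(svc, tiers)
    print(chosen.name if chosen else "no feasible tier")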

    Resource provisioning in Science Clouds: Requirements and challenges

    Cloud computing has permeated the information technology industry in the last few years, and it is now emerging in scientific environments. Science user communities demand a broad range of computing power to satisfy the needs of high-performance applications, traditionally served by local clusters, high-performance computing systems, and computing grids. Different computational models produce different workloads, and the cloud is already considered a promising paradigm for supporting them. The scheduling and allocation of resources is always a challenging matter in any form of computation, and clouds are not an exception. Science applications have unique features that differentiate their workloads; hence, their requirements have to be taken into consideration when building a Science Cloud. This paper discusses the main scheduling and resource allocation challenges for any Infrastructure as a Service provider supporting scientific applications.