
    Dynamic Resource Scheduling in Cloud Data Center

    Cloud infrastructure provides a wide range of resources and services to companies and organizations, such as computation, storage, databases, and platforms. These resources and services are used to power up and scale out tenants' workloads and meet their specified service level agreements (SLAs). Given the varied kinds and characteristics of its workloads, an important problem for the cloud provider is how to allocate its resources among the requests. An efficient resource scheduling scheme should benefit both the cloud provider and the cloud users. For the cloud provider, the goal of the scheduling algorithm is to improve the throughput and the job completion rate of the cloud data center under stress conditions, or to use fewer physical machines to support all incoming jobs under overprovisioning conditions. For the cloud users, the goal of the scheduling algorithm is to guarantee the SLAs and satisfy other job-specific requirements. Furthermore, since jobs arrive and leave a cloud data center very frequently, it is critical to make each scheduling decision within a reasonable time. To improve the efficiency of the cloud provider, the scheduling algorithm needs to jointly reduce the inter-VM and intra-VM fragments, which means considering the scheduling problem with regard to both the cloud provider and the users. This thesis addresses the cloud scheduling problem from both the cloud provider and the user side. Cloud data centers typically require tenants to specify the resource demands for the virtual machines (VMs) they create using a set of pre-defined, fixed configurations, to ease the resource allocation problem. However, this approach can lead to low resource utilization of cloud data centers, as tenants are obligated to conservatively predict the maximum resource demand of their applications. In addition, users are in a poor position to estimate the VM demands without knowing the multiplexing techniques of the cloud provider. The cloud provider, on the other hand, is better placed to select the VM sets for the submitted applications. The scheduling problem is even more severe for mobile users who want to use the cloud infrastructure to extend their computation and battery capacity, where the response and scheduling time budget is tight and the transmission channel between mobile users and the cloudlet is highly variable. This thesis investigates the resource scheduling problem for both wired and mobile users in the cloud environment. The proposed resource allocation problem is studied through problem modeling, trace analysis, algorithm design, and simulation. The first aspect this thesis addresses is the VM scheduling problem. Instead of static VM scheduling, this thesis proposes a finer-grained dynamic resource allocation and scheduling algorithm that can substantially improve the utilization of data center resources by increasing the number of jobs accommodated and, correspondingly, the cloud data center provider's revenue. The second problem this thesis addresses is the joint VM set selection and scheduling problem. The basic idea is that there may exist multiple VM sets that can support an application's resource demand, and by carefully selecting an appropriate VM set, the utilization of the data center can be improved without violating the application's SLA.
The third problem addressed by the thesis is the mobile cloud resource scheduling problem, where the key issue is to find the most energy- and time-efficient way of allocating components of the target application given the current network condition and cloud resource usage status. The main contributions of this thesis are the following. For the dynamic real-time scheduling problem, a constraint programming solution is proposed to schedule the long jobs, and simple heuristics are used to quickly, yet quite accurately, schedule the short jobs. Trace-driven simulations show that the overall revenue for the cloud provider can be improved by 30% over traditional static VM resource allocation based on coarse-granularity specifications. For the joint VM selection and scheduling problem, this thesis proposes an optimal online VM set selection scheme that satisfies the user resource demand and minimizes the number of activated physical machines. Trace-driven simulation shows around 18% improvement in the overall utility of the provider compared to the Bazaar-I approach, and more than 25% compared to best-fit and first-fit. For the mobile cloud scheduling problem, a reservation-based joint code partition and resource scheduling algorithm is proposed by conservatively estimating the minimal resource demand, and a polynomial-time code partition algorithm is proposed to obtain the corresponding partition.
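    The long-job/short-job split described above can be illustrated with a minimal sketch: short jobs take a fast first-fit path, while long jobs are deferred to a slower, more exact scheduler (constraint programming in the thesis). The two-resource machine model, the duration threshold, and all names below are illustrative assumptions, not the thesis's actual algorithm.

```python
# Minimal sketch of the dispatch split: fast heuristic for short jobs,
# exact (placeholder) scheduler for long jobs. All parameters are assumed.

LONG_JOB_THRESHOLD = 3600  # seconds; assumed cutoff between short and long jobs

def first_fit(job, machines):
    """Place `job` on the first machine with enough residual CPU and memory."""
    for m in machines:
        if m["cpu"] >= job["cpu"] and m["mem"] >= job["mem"]:
            m["cpu"] -= job["cpu"]
            m["mem"] -= job["mem"]
            return m["id"]
    return None  # rejected: no machine can host the job right now

def schedule(job, machines, cp_solver):
    if job["duration"] < LONG_JOB_THRESHOLD:
        return first_fit(job, machines)   # fast path for short jobs
    return cp_solver(job, machines)       # exact path for long jobs

machines = [{"id": 0, "cpu": 16, "mem": 64}, {"id": 1, "cpu": 8, "mem": 32}]
# cp_solver would be a constraint-programming scheduler; reuse first_fit here
print(schedule({"cpu": 4, "mem": 8, "duration": 120}, machines, cp_solver=first_fit))
```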

    Evaluating Resilience of Electricity Distribution Networks via A Modification of Generalized Benders Decomposition Method

    This paper presents a computational approach to evaluating the resilience of electricity Distribution Networks (DNs) to cyber-physical failures. In our model, we consider an attacker who targets multiple DN components to maximize the loss of the DN operator. We consider two types of operator response: (i) coordinated emergency response; (ii) uncoordinated autonomous disconnects, which may lead to cascading failures. To evaluate resilience under response (i), we solve a Bilevel Mixed-Integer Second-Order Cone Program, which is computationally challenging due to mixed-integer variables in the inner problem and non-convex constraints. Our solution approach is based on the Generalized Benders Decomposition method, which achieves a reasonable tradeoff between computational time and solution accuracy. Our approach involves modifying the Benders cut based on structural insights into power flow over radial DNs. We evaluate DN resilience under response (ii) by sequentially computing autonomous component disconnects due to operating bound violations resulting from the initial attack and the potential cascading failures. Our approach helps estimate the gain in resilience under response (i) relative to (ii).
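    For readers unfamiliar with the method, the sketch below shows the generic Generalized Benders Decomposition loop the paper builds on: a master problem proposes the discrete decisions, a subproblem evaluates them and returns a cut, and the loop stops when the bounds meet. The two solver callables are assumed stand-ins for the paper's MISOCP components; the paper's structural modification of the Benders cut is not reproduced here.

```python
# Generic Benders decomposition skeleton (minimization form), a sketch only.
# `solve_master(cuts)` returns a candidate y and a lower bound from the
# relaxed master; `solve_subproblem(y)` returns the true cost of y and a
# new cut to add. Both are assumed to be supplied by the caller.

def benders(solve_master, solve_subproblem, tol=1e-4, max_iters=50):
    cuts = []                                   # accumulated Benders cuts
    lower, upper = float("-inf"), float("inf")
    best = None
    for _ in range(max_iters):
        y, lower = solve_master(cuts)           # relaxed master: lower bound
        cost, cut = solve_subproblem(y)         # fix y, solve inner problem
        if cost < upper:
            upper, best = cost, y               # new incumbent solution
        if upper - lower <= tol:
            break                               # bounds have converged
        cuts.append(cut)                        # tighten the master and repeat
    return best, lower, upper
```

    In the paper's setting, the master fixes the attack and response decisions and the subproblem is the resulting power-flow program; the modified cut exploits the radial structure of the DN to tighten the master more quickly.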

    Using Battery Storage for Peak Shaving and Frequency Regulation: Joint Optimization for Superlinear Gains

    We consider using a battery storage system simultaneously for peak shaving and frequency regulation through a joint optimization framework that captures battery degradation, operational constraints, and uncertainties in customer load and regulation signals. Under this framework, using real data, we show that the electricity bill of users can be reduced by up to 15%. Furthermore, we demonstrate that the saving from joint optimization is often larger than the sum of the optimal savings when the battery is used for the two individual applications. A simple threshold real-time algorithm is proposed that achieves this superlinear gain. Compared to prior works that focused on using battery storage systems for single applications, our results suggest that batteries can achieve much larger economic benefits than previously thought if they jointly provide multiple services.
    Comment: To appear in IEEE Transactions on Power Systems.
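    The peak-shaving half of such a threshold policy is easy to illustrate: discharge when load exceeds a threshold, recharge below it, within power and state-of-charge limits. The parameters below are made-up illustrations, and the paper's joint policy additionally tracks the frequency-regulation signal, which this sketch omits.

```python
# Threshold-based battery dispatch for peak shaving, a minimal sketch.
# Positive power means discharge. All numeric parameters are assumed.

def threshold_dispatch(load, soc, threshold, p_max, e_max, dt=0.25):
    """Return (battery_power_kW, new_soc_kWh) for one interval of length dt hours."""
    if load > threshold:
        # shave the peak, limited by power rating and stored energy
        p = min(load - threshold, p_max, soc / dt)
    else:
        # recharge toward full, limited by power rating and headroom
        p = -min(threshold - load, p_max, (e_max - soc) / dt)
    return p, soc - p * dt

soc = 5.0                                      # kWh currently stored
for load in [8, 12, 15, 9, 6]:                 # kW, 15-minute intervals
    p, soc = threshold_dispatch(load, soc, threshold=10, p_max=5, e_max=10)
    print(f"load={load:>2} kW  battery={p:+.1f} kW  soc={soc:.2f} kWh")
```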

    Multi-capacity bin packing with dependent items and its application to the packing of brokered workloads in virtualized environments

    Providing resource allocation with performance predictability guarantees is increasingly important in cloud platforms, especially for data-intensive applications, in which performance depends greatly on the available rates of data transfer between the various computing/storage hosts underlying the virtualized resources assigned to the application. Existing resource allocation solutions either assume that applications manage the data transfer between their virtualized resources, or that cloud providers manage their internal networking resources. With the increased prevalence of brokerage services in cloud platforms, there is a need for resource allocation solutions that provide predictability guarantees in settings in which neither application scheduling nor cloud provider resources can be managed or controlled by the broker. This paper addresses this problem: we define the Network-Constrained Packing (NCP) problem of finding the optimal mapping of brokered resources to applications with guaranteed performance predictability. We prove that NCP is NP-hard, and we define two special instances of the problem for which exact solutions can be found efficiently. We develop a greedy heuristic to solve the general instance of the NCP problem, and we evaluate its efficiency using simulations on various application workloads and network models.
    This work was done while the author was at Boston University. It was partially supported by NSF CISE awards #1430145, #1414119, #1239021, and #1012798.
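    The multi-capacity packing core of such a heuristic can be sketched as follows: items are resource-demand vectors, bins are capacity vectors, and each item goes into the first open bin that can hold it component-wise. This is only an illustration of greedy multi-capacity bin packing; the paper's NCP heuristic additionally respects network dependencies between items, which are omitted here.

```python
# Greedy multi-capacity (vector) bin packing, a minimal sketch.
# Item and bin dimensions (e.g. CPU, memory, bandwidth) are assumed.

def greedy_pack(items, capacity):
    """Pack demand vectors into the fewest bins this greedy rule finds."""
    bins = []  # each bin is a list of residual capacities per resource
    # placing the largest items first tends to reduce the bins opened
    for item in sorted(items, key=sum, reverse=True):
        for b in bins:
            if all(r >= d for r, d in zip(b, item)):
                for i, d in enumerate(item):
                    b[i] -= d
                break
        else:  # no existing bin fits: open a new one
            bins.append([c - d for c, d in zip(capacity, item)])
    return len(bins)

# items as (cpu, memory, bandwidth) demands; each bin has capacity (8, 16, 10)
print(greedy_pack([(4, 8, 2), (2, 4, 6), (6, 8, 3), (1, 2, 1)], (8, 16, 10)))
```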

    Harnessing Flexible and Reliable Demand Response Under Customer Uncertainties

    Demand response (DR) is a cost-effective and environmentally friendly approach to mitigating the uncertainties of renewable energy integration by taking advantage of the flexibility of customers' demands. However, existing DR programs suffer either from low participation, due to strict commitment requirements, or from unreliability, in the case of voluntary programs. In addition, capacity planning for energy storage/reserves is traditionally done separately from the design of the demand response program, which incurs inefficiencies. Moreover, customers often face high uncertainty in their costs of providing demand response, which is not well studied in the literature. This paper first models the problem of joint capacity planning and demand response program design as a stochastic optimization problem, which incorporates the uncertainties from renewable energy generation and customer power demands, as well as the customers' costs in providing DR. We propose online DR control policies based on the optimal structure of the offline solution. A distributed algorithm is then developed for implementing the control policies without efficiency loss. We further enhance the policy design by allowing flexibility in the commitment level. We perform numerical simulations based on real-world traces. The results demonstrate that the proposed algorithms can achieve near-optimal social costs, and significant social cost savings compared to baseline methods.
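    One simple way to read the joint planning problem is as a sample-average approximation: choose the reserve capacity that minimizes reserve cost plus the expected cost of covering residual shortfalls through DR. The prices, shortfall distribution, and candidate grid below are illustrative assumptions only, not the paper's model, which also designs the DR program itself.

```python
# Sample-average approximation for joint reserve sizing and DR usage,
# a minimal sketch. All prices and the shortfall distribution are assumed.

import random

random.seed(0)
# sampled renewable-shortfall scenarios in MWh (truncated at zero)
shortfall = [max(0.0, random.gauss(5, 3)) for _ in range(1000)]

C_RESERVE = 40.0   # $/MWh, assumed annualized cost of reserve capacity
C_DR = 120.0       # $/MWh, assumed expected customer cost of providing DR

def expected_cost(capacity):
    """Reserve cost plus expected DR cost for shortfall beyond the reserve."""
    dr_energy = sum(max(0.0, s - capacity) for s in shortfall) / len(shortfall)
    return C_RESERVE * capacity + C_DR * dr_energy

# grid search over candidate capacities 0..20 MWh in 0.5 MWh steps
best_cost, best_cap = min((expected_cost(c), c) for c in
                          [i * 0.5 for i in range(41)])
print(f"best reserve capacity ~ {best_cap:.1f} MWh, expected cost ${best_cost:.0f}")
```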

    Network-constrained packing of brokered workloads in virtualized environments

    Providing resource allocation with performance predictability guarantees is increasingly important in cloud platforms, especially for data-intensive applications, in which performance depends greatly on the available rates of data transfer between the various computing/storage hosts underlying the virtualized resources assigned to the application. Existing resource allocation solutions either assume that applications manage the data transfer between their virtualized resources, or that cloud providers manage their internal networking resources. With the increased prevalence of brokerage services in cloud platforms, there is a need for resource allocation solutions that provide predictability guarantees in settings in which neither application scheduling nor cloud provider resources can be managed or controlled by the broker. This paper addresses this problem: we define the Network-Constrained Packing (NCP) problem of finding the optimal mapping of brokered resources to applications with guaranteed performance predictability. We prove that NCP is NP-hard, and we define two special instances of the problem for which exact solutions can be found efficiently. We develop a greedy heuristic to solve the general instance of the NCP problem, and we evaluate its efficiency using simulations on various application workloads and network models.
    This work is supported by NSF CISE CNS awards #1347522, #1239021, and #1012798.