5,611 research outputs found

    Resource Management In Cloud And Big Data Systems

    Cloud computing is a paradigm shift in computing, where services are offered and acquired on demand in a cost-effective way. These services are often virtualized, and they can handle the computing needs of big data analytics. The ever-growing demand for cloud services arises in many areas including healthcare, transportation, energy systems, and manufacturing. However, cloud resources such as computing power, storage, energy, and budgets for infrastructure and operations are limited. Effective use of the existing resources raises several fundamental challenges that place cloud resource management at the heart of the cloud providers' decision-making process. One of these challenges is to provision, allocate, and price the resources such that the providers' profit is maximized and the resources are utilized efficiently. In addition, executing large-scale applications in clouds may require resources from several cloud providers. Another challenge when processing data-intensive applications is minimizing their energy costs. Electricity used in US data centers in 2010 accounted for about 2% of total electricity used nationwide. Moreover, the energy consumed by data centers is growing at over 15% annually, and energy costs make up about 42% of data centers' operating costs. Therefore, it is critical for data centers to minimize their energy consumption when offering services to customers. In this Ph.D. dissertation, we address these challenges by designing, developing, and analyzing mechanisms for resource management in cloud computing systems and data centers. The goal is to allocate resources efficiently while optimizing a global performance objective of the system (e.g., maximizing revenue, maximizing social welfare, or minimizing energy). We improve the state of the art in both methodologies and applications. As for methodologies, we introduce novel resource management mechanisms based on mechanism design, approximation algorithms, cooperative game theory, and hedonic games. These mechanisms can be applied to cloud virtual machine (VM) allocation and pricing, cloud federation formation, and energy-efficient computing. In this dissertation, we outline our contributions and possible directions for future research in this field.
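
    A minimal sketch of the flavor of mechanism referred to above, assuming a simple greedy, bid-density-based VM auction; the bids, capacity, and function names are hypothetical illustrations, not the dissertation's actual mechanisms.

        # Hypothetical example: greedy bid-density allocation, a common
        # building block of approximation-based VM auction mechanisms.
        def greedy_vm_allocation(bids, capacity):
            """bids: list of (user, requested_vms, offered_price);
            capacity: total VMs available. Returns winners and revenue."""
            # Rank bids by price per requested VM, highest density first.
            ranked = sorted(bids, key=lambda b: b[2] / b[1], reverse=True)
            winners, revenue, remaining = [], 0.0, capacity
            for user, demand, price in ranked:
                if demand <= remaining:      # accept only if the request fits
                    winners.append(user)
                    revenue += price
                    remaining -= demand
            return winners, revenue

        example_bids = [("u1", 4, 20.0), ("u2", 2, 14.0), ("u3", 3, 9.0)]
        print(greedy_vm_allocation(example_bids, capacity=6))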

    Towards business integration as a service 2.0 (BIaaS 2.0)

    Cloud Computing Business Framework (CCBF) is a framework for the design and implementation of Cloud Computing solutions. This proposal focuses on how CCBF can help to address linkage in Cloud Computing implementations. This leads to the development of Business Integration as a Service 1.0 (BIaaS 1.0), which allows different services, roles, and functionalities to work together in a linkage-oriented framework where the outcome of one service can be the input to another, without the need to translate between domains or languages. BIaaS 2.0 aims to allow automation, enhanced security, advanced risk modelling, and improved collaboration between processes in BIaaS 1.0. The benefits of adopting BIaaS 1.0 and developing BIaaS 2.0 are illustrated using a case study from the University of Southampton and several collaborators including IBM US. BIaaS 2.0 can work with mainstream technologies such as scientific workflows, and the proposal and demonstration of BIaaS 2.0 are intended to benefit industry and academia. © 2011 IEEE
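
    The linkage idea, where one service's output feeds the next without translation, can be illustrated with a minimal sketch; the services below are hypothetical placeholders, not part of CCBF or any published BIaaS implementation.

        # Hypothetical pipeline: each service consumes the previous one's output.
        def risk_model(portfolio):
            # Toy risk score: fraction of assets flagged as volatile.
            volatile = sum(1 for asset in portfolio if asset["volatile"])
            return {"risk_score": volatile / len(portfolio)}

        def business_report(risk):
            # Consumes the risk service's output directly as its input.
            level = "high" if risk["risk_score"] > 0.5 else "acceptable"
            return f"Portfolio risk is {level} ({risk['risk_score']:.0%})."

        def run_pipeline(data, services):
            # Generic linkage: feed each service's result into the next one.
            result = data
            for service in services:
                result = service(result)
            return result

        assets = [{"volatile": True}, {"volatile": False}, {"volatile": True}]
        print(run_pipeline(assets, [risk_model, business_report]))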

    Towards Business Integration as a Service 2.0

    Cloud Computing Business Framework (CCBF) is a framework for the design and implementation of Cloud Computing solutions. This proposal focuses on how CCBF can help to address linkage in Cloud Computing implementations. This leads to the development of Business Integration as a Service 1.0 (BIaaS 1.0), which allows different services, roles, and functionalities to work together in a linkage-oriented framework where the outcome of one service can be the input to another, without the need to translate between domains or languages. BIaaS 2.0 aims to allow full automation, enhanced security, advanced risk modelling, and improved collaboration between processes in BIaaS 1.0. The benefits of adopting BIaaS 1.0 and developing BIaaS 2.0 are illustrated using a case study from the University of Southampton and several collaborators including IBM US. BIaaS 2.0 can work with mainstream technologies such as scientific workflows, and the proposal and demonstration of BIaaS 2.0 will benefit industry and academia.

    DQN dynamic pricing and revenue driven service federation strategy

    This paper proposes a dynamic pricing and revenue-driven service federation strategy based on a Deep Q-Network (DQN) to instantly and automatically decide on federation across different service provider domains, each of which offers dynamic service prices to its customers and to other domains. A dynamic pricing model is considered in this work based on the analysis of real pricing data collected from a public cloud provider, and upon this a dynamic arrival process driven by the price changes is proposed for formulating the service federation problem as a Markov Decision Process (MDP). Several reinforcement learning algorithms are developed to solve the problem, and the presented results show that the DQN method reaches 90% of the optimal revenue, outperforms existing state-of-the-art strategies, and learns the federation pricing dynamics to make optimal federation decisions as prices change.
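
    As a rough sketch of the MDP framing, the toy example below uses tabular Q-learning in place of the paper's DQN; the states, price dynamics, and rewards are invented for illustration and are not the paper's pricing model.

        import random

        # Hypothetical toy MDP: states are discretised peer price levels,
        # the action is whether to federate an incoming request.
        N_PRICE_LEVELS, ACTIONS = 5, (0, 1)   # action 1 = federate
        ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
        Q = {(s, a): 0.0 for s in range(N_PRICE_LEVELS) for a in ACTIONS}

        def reward(state, action):
            # Toy revenue: federating pays off only when the peer's price is low.
            return (3 - state) if action == 1 else 1.0

        def step(state):
            # Toy price dynamics: the peer's price level drifts randomly.
            return max(0, min(N_PRICE_LEVELS - 1, state + random.choice((-1, 0, 1))))

        state = 2
        for _ in range(5000):
            # Epsilon-greedy action selection over the current Q estimates.
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            r, nxt = reward(state, action), step(state)
            best_next = max(Q[(nxt, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
            state = nxt

        # Learned policy: federate only while the peer's price level stays low.
        print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_PRICE_LEVELS)})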

    Cloud computing resource scheduling and a survey of its evolutionary approaches

    A disruptive technology fundamentally transforming the way that computing services are delivered, cloud computing offers information and communication technology users a new level of convenience by providing resources as services via the Internet. Because the cloud provides a finite pool of virtualized on-demand resources, scheduling them optimally has become an essential and rewarding topic, where a trend of using Evolutionary Computation (EC) algorithms is emerging rapidly. Through analyzing the cloud computing architecture, this survey first presents a taxonomy of cloud resource scheduling at two levels. It then paints a landscape of the scheduling problem and its solutions. According to the taxonomy, a comprehensive survey of state-of-the-art approaches is presented systematically. Looking forward, challenges and potential future research directions are identified, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.
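
    A minimal sketch of the kind of Evolutionary Computation approach the survey covers, assuming a toy genetic algorithm that maps tasks to VMs to minimise makespan; the task lengths and VM speeds are made-up example data.

        import random

        TASKS = [8, 3, 6, 2, 7, 4]      # task lengths (hypothetical)
        VM_SPEEDS = [1.0, 2.0, 1.5]     # relative VM speeds (hypothetical)

        def makespan(assignment):
            # Finish time of the busiest VM under the task-to-VM mapping.
            load = [0.0] * len(VM_SPEEDS)
            for task, vm in zip(TASKS, assignment):
                load[vm] += task / VM_SPEEDS[vm]
            return max(load)

        def evolve(pop_size=30, generations=100, mutation_rate=0.1):
            pop = [[random.randrange(len(VM_SPEEDS)) for _ in TASKS]
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=makespan)                     # lower makespan is fitter
                survivors = pop[: pop_size // 2]
                children = []
                while len(survivors) + len(children) < pop_size:
                    a, b = random.sample(survivors, 2)
                    cut = random.randrange(1, len(TASKS))  # one-point crossover
                    child = a[:cut] + b[cut:]
                    if random.random() < mutation_rate:    # random gene mutation
                        child[random.randrange(len(TASKS))] = random.randrange(len(VM_SPEEDS))
                    children.append(child)
                pop = survivors + children
            return min(pop, key=makespan)

        best = evolve()
        print(best, makespan(best))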

    Cloud provider capacity augmentation through automated resource bartering

    © 2017 Elsevier B.V. Growing interest in Cloud Computing places a heavy workload on cloud providers, which is becoming increasingly difficult for them to manage with their primary data centre infrastructures. Resource scarcity can make providers vulnerable to significant reputational damage, and it often forces customers to select services from larger, more established companies, sometimes at a higher price. Funding limitations, however, commonly prevent emerging and even established providers from making a continual investment in hardware speculatively, assuming a certain level of growth in demand. As an alternative, they may opt to use current inter-cloud resource sharing systems, which mainly rely on monetary payments and thus put pressure on already stretched cash flows. To address such issues, a new multi-agent based Cloud Resource Bartering System (CRBS) is implemented in this work that fosters the management and bartering of pooled resources without requiring costly financial transactions between IaaS cloud providers. Agents in CRBS collaborate to facilitate bartering among providers, which not only strengthens their trading relationships but also enables them to handle surges in demand with their primary setup. Unlike existing systems, CRBS assigns resources by considering resource urgency, which improves customer satisfaction and the resource utilization rate by more than 50%. The evaluation results verify that our system assists providers in acquiring additional resources in a timely manner and in maintaining sustainable service delivery. We conclude that the existence of such a system is economically beneficial for cloud providers and enables them to adapt to fluctuating workloads.
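
    In the spirit of the bartering idea (though not CRBS's actual agent protocol), a minimal sketch of urgency-aware, non-monetary resource matching; the providers, offers, and urgency values are illustrative only.

        # Spare VMs pooled by providers with surplus capacity (hypothetical).
        offers = {"providerA": 6, "providerB": 2}
        # Requests as (provider, vms_needed, urgency in [0, 1]) (hypothetical).
        requests = [("providerC", 4, 0.9), ("providerD", 3, 0.4), ("providerE", 2, 0.7)]

        def barter(offers, requests):
            pool = sum(offers.values())
            grants = []
            # Serve the most urgent requests first, as long as the pool lasts.
            for provider, vms, urgency in sorted(requests, key=lambda r: r[2], reverse=True):
                granted = min(vms, pool)
                if granted:
                    grants.append((provider, granted, urgency))
                    pool -= granted
            return grants, pool

        print(barter(offers, requests))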

    Optimized Contract-based Model for Resource Allocation in Federated Geo-distributed Clouds

    In the era of Big Data, with data growing massively in scale and velocity, cloud computing and its pay-as-you-go model continue to provide significant cost benefits and a seamless service delivery model for cloud consumers. The evolution of small-scale and large-scale geo-distributed datacenters operated and managed by individual Cloud Service Providers (CSPs) raises new challenges in terms of effective global resource sharing and management of autonomously-controlled individual datacenter resources towards a globally efficient resource allocation model. Earlier solutions for geo-distributed clouds have focused primarily on achieving global efficiency in resource sharing, which, although it tries to maximize the global resource allocation, results in significant inefficiencies in local resource allocation for individual datacenters and individual cloud providers, leading to unfairness in the revenue and profit they earn. In this paper, we propose a new contracts-based resource sharing model for federated geo-distributed clouds that allows CSPs to establish resource sharing contracts with individual datacenters a priori for defined time intervals during a 24-hour period. Based on the established contracts, individual CSPs employ a contract cost and duration aware job scheduling and provisioning algorithm that enables jobs to complete and meet their response time requirements while achieving both global resource allocation efficiency and local fairness in the profit earned. The proposed techniques are evaluated through extensive experiments using realistic workloads generated using the SHARCNET cluster trace. The experiments demonstrate the effectiveness, scalability, and resource sharing fairness of the proposed model.
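
    A minimal sketch of contract cost and duration aware placement, assuming hypothetical contract fields (time window, per-core-hour cost, latency); it simply picks the cheapest pre-established contract that still meets a job's response-time requirement, not the paper's full algorithm.

        # Hypothetical resource-sharing contracts a CSP established a priori:
        # (datacenter, window_start_h, window_end_h, cost_per_core_hour, latency_s)
        contracts = [
            ("dc-east", 0, 8, 0.05, 2.0),
            ("dc-west", 6, 18, 0.03, 5.0),
            ("dc-eu",   0, 24, 0.08, 1.0),
        ]

        def place_job(arrival_hour, runtime_h, max_latency_s, cores):
            feasible = []
            for dc, start, end, cost, latency in contracts:
                in_window = start <= arrival_hour and arrival_hour + runtime_h <= end
                if in_window and latency <= max_latency_s:
                    feasible.append((cost * cores * runtime_h, dc))
            # Fairness/efficiency trade-offs aside, pick the cheapest feasible contract.
            return min(feasible) if feasible else None

        print(place_job(arrival_hour=7, runtime_h=2, max_latency_s=4.0, cores=16))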