
    Resource Management In Cloud And Big Data Systems

    Get PDF
    Cloud computing is a paradigm shift in computing, where services are offered and acquired on demand in a cost-effective way. These services are often virtualized, and they can handle the computing needs of big data analytics. The ever-growing demand for cloud services arises in many areas, including healthcare, transportation, energy systems, and manufacturing. However, cloud resources such as computing power, storage, energy, and the budgets for infrastructure and operations are limited. Effective use of the existing resources raises several fundamental challenges that place cloud resource management at the heart of the cloud providers' decision-making process. One of these challenges faced by cloud providers is to provision, allocate, and price the resources such that their profit is maximized and the resources are utilized efficiently. In addition, executing large-scale applications in clouds may require resources from several cloud providers. Another challenge when processing data-intensive applications is minimizing their energy costs. Electricity used in US data centers in 2010 accounted for about 2% of total electricity used nationwide. In addition, the energy consumed by data centers is growing at over 15% annually, and energy costs make up about 42% of data centers' operating costs. Therefore, it is critical for data centers to minimize their energy consumption when offering services to customers. In this Ph.D. dissertation, we address these challenges by designing, developing, and analyzing mechanisms for resource management in cloud computing systems and data centers. The goal is to allocate resources efficiently while optimizing a global performance objective of the system (e.g., maximizing revenue, maximizing social welfare, or minimizing energy). We improve the state of the art in both methodologies and applications. As for methodologies, we introduce novel resource management mechanisms based on mechanism design, approximation algorithms, cooperative game theory, and hedonic games. These mechanisms can be applied to cloud virtual machine (VM) allocation and pricing, cloud federation formation, and energy-efficient computing. In this dissertation, we outline our contributions and possible directions for future research in this field.
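
    As a rough illustration of the kind of building block such mechanism-design-based approaches rely on (an illustrative sketch, not the dissertation's actual mechanism), the snippet below greedily allocates VM bundles in decreasing order of value density, the usual monotone allocation rule that approximation mechanisms pair with a critical-value payment rule to obtain truthfulness. All names and numbers are hypothetical.

```python
# Illustrative sketch of a greedy, density-based VM bundle allocation; the
# dissertation's actual mechanisms are not reproduced here, and all names and
# numbers below are hypothetical.
from dataclasses import dataclass

@dataclass
class Bid:
    user: str
    vms: int      # number of identical VM instances requested
    value: float  # willingness to pay for the whole bundle

def greedy_allocate(bids, capacity):
    """Allocate whole bundles in decreasing order of value per VM."""
    winners, remaining = [], capacity
    for bid in sorted(bids, key=lambda b: b.value / b.vms, reverse=True):
        if bid.vms <= remaining:
            winners.append(bid)
            remaining -= bid.vms
    return winners

bids = [Bid("u1", 4, 40.0), Bid("u2", 2, 30.0), Bid("u3", 3, 12.0)]
print([b.user for b in greedy_allocate(bids, capacity=6)])  # ['u2', 'u1']
# A strategy-proof mechanism would additionally charge each winner its
# critical value: the smallest bid with which it would still win.
```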

    HSO: A Hybrid Swarm Optimization Algorithm for Reducing Energy Consumption in the Cloudlets

    Get PDF
    Mobile Cloud Computing (MCC) is an emerging technology for improving mobile service quality. MCC resources are dynamically allocated to users, who pay for the resources based on their needs. The drawback of this process is that it is prone to failure and demands a high energy input. Resource providers mainly focus on resource performance and utilization while giving particular consideration to service level agreement (SLA) constraints. Resource performance can be achieved through virtualization techniques, which facilitate the sharing of resource providers' information between different virtual machines. To address these issues, this study sets forth a novel algorithm (HSO) that optimizes energy-efficient resource management in the cloud; the proposed method uses a cost- and runtime-effective model to create a minimum-energy configuration of the cloud compute nodes while guaranteeing that all minimum performance requirements are maintained. The cost functions cover energy, performance, and reliability concerns. With the proposed model, the performance of the hybrid swarm algorithm increased significantly when optimizing the number of tasks in simulation (power consumption was reduced by 42%). The simulation studies also showed that the presented algorithms reduced the number of required calculations by about 20% compared to the traditional static approach. There was also a decrease in node loss, which allowed the optimization algorithm to achieve minimal overhead on cloud compute resources while still saving energy significantly. In conclusion, this study presents an energy-aware optimization model that describes the required system constraints, together with a proposal for techniques to determine the best overall solution.
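
    To make the swarm-optimization idea concrete, the sketch below runs a minimal particle swarm optimization over a hypothetical per-node energy model. It is an assumption-laden illustration of the general technique, not the paper's HSO algorithm, and every constant in it is made up.

```python
# Minimal particle swarm optimization (PSO) sketch. The energy model and all
# parameters are hypothetical; the paper's hybrid HSO algorithm is not
# reproduced here.
import random

def energy(load, demand=20.0):
    # Hypothetical model: per-node idle power plus a quadratic dynamic term,
    # with a penalty for not serving the required total demand.
    node_power = sum(100.0 + 0.5 * x ** 2 for x in load)
    return node_power + 50.0 * (sum(load) - demand) ** 2

def pso(dim, n_particles=20, iters=200, lo=0.0, hi=10.0):
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=energy)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if energy(pos[i]) < energy(pbest[i]):
                pbest[i] = pos[i][:]
                if energy(pbest[i]) < energy(gbest):
                    gbest = pbest[i][:]
    return gbest, energy(gbest)

# Balance a fixed total demand across 4 nodes while minimizing energy.
best_load, best_energy = pso(dim=4)
print(best_load, best_energy)
```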

    Energy-Efficient Softwarized Networks: A Survey

    Full text link
    With the dynamic demands and stringent requirements of various applications, networks need to be high-performance, scalable, and adaptive to changes. Researchers and industries view network softwarization as the best enabler for the evolution of networking to tackle current and prospective challenges. Network softwarization must provide programmability and flexibility to network infrastructures and allow agile management, along with higher control for operators. While satisfying the demands and requirements of network services, energy cannot be overlooked, considering its effects on the sustainability of the environment and business. This paper discusses energy efficiency in modern and future networks with three network softwarization technologies, SDN, NFV, and NS, introduced in an energy-oriented context. With that framework in mind, we review the literature based on network scenarios, control/MANO layers, and energy-efficiency strategies. Following that, we compare the references regarding approach, evaluation method, criterion, and metric attributes to demonstrate the state of the art. Last, we analyze the classified literature, summarize lessons learned, and present ten essential concerns to open discussions about future research opportunities on energy-efficient softwarized networks.

    Resource management in the cloud: An end-to-end Approach

    Get PDF
    Cloud Computing enables users to achieve ubiquitous, on-demand, and convenient access to a variety of shared computing resources, such as servers, networks, storage, applications, and more. As a business model, Cloud Computing has been openly welcomed by users and has become one of the research hotspots in the field of information and communication technology. This is because it provides users with on-demand customization and pay-per-use resource acquisition methods.

    Strategic and operational services for workload management in the cloud

    Full text link
    In hosting environments such as Infrastructure as a Service (IaaS) clouds, desirable application performance is typically guaranteed through the use of Service Level Agreements (SLAs), which specify minimal fractions of resource capacities that must be allocated by a service provider for unencumbered use by customers to ensure proper operation of their workloads. Most IaaS offerings are presented to customers as fixed-size, fixed-price SLAs that do not match the needs of specific applications well. Furthermore, arbitrary colocation of applications with different SLAs may result in inefficient utilization of hosts' resources, resulting in economically undesirable customer behavior. In this thesis, we propose the design and architecture of a Colocation as a Service (CaaS) framework: a set of strategic and operational services that allow the efficient colocation of customer workloads. CaaS strategic services provide customers the means to specify their application workloads using an SLA language that gives them the opportunity and incentive to take advantage of any tolerances they may have regarding the scheduling of their workloads. CaaS operational services provide the information necessary for, and carry out, the reconfigurations mandated by the strategic services. We recognize that there may be multiple, functionally equivalent ways to express an SLA. To that end, we present a service that allows the provably-safe transformation of SLAs from one form to another for the purpose of achieving more efficient colocation. Our CaaS framework could be incorporated into an IaaS offering by providers, or it could be implemented as a value-added proposition by IaaS resellers. To establish the practicality of such offerings, we present a prototype implementation of our proposed CaaS framework.
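
    A toy rendering of the colocation idea may help: the sketch below models an SLA as a periodic guarantee (C capacity units every period T), checks whether a set of SLAs fits on a host by aggregate rate, and tests a simple sufficient condition for rewriting one SLA into a finer-grained equivalent. The SLA model and the safety condition are simplifying assumptions for illustration only, not the thesis's SLA language or its provably-safe transformation rules.

```python
# Illustrative sketch of periodic SLAs and colocation, loosely in the spirit
# of the CaaS idea. The SLA model and the safety test below are simplifying
# assumptions, not the thesis's formal SLA language or transformation rules.
from dataclasses import dataclass

@dataclass(frozen=True)
class SLA:
    capacity: float  # resource units guaranteed ...
    period: float    # ... in every window of this length

    @property
    def rate(self):
        return self.capacity / self.period

def colocatable(slas, host_capacity_per_unit_time=1.0):
    """Utilization-style check: the aggregate guaranteed rate fits on the host."""
    return sum(s.rate for s in slas) <= host_capacity_per_unit_time

def safe_refinement(old, new):
    """Conservative sufficient condition for transforming `old` into `new`:
    the new period evenly divides the old one, and the new SLA delivers at
    least the old capacity over each old period."""
    periods = round(old.period / new.period)
    return (periods >= 1
            and abs(periods * new.period - old.period) < 1e-9
            and periods * new.capacity >= old.capacity)

a, b = SLA(25, 100), SLA(5, 20)        # b: same total work, finer granularity
print(safe_refinement(a, b))            # True
print(colocatable([a, SLA(60, 100)]))   # 0.25 + 0.6 <= 1.0 -> True
```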

    Towards Mobile Edge Computing: Taxonomy, Challenges, Applications and Future Realms

    Get PDF
    The realm of cloud computing has revolutionized access to cloud resources and their utilization and applications over the Internet. However, deploying cloud computing for delay-critical applications and reducing the delay in accessing the resources are challenging. The Mobile Edge Computing (MEC) paradigm is one of the effective solutions; it brings cloud computing services to the proximity of the edge network and leverages the available resources. This paper presents a survey of the latest, state-of-the-art algorithms, techniques, and concepts of MEC. The proposed work is unique in that it considers the most novel algorithms, which are not covered by existing surveys. Moreover, the selected literature is classified in terms of performance metrics, describing both the areas of promising performance and the regions where a margin of improvement exists for future investigation. This also eases the choice of a particular algorithm for a particular application. Compared to existing surveys, a bibliometric overview is provided, which further helps researchers, engineers, and scientists gain a thorough insight, select applications, and identify directions for improvement. In addition, applications related to the MEC platform are presented. Open research challenges, future directions, and lessons learned in the area of MEC are provided for further investigation.

    Energy-efficient resource allocation in limited fronthaul capacity cloud-radio access networks

    Get PDF
    In recent years, cloud radio access networks (C-RANs) have demonstrated their role as a formidable technology candidate to address the challenging issues arising from the advent of Fifth Generation (5G) mobile networks. In C-RANs, the modules capable of processing data and handling radio signals are physically separated into two main functional groups: the baseband unit (BBU) pool, consisting of multiple BBUs on the cloud, and the radio access networks (RANs), consisting of several low-power remote radio heads (RRHs) whose functionality is simplified to radio transmission/reception. Thanks to the centralized computation capability of cloud computing, C-RANs enable coordination between RRHs to significantly improve the achievable spectral efficiency and satisfy the explosive traffic demand from users. More importantly, this enhanced performance can be attained in a power-saving mode, which gives rise to the energy-efficient C-RAN perspective. Note that such improvement can be achieved under an ideal fronthaul condition of very high and stable capacity. However, in practice, dedicated fronthaul links must be divided to connect a large number of RRHs to the cloud, leading to a scenario of non-ideal, limited fronthaul capacity for each RRH. This imposes an upper bound on each user's spectral efficiency, which limits the promising gains of C-RANs. To fully harness energy-efficient C-RANs while respecting their stringent fronthaul capacity limits, a more appropriate and efficient network design is essential. The main scope of this thesis is to optimize the green performance of C-RANs in terms of energy efficiency under the non-ideal fronthaul capacity condition, namely energy-efficient design in limited fronthaul capacity C-RANs. Our study, via jointly determining the transmit beamforming, RRH selection, and RRH–user association, targets the following three vital design issues: the optimal trade-off between maximizing the achievable sum rate and minimizing the total power consumption; the maximum energy efficiency under an adaptive rate-dependent power model; and the optimal joint energy-efficient design of virtual computing along with the radio resource allocation in virtualized C-RANs. The significant contributions and novelties of this work are as follows. Firstly, the joint design of transmit beamforming, RRH selection, and RRH–user association to optimize the trade-off between user sum rate maximization and total power consumption minimization in the downlink transmissions of C-RANs is presented in Chapter 3. We develop one powerful but high-complexity algorithm and two novel, efficient low-complexity algorithms to obtain a globally optimal solution and high-quality sub-optimal solutions, respectively. The findings in this chapter show that the proposed algorithms, besides overcoming the burden of solving difficult non-convex problems in polynomial time, also outperform the techniques in the literature in terms of convergence and achieved network performance. Secondly, Chapter 4 proposes a novel model reflecting the dependence of consumed power on the user data rate and highlights its impact through various energy-efficiency metrics in C-RANs. The superior results of Chapter 4, compared to conventional work without an adaptive rate-dependent power model, corroborate the importance of the newly proposed model in appropriately conserving system power to achieve the most energy-efficient C-RAN performance.
    Finally, in Chapter 5 we propose a novel model of the cloud center that enables the virtualization and adaptive allocation of computing resources according to data traffic demand in order to conserve more power. We consider the problem of jointly designing the virtual computing resources together with the beamforming, RRH selection, and RRH–user association so as to maximize the virtualized C-RAN energy efficiency. To cope with the huge size of the formulated optimization problem, a novel, efficient algorithm with much lower complexity than previous work is developed to obtain the solution. The results of our various evaluations demonstrate the superiority of the proposed designs compared to conventional work.
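
    As a toy illustration of the energy-efficiency objective and the rate-dependent power model discussed above, the sketch below runs a Dinkelbach-style iteration that maximizes rate divided by power for a single link. All constants are hypothetical, and the thesis's joint design of beamforming, RRH selection, and RRH–user association is not reproduced here.

```python
# Toy energy-efficiency maximization for one link via a Dinkelbach-style
# iteration, with a rate-dependent power term. All constants are hypothetical;
# this is not the thesis's C-RAN algorithm.
import math

B, G, N0 = 10e6, 1e-7, 1e-13                 # bandwidth [Hz], channel gain, noise power [W]
P_STATIC, AMP_EFF, DELTA = 5.0, 0.35, 1e-8   # circuit power [W], PA efficiency, W per bit/s

def rate(p):
    # Shannon rate [bit/s] at transmit power p [W]
    return B * math.log2(1.0 + G * p / N0)

def power(p):
    # Total consumed power, including a rate-dependent processing term
    return p / AMP_EFF + P_STATIC + DELTA * rate(p)

def max_energy_efficiency(p_max=1.0, iters=20, grid=2000):
    lam = 0.0                                # current energy-efficiency estimate [bit/s/W]
    p_best = p_max
    for _ in range(iters):
        # Inner step: maximize rate(p) - lam * power(p) by grid search.
        p_best = max((p_max * i / grid for i in range(1, grid + 1)),
                     key=lambda p: rate(p) - lam * power(p))
        lam = rate(p_best) / power(p_best)   # Dinkelbach update
    return p_best, lam

p_opt, ee = max_energy_efficiency()
print(f"p* = {p_opt:.3f} W, energy efficiency = {ee / 1e6:.2f} Mbit/s per W")
```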
