832 research outputs found

    Bin packing algorithms for virtual machine placement in cloud computing: a review

    Cloud computing has become more commercial and familiar. Cloud data centers face huge challenges in maintaining QoS and keeping Cloud performance high. The placement of virtual machines among physical machines in the Cloud is significant in optimizing Cloud performance. Bin packing based algorithms are the most widely used approach to virtual machine placement (VMP). This paper presents a rigorous survey and comparison of bin packing based VMP methods for the Cloud computing environment. Various methods are discussed, and the VM placement factors in each method are analyzed to understand its advantages and drawbacks. The scope of future research and studies is also highlighted.
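
    To make the bin-packing view concrete, the sketch below applies the classic first-fit decreasing heuristic to VM placement. It is a generic illustration under simplifying assumptions (a single normalized resource dimension and identical host capacities), not any particular algorithm from the surveyed papers; `vm_demands` and `host_capacity` are hypothetical inputs.

```python
# Minimal first-fit decreasing (FFD) sketch of bin-packing-based VM placement.
# Assumptions: one resource dimension (e.g., normalized CPU demand) and
# identical host capacities; real VMP methods add memory, network, etc.

def first_fit_decreasing(vm_demands, host_capacity):
    """Return a list of hosts, each a list of (vm_id, demand) placements."""
    hosts = []  # each entry: [remaining_capacity, [(vm_id, demand), ...]]
    # Place larger VMs first: sort by decreasing demand.
    for vm_id, demand in sorted(enumerate(vm_demands), key=lambda x: -x[1]):
        for host in hosts:
            if host[0] >= demand:          # first host with enough room
                host[0] -= demand
                host[1].append((vm_id, demand))
                break
        else:                              # no host fits: open a new one
            hosts.append([host_capacity - demand, [(vm_id, demand)]])
    return [placements for _, placements in hosts]

if __name__ == "__main__":
    # Hypothetical normalized CPU demands of 8 VMs on hosts of capacity 1.0.
    print(first_fit_decreasing([0.5, 0.7, 0.2, 0.4, 0.3, 0.6, 0.1, 0.8], 1.0))
```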

    TRACTOR: Traffic‐aware and power‐efficient virtual machine placement in edge‐cloud data centers using artificial bee colony optimization

    Technology providers heavily exploit the usage of edge‐cloud data centers (ECDCs) to meet user demand, while the ECDCs are large energy consumers. To decrease the energy expenditure of ECDCs, task placement is one of the most prominent solutions for effective allocation and consolidation of such tasks onto physical machines (PMs). Such allocation must also consider additional optimizations beyond power and must include other objectives, including network‐traffic effectiveness. In this study, we present a multi‐objective virtual machine (VM) placement scheme (considering VMs as fog tasks) for ECDCs called TRACTOR, which utilizes an artificial bee colony optimization algorithm for power and network‐aware assignment of VMs onto PMs. The proposed scheme aims to minimize the network traffic of the interacting VMs and the power dissipation of the data center's switches and PMs. To evaluate the proposed VM placement solution, the Virtual Layer 2 (VL2) and three‐tier network topologies are modeled and integrated into the CloudSim toolkit to justify the effectiveness of the proposed solution in mitigating the network traffic and power consumption of the ECDC. Results indicate that the proposed method is able to reduce energy consumption by 3.5% while decreasing network traffic and power by 15% and 30%, respectively, without affecting other QoS parameters.
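
    The sketch below gives a rough idea of how an artificial bee colony search over VM-to-PM assignment vectors can trade off a power proxy (number of active PMs) against inter-PM traffic. It is a heavily simplified illustration, not the TRACTOR algorithm: the traffic matrix, cost weighting, and merged employed/onlooker phases are assumptions, and capacity constraints and switch power models are omitted.

```python
# Heavily simplified artificial-bee-colony (ABC) sketch for power- and
# traffic-aware VM placement. This is NOT the TRACTOR algorithm; it only
# illustrates searching VM-to-PM assignment vectors with a cost that mixes
# a power proxy (active PMs) and inter-PM traffic.
import random

random.seed(0)
N_VMS, N_PMS = 12, 6
# Hypothetical traffic volume between each pair of VMs (arbitrary units).
TRAFFIC = [[0 if i == j else random.randint(0, 5) for j in range(N_VMS)]
           for i in range(N_VMS)]

def cost(assign):
    active_pms = len(set(assign))                    # crude power proxy
    cross_traffic = sum(TRAFFIC[i][j]
                        for i in range(N_VMS) for j in range(N_VMS)
                        if i != j and assign[i] != assign[j])
    return active_pms + 0.1 * cross_traffic          # arbitrary weighting

def neighbour(assign):
    new = list(assign)
    new[random.randrange(N_VMS)] = random.randrange(N_PMS)   # move one VM
    return new

def abc_place(n_sources=10, iterations=200, scout_limit=20):
    foods = [[random.randrange(N_PMS) for _ in range(N_VMS)]
             for _ in range(n_sources)]
    trials = [0] * n_sources
    best = min(foods, key=cost)
    for _ in range(iterations):
        for k in range(n_sources):                   # employed/onlooker (merged)
            cand = neighbour(foods[k])
            if cost(cand) < cost(foods[k]):
                foods[k], trials[k] = cand, 0
                if cost(cand) < cost(best):
                    best = cand
            else:
                trials[k] += 1
            if trials[k] > scout_limit:               # scout: abandon the source
                foods[k] = [random.randrange(N_PMS) for _ in range(N_VMS)]
                trials[k] = 0
    return best

best = abc_place()
print("assignment:", best, "cost:", round(cost(best), 2))
```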

    Strategic and operational services for workload management in the cloud

    In hosting environments such as Infrastructure as a Service (IaaS) clouds, desirable application performance is typically guaranteed through the use of Service Level Agreements (SLAs), which specify minimal fractions of resource capacities that must be allocated by a service provider for unencumbered use by customers to ensure proper operation of their workloads. Most IaaS offerings are presented to customers as fixed-size and fixed-price SLAs that do not match well the needs of specific applications. Furthermore, arbitrary colocation of applications with different SLAs may lead to inefficient utilization of hosts' resources and to economically undesirable customer behavior. In this thesis, we propose the design and architecture of a Colocation as a Service (CaaS) framework: a set of strategic and operational services that allow the efficient colocation of customer workloads. CaaS strategic services provide customers the means to specify their application workload using an SLA language that gives them the opportunity and incentive to take advantage of any tolerances they may have regarding the scheduling of their workloads. CaaS operational services provide the information necessary for, and carry out the reconfigurations mandated by, the strategic services. We recognize that there may be multiple, yet functionally equivalent, ways to express an SLA. Thus, we present a service that allows the provably-safe transformation of SLAs from one form to another for the purpose of achieving more efficient colocation. Our CaaS framework could be incorporated into an IaaS offering by providers, or it could be implemented as a value-added proposition by IaaS resellers. To establish the practicality of such offerings, we present a prototype implementation of our proposed CaaS framework.
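
    As a toy illustration of the colocation idea (not the thesis's SLA language or its provably-safe transformation service), the following sketch treats an SLA as a dict of reserved resource fractions and checks whether a set of such SLAs fits on a single host.

```python
# Toy illustration of colocation feasibility: an SLA here is just a dict of
# resource fractions (e.g., {"cpu": 0.25, "net": 0.1}) that must be reserved.
# This is not the thesis's SLA language; it only shows the basic test that
# the reserved fractions on a host must not exceed its capacity.

def can_colocate(slas, capacity=1.0):
    """Return True if the SLAs' reserved fractions fit on one host."""
    totals = {}
    for sla in slas:
        for resource, fraction in sla.items():
            totals[resource] = totals.get(resource, 0.0) + fraction
    return all(total <= capacity for total in totals.values())

workloads = [{"cpu": 0.5, "net": 0.2}, {"cpu": 0.3, "net": 0.4}, {"cpu": 0.3}]
print(can_colocate(workloads[:2]))  # True: fits within one host
print(can_colocate(workloads))      # False: cpu would need 1.1 > 1.0
```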

    Planning and Optimization During the Life-Cycle of Service Level Agreements for Cloud Computing

    A Service Level Agreement (SLA) is an electronic contract between the customer and the provider of a service. The parties involved clarify their expectations and obligations with respect to the service and its quality. SLAs are already used to describe cloud computing services. The service provider ensures that the service quality is met and matches the customer's requirements until the end of the agreed term. Operating SLAs requires considerable effort to achieve autonomy, economic viability, and efficiency. The current state of the art in SLA management faces challenges such as SLA representation for cloud services, business-related SLA optimization, service outsourcing, and resource management; these areas constitute central and timely research topics. Managing SLAs in the different phases of their life-cycle requires a methodology developed for this purpose, which simplifies the realization of cloud SLA management. I present a broad model of SLA life-cycle management that addresses the challenges mentioned above. This approach enables automatic service modelling as well as negotiation, provisioning, and monitoring of SLAs. For the creation phase, I outline how the modelling structures can be improved and simplified. A further goal of my approach is to minimize implementation and outsourcing costs in favor of competitiveness. In the SLA monitoring phase, I develop strategies for the selection and assignment of virtual cloud resources during migration phases. I then use monitoring over a larger collection of SLAs to check whether the agreed fault tolerances are observed. This work contributes to a design for the GWDG and its scientific communities. The research leading to this doctoral thesis was carried out as part of the SLA@SOI EU/FP7 integrated project (contract No. 216556).
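
    As a loose illustration of the monitoring phase described above, the sketch below checks a collection of SLAs against their agreed fault tolerances; the field names and violation-rate model are assumptions for the example, not the thesis's actual SLA schema.

```python
# Toy monitoring check inspired by the SLA monitoring phase: for a collection
# of SLAs, compare an observed violation rate against the agreed fault
# tolerance. The fields ("tolerance", "violations", "samples") are
# illustrative assumptions only.

def within_tolerance(sla):
    observed_rate = sla["violations"] / sla["samples"]
    return observed_rate <= sla["tolerance"]

slas = [
    {"id": "vm-storage", "tolerance": 0.01, "violations": 2,  "samples": 1000},
    {"id": "vm-compute", "tolerance": 0.05, "violations": 80, "samples": 1000},
]
for sla in slas:
    print(sla["id"], "OK" if within_tolerance(sla) else "tolerance exceeded")
```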

    A Survey on Resource Allocation Techniques in Cloud Computing

    Cloud is an important and emerging technology used by many fields for storing, processing, and retrieving data anywhere and at any time without interruption. The Cloud now acts as a platform on which many companies store data and perform other computation in order to reduce infrastructure and maintenance costs, while also running their applications widely on a pay-per-use basis. To make data available to all cloud users, Resource Allocation (RA) is a mandatory process. In the Cloud, hardware, software, and platforms are the resources used to satisfy user needs, and sharing these resources according to users' needs is a difficult task. The cloud service provider and the cloud service consumer play the major roles in RA. The parameters involved in resource allocation, along with its issues and challenges, need to be analyzed in depth before implementing any optimization approach to RA. Hence, in this work, various resource allocation methods have been studied, and their issues are analyzed and presented as a survey. This work is useful for both cloud users and researchers in overcoming the challenges faced in RA.

    Dynamic Resource Scheduling in Cloud Data Center

    Cloud infrastructure provides a wide range of resources and services to companies and organizations, such as computation, storage, databases, platforms, etc. These resources and services are used to power up and scale out tenants' workloads and to meet their specified service level agreements (SLAs). Given the varied kinds and characteristics of these workloads, an important problem for a cloud provider is how to allocate its resources among the requests. An efficient resource scheduling scheme should benefit both the cloud provider and the cloud users. For the cloud provider, the goal of the scheduling algorithm is to improve the throughput and the job completion rate of the cloud data center under stress conditions, or to use fewer physical machines to support all incoming jobs under overprovisioning conditions. For the cloud users, the goal of the scheduling algorithm is to guarantee the SLAs and satisfy other job-specific requirements. Furthermore, since jobs arrive at and leave a cloud data center very frequently, it is critical to make scheduling decisions within a reasonable time. To improve the efficiency of the cloud provider, the scheduling algorithm needs to jointly reduce the inter-VM and intra-VM fragments, which means considering the scheduling problem with regard to both the cloud provider and the users. This thesis addresses the cloud scheduling problem from both the cloud provider and the user side.
    Cloud data centers typically require tenants to specify the resource demands for the virtual machines (VMs) they create using a set of pre-defined, fixed configurations, to ease the resource allocation problem. However, this approach can lead to low resource utilization of cloud data centers, as tenants are obligated to conservatively predict the maximum resource demand of their applications. In addition, users are in an inferior position for estimating VM demands without knowing the multiplexing techniques of the cloud provider; the cloud provider, on the other hand, is better placed to select the VM sets for the submitted applications. The scheduling problem is even more severe for the mobile user who wants to use the cloud infrastructure to extend his/her computation and battery capacity, where the response and scheduling time is tight and the transmission channel between mobile users and the cloudlet is highly variable. This thesis investigates the resource scheduling problem for both wired and mobile users in the cloud environment. The proposed resource allocation problem is studied through problem modeling, trace analysis, algorithm design, and simulation.
    The first aspect this thesis addresses is the VM scheduling problem. Instead of static VM scheduling, this thesis proposes a finer-grained dynamic resource allocation and scheduling algorithm that can substantially improve the utilization of data center resources by increasing the number of jobs accommodated and, correspondingly, the cloud data center provider's revenue. The second problem this thesis addresses is the joint VM set selection and scheduling problem. The basic idea is that there may exist multiple VM sets that can support an application's resource demand, and by carefully selecting an appropriate VM set, the utilization of the data center can be improved without violating the application's SLA. The third problem addressed by the thesis is the mobile cloud resource scheduling problem, where the key issue is to find the most energy- and time-efficient way of allocating components of the target application given the current network condition and cloud resource usage status.
    The main contributions of this thesis are the following. For the dynamic real-time scheduling problem, a constraint programming solution is proposed to schedule the long jobs, and simple heuristics are used to quickly, yet quite accurately, schedule the short jobs. Trace-driven simulations show that the overall revenue for the cloud provider can be improved by 30% over traditional static VM resource allocation based on coarse-granularity specifications. For the joint VM selection and scheduling problem, this thesis proposes an optimal online VM set selection scheme that satisfies the user resource demand and minimizes the number of activated physical machines; trace-driven simulation shows around an 18% improvement in the overall utility of the provider compared to the Bazaar-I approach, and more than 25% compared to best-fit and first-fit. For the mobile cloud scheduling problem, a reservation-based joint code partition and resource scheduling algorithm is proposed by conservatively estimating the minimal resource demand, and a polynomial-time code partition algorithm is proposed to obtain the corresponding partition.
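
    The following sketch illustrates the joint VM set selection idea in its simplest form: among several VM sets that could serve the same application demand, pick the one that activates the fewest additional physical machines, using first-fit over the PMs' free capacities. It is an illustrative toy under single-resource assumptions, not the optimal online scheme proposed in the thesis.

```python
# Illustrative sketch (not the thesis's optimal online scheme) of joint VM set
# selection: several VM sets can serve the same application demand, and we
# pick the one that activates the fewest additional physical machines (PMs),
# placing VMs first-fit over the PMs' remaining free capacities.

def extra_pms_needed(vm_set, free_capacities, pm_capacity=1.0):
    free = list(free_capacities)      # free room left on already-active PMs
    extra = []                        # free room on newly activated PMs
    for demand in sorted(vm_set, reverse=True):
        for pool in (free, extra):
            slot = next((i for i, c in enumerate(pool) if c >= demand), None)
            if slot is not None:
                pool[slot] -= demand
                break
        else:                         # nothing fits: activate a new PM
            extra.append(pm_capacity - demand)
    return len(extra)

def choose_vm_set(candidate_sets, free_capacities):
    return min(candidate_sets,
               key=lambda s: extra_pms_needed(s, free_capacities))

# Two equivalent ways to serve an application: four small VMs or two large ones.
# With fragmented free capacity, the four small VMs need no extra PM, while the
# two large VMs would force an additional PM to be switched on.
candidates = [[0.25, 0.25, 0.25, 0.25], [0.5, 0.5]]
print(choose_vm_set(candidates, free_capacities=[0.25, 0.25, 0.25, 0.25]))
```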

    MT-EA4Cloud: A methodology for testing and optimising energy-aware cloud systems

    Currently, using conventional techniques for checking and optimising the energy consumption of cloud systems is impractical, due to the massive computational resources required. An appropriate test suite focusing on the parts of the cloud to be tested must be efficiently synthesised and executed, while the correctness of the test results must be checked. Additionally, alternative cloud configurations that optimise the energy consumption of the cloud must be generated and analysed accordingly, which is challenging. To solve these issues we present MT-EA4Cloud, a formal approach to check the correctness – from an energy-aware point of view – of cloud systems and to optimise their energy consumption. To make the checking of energy consumption practical, MT-EA4Cloud combines metamorphic testing, evolutionary algorithms and simulation. Metamorphic testing allows us to formally model the underlying cloud infrastructure in the form of metamorphic relations. We use metamorphic testing to alleviate both the reliable test set problem, by generating appropriate test suites focused on the features reflected in the metamorphic relations, and the oracle problem, by using the metamorphic relations to check the generated results automatically. MT-EA4Cloud uses evolutionary algorithms to efficiently guide the search for optimising the energy consumption of cloud systems, which can be calculated using different cloud simulators. This work was supported by the Spanish MINECO/FEDER projects DArDOS, FAME and MASSIVE under Grants TIN2015-65845-C3-1-R, RTI2018-093608-B-C31 and RTI2018-095255-B-I00, and the Comunidad de Madrid project FORTE-CM under grant S2018/TCS-4314. The first author is also supported by the Universidad Complutense de Madrid Santander Universidades grant (CT17/17-CT18/17).
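
    The sketch below shows, in miniature, how a metamorphic relation can sidestep the oracle problem when testing energy consumption: instead of knowing the exact expected energy, we only check a relation between a source test case and a follow-up case. Both the relation ("adding a VM must not decrease total energy") and the stand-in energy model are assumptions for illustration, not MT-EA4Cloud's actual relations or a real simulator.

```python
# Toy metamorphic-testing sketch in the spirit of MT-EA4Cloud. The relation
# and the stand-in energy model below are assumptions for illustration, not
# taken from the paper or from any real cloud simulator.
import random

def simulated_energy(num_vms, num_hosts, idle_w=100.0, per_vm_w=30.0):
    """Stand-in for a cloud simulator: energy grows with active hosts and VMs."""
    active_hosts = min(num_hosts, max(1, num_vms))
    return active_hosts * idle_w + num_vms * per_vm_w

def relation_holds(num_vms, num_hosts):
    """Assumed metamorphic relation: adding one VM must not decrease energy."""
    return simulated_energy(num_vms + 1, num_hosts) >= simulated_energy(num_vms, num_hosts)

# Random source test cases plus their follow-ups; no exact expected energy
# value (oracle) is needed, only the relation between the two runs.
random.seed(1)
cases = [(random.randint(1, 50), random.randint(1, 20)) for _ in range(100)]
print(all(relation_holds(v, h) for v, h in cases))
```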