21 research outputs found

    SLA-Oriented Resource Provisioning for Cloud Computing: Challenges, Architecture, and Solutions

    Cloud computing systems promise to offer subscription-oriented, enterprise-quality computing services to users worldwide. With the increased demand for delivering services to a large number of users, they need to offer differentiated services to users and meet their quality expectations. Existing resource management systems in data centers are yet to support Service Level Agreement (SLA)-oriented resource allocation, and thus need to be enhanced to realize cloud computing and utility computing. In addition, no work has been done to collectively incorporate customer-driven service management, computational risk management, and autonomic resource management into a market-based resource management system to target the rapidly changing enterprise requirements of Cloud computing. This paper presents the vision, challenges, and architectural elements of SLA-oriented resource management. The proposed architecture supports the integration of market-based provisioning policies and virtualisation technologies for flexible allocation of resources to applications. The performance results obtained from our working prototype system show the feasibility and effectiveness of SLA-based resource provisioning in Clouds. Comment: 10 pages, 7 figures, Conference Keynote Paper: 2011 IEEE International Conference on Cloud and Service Computing (CSC 2011, IEEE Press, USA), Hong Kong, China, December 12-14, 2011
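
    As a rough illustration of the kind of decision an SLA-oriented provisioner makes, the sketch below admits a request only if some VM type can meet both the deadline and the budget in the customer's SLA. The Request fields, the VM catalogue and the pricing model are hypothetical assumptions for illustration, not the paper's prototype or policies.

        # Minimal sketch of SLA-aware admission control; all names and numbers
        # below are assumed for illustration only.
        from dataclasses import dataclass

        @dataclass
        class Request:
            length_mi: float    # workload size in million instructions (assumed unit)
            deadline_s: float   # SLA deadline in seconds
            budget: float       # maximum the customer agrees to pay

        # Hypothetical VM catalogue: (name, speed in MIPS, price per second)
        VM_TYPES = [("small", 1000, 0.02), ("medium", 2000, 0.05), ("large", 4000, 0.12)]

        def admit(request: Request):
            """Return the cheapest VM type meeting both deadline and budget, or None."""
            feasible = []
            for name, mips, price in VM_TYPES:
                runtime = request.length_mi / mips
                cost = runtime * price
                if runtime <= request.deadline_s and cost <= request.budget:
                    feasible.append((cost, name))
            return min(feasible)[1] if feasible else None

        print(admit(Request(length_mi=2_000_000, deadline_s=2000, budget=100.0)))  # -> 'small'

    Rejecting a request when no feasible VM type exists is one simple way to avoid accepting work whose SLA would later be violated.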

    Advance reservation games

    Advance reservation (AR) services form a pillar of several branches of the economy, including transportation, lodging, dining, and, more recently, cloud computing. In this work, we use game theory to analyze a slotted AR system in which customers differ in their lead times. For each given time slot, the number of customers requesting service is a random variable following a general probability distribution. Based on statistical information, the customers decide whether or not to make an advance reservation of server resources in future slots for a fee. We prove that only two types of equilibria are possible: either none of the customers makes AR or only customers with lead time greater than some threshold make AR. Our analysis further shows that the fee that maximizes the provider's profit may lead to other equilibria, one of which yields zero profit. In order to prevent ending up with no profit, the provider can elect to advertise a lower fee yielding a guaranteed but smaller profit. We refer to the ratio of the maximum possible profit to the maximum guaranteed profit as the price of conservatism. When the number of customers is a Poisson random variable, we prove that the price of conservatism is one in the single-server case, but can be arbitrarily high in a many-server system. CNS-1117160 - National Science Foundation. http://people.bu.edu/staro/ACM_ToMPECS_AR.pdf. Accepted manuscript
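
    To make the "price of conservatism" concrete, the toy sketch below compares, for a few candidate AR fees, an assumed best-case profit (most favourable equilibrium) against an assumed guaranteed profit (least favourable equilibrium); the ratio of the two maxima is the price of conservatism. All fee and profit figures are invented for illustration and are not taken from the paper's model.

        # Toy illustration of the price of conservatism; the numbers are made up.
        candidate_fees = {
            # fee: (best_case_profit, guaranteed_profit) across its equilibria
            1.0: (4.0, 4.0),   # low fee: every equilibrium yields the same profit
            2.5: (9.0, 0.0),   # aggressive fee: a zero-profit equilibrium also exists
            1.8: (6.5, 5.0),   # conservative fee with a positive guaranteed profit
        }

        best_possible = max(best for best, _ in candidate_fees.values())
        best_guaranteed = max(guaranteed for _, guaranteed in candidate_fees.values())

        price_of_conservatism = best_possible / best_guaranteed
        print(f"price of conservatism = {price_of_conservatism:.2f}")  # 9.0 / 5.0 = 1.80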

    Characterizing cloud federation for enhancing providers' profit

    Cloud federation has been proposed as a new paradigm that allows providers to avoid the limitation of owning only a restricted amount of resources, which forces them to reject new customers when they do not have enough local resources to fulfill their customers' requirements. Federation allows a provider to dynamically outsource resources to other providers in response to demand variations. It also allows a provider that has underused resources to rent part of them to other providers. Both options can increase the provider's profit when used adequately. This requires the provider to have a clear understanding of the potential of each federation decision, in order to choose the most convenient one depending on the environment conditions. In this paper, we present a complete characterization of providers' federation in the Cloud, including decision equations to outsource resources to other providers, rent free resources to other providers (i.e. insourcing), or shut down unused nodes to save power, and we characterize these decisions as a function of several parameters. Then, we demonstrate in the evaluation section how a provider can enhance its profit by using these equations to exploit federation, and how the different parameters influence which decision is best in each situation. Peer Reviewed. Postprint (published version)
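
    The sketch below gives a flavour of how such decision equations can be compared in practice: it evaluates a simple linear profit model for outsourcing, insourcing and shutting down idle nodes, and picks the most profitable action. The parameter names and the profit model are assumptions for illustration, not the equations derived in the paper.

        # Illustrative comparison of federation decisions under an assumed
        # linear profit model (not the paper's exact equations).

        def best_action(demand_vms, local_capacity_vms, revenue_per_vm,
                        outsource_cost_per_vm, insource_price_per_vm,
                        power_saving_per_node, vms_per_node):
            """Compare outsourcing overflow, insourcing spare capacity and
            shutting down unused nodes, and return the most profitable option."""
            overflow = max(0, demand_vms - local_capacity_vms)
            spare = max(0, local_capacity_vms - demand_vms)
            local_revenue = min(demand_vms, local_capacity_vms) * revenue_per_vm

            # Serve the overflow on a federated provider's resources.
            outsource = demand_vms * revenue_per_vm - overflow * outsource_cost_per_vm
            # Rent the spare capacity to other providers.
            insource = local_revenue + spare * insource_price_per_vm
            # Power off whole unused nodes instead of renting them out.
            shutdown = local_revenue + (spare // vms_per_node) * power_saving_per_node

            options = {"outsource": outsource, "insource": insource, "shutdown": shutdown}
            return max(options, key=options.get), options

        print(best_action(demand_vms=80, local_capacity_vms=100, revenue_per_vm=1.0,
                          outsource_cost_per_vm=0.7, insource_price_per_vm=0.3,
                          power_saving_per_node=0.5, vms_per_node=4))
        # -> ('insource', {'outsource': 80.0, 'insource': 86.0, 'shutdown': 82.5})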

    G-QoSM: Grid Service Discovery Using QoS Properties

    We extend the service abstraction in the Open Grid Services Architecture (OGSA) to support Quality of Service (QoS) properties. The realization of QoS often requires mechanisms such as advance or on-demand reservation of resources, varying in type and implementation, and independently controlled and monitored. Foster et al. propose the GARA architecture. The GARA library provides a restricted representation scheme for encoding resource properties and the associated monitoring of Service Level Agreements (SLAs). Our focus is on the application layer, whereby a given service may indicate the QoS properties it can offer, or where a service may search for other services based on particular QoS properties.
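
    As a minimal sketch of QoS-based discovery at the application layer, the snippet below filters a registry of advertised services against a client's QoS requirements. The registry structure and attribute names are hypothetical and do not reflect G-QoSM's actual service data model.

        # Hypothetical registry of services with advertised QoS attributes.
        registry = [
            {"name": "render-A", "availability": 0.99,  "max_latency_ms": 40, "cost": 5},
            {"name": "render-B", "availability": 0.95,  "max_latency_ms": 20, "cost": 3},
            {"name": "render-C", "availability": 0.999, "max_latency_ms": 80, "cost": 8},
        ]

        def discover(required):
            """Return services whose advertised QoS meets every requested constraint."""
            def matches(svc):
                return (svc["availability"] >= required.get("availability", 0.0)
                        and svc["max_latency_ms"] <= required.get("max_latency_ms", float("inf"))
                        and svc["cost"] <= required.get("cost", float("inf")))
            return [svc["name"] for svc in registry if matches(svc)]

        print(discover({"availability": 0.98, "max_latency_ms": 50}))  # ['render-A']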

    A Game-Theoretic Based QoS-Aware Capacity Management for Real-Time EdgeIoT Applications

    More and more real-time IoT applications, such as smart cities or autonomous vehicles, require big data analytics with reduced latencies. However, data streams produced by distributed sensing devices may not be suitable for traditional processing in the remote cloud due to: (i) long Wide Area Network (WAN) latencies and (ii) the limited resources held by a single Cloud. To solve this problem, a novel Software-Defined Network (SDN) based InterCloud architecture, known as EdgeIoT, is presented for mobile edge computing environments. An adaptive resource capacity management approach is proposed that employs a policy-based QoS control framework using principles from coalition games with externalities. To optimise the resource capacity policy, the proposed QoS management technique adaptively solves a lexicographic-ordering bi-criteria Coalition Structure Generation (CSG) problem, since it is an onerous task to guarantee deterministically that a real-time EdgeIoT application satisfies the low-latency requirements specified in Service Level Agreements (SLAs). The CloudSim 4.0 toolkit is used to simulate an SDN-based InterCloud scenario, and the empirical results suggest that the proposed approach can adapt, from an operational perspective, to ensure low-latency QoS for real-time EdgeIoT application instances.
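
    A toy version of the lexicographic bi-criteria CSG idea is sketched below: it enumerates every coalition structure over a tiny node set, scores each structure by (SLA violations, cost), and keeps the lexicographic minimum. The capacities, demand and cost model are invented for illustration and are not the paper's formulation.

        # Toy lexicographic bi-criteria coalition structure generation.
        NODES = ["edge1", "edge2", "edge3", "cloud"]
        CAPACITY = {"edge1": 2, "edge2": 2, "edge3": 1, "cloud": 10}
        DEMAND_PER_COALITION = 3      # assumed load each coalition must absorb
        COORDINATION_COST = 1.0       # assumed overhead per coalition
        WAN_PENALTY = 3.0             # assumed extra cost when a coalition uses the cloud

        def partitions(items):
            """Yield every partition (coalition structure) of a small list."""
            if not items:
                yield []
                return
            head, rest = items[0], items[1:]
            for part in partitions(rest):
                yield [[head]] + part                     # head forms its own coalition
                for i in range(len(part)):                # or joins an existing one
                    yield part[:i] + [[head] + part[i]] + part[i + 1:]

        def score(structure):
            violations = sum(1 for c in structure
                             if sum(CAPACITY[n] for n in c) < DEMAND_PER_COALITION)
            cost = sum(COORDINATION_COST + (WAN_PENALTY if "cloud" in c else 0.0)
                       for c in structure)
            return (violations, cost)                     # latency/SLA criterion first

        best = min(partitions(NODES), key=score)
        print(best, score(best))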

    Towards a proper service placement in combined Fog-to-Cloud (F2C) architectures

    The Internet of Things (IoT) has empowered the development of a plethora of new services, fueled by the deployment of devices located at the edge that provide multiple capabilities in terms of connectivity as well as data collection and processing. With the inception of the Fog Computing paradigm, aimed at diminishing the distance between edge devices and the IT premises running IoT services, the perceived service latency and even the security risks can be reduced, while simultaneously optimizing network usage. When put together, Fog and Cloud computing (recently coined as fog-to-cloud, F2C) can be used to maximize the advantages of future computer systems, with the whole greater than the sum of the individual parts. However, the specifics of cloud and fog resource models require new strategies to map novel IoT services onto suitable resources. Although a few proposals for service offloading between fog and cloud systems are slowly gaining momentum in the research community, many issues in service placement, both when a service is admitted for execution and when it is offloaded from Cloud to Fog, and vice versa, remain new and largely unsolved. In this paper, we provide some insights into the relevant features of service placement in F2C scenarios, highlighting the main challenges current systems face towards the deployment of next-generation IoT services. Postprint (author's final draft)
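
    The sketch below illustrates one simple latency-driven placement rule for F2C scenarios: latency-critical services are mapped to fog resources while delay-tolerant services go to the cloud, subject to fog capacity. The latency thresholds, capacities and service list are assumptions for illustration, not a policy proposed in the paper.

        # Minimal latency-driven fog-to-cloud placement sketch (illustrative only).
        FOG_LATENCY_MS, CLOUD_LATENCY_MS = 10, 120   # assumed round-trip latencies
        fog_free_slots = 2                           # assumed fog capacity

        services = [
            {"name": "alarm-detection", "max_latency_ms": 20,  "slots": 1},
            {"name": "video-archive",   "max_latency_ms": 500, "slots": 1},
            {"name": "traffic-control", "max_latency_ms": 15,  "slots": 2},
        ]

        placement = {}
        for svc in sorted(services, key=lambda s: s["max_latency_ms"]):  # tightest first
            needs_fog = svc["max_latency_ms"] < CLOUD_LATENCY_MS
            if needs_fog and fog_free_slots >= svc["slots"]:
                placement[svc["name"]] = "fog"
                fog_free_slots -= svc["slots"]
            elif not needs_fog:
                placement[svc["name"]] = "cloud"
            else:
                placement[svc["name"]] = "rejected"   # cannot meet its latency bound

        print(placement)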

    Pricing the Cloud: An Auction Approach

    Cloud computing has changed the processing and service modes of information and communication technology and has affected the transformation, upgrading and innovation of IT-related industry systems. The rapid development of cloud computing in business practice has spawned a whole new interdisciplinary field, providing opportunities and challenges for business management research. One of the critical factors impacting cloud computing is how to price cloud services. An appropriate pricing strategy has important practical implications for stakeholders, especially providers and customers. This study addresses and discusses research findings on cloud computing pricing strategies, such as fixed pricing, bidding pricing, and dynamic pricing. Another key factor for cloud computing is Quality of Service (QoS), covering availability, reliability, latency, security, throughput, capacity, scalability, elasticity, etc. Cloud providers seek to improve QoS to attract more potential customers, while customers intend to find QoS-matching services that do not exceed their budget constraints. Building on existing studies, a hybrid QoS-based pricing mechanism, which consists of a subscription component and a dynamic auction design, is proposed and illustrated for cloud services. The results indicate that our hybrid pricing mechanism has the potential to better allocate available cloud resources, increasing revenues for providers and reducing expenses for customers in practice.
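
    To illustrate the hybrid idea, the sketch below serves subscribers at a flat price and then sells leftover capacity through a simplified uniform-price auction in which winners pay the highest losing bid. The capacities, prices and bids are invented and do not reproduce the mechanism designed in the study.

        # Simplified hybrid subscription + uniform-price auction (illustrative only).
        TOTAL_INSTANCES = 10
        SUBSCRIPTION_PRICE = 2.0
        subscribed = 6                              # instances taken by subscribers
        spare = TOTAL_INSTANCES - subscribed

        # (bidder, bid per instance) for the spare capacity
        bids = [("b1", 3.5), ("b2", 1.8), ("b3", 2.6),
                ("b4", 2.9), ("b5", 2.2), ("b6", 3.0)]

        ranked = sorted(bids, key=lambda b: b[1], reverse=True)
        winners = ranked[:spare]
        # Uniform clearing price: highest losing bid, or a floor price if nobody loses.
        clearing_price = ranked[spare][1] if len(ranked) > spare else SUBSCRIPTION_PRICE

        revenue = subscribed * SUBSCRIPTION_PRICE + len(winners) * clearing_price
        print([name for name, _ in winners], clearing_price, revenue)
        # -> ['b1', 'b6', 'b4', 'b3'] 2.2 20.8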

    A Review on Fog Computing Systems

    The current decade has witnessed a wide deployment of Internet of Things (IoT) technology in various application domains, and its pervasive role will continue to strengthen in the future. Dealing with the vast number of connected devices and the big data they generate requires an efficient computing platform. Fog computing has been proposed as a solution: it is a paradigm extending cloud computing and services to the edge of the network, thus reducing the latency of dynamic decision making and improving real-time performance in general. This paper provides a view on the current state-of-the-art research in the area of fog computing and IoT technology.

    Edge Computing for Extreme Reliability and Scalability

    The massive number of Internet of Things (IoT) devices and their continuous data collection will lead to a rapid increase in the scale of collected data. Processing all these data at the central cloud server is inefficient, and may even be infeasible or unnecessary. Hence, the task of processing the data is pushed to the network edges, introducing the concept of Edge Computing. Processing the information closer to the source of data (e.g., on gateways and on edge micro-servers) not only reduces the huge workload of the central cloud but also decreases the latency of real-time applications by avoiding the unreliable and unpredictable network latency involved in communicating with the central cloud.