
    CLOUD SERVICE REVENUE MANAGEMENT

    Successful Internet service offerings can only thrive if customers are satisfied with service performance. While large service providers can usually cope with fluctuations in customer visits while retaining acceptable Quality of Service, small and medium-sized enterprises face a big challenge due to their limited IT infrastructure resources. Popular services, such as justin.tv and SmugMug, rely on external resources provided by cloud computing providers in order to satisfy their customers' demands at all times. The paradigm of cloud computing refers to the delivery of computing services as a utility in a pay-as-you-go manner. In this paper, we provide and computationally evaluate decision models and policies that can help cloud computing providers increase their revenue under the realistic assumption of scarce resources, and under both informational certainty and uncertainty of customers' resource requirement predictions. Our results show that, both under certainty and under uncertainty, applying the dynamic pricing policy significantly increases revenue, while using the client classification policy substantially reduces revenue. We also show that, for all policies, the presence of uncertainty causes losses in revenue; when the client classification policy is applied, losses can even amount to more than 8%.
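    As a rough illustration of the kind of dynamic pricing policy evaluated here, the sketch below quotes a higher price when remaining capacity is scarce relative to the remaining selling horizon, and accepts a request only if the offered price meets that quote. The linear markup rule and all parameter names are assumptions made for illustration, not the paper's actual decision model.

```python
# Minimal sketch of a capacity-based dynamic pricing rule for a cloud provider
# with scarce resources. The linear markup and the parameter names are
# illustrative assumptions, not the decision model evaluated in the paper.

def dynamic_price(base_price: float, capacity_left: int, capacity_total: int,
                  time_left: float, horizon: float, sensitivity: float = 1.0) -> float:
    """Raise the price when remaining capacity is scarce relative to remaining time."""
    expected_fraction = time_left / horizon           # share of demand still to come
    available_fraction = capacity_left / capacity_total
    scarcity = max(0.0, expected_fraction - available_fraction)
    return base_price * (1.0 + sensitivity * scarcity)

def accept_request(offered_price: float, units: int, capacity_left: int,
                   capacity_total: int, time_left: float, horizon: float) -> bool:
    """Accept a request only if it fits and the offered price meets the current quote."""
    if units > capacity_left:
        return False
    return offered_price >= dynamic_price(1.0, capacity_left, capacity_total,
                                          time_left, horizon)

# Example: half the horizon remains but only 20% of capacity is left,
# so the quoted price rises above the base price.
print(dynamic_price(1.0, 20, 100, 50.0, 100.0))   # ~1.3
```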

    ERA: A Framework for Economic Resource Allocation for the Cloud

    Cloud computing has reached significant maturity from a systems perspective, but currently deployed solutions rely on rather basic economic mechanisms that yield suboptimal allocation of the costly hardware resources. In this paper we present Economic Resource Allocation (ERA), a complete framework for scheduling and pricing cloud resources, aimed at increasing the efficiency of cloud resource usage by allocating resources according to economic principles. The ERA architecture carefully abstracts the underlying cloud infrastructure, enabling the development of scheduling and pricing algorithms independently of the concrete lower-level cloud infrastructure and its concerns. Specifically, ERA is designed as a flexible layer that can sit on top of any cloud system and interfaces both with the cloud resource manager and with the users who reserve resources to run their jobs. The jobs are scheduled based on prices that are dynamically calculated according to the predicted demand. Additionally, ERA provides a key internal API to pluggable algorithmic modules that include scheduling, pricing and demand prediction. We provide proof-of-concept software and demonstrate the effectiveness of the architecture by testing ERA over both public and private cloud systems: Microsoft's Azure Batch and Hadoop/YARN. A broader intent of our work is to foster collaboration between the economics and systems communities. To that end, we have developed a simulation platform via which economics and systems experts can test their algorithmic implementations.
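    The abstract mentions a key internal API with pluggable scheduling, pricing and demand-prediction modules. The sketch below shows one plausible shape for such a plug-in layer; the class and method names are illustrative assumptions, not ERA's real interface.

```python
# Sketch of what a pluggable scheduling/pricing/prediction layer could look
# like, following the module split described in the abstract. All names are
# illustrative assumptions, not ERA's actual API.

from abc import ABC, abstractmethod
from typing import Dict, List

class DemandPredictor(ABC):
    @abstractmethod
    def predict(self, history: List[Dict]) -> Dict[str, float]:
        """Return predicted demand per resource type for the next period."""

class PricingModule(ABC):
    @abstractmethod
    def price(self, predicted_demand: Dict[str, float],
              available: Dict[str, float]) -> Dict[str, float]:
        """Return a price per resource type, given predicted demand and supply."""

class Scheduler(ABC):
    @abstractmethod
    def schedule(self, jobs: List[Dict], prices: Dict[str, float]) -> List[Dict]:
        """Decide which jobs to run, and when, given the current prices."""

class EconomicLayer:
    """Glue layer sitting on top of a cloud resource manager (e.g. YARN or Azure Batch)."""
    def __init__(self, predictor: DemandPredictor, pricer: PricingModule, scheduler: Scheduler):
        self.predictor, self.pricer, self.scheduler = predictor, pricer, scheduler

    def run_cycle(self, history: List[Dict], available: Dict[str, float],
                  pending_jobs: List[Dict]) -> List[Dict]:
        # One decision cycle: predict demand, derive prices, then schedule.
        demand = self.predictor.predict(history)
        prices = self.pricer.price(demand, available)
        return self.scheduler.schedule(pending_jobs, prices)
```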

    Competitive Bandwidth Reservation via Cloud Brokerage for Video Streaming Applications


    Equilibrium and Learning in Queues with Advance Reservations

    Consider a multi-class preemptive-resume M/D/1 queueing system that supports advance reservations (AR). In this system, strategic customers must decide whether to reserve a server in advance (thereby gaining higher priority) or to avoid AR. Reserving a server in advance bears a cost. In this paper, we conduct a game-theoretic analysis of this system, characterizing the equilibrium strategies. Specifically, we show that the game has two types of equilibria. In one type, none of the customers makes a reservation. In the other type, only customers that realize early enough that they will need service make reservations. We show that the types and number of equilibria depend on the parameters of the queue and on the reservation cost. Specifically, we prove that the equilibrium is unique if the server utilization is below 1/2. Otherwise, there may be multiple equilibria depending on the reservation cost. Next, we assume that the reservation cost is a fee set by the provider. In that case, we show that the revenue-maximizing fee leads to a unique equilibrium if the utilization is below 2/3, but to multiple equilibria if the utilization exceeds 2/3. Finally, we study a dynamic version of the game, where users learn and adapt their strategies based on observations of past actions or strategies of other users. Depending on the type of learning (i.e., action learning vs. strategy learning), we show that the game converges to an equilibrium in some cases, while it cycles in other cases.
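    To make the strategic trade-off concrete, the sketch below runs simple best-response dynamics in which a customer reserves only if the waiting-time saving from higher priority exceeds the reservation cost. The delay model is a deliberately simplified stand-in, not the paper's preemptive-resume M/D/1 analysis, so it illustrates only the structure of the game, not its equilibrium results.

```python
# Illustrative best-response dynamics for the advance-reservation (AR) game:
# each customer reserves iff the waiting-time saving from higher priority
# exceeds the reservation cost. The delay model below is a simplified
# stand-in for the paper's queueing analysis.

def delays(p_reserve: float, rho: float):
    """Stylized mean delays for AR (high priority) and non-AR (low priority) customers.
    rho is the total utilization; p_reserve is the fraction of customers that reserve."""
    rho_hi = rho * p_reserve                        # load contributed by reserving customers
    d_hi = 1.0 / (1.0 - rho_hi)                     # AR customers are delayed only by AR load
    d_lo = 1.0 / ((1.0 - rho_hi) * (1.0 - rho))     # non-AR customers are delayed by everyone
    return d_hi, d_lo

def best_response(p_reserve: float, rho: float, cost: float) -> float:
    """Reserve (1.0) if the delay saving exceeds the reservation cost, else don't (0.0)."""
    d_hi, d_lo = delays(p_reserve, rho)
    return 1.0 if (d_lo - d_hi) > cost else 0.0

def iterate(rho: float, cost: float, steps: int = 50, step_size: float = 0.1) -> float:
    """Action-learning-style dynamics: nudge the reserving fraction toward the best response."""
    p = 0.5
    for _ in range(steps):
        p += step_size * (best_response(p, rho, cost) - p)
    return p

print(iterate(rho=0.4, cost=2.0))   # low utilization: drifts toward no one reserving at this cost
print(iterate(rho=0.8, cost=2.0))   # high utilization: drifts toward (almost) everyone reserving
```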

    TOWARDS AN EFFICIENT DECISION POLICY FOR CLOUD SERVICE PROVIDERS

    Cloud service providers face the problem of how to price infrastructure services and how this pricing impacts resource utilization. One aspect of this problem is deciding whether to accept or reject requests for services when the resources for offering these services become scarce. This paper proposes a decision support policy called the Customized Bid-Price Policy (CBPP) to decide efficiently when a large number of services, or complex services, can be offered over a finite time horizon. This heuristic outperforms well-known policies when bid prices cannot be updated frequently between incoming requests and an automated update of bid prices is required to achieve more accurate decisions. Since CBPP approximates the revenue offline, before the requests occur, it has a low runtime during the online phase compared to other approaches. The performance is examined via simulation and the pre-eminence of CBPP is statistically confirmed.
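    The sketch below illustrates the generic bid-price accept/reject rule that policies like CBPP build on: a request is accepted only if capacity suffices and its revenue covers the opportunity cost implied by per-resource bid prices approximated offline. The concrete numbers and the request format are assumptions for illustration, not the paper's CBPP computation.

```python
# Minimal bid-price accept/reject rule. Bid prices (one opportunity-cost
# estimate per resource) are assumed to have been approximated offline;
# the values below are illustrative, not output of the CBPP heuristic.

from typing import Dict

def accept(request_revenue: float,
           resource_usage: Dict[str, float],
           bid_prices: Dict[str, float],
           remaining_capacity: Dict[str, float]) -> bool:
    """Accept iff capacity suffices and revenue covers the bid-price opportunity cost."""
    for res, amount in resource_usage.items():
        if amount > remaining_capacity.get(res, 0.0):
            return False
    opportunity_cost = sum(bid_prices.get(res, 0.0) * amount
                           for res, amount in resource_usage.items())
    return request_revenue >= opportunity_cost

# Example request: 2 CPU-hours and 4 GB-hours of memory offered at a revenue of 3.0.
bid_prices = {"cpu": 0.8, "mem": 0.3}     # offline-estimated opportunity costs
capacity = {"cpu": 10.0, "mem": 32.0}
print(accept(3.0, {"cpu": 2.0, "mem": 4.0}, bid_prices, capacity))  # True: 3.0 >= 2.8
```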

    RoadRunner: Infrastructure-less vehicular congestion control

    RoadRunner is an in-vehicle app for traffic congestion control that requires no costly roadside infrastructure; instead, it judiciously harnesses vehicle-to-vehicle communications, cellular connectivity, and onboard computation and sensing to enable large-scale traffic congestion control at higher penetration and finer granularity than previously possible. RoadRunner limits the number of vehicles in a congested region or road by requiring each to possess a token for entry. Tokens can circulate and be reused among multiple vehicles as they move between regions. We built RoadRunner as an Android app utilizing LTE, 802.11p, and 802.11n radios, deployed it on 10 vehicles, and measured cellular access reductions of up to 84% and response time improvements of up to 80%. In a microscopic agent-based traffic simulator, RoadRunner achieved travel speed improvements of up to 7.7% over an industry-strength electronic road pricing system.
    Funding: Singapore-MIT Alliance for Research and Technology; American Society for Engineering Education, National Defense Science and Engineering Graduate Fellowship.
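    A toy sketch of the token-based admission idea follows, assuming a per-region token cap, a grant on entry and a release on exit so tokens can be reused. The shared Region object stands in for the V2V/cellular token exchange of the real system, and all names are illustrative.

```python
# Toy RoadRunner-style token admission: a region caps the number of circulating
# tokens, a vehicle needs a token to enter, and the token is released (and can
# be reused by another vehicle) when it leaves. This shared object is only a
# stand-in for the distributed token exchange over V2V/cellular links.

class Region:
    def __init__(self, name: str, token_limit: int):
        self.name = name
        self.tokens_free = token_limit      # tokens not currently held by any vehicle

    def try_enter(self, vehicle_id: str) -> bool:
        """Grant a token if one is available; otherwise the vehicle must wait or reroute.
        (vehicle_id is kept only to mirror a per-vehicle protocol.)"""
        if self.tokens_free > 0:
            self.tokens_free -= 1
            return True
        return False

    def leave(self, vehicle_id: str) -> None:
        """Return the token so another vehicle can reuse it."""
        self.tokens_free += 1

downtown = Region("downtown", token_limit=2)
print(downtown.try_enter("car-1"))   # True
print(downtown.try_enter("car-2"))   # True
print(downtown.try_enter("car-3"))   # False: region at capacity
downtown.leave("car-1")
print(downtown.try_enter("car-3"))   # True: token reused
```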

    Electric Vehicle Charging Recommendation and Enabling ICT Technologies: Recent Advances and Future Directions

    The introduction of Electric Vehicles (EVs) will have a significant impact on the sustainable economic development of urban cities. However, compared with traditional gasoline-powered vehicles, EVs currently have a limited range, which necessitates regular recharging. Considering the limited charging infrastructure currently available in most countries, infrastructure investments and Renewable Energy Sources (RES) are critical, and service quality provisioning is necessary for realizing the EV market. Unlike the numerous previous works that investigate "charging scheduling" (when/whether to charge) for EVs already parked at home or at Charging Stations (CSs), only a few works focus on "charging recommendation" (where/at which CS to charge) for on-the-move EVs. The latter use case cannot be overlooked, as it is the most important feature of EVs, especially for the driving experience during journeys: on-the-move EVs travel towards appropriate CSs based on a smart decision on where to charge, so as to experience a shorter waiting time for charging. The effort towards sustainable engagement of EVs has not attracted enough attention from either the industrial or the academic community. Even though many charging service providers are already available, the utilization of charging infrastructure is still in need of significant enhancement; such a situation certainly calls for greater EV popularity towards a sustainable, green and economical market. Enabling this sustainability requires a joint contribution from each domain, e.g., how to guarantee accurate information in decision making, how to optimally guide EV drivers towards the charging place with the least waiting time, and how to schedule charging services for parked EVs within grid capacity. Achieving this goal calls for an efficient, scalable and smart ICT framework that makes it feasible to learn the whole picture of the grid:
    - Necessary information needs to be disseminated between the stakeholders (CSs and EVs), e.g., the expected queuing time at individual CSs. In this context, the accuracy of CS condition information plays an important role in the optimality of the charging recommendation.
    - It is also very time-consuming for a centralized Global Controller (GC) to achieve optimization by seamlessly collecting data from all EVs and CSs; the complexity and computational load of this centralized solution increase exponentially with the number of EVs.
    This paper summarizes the recent interdisciplinary research on EV charging recommendation along with novel ICT frameworks, with an original taxonomy of how Intelligent Transportation Systems (ITS) technologies support the EV charging use case. Future directions are also highlighted to promote future research.
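    As a minimal illustration of charging recommendation for an on-the-move EV, the sketch below picks the CS that minimizes travel time plus the CS-reported expected queuing time. The cost model and field names are simplifying assumptions, not a specific scheme from the surveyed works.

```python
# Illustrative charging-station recommendation: choose the CS minimizing
# expected time-to-charge = travel time + CS-reported expected queuing time.
# The data model is an assumption made for illustration only.

from dataclasses import dataclass
from typing import List

@dataclass
class ChargingStation:
    name: str
    travel_time_min: float        # estimated driving time from the EV to this CS
    expected_queue_min: float     # queuing-time estimate reported by the CS

def recommend(stations: List[ChargingStation]) -> ChargingStation:
    """Return the CS with the smallest travel time plus expected queuing time."""
    return min(stations, key=lambda cs: cs.travel_time_min + cs.expected_queue_min)

stations = [
    ChargingStation("CS-A", travel_time_min=5.0, expected_queue_min=20.0),
    ChargingStation("CS-B", travel_time_min=12.0, expected_queue_min=4.0),
]
print(recommend(stations).name)   # CS-B: 16 min total vs. 25 min at CS-A
```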

    Cicada: Predictive Guarantees for Cloud Network Bandwidth

    In cloud-computing systems, network-bandwidth guarantees have been shown to improve predictability of application performance and cost. Most previous work on cloud-bandwidth guarantees has assumed that cloud tenants know what bandwidth guarantees they want. However, application bandwidth demands can be complex and time-varying, and many tenants might lack sufficient information to request a bandwidth guarantee that is well-matched to their needs. A tenant's lack of accurate knowledge about its future bandwidth demands can lead to over-provisioning (and thus reduced cost-efficiency) or under-provisioning (and thus poor user experience in latency-sensitive user-facing applications). We analyze traffic traces gathered over six months from an HP Cloud Services datacenter, finding that application bandwidth consumption is both time-varying and spatially inhomogeneous. This variability makes it hard to predict requirements. To solve this problem, we develop a prediction algorithm usable by a cloud provider to suggest an appropriate bandwidth guarantee to a tenant. The key idea in the prediction algorithm is to treat a set of previously observed traffic matrices as "experts" and learn online the best weighted linear combination of these experts to make its prediction. With tenant VM placement using these predictive guarantees, we find that the inter-rack network utilization in certain datacenter topologies can be more than doubled.
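    The "experts" idea can be sketched with a standard multiplicative-weights learner over previously observed traffic matrices, as below. The update rule and learning rate are illustrative choices and are not claimed to be Cicada's exact algorithm.

```python
# Sketch of an online "experts" predictor: keep a set of previously observed
# traffic matrices as experts and learn a weighted combination of them to
# predict the next matrix. The multiplicative-weights update and the learning
# rate are illustrative, not necessarily Cicada's exact rule.

import numpy as np

class ExpertPredictor:
    def __init__(self, experts: np.ndarray, eta: float = 0.5):
        # experts: shape (k, n, n) -- k previously observed n-by-n traffic matrices
        self.experts = experts
        self.weights = np.ones(len(experts)) / len(experts)
        self.eta = eta

    def predict(self) -> np.ndarray:
        """Weighted linear combination of the expert traffic matrices."""
        return np.tensordot(self.weights, self.experts, axes=1)

    def update(self, observed: np.ndarray) -> None:
        """Shift weight toward experts whose matrices were closest to the observation."""
        losses = np.array([np.mean((e - observed) ** 2) for e in self.experts])
        self.weights *= np.exp(-self.eta * losses / (losses.max() + 1e-9))
        self.weights /= self.weights.sum()

# Example with two 2x2 traffic matrices (bandwidth between VM pairs).
experts = np.array([[[10.0, 2.0], [2.0, 10.0]],
                    [[50.0, 5.0], [5.0, 50.0]]])
pred = ExpertPredictor(experts)
pred.update(np.array([[12.0, 2.0], [2.0, 11.0]]))   # observation resembles expert 0
print(pred.predict())                                # prediction now leans toward expert 0
```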