Dynamic Resource Management in Clouds: A Probabilistic Approach
Dynamic resource management has become an active area of research in the Cloud Computing paradigm. The cost of resources varies significantly depending on the configuration in which they are used, so efficient resource management is of prime interest to both Cloud Providers and Cloud Users. In this work we suggest a probabilistic resource provisioning approach that can be exploited as the input of a dynamic resource management scheme. Using a Video on Demand use case to support our claims, we propose an analytical model, inspired by standard models of epidemic spreading, to represent sudden and intense workload variations. We show that the resulting model satisfies a Large Deviation Principle that statistically characterizes extreme, rare events, such as those produced by "buzz/flash crowd effects", which may cause workload overflow in the VoD context. This analysis provides valuable insight into the abnormal system behaviors that can be expected. We exploit the information obtained from the Large Deviation Principle in the proposed Video on Demand use case to define policies (Service Level Agreements). We believe these policies for elastic resource provisioning and usage may be of interest to all stakeholders in the emerging context of cloud networking.
Comment: IEICE Transactions on Communications (2012). arXiv admin note: substantial text overlap with arXiv:1209.515
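As a rough illustration of the kind of epidemic-inspired workload model the abstract refers to, the sketch below simulates a buzz-like burst of VoD demand with a toy SI-with-recovery dynamic. The contagion rate, loss-of-interest rate, population size, and seed are illustrative placeholders, not the parameters or the exact model used in the paper.

```python
def buzz_workload(n_users=100_000, beta=3e-5, gamma=0.05,
                  seed_viewers=10, horizon=500):
    """Toy epidemic (SI-with-recovery) dynamic for a VoD 'buzz':
    'infected' users are the currently active viewers.

    beta  -- word-of-mouth contagion rate (illustrative value)
    gamma -- rate at which viewers lose interest (illustrative value)
    Returns the number of concurrent viewers at each time step.
    """
    susceptible = float(n_users - seed_viewers)
    infected = float(seed_viewers)
    load = []
    for _ in range(horizon):
        # New viewers recruited by word of mouth, capped by the remaining pool.
        new_viewers = min(beta * susceptible * infected, susceptible)
        dropouts = gamma * infected
        susceptible -= new_viewers
        infected = max(infected + new_viewers - dropouts, 0.0)
        load.append(infected)
    return load

if __name__ == "__main__":
    load = buzz_workload()
    peak = max(load)
    print(f"peak concurrent viewers: {peak:,.0f} at step {load.index(peak)}")
```

Plotting or thresholding the resulting trace gives a feel for how sharply such buzz-driven workloads can overshoot a statically provisioned capacity.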
Competition in a Pure World of Internet Telephony
From the angle of competition policy, Voice over IP looks like a panacea. It not only brings better service, but also increases competitive pressure on former telecommunications monopolists. This paper points to the largely overlooked downside. In a pure world of Internet telephony, there would be no charge for individual calls, nor for telephony as distinct from other services running over the uniform network. Specifically, establishing property rights for either of these would be costly, whereas these property rights were automatic and free of charge in switched telephony. Giving Voice over IP providers classic telephone numbers would enhance systems competition with switched telephony, but it would make it more difficult for clients to switch providers. The anti-competitive caller-pays principle would extend to IP telephony.
Keywords: property rights, non-linear pricing, pure bundling, club goods, cross-subsidisation, packet-switched telephony
On the feasibility of collaborative green data center ecosystems
The increasing awareness of the impact of the IT sector on the environment, together with economic factors, has fueled many research efforts to reduce the energy expenditure of data centers. Recent work proposes to achieve additional energy savings by exploiting service workloads in concert with customers, and to reduce data centers’ carbon footprints by adopting demand-response mechanisms between data centers and their energy providers. In this paper, we discuss the incentives that customers and data centers may have to adopt such measures, and we propose a new service type and pricing scheme that is economically attractive and technically realizable. Simulation results based on real measurements confirm that our scheme can achieve additional energy savings while preserving service performance and the interests of data centers and customers.
Proactive model to determine information technologies supporting expansion of air cargo network
Shippers and recipients expect transportation companies to provide more than just the movement of a package between points; certain information must be available to them as well, to enable forecasts and plans within the supply chain.
The transportation companies also need the information flow that undergirds a transportation grid, to support ad-hoc routing and strategic structural re-alignment of business processes.
This research delineates the information needs for an expanding air cargo network, then develops a new model of the information technologies needed to support expansion into a new country. The captured information will be used by shippers, recipients, and the transportation provider to better guide business decisions. This model will provide a method for transportation companies to balance the tradeoffs among the operating efficiencies, capital expenditures, and customer expectations of their IT systems. The output of the model is a list of technologies – optimized by cost – which meet the specific needs of internal and external customers when expanding air cargo networks into a new country.
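The cost-optimized selection of technologies described above can be read as a weighted set-cover problem: choose the cheapest set of technologies that jointly satisfies all required information needs. The sketch below is purely illustrative; the technology names, costs, and needs are hypothetical placeholders, and a greedy heuristic stands in for whatever optimization procedure the research actually employs.

```python
def select_technologies(candidates, required_needs):
    """Greedy weighted set cover: repeatedly pick the technology with the
    lowest cost per newly covered information need.

    candidates:     dict name -> (cost, set of needs it satisfies)
    required_needs: set of needs that must all be covered
    """
    uncovered = set(required_needs)
    chosen, total_cost = [], 0.0
    while uncovered:
        best = min(
            (n for n, (cost, needs) in candidates.items()
             if n not in chosen and needs & uncovered),
            key=lambda n: candidates[n][0] / len(candidates[n][1] & uncovered),
            default=None,
        )
        if best is None:  # remaining needs cannot be covered by any candidate
            raise ValueError(f"uncoverable needs: {uncovered}")
        cost, needs = candidates[best]
        chosen.append(best)
        total_cost += cost
        uncovered -= needs
    return chosen, total_cost

# Hypothetical candidates and needs, for illustration only.
techs = {
    "barcode_scanning": (50.0, {"package_tracking"}),
    "edi_gateway":      (120.0, {"shipper_forecasts", "customs_docs"}),
    "track_and_trace":  (200.0, {"package_tracking", "shipper_forecasts"}),
}
needs = {"package_tracking", "shipper_forecasts", "customs_docs"}
print(select_technologies(techs, needs))  # (['barcode_scanning', 'edi_gateway'], 170.0)
```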
Oligopolies in private spectrum commons: analysis and regulatory implications
In an effort to make more spectrum available, recent initiatives by the FCC let mobile providers offer spot service of their licensed spectrum to secondary users, hence paving the way to dynamic secondary spectrum markets. This dissertation investigates secondary spectrum markets under different regulatory regimes by identifying profitability conditions and possible competitive outcomes in an oligopoly model. We consider pricing in a market where multiple providers compete for secondary demand.
First, we analyze the market outcomes when providers adopt a coordinated access policy, where, besides pricing, a provider can elect to apply admission control on secondary users based on the state of its network. We next consider competition when providers implement an uncoordinated access policy (i.e., no admission control). Through our analysis, we identify profitability conditions and fundamental price thresholds, including break-even and market-sharing prices. We prove that, regardless of the specific form of the secondary demand function, competition under coordinated access always leads to a price-war outcome. In contrast, under uncoordinated access, market sharing becomes a viable outcome if the intervals of prices for which the providers are willing to share the market overlap.
We then turn our attention to how a network provider can use carrier (spectrum) aggregation to lower its break-even price and gain an edge over its competition. To this end, we determine the optimal (minimum) level of carrier aggregation that a smaller provider needs. Under a quality-driven (QD) regime, we establish an efficient way of numerically calculating the optimal carrier aggregation and derive scaling laws. We extend the results to delay-related metrics and show their applications to profitable pricing in secondary spectrum markets.
Finally, we consider the problem of profitability over a spatial topology, where identifying system behavior suffers from the curse of dimensionality. Hence, we propose an approximation model that captures system behavior to first order, derive an expression for the break-even price at each network location, and provide simulation results to assess its accuracy. All of our results hold for general forms of demand, thus avoiding restrictive assumptions about customer preferences and the valuation of the spectrum.
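For a concrete feel of the break-even price mentioned above, the sketch below locates it numerically for a generic decreasing demand function. The toy profit model (revenue from served secondary demand minus a fixed cost of offering secondary access) and the example demand curve are assumptions made for illustration; the dissertation's actual profitability conditions depend on the provider's primary traffic and access policy.

```python
from math import exp
from scipy.optimize import brentq

def profit(price, demand, fixed_cost=30.0):
    """Toy profit model: revenue from secondary demand served at `price`
    minus a fixed cost of offering secondary access (both assumed here,
    not taken from the dissertation)."""
    return price * demand(price) - fixed_cost

def break_even_price(demand, lo=1e-6, hi=None):
    """Smallest price at which profit crosses zero; `hi` defaults to a
    revenue-maximizing point found by a coarse scan, so that profit
    changes sign on [lo, hi] whenever the market is profitable at all."""
    if hi is None:
        hi = max((p / 10 for p in range(1, 1000)),
                 key=lambda p: p * demand(p))
    return brentq(lambda p: profit(p, demand), lo, hi)

# Generic exponential demand curve, for illustration only.
demand = lambda p: 50.0 * exp(-0.3 * p)
print(f"break-even price ~= {break_even_price(demand):.3f}")
```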
Resource sharing in network slicing and human-machine interactions
In this thesis we explore two novel resource allocation models. The first addresses challenges associated with dynamic sharing of network resources by multiple tenants/services via network slicing. The second focuses on a data-driven approach to the optimization of resource allocation in interactive human-machine processes. In our first thrust we investigate how to allocate shared storage, computation, and/or connectivity resources distributed amongst multiple tenants/virtual service providers with dynamic loads. The next generation of wireless networks is expected to be shared by an increasing number of data-intensive mobile applications (e.g., autonomous cars, IoT, interactive 360° video streaming) and tenants/service providers. A key functional requirement for such infrastructure is efficient sharing of heterogeneous resources among tenants/service providers supporting spatially varying and dynamic user demands, both to enable deployment and performance management for diverse service providers and/or tenants, and as a means to increase utilization and reduce the CAPEX/OPEX associated with deploying new infrastructure. To that end, we propose a novel dynamic resource sharing policy, Share Constrained Proportional Fair (SCPF), which allocates a predefined ‘share’ of a pool of (distributed) resources to each slice. We provide a characterization of the achievable performance gains over Generalized Processor Sharing (GPS) and Static Slicing (SS), i.e., fixed allocation of resources to slices. We also characterize the associated share dimensioning problem, asking when a particular set of load profiles and QoS requirements is feasible, as well as what an appropriate pricing strategy should be. We further consider a possible slice-based admission control scheme in which slices engage in an underlying game to maximize their carried loads subject to performance requirements. To accommodate settings where one wishes to provision different types of resources that are coupled through user demands, we generalize SCPF to a more general resource allocation criterion, Share Constrained Slicing (SCS), which extends the traditional α-fairness criterion by striking a balance among inter-slice fairness, intra-slice fairness, and overall efficiency. We show that SCS has several desirable properties, including slice-level protection, envy-freeness, and load-driven elasticity. In practice, mobile users' dynamics could make the cost of implementing SCS high, so we also study the feasibility of using a dynamically weighted max-min fair policy as a surrogate resource allocation scheme. For a setting with stochastic loads and elastic user requirements, we model the user dynamics under SCS as a queuing network and establish its stability condition. Finally, and perhaps surprisingly, we show via extensive simulation that while SCS (and/or the surrogate weighted max-min allocation) provides inter-slice protection, it can also achieve improved job delay and/or perceived throughput compared to other weighted max-min based allocation schemes whose intra-slice weight allocation is not share-constrained, e.g., traditional max-min and/or discriminatory processor sharing.
In our second thrust we study how to optimize resource allocation in the context of human-machine interactions. Examples of such processes include systems aimed at assisting humans in interactive learning, workload allocation, or web-search advertising. We devise an innovative framework to enable the optimization of a reward over an interactive process in a data-driven manner. This is a challenging problem for several reasons: (1) human behavior is not easily modeled and may reflect biases and memory and be sensitive to sequencing, all of which should/could be inferred from data; (2) because these interactions are typically sequential and transient, inferring such complex models of human behavior is difficult; and (3) in order to collect data on human-machine interactions one must choose a machine policy, which in turn may bias inferences on human behavior. In this thesis we approach the problem of jointly estimating human behavior and optimizing machine policies via an Alternating Entropy-Reward Ascent (AREA) algorithm. We characterize AREA in terms of its space and time complexity and convergence. We also provide an initial validation based on synthetic data generated by an established noisy nonlinear model for human decision-making.
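As a pointer to how a share-constrained allocation can work in practice, here is a minimal sketch of an SCPF-style split of a single resource, under the common reading that each slice divides its provisioned share equally among its active users and the resource is then allocated in proportion to the resulting user weights. This is a simplified single-resource illustration, not the full multi-resource model analyzed in the thesis.

```python
from collections import Counter

def scpf_allocation(shares, user_slice, capacity=1.0):
    """Share Constrained Proportional Fair split of one resource.

    shares:     dict slice_name -> provisioned share (need not sum to 1)
    user_slice: list mapping each active user at this resource to its slice
    Each slice divides its share equally among its active users here; the
    resource is then split in proportion to the resulting user weights.
    """
    users_per_slice = Counter(user_slice)
    weights = [shares[s] / users_per_slice[s] for s in user_slice]
    total = sum(weights)
    return [capacity * w / total for w in weights]

# Illustrative example: slice A holds 2/3 of the total share but has 4
# active users at this resource, slice B holds 1/3 with a single user.
alloc = scpf_allocation({"A": 2.0, "B": 1.0},
                        ["A", "A", "A", "A", "B"])
print([round(a, 3) for a in alloc])  # A-users get ~0.167 each, B-user ~0.333
```

The example shows the "share-constrained" effect: slice A's many users split A's share among themselves instead of each receiving a full per-user weight, which is what protects slice B from A's load surge.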
Experimental Performance Evaluation of Cloud-Based Analytics-as-a-Service
An increasing number of Analytics-as-a-Service solutions have recently seen the light in the landscape of cloud-based services. These services allow flexible composition of compute and storage components that create powerful data ingestion and processing pipelines. This work is a first attempt at an experimental evaluation of analytic application performance across a wide range of storage service configurations. We present an intuitive notion of data locality, which we use as a proxy to rank different service compositions in terms of expected performance. Through an empirical analysis, we dissect the performance achieved by analytic workloads and unveil problems due to the impedance mismatch that arises in some configurations. Our work paves the way to a better understanding of modern cloud-based analytic services and their performance, for both their end-users and their providers.
Comment: Longer version of the paper in submission at IEEE CLOUD'1
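To make the ranking idea concrete, here is a toy data-locality score for compute/storage compositions in which bytes read from storage closer to the compute tier weigh more. The tiers, weights, and byte counts are hypothetical placeholders, not the paper's actual metric or measured configurations.

```python
# Hypothetical locality weights: higher means the compute sits closer to
# the data. Tiers and weights are illustrative only.
LOCALITY_WEIGHT = {
    "local_disk": 1.0,    # data on the compute node itself
    "cluster_fs": 0.6,    # shared file system inside the same cluster
    "object_store": 0.3,  # remote object storage
}

def locality_score(composition):
    """Byte-weighted locality of a composition's inputs.
    `composition` maps a storage tier to the bytes read from it."""
    total = sum(composition.values())
    return sum(LOCALITY_WEIGHT[tier] * size
               for tier, size in composition.items()) / total

configs = {
    "all-local":   {"local_disk": 100e9},
    "mixed":       {"cluster_fs": 60e9, "object_store": 40e9},
    "remote-only": {"object_store": 100e9},
}
for name, cfg in sorted(configs.items(), key=lambda kv: -locality_score(kv[1])):
    print(f"{name:12s} locality={locality_score(cfg):.2f}")
```

Ranking candidate service compositions by such a score before deployment is one way a proxy like this could be used in place of full benchmarking.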
Platform interconnection and quality incentives
We analyze competition between two platforms with positive network externalities. Platforms can choose to interconnect or, alternatively, operate exclusively. We examine how this decision affects pricing behaviour and incentives to invest in platform quality. We find that interconnection is a means to reduce the externalities one side exerts on the other. It changes the mode of competition for subscribers and results in higher subscription prices. Further, even though interconnection allows for quality spillovers to the rival platform, it results in higher quality investment than the case of exclusive platforms. Coordination will facilitate collusion on the lowest quality levels possible if quality provision is costly; for low quality costs it will lead to asymmetric networks. Therefore, interconnection without coordinated investment activities is welfare maximising.
Keywords: two-sided markets, interconnection, investment in transaction quality