Paid Peering, Settlement-Free Peering, or Both?
With the rapid growth of congestion-sensitive and data-intensive
applications, traditional settlement-free peering agreements with best-effort
delivery often do not meet the QoS requirements of content providers (CPs).
Meanwhile, Internet access providers (IAPs) feel that revenues from end-users
are not sufficient to recoup the upgrade costs of network infrastructures.
Consequently, some IAPs have begun to offer CPs a new type of peering
agreement, called paid peering, under which they provide CPs with better data
delivery quality for a fee. In this paper, we model a network platform where an
IAP makes decisions on the peering types offered to CPs and the prices charged
to CPs and end-users. We study the IAP's optimal peering schemes, i.e., whether to offer CPs both paid and settlement-free peering to choose from or only one of them, when the objective is profit or welfare maximization. Our results
show that 1) the IAP should always offer paid peering under the profit-optimal scheme and settlement-free peering under the welfare-optimal scheme, 2) whether to simultaneously offer the other peering type is largely driven by the type of data traffic, e.g., text or video, and 3) regulators might want to encourage the IAP to allocate more network capacity to settlement-free peering to increase user welfare.
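As a toy companion to these results, the sketch below grid-searches the end-user price and the paid-peering fee under assumed linear demand and quadratic surplus terms; the functional forms and the omission of a capacity split are illustrative assumptions, not the paper's model. It reproduces the qualitative pattern: profit maximization prices both sides at interior levels, while welfare maximization pushes prices to zero, i.e., toward settlement-free access.

```python
# Toy model: an IAP picks an end-user price p and a paid-peering fee q.
# Linear demands D(x) = max(0, 1 - x) and quadratic surplus terms are
# illustrative stand-ins, not the paper's model.

def profit(p, q):
    d_user = max(0.0, 1.0 - p)   # assumed end-user demand at price p
    d_cp = max(0.0, 1.0 - q)     # assumed CP demand for paid peering at fee q
    return p * d_user + q * d_cp

def welfare(p, q):
    # welfare = IAP profit + user surplus + CP surplus (assumed quadratic)
    d_user = max(0.0, 1.0 - p)
    d_cp = max(0.0, 1.0 - q)
    return profit(p, q) + 0.5 * d_user ** 2 + 0.5 * d_cp ** 2

grid = [i / 100 for i in range(101)]
pairs = [(p, q) for p in grid for q in grid]
print("profit-optimal  (p, q):", max(pairs, key=lambda pq: profit(*pq)))   # (0.5, 0.5)
print("welfare-optimal (p, q):", max(pairs, key=lambda pq: welfare(*pq)))  # (0.0, 0.0)
```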
On Optimal Service Differentiation in Congested Network Markets
As Internet applications have become more diverse in recent years, users with heavy demand for online video services are more willing to pay higher prices for better service than light users who mainly use e-mail and instant messaging. This encourages Internet Service Providers (ISPs) to explore service differentiation so as to optimize their profits and the allocation of network resources. Much prior work has focused on the viability of network service differentiation by comparing it with a single-class service.
However, the optimal service differentiation for an ISP subject to resource
constraints has remained unsolved. In this work, we establish an optimal
control framework to derive the analytical solution to an ISP's optimal service
differentiation, i.e., the optimal service qualities and associated prices. By analyzing the structure of the solution, we reveal how an ISP should adjust its service qualities and prices to accommodate varying capacity constraints and user characteristics. We also derive the conditions under which ISPs have strong incentives to implement service differentiation and discuss whether regulators should encourage such practices.
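For intuition, here is a minimal two-class sketch of the menu-design problem: users of type theta ~ U[0,1] choose the class maximizing theta*quality - price, and the ISP grid-searches the two prices subject to a capacity constraint. The quality levels S1 and S2, the uniform type distribution, and the linear load model are illustrative assumptions, not the paper's optimal control framework.

```python
# Two-class sketch: a user of type theta gets utility theta*s - p from a
# class with quality s and price p, and picks the best non-negative option.
# Quality levels, the type distribution, and the load model are assumptions.
import numpy as np

S1, S2 = 1.0, 2.0                  # assumed quality levels (low, high)
CAPACITY = 1.2                     # assumed budget on total delivered quality
thetas = np.linspace(0, 1, 1001)   # discretized user types

def revenue(p1, p2):
    u1, u2 = thetas * S1 - p1, thetas * S2 - p2
    # 0 = opt out, 1 = low class, 2 = high class
    choice = np.where(np.maximum(u1, u2) < 0, 0, np.where(u2 >= u1, 2, 1))
    n1, n2 = np.mean(choice == 1), np.mean(choice == 2)
    load = n1 * S1 + n2 * S2       # resource consumed by the chosen menu
    return p1 * n1 + p2 * n2 if load <= CAPACITY else -np.inf

grid = np.linspace(0.01, 2.0, 80)
p1, p2 = max(((a, b) for a in grid for b in grid if a <= b),
             key=lambda pq: revenue(*pq))
print(f"optimal menu: p1={p1:.2f}, p2={p2:.2f}, revenue={revenue(p1, p2):.3f}")
```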
Sampling Online Social Networks via Heterogeneous Statistics
Most sampling techniques for online social networks (OSNs) are based on a particular sampling method on a single graph, which we refer to as a statistic. However, various sampling methods on different graphs could be used in the same OSN, and they may lead to different sampling efficiencies, i.e., asymptotic variances. To utilize multiple statistics for accurate measurements, we formulate a mixture sampling problem, through which we construct an unbiased mixture estimator that minimizes the asymptotic variance.
Given fixed sampling budgets for different statistics, we derive the optimal
weights to combine the individual estimators; given a fixed total budget, we show that a greedy allocation towards the most efficient statistic is optimal. In
practice, the sampling efficiencies of statistics can be quite different for
various targets and are unknown before sampling. To solve this problem, we
design a two-stage framework that adaptively spends a partial budget to test different statistics and allocates the remaining budget to the inferred best statistic. We show that our two-stage framework is a generalization of 1) randomly choosing a statistic and 2) evenly allocating the total budget among all available statistics, and our adaptive algorithm achieves higher efficiency than these benchmark strategies in both theory and experiments.
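For independent unbiased estimators, the optimal combination is the classical inverse-variance rule: minimizing the variance of sum_i w_i X_i subject to sum_i w_i = 1 yields weights w_i proportional to 1/v_i. A minimal sketch, with v_i standing in for each statistic's asymptotic variance under its budget:

```python
# Inverse-variance weighting: the minimum-variance unbiased combination of
# independent unbiased estimators x_i with variances v_i uses w_i ~ 1/v_i.
import numpy as np

def combine(estimates, variances):
    """Optimal mixture of independent unbiased estimators and its variance."""
    x = np.asarray(estimates, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / np.sum(1.0 / v)              # optimal weights, sum to 1
    return np.dot(w, x), 1.0 / np.sum(1.0 / v)   # variance <= min(v)

# three statistics estimating the same target with different efficiencies
est, var = combine([0.31, 0.28, 0.35], [0.010, 0.002, 0.050])
print(f"mixture estimate = {est:.3f}, variance = {var:.5f}")
```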
Stochastic Modeling of Hybrid Cache Systems
In recent years, there has been increasing demand for big-memory systems to perform large-scale data analytics. Since DRAM is expensive, some researchers have suggested using other memory technologies, such as non-volatile memory (NVM), to build large-memory computing systems. However, whether NVM technology can be a viable alternative (both economically and technically) to DRAM remains an open question. To answer this question, it is important to consider how to design a memory system from a "system perspective", that is, incorporating the different performance characteristics and price ratios of hybrid memory devices.
This paper presents an analytical model of a "hybrid page cache system" to understand the diverse design space and performance impact of a hybrid cache system. We consider (1) various architectural choices, (2) design strategies, and (3) configurations of different memory devices. Using this model, we provide guidelines on how to design a hybrid page cache to reach a good trade-off between high system throughput (in I/O per second, or IOPS) and fast cache reactivity, defined as the time to fill the cache. We also show how one can configure the DRAM and NVM capacities under a fixed budget. We pick phase-change memory (PCM) as an example of NVM and conduct numerical analysis. Our analysis indicates that incorporating PCM in a page cache system significantly improves system performance, and that allocating more PCM to the page cache brings larger benefits in some cases. Moreover, for the common setting of PCM's performance-price ratio, the "flat architecture" is the better choice, but the "layered architecture" outperforms it if PCM write performance can be significantly improved in the future.
Comment: 14 pages; MASCOTS 201
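As a back-of-envelope companion to the budget question, the sketch below splits a fixed budget between DRAM and PCM and computes mean IOPS from assumed per-GB prices, per-I/O latencies, and a linear-capped hit curve. Every constant and the hit-curve shape are illustrative assumptions; the paper's analytical model is the authoritative treatment.

```python
# Splitting a fixed budget between DRAM and PCM in a page cache.
# Prices, latencies, the working-set size, and the linear-capped hit curve
# are illustrative assumptions, not the paper's model.
BUDGET = 100.0                          # assumed dollars
PRICE_DRAM, PRICE_PCM = 8.0, 2.0        # assumed $/GB
LAT_DRAM, LAT_PCM, LAT_DISK = 0.1e-6, 1.0e-6, 100e-6  # assumed sec per I/O
WORKING_SET = 60.0                      # assumed GB of hot data

def hit_ratio(gb):
    """Assumed hit curve: linear in capacity, capped at 1."""
    return min(1.0, gb / WORKING_SET)

def mean_iops(frac_dram):
    dram_gb = BUDGET * frac_dram / PRICE_DRAM
    pcm_gb = BUDGET * (1.0 - frac_dram) / PRICE_PCM
    h_dram = hit_ratio(dram_gb)                    # hits served by DRAM
    h_pcm = hit_ratio(dram_gb + pcm_gb) - h_dram   # hits served by PCM
    miss = 1.0 - h_dram - h_pcm                    # requests going to disk
    latency = h_dram * LAT_DRAM + h_pcm * LAT_PCM + miss * LAT_DISK
    return 1.0 / latency

best = max((f / 20 for f in range(21)), key=mean_iops)
print(f"best DRAM budget share ~ {best:.2f}, mean IOPS ~ {mean_iops(best):,.0f}")
```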
Joint Rate Selection and Wireless Network Coding for Time Critical Applications
In this paper, we dynamically select the transmission rate and design wireless network coding to improve quality of service, such as delay, for time-critical applications. With a low transmission rate, and hence a longer transmission range, more packets may be encoded together, which increases coding opportunities. However, a low transmission rate may incur extra transmission delay, which is intolerable for time-critical applications. We design a novel joint rate selection and wireless network coding (RSNC) scheme with a delay constraint, so as to minimize the total number of packets that miss their
deadlines at the destination nodes. We prove that the proposed problem is
NP-hard, and propose a novel graph model and transmission metric that consider both the heterogeneous transmission rates and the packet deadline constraints
during the graph construction. Using the graph model, we mathematically
formulate the problem and design an efficient algorithm to determine the
transmission rate and coding strategy for each transmission. Finally,
simulation results demonstrate the superiority of the RSNC scheme.
Comment: Accepted by the 2012 IEEE Wireless Communications and Networking Conference (WCNC)
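To make the rate/coding trade-off concrete, here is a small greedy heuristic: at each step, send the earliest-deadline packet alone at the high rate, or XOR-encode a small batch at the low rate, whichever delivers more on-time packets per unit of airtime. The rates, batch size, and greedy rule are illustrative assumptions; the paper's actual RSNC algorithm is built on the graph model described above.

```python
# Greedy sketch of the rate/coding trade-off: a high-rate slot carries one
# packet quickly; a low-rate slot takes longer but can XOR-encode a batch.
# Rates, batch size, and the greedy rule are illustrative assumptions.
T_HIGH, T_LOW = 1.0, 2.5   # assumed airtime (ms) per high/low-rate slot
BATCH = 3                  # assumed packets encodable at the low rate

def on_time_packets(deadlines):
    """Greedily choose a rate per slot; count packets meeting deadlines (ms)."""
    pending = sorted(deadlines)              # earliest deadline first
    now, met = 0.0, 0
    while pending:
        batch = pending[:BATCH]
        hi = 1 if batch[0] >= now + T_HIGH else 0       # on time at high rate
        lo = sum(1 for d in batch if d >= now + T_LOW)  # on time at low rate
        if lo / T_LOW > hi / T_HIGH:   # on-time packets per ms of airtime
            met, pending, now = met + lo, pending[BATCH:], now + T_LOW
        else:
            met, pending, now = met + hi, pending[1:], now + T_HIGH
    return met

print(on_time_packets([2.0, 3.0, 9.0, 9.5, 10.0, 12.0]))  # -> 6
```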