On the Throughput Allocation for Proportional Fairness in Multirate IEEE 802.11 DCF
This paper presents a modified proportional fairness (PF) criterion suitable
for mitigating the \textit{rate anomaly} problem of multirate IEEE 802.11
Wireless LANs employing the mandatory Distributed Coordination Function (DCF)
option. Unlike the widely adopted assumption of a saturated network, the
proposed criterion can be applied to general networks in which the contending
stations are characterized by their specific packet arrival rates and
transmission rates.
The throughput allocation resulting from the proposed algorithm greatly
increases the aggregate throughput of the DCF while ensuring fairness levels
among the stations of the same order as those achieved by the classical PF
criterion. Put simply, each station is allocated a throughput that depends on
a suitable normalization of its packet rate, which, to some extent, measures
the frequency with which the station tries to gain access to the channel.
Simulation results are presented for some sample scenarios, confirming the
effectiveness of the proposed criterion.
Comment: Submitted to IEEE CCNC 200
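The idea of allocating throughput in proportion to a normalized packet rate can be sketched as follows. This is a minimal illustration under assumed simplifications (airtime shared linearly by the normalized weights), not the paper's exact criterion; all function and variable names are hypothetical.

```python
# Hypothetical sketch of a weighted proportional-fair airtime split.
# Weights are the stations' packet arrival rates normalized to sum to 1,
# mirroring the idea of allocating throughput by access frequency.

def pf_airtime_allocation(arrival_rates, tx_rates):
    """Return per-station airtime fractions and resulting throughputs.

    arrival_rates: packets/s offered by each station (assumed weights)
    tx_rates: PHY transmission rates in Mbit/s
    """
    total = sum(arrival_rates)
    weights = [r / total for r in arrival_rates]   # normalized packet rates
    # Each station's airtime fraction equals its weight, so a slow
    # station no longer drags down fast ones (the "rate anomaly"):
    # its low PHY rate only affects its own share.
    throughputs = [w * r for w, r in zip(weights, tx_rates)]
    return weights, throughputs

# Three stations at 54, 11, and 1 Mbit/s with different offered loads.
weights, thr = pf_airtime_allocation([50, 100, 50], [54.0, 11.0, 1.0])
```

Under this toy model the 1 Mbit/s station only costs the network its own quarter of the airtime, instead of forcing every packet exchange down to its rate.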
Smart Pacing for Effective Online Ad Campaign Optimization
In targeted online advertising, advertisers seek to maximize campaign
performance under a delivery constraint within a budget schedule. Most
advertisers prefer to impose the delivery constraint so that the budget is
spent smoothly over time, in order to reach a wider audience and have a
sustained impact. Since many impressions are traded through public auctions
in online advertising today, market liquidity makes the price elasticity and
the bid landscape between demand and supply change quite dynamically. It is
therefore challenging to perform smooth pacing control and maximize campaign
performance simultaneously. In this paper, we propose a smart pacing approach
in which the delivery pace of each campaign is learned from both offline and
online data to achieve smooth delivery and optimal performance goals. The
implementation of the proposed approach in a real DSP system is also presented.
Experimental evaluations on both real online ad campaigns and offline
simulations show that our approach can effectively improve campaign performance
and achieve delivery goals.
Comment: KDD'15, August 10-13, 2015, Sydney, NSW, Australia
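The pacing-control idea can be illustrated with a simple feedback loop: a pacing rate (the probability of entering each auction) is nudged toward a per-slot spend target. This is a much-simplified stand-in for the paper's approach of learning the delivery pace from offline and online data; the step size and clamping bounds are assumptions.

```python
# Minimal budget-pacing feedback sketch (illustrative, not the paper's
# learned pacing). The pacing rate is the probability of participating
# in an incoming auction.

def update_pacing_rate(rate, spent_in_slot, target_per_slot, step=0.1):
    """Multiplicatively nudge the pacing rate toward the spend target."""
    if spent_in_slot > target_per_slot:      # overspending: slow down
        rate *= (1 - step)
    elif spent_in_slot < target_per_slot:    # underspending: speed up
        rate *= (1 + step)
    return min(1.0, max(0.01, rate))         # keep the rate in (0.01, 1]

rate = 0.5
# Slot spent $12 against a $10 target, so the controller throttles back.
rate = update_pacing_rate(rate, spent_in_slot=12.0, target_per_slot=10.0)
```

Run each time slot, this keeps cumulative spend tracking the budget schedule even as auction prices drift; the harder part the paper addresses is choosing the target and the bids so that performance, not just smoothness, is optimized.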
Joint Computation Offloading and Prioritized Scheduling in Mobile Edge Computing
With the rapid development of smart phones, enormous amounts of data are generated that usually require intensive and real-time computation. Nevertheless, quality of service (QoS) is hard to meet due to the tension between resource-limited (battery, CPU power) devices and computation-intensive applications. Mobile-edge computing (MEC), emerging as a promising technique, can be used to cope with the stringent requirements of mobile applications. By offloading computationally intensive workloads to an edge server and applying efficient task scheduling, the energy cost of mobile devices can be significantly reduced, thereby greatly improving QoS, e.g., latency. This paper proposes a joint computation offloading and prioritized task scheduling scheme for a multi-user mobile-edge computing system. We investigate an energy-minimizing task offloading strategy on the mobile devices and develop an effective priority-based task scheduling algorithm on the edge server. The execution time, energy consumption, execution cost, and bonus score against both the task data sizes and the latency requirement are adopted as the performance metrics. Performance evaluation results show that the proposed algorithm significantly reduces task completion time and edge server VM usage cost, and improves QoS in terms of bonus score. Moreover, dynamic prioritized task scheduling is also discussed herein; the results show that dynamic threshold setting achieves optimal task scheduling. We believe that this work is significant to the emerging mobile-edge computing paradigm and can be applied to other Internet of Things (IoT)-Edge applications.
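The two building blocks, an offloading decision and priority-based scheduling at the edge, can be sketched as follows. The per-bit energy model and the deadline-as-priority rule are assumptions made for illustration; the paper's actual formulation (with execution cost and bonus score) is richer.

```python
import heapq

# Illustrative sketch only: energy model and priority rule are assumed,
# not the paper's formulation.

def should_offload(data_bits, local_j_per_bit, tx_j_per_bit):
    """Offload when transmitting the task costs less device energy
    than computing it locally."""
    return data_bits * tx_j_per_bit < data_bits * local_j_per_bit

def schedule(tasks):
    """Serve offloaded tasks tightest-latency-first via a min-heap."""
    heap = [(deadline_s, name) for name, deadline_s in tasks]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order

# A latency-critical sensor task jumps ahead of bulkier work.
order = schedule([("video", 0.5), ("sensor", 0.05), ("backup", 5.0)])
```

A dynamic variant, as discussed in the abstract, would adjust the priority thresholds at runtime rather than ranking purely by static deadline.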
FollowMe: Efficient Online Min-Cost Flow Tracking with Bounded Memory and Computation
One of the most popular approaches to multi-target tracking is
tracking-by-detection. Current min-cost flow algorithms which solve the data
association problem optimally have three main drawbacks: they are
computationally expensive, they assume that the whole video is given as a
batch, and they scale badly in memory and computation with the length of the
video sequence. In this paper, we address each of these issues, resulting in a
computationally and memory-bounded solution. First, we introduce a dynamic
version of the successive shortest-path algorithm which solves the data
association problem optimally while reusing computation, resulting in
significantly faster inference than standard solvers. Second, we address the
optimal solution to the data association problem when dealing with an incoming
stream of data (i.e., online setting). Finally, we present our main
contribution which is an approximate online solution with bounded memory and
computation which is capable of handling videos of arbitrary length while
performing tracking in real time. We demonstrate the effectiveness of our
algorithms on the KITTI and PETS2009 benchmarks and show state-of-the-art
performance, while being significantly faster than existing solvers.
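The online, bounded-memory setting can be illustrated with a deliberately simple data-association step: each new detection is greedily matched to the nearest live track within a gating distance. This is a toy stand-in, not the paper's dynamic min-cost-flow solver; the greedy rule trades the optimality of min-cost flow for the constant per-frame cost the online setting demands, and all names here are hypothetical.

```python
# Toy greedy nearest-neighbour association for online tracking
# (simplified stand-in for optimal min-cost-flow data association).

def associate(tracks, detections, gate=2.0):
    """Match each detection to the nearest unused track within `gate`.

    tracks: {track_id: (x, y)} last known positions
    detections: {det_id: (x, y)} current-frame detections
    """
    assignments = {}
    used = set()
    for d_id, (dx, dy) in detections.items():
        best, best_dist = None, gate
        for t_id, (tx, ty) in tracks.items():
            dist = ((dx - tx) ** 2 + (dy - ty) ** 2) ** 0.5
            if t_id not in used and dist < best_dist:
                best, best_dist = t_id, dist
        if best is not None:
            assignments[d_id] = best     # extend this track
            used.add(best)
    return assignments                   # unmatched detections spawn tracks

tracks = {1: (0.0, 0.0), 2: (5.0, 5.0)}
dets = {"a": (0.5, 0.2), "b": (5.1, 4.8)}
matches = associate(tracks, dets)
```

Because only the current track states are kept (not the whole video graph), memory stays bounded regardless of sequence length, which is the property the paper's approximate online solution preserves while staying near the optimal flow solution.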
Scalable Parallel Numerical Constraint Solver Using Global Load Balancing
We present a scalable parallel solver for numerical constraint satisfaction
problems (NCSPs). Our parallelization scheme consists of homogeneous worker
solvers, each of which runs on an available core and communicates with others
via the global load balancing (GLB) method. The parallel solver is implemented
with X10, which provides an implementation of GLB as a library. In
experiments, several NCSPs from the literature were solved, and the solver
attained up to a 516-fold speedup using 600 cores of the TSUBAME2.5
supercomputer.
Comment: To be presented at the X10'15 Workshop
A comparison of RESTART implementations
The RESTART method is a widely applicable simulation technique for the estimation of rare-event probabilities. The method is based on the idea of restarting the simulation in certain system states in order to generate more occurrences of the rare event. One of the main questions for any RESTART implementation is how and when to restart the simulation in order to achieve the most accurate results for a fixed simulation effort. We investigate and compare, both theoretically and empirically, different implementations of the RESTART method. We find that the original RESTART implementation, in which each path is split into a fixed number of copies, may not be the most efficient one. It is generally better to fix the total simulation effort for each stage of the simulation. Furthermore, given this effort, the best strategy is to restart an equal number of times from each state, rather than to restart each time from a randomly chosen state.
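The fixed-effort variant favoured above can be sketched on a toy rare event: a biased random walk (upward probability 0.3) starting at 0 reaching level 3 before dropping to -1. Each stage spends a fixed number of runs, and restarts cycle over the states that entered the stage (equal restarts per state). The model and parameters are illustrative, not from the paper.

```python
import random

# Fixed-effort multilevel splitting (RESTART-style) sketch.
# Rare event: biased walk from 0 reaches level 3 before hitting -1.

def run_to_next_level(state, level, p_up, rng):
    """Simulate until the walk reaches `level` (success) or -1 (failure)."""
    while -1 < state < level:
        state += 1 if rng.random() < p_up else -1
    return state if state == level else None

def restart_estimate(levels=(1, 2, 3), effort=2000, p_up=0.3, seed=1):
    rng = random.Random(seed)
    entry_states = [0]          # states from which the current stage starts
    p_hat = 1.0
    for level in levels:
        successes = []
        for i in range(effort):
            # Fixed total effort per stage; restart an equal number of
            # times from each entry state by cycling through them.
            start = entry_states[i % len(entry_states)]
            out = run_to_next_level(start, level, p_up, rng)
            if out is not None:
                successes.append(out)
        if not successes:
            return 0.0          # all paths died before this level
        p_hat *= len(successes) / effort   # conditional level probability
        entry_states = successes
    return p_hat                # product of per-stage estimates

p_hat = restart_estimate()
```

By gambler's-ruin arithmetic the true probability here is about 0.047, far below what naive simulation with the same total effort would resolve reliably; the stage-wise product concentrates the runs on paths that have already made progress toward the rare event.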