Dynamic vs Oblivious Routing in Network Design
Consider the robust network design problem of finding a minimum cost network
with enough capacity to route all traffic demand matrices in a given polytope.
We investigate the impact of different routing models in this robust setting:
in particular, we compare \emph{oblivious} routing, where the routing between
each terminal pair must be fixed in advance, to \emph{dynamic} routing, where
routings may depend arbitrarily on the current demand. Our main result is a
construction that shows that the optimal cost of such a network based on
oblivious routing (fractional or integral) may be a factor of
\Omega(\log n) more than the cost required when using dynamic routing.
This is true even in the important special case of the asymmetric hose model.
This answers a question in \cite{chekurisurvey07}, and is tight up to constant
factors. Our proof technique builds on a connection between expander graphs and
robust design for single-sink traffic patterns \cite{ChekuriHardness07}.
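To make the two routing models concrete, here is a small brute-force sketch of the definitions (the 4-cycle instance, path lists, and demand scenarios are made up for illustration, not taken from the paper). Oblivious routing fixes one path choice for every scenario; dynamic routing may pick a different choice per scenario, so its cost is never higher.

```python
from itertools import product

# Toy instance: a 4-cycle a-b-c-d with unit-cost capacity on each edge.
edges = ["ab", "bc", "cd", "da"]
# Candidate paths (as edge lists) for each commodity.
paths = {
    "X": [["ab", "bc"], ["da", "cd"]],  # a -> c, two ways around the cycle
    "Y": [["bc", "cd"], ["ab", "da"]],  # b -> d
}
# Vertices of the demand polytope: demand per commodity in each scenario.
scenarios = [{"X": 1, "Y": 1}, {"X": 2, "Y": 0}, {"X": 0, "Y": 2}]

def load(choice, demand):
    """Per-edge load when each commodity k routes on paths[k][choice[k]]."""
    l = {e: 0 for e in edges}
    for k, i in choice.items():
        for e in paths[k][i]:
            l[e] += demand[k]
    return l

def capacity_cost(choice_per_scenario):
    """Each edge must be bought at its maximum load over all scenarios."""
    caps = {e: 0 for e in edges}
    for choice, demand in zip(choice_per_scenario, scenarios):
        for e, v in load(choice, demand).items():
            caps[e] = max(caps[e], v)
    return sum(caps.values())

all_choices = [dict(zip(paths, c)) for c in product(range(2), repeat=len(paths))]
# Oblivious: one fixed path choice reused in every scenario.
oblivious = min(capacity_cost([c] * len(scenarios)) for c in all_choices)
# Dynamic: the path choice may depend on the current scenario.
dynamic = min(capacity_cost(cs)
              for cs in product(all_choices, repeat=len(scenarios)))
print(oblivious, dynamic)  # dynamic <= oblivious always holds
```

On this toy instance both costs come out equal, so the models coincide; the paper's contribution is a construction (built on expander graphs) where the oblivious cost is an \Omega(\log n) factor larger.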
Cluster Before You Hallucinate: Approximating Node-Capacitated Network Design and Energy Efficient Routing
We consider circuit routing with an objective of minimizing energy, in a
network of routers that are speed scalable and that may be shut down when idle.
We consider both multicast routing and unicast routing. It is known that this
energy minimization problem can be reduced to a capacitated flow network design
problem, where vertices have a common capacity but arbitrary costs, and the
goal is to choose a minimum cost collection of vertices whose induced subgraph
will support the specified flow requirements. For the multicast (single-sink)
capacitated design problem we give a polynomial-time algorithm that is
O(log^3 n)-approximate with O(log^4 n) congestion. This translates back to an
O(log^(4\alpha+3) n)-approximation for the multicast energy-minimization
routing problem, where \alpha is the polynomial exponent in the dynamic power
used by a router. For the unicast (multicommodity) capacitated design problem
we give a polynomial-time algorithm that is O(log^5 n)-approximate with
O(log^12 n) congestion, which translates back to an O(log^(12\alpha+5)
n)-approximation for the unicast energy-minimization routing problem.
Comment: 22 pages (full version of STOC 2014 paper)
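The reduction starts from a standard power model for speed-scalable routers; the sketch below shows the role of the exponent \alpha (the constants and the exact model form are illustrative assumptions, not values from the paper): a powered-on router running at speed s draws static power plus a dynamic term s^\alpha, and a shut-down router draws nothing.

```python
ALPHA = 3.0  # assumed polynomial exponent of the dynamic power (illustrative)
SIGMA = 1.0  # assumed static power while a router is on (illustrative)

def router_power(speed):
    """Power draw of one router: zero when shut down, else static + dynamic."""
    return 0.0 if speed == 0 else SIGMA + speed ** ALPHA

def network_energy(vertex_speeds):
    """Total power when each router runs exactly at its load."""
    return sum(router_power(s) for s in vertex_speeds)

# Two routers shut down, two powered on at speeds 2 and 1:
print(network_energy([0, 0, 2.0, 1.0]))  # (1 + 8) + (1 + 1) = 11.0
```

Because the dynamic term is convex while the static term rewards consolidating flow onto few routers, minimizing this objective trades off spreading load against shutting routers down, which is what the reduction to capacitated design captures.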
Approximating Generalized Network Design under (Dis)economies of Scale with Applications to Energy Efficiency
In a generalized network design (GND) problem, a set of resources is
assigned to multiple communication requests. Each request contributes its
weight to the resources it uses and the total load on a resource is then
translated to the cost it incurs via a resource specific cost function. For
example, a request may be to establish a virtual circuit, thus contributing to
the load on each edge in the circuit. Motivated by energy-efficiency
applications, there has recently been growing interest in GND with cost
functions that exhibit (dis)economies of scale ((D)oS), namely, cost functions
that are subadditive for small loads and superadditive for larger loads.
The current paper advances the existing literature on approximation
algorithms for GND problems with (D)oS cost functions in various aspects: (1)
we present a generic approximation framework that yields approximation results
for a much wider family of requests in both directed and undirected graphs; (2)
our framework allows for unrelated weights, thus providing the first
non-trivial approximation for the problem of scheduling unrelated parallel
machines with (D)oS cost functions; (3) our framework is fully combinatorial
and runs in strongly polynomial time; (4) the family of (D)oS cost functions
considered in the current paper is more general than the one considered in the
existing literature, providing a more accurate abstraction for practical energy
conservation scenarios; and (5) we obtain the first approximation ratio for GND
with (D)oS cost functions that depends only on the parameters of the resources'
technology and does not grow with the number of resources, the number of
requests, or their weights. The design of our framework relies heavily on
Roughgarden's smoothness toolbox (JACM 2015), thus demonstrating the possible
usefulness of this toolbox in the area of approximation algorithms.
Comment: 39 pages, 1 figure. An extended abstract of this paper is to appear
in the 50th Annual ACM Symposium on the Theory of Computing (STOC 2018).
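A minimal numeric sketch of a (D)oS cost function, assuming the common "startup cost plus superlinear term" shape (the function and its parameters are made up for illustration): the startup cost makes the function subadditive at small loads, while the superlinear term makes it superadditive at large loads.

```python
def dos_cost(load, sigma=1.0, alpha=2.0):
    """Illustrative (D)oS cost: zero when the resource is unused, otherwise a
    startup cost sigma plus a superlinear load term."""
    return 0.0 if load == 0 else sigma + load ** alpha

# Economies of scale at small loads: serving two small requests on one
# resource is cheaper than paying the startup cost twice...
assert dos_cost(0.1) + dos_cost(0.1) > dos_cost(0.2)
# ...but diseconomies of scale at large loads: splitting a big load is cheaper.
assert dos_cost(2.0) > dos_cost(1.0) + dos_cost(1.0)
```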
Auctioning Bulk Mobile Messages
The search for enablers of continued growth of SMS traffic, as well as the
take-off of the more diversified MMS message contents, opens up for
enterprises the potential of bulk use of mobile messaging, instead of
essentially one-by-one use. In parallel, such enterprises or value-added
services needing mobile messaging in bulk - for spot use or for use over a
prescribed period of time - want to minimize total acquisition costs across a
set of technically approved providers of messaging capacity. This leads
naturally to the evaluation of auctioning for bulk SMS or MMS messaging
capacity, with the intrinsic advantages therein, such as reduction in
acquisition costs, allocation efficiency, and optimality. The paper shows,
with extensive results as evidence from simulations carried out in the
Rotterdam School of Management e-Auction room, how multi-attribute reverse
auctions perform for the enterprise-buyer as well as for the messaging
capacity-sellers. We compare 1- and 5-round auctions to show the learning
effect and the benefits thereof to the various parties. The sensitivity is
reported to changes in the enterprise's and the capacity providers' utilities
and priorities between message attributes (such as price, size, security, and
delivery delay). At the organizational level, the paper also considers
alternate organizational deployment schemes and properties for an off-line or
spot bulk messaging capacity market, subject to technical and regulatory
constraints.
Keywords: MMS; EMS; mobile commerce; SMS; multi-attribute auctions
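A multi-attribute reverse auction can be illustrated with a simple additive scoring rule, a common way such auctions are evaluated (the weights, attribute names, and bids below are invented for illustration; the paper's actual utility functions may differ): the enterprise-buyer scores each provider's bid across price, security, and delivery delay, and awards the contract to the best score.

```python
# Buyer's priorities: lower price and delay are better, higher security is
# better.  All weights and bids are hypothetical.
WEIGHTS = {"price": -0.5, "security": 0.3, "delivery_delay": -0.2}

def score(bid):
    """Additive multi-attribute score of one provider's bid."""
    return sum(w * bid[attr] for attr, w in WEIGHTS.items())

bids = {
    "provider_a": {"price": 10.0, "security": 8.0, "delivery_delay": 2.0},
    "provider_b": {"price": 12.0, "security": 9.5, "delivery_delay": 1.0},
}
winner = max(bids, key=lambda p: score(bids[p]))
print(winner)
```

In a multi-round variant, providers would see (some of) the scores after each round and revise their bids, which is the learning effect the paper's 1- vs 5-round comparison measures.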
A dynamic pricing model for unifying programmatic guarantee and real-time bidding in display advertising
There are two major ways of selling impressions in display advertising. They
are either sold on the spot market through auction mechanisms or in advance via guaranteed
contracts. The former has achieved a significant automation via real-time
bidding (RTB); however, the latter is still mainly done over the counter
through direct sales. This paper proposes a mathematical model that allocates
and prices the future impressions between real-time auctions and guaranteed
contracts. Under conventional economic assumptions, our model shows that the
two ways can be seamlessly combined programmatically and the publisher's
revenue can be maximized via price discrimination and optimal allocation. We
assume that advertisers are risk-averse and willing to purchase guaranteed
impressions if the total costs are less than their private values. We also
consider that an advertiser's purchase behavior can be affected by both the
guaranteed price and the time interval between the purchase time and the
impression delivery date. Our solution suggests an optimal percentage of future
impressions to sell in advance and provides an explicit formula to calculate at
what prices to sell. We find that the optimal guaranteed prices are dynamic and
are non-decreasing over time. We evaluate our method with RTB datasets and find
that the model adopts different strategies in allocation and pricing according
to the level of competition. From the experiments we find that, in a less
competitive market, lower prices of the guaranteed contracts will encourage the
purchase in advance and the revenue gain is mainly contributed by the increased
competition in future RTB. In a highly competitive market, advertisers are more
willing to purchase the guaranteed contracts and thus higher prices are
expected. The revenue gain is largely contributed by the guaranteed selling.
Comment: Chen, Bowei and Yuan, Shuai and Wang, Jun (2014) A dynamic pricing
model for unifying programmatic guarantee and real-time bidding in display
advertising. In: The Eighth International Workshop on Data Mining for Online
Advertising, 24-27 August 2014, New York City.
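A toy revenue computation in the spirit of the model (the split rule, the values, and the second-price assumption are illustrative simplifications, not the paper's formulas): advertisers whose private value exceeds the posted guaranteed price buy in advance, and a remaining impression is sold by second-price auction to the advertisers left in the spot market.

```python
def publisher_revenue(values, guaranteed_price, n_guaranteed):
    """Toy split between guaranteed selling and a second-price RTB auction.
    All modelling choices here are illustrative, not the paper's."""
    buyers = sorted((v for v in values if v >= guaranteed_price), reverse=True)
    sold = min(n_guaranteed, len(buyers))
    guaranteed_rev = sold * guaranteed_price
    # Advertisers who bought in advance leave the spot market.
    rest = sorted(values, reverse=True)
    for v in buyers[:sold]:
        rest.remove(v)
    rtb_rev = rest[1] if len(rest) >= 2 else 0.0  # second price, one impression
    return guaranteed_rev + rtb_rev

# Four advertisers; one impression sold in advance at price 3.5:
print(publisher_revenue([5.0, 4.0, 3.0, 2.0], 3.5, 1))  # 3.5 + 3.0 = 6.5
```

Even in this toy form the trade-off the paper studies is visible: raising the guaranteed price earns more per advance sale but thins out the future RTB competition, so the optimal price depends on how competitive the spot market is.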
Muppet: MapReduce-Style Processing of Fast Data
MapReduce has emerged as a popular method to process big data. In the past
few years, however, not just big data, but fast data has also exploded in
volume and availability. Examples of such data include sensor data streams, the
Twitter Firehose, and Facebook updates. Numerous applications must process fast
data. Can we provide a MapReduce-style framework so that developers can quickly
write such applications and execute them over a cluster of machines, to achieve
low latency and high scalability? In this paper we report on our investigation
of this question, as carried out at Kosmix and WalmartLabs. We describe
MapUpdate, a framework like MapReduce, but specifically developed for fast
data. We describe Muppet, our implementation of MapUpdate. Throughout the
description we highlight the key challenges, argue why MapReduce is not well
suited to address them, and briefly describe our current solutions. Finally, we
describe our experience and lessons learned with Muppet, which has been used
extensively at Kosmix and WalmartLabs to power a broad range of applications in
social media and e-commerce.
Comment: VLDB201
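The MapUpdate style can be sketched in-process (this toy stands in for the distributed runtime, and the hashtag-counting example and function names are made up): a map step turns each incoming event into keyed values, and an update step folds each pair into long-lived per-key state, which is what distinguishes it from a batch MapReduce job over a finite input.

```python
from collections import defaultdict

def map_event(event):
    """Map step: turn one incoming event into (key, value) pairs.
    Here: count hashtag mentions in a stream of tweets (made-up example)."""
    for token in event.split():
        if token.startswith("#"):
            yield token, 1

state = defaultdict(int)  # long-lived per-key state

def update(key, value):
    """Update step: fold a keyed value into the state for that key."""
    state[key] += value

# Events arrive one at a time and state is updated continuously:
for tweet in ["hello #big data", "#fast #big news"]:
    for k, v in map_event(tweet):
        update(k, v)

print(dict(state))  # {'#big': 2, '#fast': 1}
```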
IDMoB: IoT Data Marketplace on Blockchain
Today, Internet of Things (IoT) devices are the powerhouse of data generation
with their ever-increasing numbers and widespread penetration. Similarly,
artificial intelligence (AI) and machine learning (ML) solutions are being
integrated into all kinds of services, making products significantly
"smarter". The centerpiece of these technologies is "data". IoT device vendors
should be able to keep up with the increased throughput and come up with new
business models. On the other hand, AI/ML solutions will produce better results
if training data is diverse and plentiful.
In this paper, we propose a blockchain-based, decentralized and trustless
data marketplace where IoT device vendors and AI/ML solution providers may
interact and collaborate. By facilitating a transparent data exchange platform,
access to consented data will be democratized and the variety of services
targeting end-users will increase. The proposed data marketplace is
implemented as a smart contract on the Ethereum blockchain, and Swarm is used
as the distributed storage platform.
Comment: Presented at Crypto Valley Conference on Blockchain Technology
(CVCBT 2018), 20-22 June 2018 - published version may differ
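The marketplace flow can be sketched in plain Python rather than Solidity (all names, the listing shape, and the placeholder content hash are hypothetical; the actual system is an Ethereum smart contract with the data itself kept on Swarm): vendors list a dataset under its Swarm content hash, and buyers who meet the asking price are recorded as having access.

```python
# State that would live on-chain is held in plain dicts for this sketch.
listings = {}   # swarm_hash -> (vendor, price)
access = set()  # (buyer, swarm_hash) pairs granted access

def list_dataset(vendor, swarm_hash, price):
    """Vendor publishes a dataset by its Swarm content hash and asking price."""
    listings[swarm_hash] = (vendor, price)

def purchase(buyer, swarm_hash, payment):
    """Buyer pays the asking price and is recorded as having access."""
    vendor, price = listings[swarm_hash]
    if payment < price:
        raise ValueError("insufficient payment")
    access.add((buyer, swarm_hash))

list_dataset("vendor1", "ab12...", 10)     # hash is a made-up placeholder
purchase("ai_provider", "ab12...", 10)
print(("ai_provider", "ab12...") in access)  # True
```

A contract version would additionally escrow the payment and emit events, but the listing/purchase state machine above is the core of the exchange.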