64 research outputs found
LIPIcs, Volume 251, ITCS 2023, Complete Volume
LIPIcs, Volume 261, ICALP 2023, Complete Volume
Caching Connections in Matchings
Motivated by recent advances in Software Defined Networks (SDNs) and the desire
to utilize a limited number of configurable optical switches, we define an
online problem which we call the Caching in Matchings problem. This problem has
a natural combinatorial structure and may therefore find additional
applications in theory and practice.
In the Caching in Matchings problem our cache consists of matchings of
connections between servers that form a bipartite graph. To cache a connection
we insert it into one of the matchings, possibly evicting at most two other
connections from this matching. This problem resembles the problem known as
Connection Caching, where we also cache connections, but the only restriction
there is that they form a graph of bounded degree. Our results show a somewhat
surprising qualitative separation between the two problems: the competitive
ratio of any online algorithm for caching in matchings must depend on the size
of the graph.
Specifically, we give deterministic and randomized competitive algorithms for
caching in matchings, with competitive ratios that depend on the number of
servers and the number of matchings. We also prove lower bounds on the
competitive ratio of any deterministic algorithm and of any randomized
algorithm; in particular, the lower bound for randomized algorithms holds
regardless of the number of matchings, and can be much higher in certain
parameter regimes. We also show that if we allow the algorithm to use more
matchings than the optimum, then we match the competitive ratios of connection
caching, which are independent of the graph size. Interestingly, we also show
that even a single extra matching allows the algorithm to obtain substantially
better bounds.
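The caching operation described above can be made concrete with a small sketch. This is purely illustrative (not taken from the paper): the cache is a collection of matchings, each stored as a symmetric partner map, and inserting a connection into a matching evicts at most the two connections currently incident to its endpoints.

```python
# Illustrative sketch (not the paper's algorithm): a cache consisting of
# k matchings over servers of a bipartite graph. Caching an edge (u, v)
# into matching i evicts at most two conflicting edges: the edge currently
# matched at u and the edge currently matched at v.

class MatchingCache:
    def __init__(self, k):
        # Each matching maps a server to its matched partner (stored both ways).
        self.matchings = [dict() for _ in range(k)]

    def cache(self, u, v, i):
        """Insert connection (u, v) into matching i, returning evicted edges."""
        m = self.matchings[i]
        evicted = []
        for x in (u, v):
            if x in m:                # x already matched in this matching
                y = m.pop(x)
                m.pop(y, None)        # remove the symmetric entry
                evicted.append((x, y))
        m[u], m[v] = v, u             # add the new connection
        return evicted                # at most two edges are evicted
```

For example, caching (a, b) into matching 0 and then (a, c) into the same matching evicts (a, b), since a can be matched to only one partner per matching.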
Middle-mile optimization for next-day delivery
We consider an e-commerce retailer operating a supply chain that consists of
middle- and last-mile transportation, and study its ability to deliver products
stored in warehouses within a day of the customer's order time. Successful
next-day delivery requires inventory availability and timely truck schedules in
the middle mile; in this paper we assume a fixed inventory position and
focus on optimizing the middle mile. We formulate a novel optimization problem
which decides the departure of the last middle-mile truck at each (potential)
network connection in order to maximize the number of next-day deliveries. We
show that the respective \emph{next-day delivery optimization} problem is a
combinatorial problem that is hard to approximate, hence every retailer
that offers one-day deliveries has to deal with this complexity barrier. We
study three variants of the problem motivated by operational constraints that
different retailers encounter, and propose solution schemes tailored to each
problem's properties. To that end, we rely on greedy submodular maximization,
pipage rounding techniques, and Lagrangian heuristics. The algorithms are
scalable, offer optimality-gap guarantees, and, when evaluated on realistic
datasets and network scenarios, were found to achieve near-optimal results.
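The greedy submodular maximization mentioned above is a standard building block; the following is a minimal sketch of it (not the paper's specific algorithm), using a hypothetical coverage objective as the monotone submodular function.

```python
# Illustrative sketch: greedy maximization of a monotone submodular
# function f under a cardinality constraint. In each step, add the element
# with the largest marginal gain f(S + e) - f(S); for monotone submodular f
# this yields the classical (1 - 1/e)-approximation.

def greedy_submodular(ground_set, f, budget):
    chosen = set()
    for _ in range(budget):
        best, best_gain = None, 0.0
        for e in ground_set - chosen:
            gain = f(chosen | {e}) - f(chosen)   # marginal gain of e
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:          # no element improves the objective
            break
        chosen.add(best)
    return chosen

# Hypothetical example objective: set coverage, a monotone submodular function.
universe = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
cover = lambda S: len(set().union(*(universe[e] for e in S))) if S else 0
```

With a budget of two elements on this toy instance, the greedy rule covers three universe items regardless of tie-breaking order.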
An Associativity Threshold Phenomenon in Set-Associative Caches
In a set-associative cache, the cache is partitioned into disjoint sets of
equal size (the associativity), and each item can be cached in only one set,
typically selected via a hash function. Set-associative caches are widely used
and have many benefits, e.g., in terms of latency or concurrency, over fully
associative caches, but they often incur more cache misses. As the set size
decreases, the benefits increase, but the paging costs worsen.
In this paper we characterize the performance of a set-associative LRU cache
as a function of its total size and associativity. We prove the following,
assuming that sets are selected using a fully random hash function:
- For associativity above a threshold, the paging cost of a set-associative
LRU cache is within an additive term of that of a fully associative LRU cache
of the same total size, with high probability, for all request sequences of
polynomial length.
- For associativity below the threshold, the paging cost of a set-associative
LRU cache can exceed that of a fully associative LRU cache of the same total
size by a large multiplicative factor, for some request sequence of polynomial
length.
- For associativity above the threshold, if the hash function can be
occasionally changed, the paging cost of a set-associative LRU cache is within
a multiplicative factor of that of a fully associative LRU cache of the same
total size, with high probability, for request sequences of arbitrary (e.g.,
super-polynomial) length.
Some of our results generalize to other paging algorithms besides LRU, such
as least frequently used (LFU).
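The cache model analyzed above can be sketched in a few lines. This is an illustrative simulation, not the paper's analysis: a cache of total size k split into k/h sets of associativity h, each running LRU independently, with items assigned to sets by a hash function.

```python
# Illustrative sketch: an h-way set-associative LRU cache of total size k,
# with k // h sets selected by a hash function. Each set runs LRU
# independently; h = k degenerates to a single fully associative LRU cache.

from collections import OrderedDict

class SetAssociativeLRU:
    def __init__(self, k, h, hash_fn=hash):
        assert k % h == 0
        self.h = h
        self.hash_fn = hash_fn
        self.sets = [OrderedDict() for _ in range(k // h)]
        self.misses = 0

    def access(self, item):
        s = self.sets[self.hash_fn(item) % len(self.sets)]
        if item in s:
            s.move_to_end(item)      # hit: refresh item's LRU position
        else:
            self.misses += 1         # miss: evict set-local LRU item if full
            if len(s) == self.h:
                s.popitem(last=False)
            s[item] = True
```

Setting h = k gives the fully associative baseline against which the paging cost is compared; decreasing h keeps the total size fixed while shrinking each set.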
No-Regret Online Prediction with Strategic Experts
We study a generalization of the online binary prediction with expert advice
framework where, at each round, the learner is allowed to pick multiple experts
from a pool and the overall utility is a modular or submodular
function of the chosen experts. We focus on the setting in which experts act
strategically and aim to maximize their influence on the algorithm's
predictions by potentially misreporting their beliefs about the events. Among
others, this setting finds applications in forecasting competitions where the
learner seeks not only to make predictions by aggregating different forecasters
but also to rank them according to their relative performance. Our goal is to
design algorithms that satisfy the following two requirements: 1) Incentive
compatibility: incentivize the experts to report their
beliefs truthfully, and 2) No regret: achieve sublinear regret with
respect to the true beliefs of the best fixed set of experts in hindsight.
Prior works have studied this framework in the case where a single expert is
chosen per round and provided incentive-compatible no-regret algorithms for the
problem. We first show that a simple reduction of our problem to the
single-expert setting is neither efficient nor effective. Then, we provide
algorithms that utilize the specific structure of the utility functions to
achieve the two desired goals.
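The "best fixed set of experts in hindsight" benchmark from the no-regret requirement has a simple form when the utility is modular. The sketch below is illustrative only (not the paper's mechanism): with modular utility, the benchmark is just the experts with the highest cumulative scores.

```python
# Illustrative sketch: for a modular utility, the best fixed set of m
# experts in hindsight is the m experts with the largest total score
# accumulated over all rounds.

def best_fixed_set(scores_per_round, m):
    """scores_per_round: list of dicts mapping expert -> score in that round."""
    totals = {}
    for round_scores in scores_per_round:
        for expert, s in round_scores.items():
            totals[expert] = totals.get(expert, 0.0) + s
    # Sort experts by cumulative score and keep the top m.
    return sorted(totals, key=totals.get, reverse=True)[:m]
```

An online algorithm is no-regret if its cumulative utility approaches that of this hindsight-optimal set; the strategic twist in the paper is that the reported scores may be misreports, which is what the incentive-compatibility requirement addresses.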
LIPIcs, Volume 274, ESA 2023, Complete Volume
LIPIcs, Volume 244, ESA 2022, Complete Volume
New benchmarking techniques in resource allocation problems: theory and applications in cloud systems
Motivated by different e-commerce applications such as allocating virtual machines to servers and online ad placement, we study new models that aim to capture unstudied tensions faced by decision-makers. In online/sequential models, future information is often unavailable to decision-makers---e.g., the exact demand of a product for next week. Sometimes, these unknowns have regularity, and decision-makers can fit random models. Other times, decision-makers must be prepared for any possible outcome. In practice, several solutions are based on classical models that do not fully consider these unknowns. One reason for this is our present technical limitations. Exploring new models with adequate sources of uncertainty could be beneficial for both the theory and the practice of decision-making. For example, cloud companies such as Amazon WS face highly unpredictable demand for resources. New management planning that considers these tensions has improved capacity and cut costs for the cloud providers. As a result, cloud companies can now offer new services at lower prices, benefiting thousands of users. In this thesis, we study three different models, each motivated by an application in cloud computing and online advertising.
From a technical standpoint, we apply either worst-case analysis with limited information from the system or adaptive analysis with stochastic results learned after making an irrevocable decision. A central aspect of this work is dynamic benchmarks, as opposed to static or offline ones. Static and offline viewpoints are too conservative and have limited interpretation in some dynamic settings. A dynamic criterion, such as the value of an optimal sequential policy, allows comparisons with the best that one could do in dynamic scenarios. Another aspect of this work is multi-objective criteria in dynamic settings, where two or more competing goals must be satisfied under an uncertain future. We tackle the challenges introduced by these new perspectives with fresh theoretical analyses, drawing inspiration from linear and nonlinear optimization and stochastic processes.