64 research outputs found

    LIPIcs, Volume 251, ITCS 2023, Complete Volume


    LIPIcs, Volume 261, ICALP 2023, Complete Volume


    Caching Connections in Matchings

    Motivated by the desire to utilize the limited number of configurable optical switches made available by recent advances in Software Defined Networks (SDNs), we define an online problem which we call the Caching in Matchings problem. This problem has a natural combinatorial structure and therefore may find additional applications in theory and practice. In the Caching in Matchings problem our cache consists of k matchings of connections between servers that form a bipartite graph. To cache a connection we insert it into one of the k matchings, possibly evicting at most two other connections from this matching. This problem resembles the problem known as Connection Caching, where we also cache connections but the only restriction is that they form a graph with bounded degree k. Our results show a somewhat surprising qualitative separation between the problems: the competitive ratio of any online algorithm for caching in matchings must depend on the size of the graph. Specifically, we give a deterministic O(nk)-competitive and a randomized O(n log k)-competitive algorithm for caching in matchings, where n is the number of servers and k is the number of matchings. We also show that the competitive ratio of any deterministic algorithm is Ω(max(n/k, k)) and of any randomized algorithm is Ω(log(n/(k² log k)) · log k). In particular, the lower bound for randomized algorithms is Ω(log n) regardless of k, and can be as high as Ω(log² n) if k = n^{1/3}, for example. We also show that if we allow the algorithm to use at least 2k-1 matchings, compared to the k used by the optimum, then we match the competitive ratios of connection caching, which are independent of n. Interestingly, we also show that even a single extra matching for the algorithm allows us to obtain substantially better bounds.
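    The cache structure described above can be sketched as follows; this is a minimal illustration of the insert-and-evict rule (caching an edge in one of k matchings evicts at most two conflicting edges, one per endpoint), not the paper's competitive algorithm, and all names are ours:

```python
class MatchingCache:
    """Cache of k matchings over a bipartite graph of server connections."""

    def __init__(self, k):
        # each matching maps a server to its partner (stored in both directions)
        self.matchings = [dict() for _ in range(k)]

    def cache(self, u, v, i):
        """Insert connection (u, v) into matching i, evicting conflicts."""
        m = self.matchings[i]
        evicted = []
        for endpoint in (u, v):
            if endpoint in m:            # a conflicting edge touches this endpoint
                partner = m.pop(endpoint)
                m.pop(partner, None)
                evicted.append((endpoint, partner))
        m[u] = v
        m[v] = u
        return evicted                   # at most two evictions per insertion

cache = MatchingCache(k=2)
cache.cache("s1", "t1", 0)               # no conflict, nothing evicted
cache.cache("s1", "t2", 0)               # evicts ("s1", "t1") from matching 0
```

    Which matching to insert into, and which conflicts to evict, is exactly where the algorithmic difficulty of the problem lies.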

    Online Paging with Heterogeneous Cache Slots


    Middle-mile optimization for next-day delivery

    We consider an e-commerce retailer operating a supply chain that consists of middle- and last-mile transportation, and study its ability to deliver products stored in warehouses within a day of the customer's order time. Successful next-day delivery requires inventory availability and timely truck schedules in the middle mile; in this paper we assume a fixed inventory position and focus on optimizing the middle mile. We formulate a novel optimization problem which decides the departure time of the last middle-mile truck at each (potential) network connection in order to maximize the number of next-day deliveries. We show that the resulting next-day delivery optimization is a combinatorial problem that is NP-hard to approximate within (1 - 1/e)·opt ≈ 0.632·opt, hence every retailer that offers one-day deliveries has to deal with this complexity barrier. We study three variants of the problem motivated by operational constraints that different retailers encounter, and propose solution schemes tailored to each problem's properties. To that end, we rely on greedy submodular maximization, pipage rounding techniques, and Lagrangian heuristics. The algorithms are scalable, offer optimality-gap guarantees, and, when evaluated on realistic datasets and network scenarios, were found to achieve near-optimal results.
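    The (1 - 1/e) barrier mentioned above is the classic guarantee of greedy maximization of a monotone submodular function under a cardinality constraint. A generic sketch of that greedy routine (the paper's actual objective and constraints are more involved; the coverage objective and all names below are illustrative):

```python
def greedy_submodular(ground_set, f, budget):
    """Pick up to `budget` elements, each round adding the element with the
    largest marginal gain of the monotone set function f."""
    chosen = set()
    for _ in range(budget):
        best, best_gain = None, 0.0
        for e in ground_set - chosen:
            gain = f(chosen | {e}) - f(chosen)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:                 # no element has positive marginal gain
            break
        chosen.add(best)
    return chosen

# toy coverage objective: customers reachable via each chosen network connection
customers = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
cover = lambda S: len(set().union(*(customers[e] for e in S)))
chosen = greedy_submodular(set(customers), cover, budget=2)
```

    For coverage-style objectives like this one, the greedy solution is guaranteed to be within a factor 1 - 1/e of optimal, matching the hardness threshold stated in the abstract.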

    An Associativity Threshold Phenomenon in Set-Associative Caches

    In an α-way set-associative cache, the cache is partitioned into disjoint sets of size α, and each item can only be cached in one set, typically selected via a hash function. Set-associative caches are widely used and have many benefits, e.g., in terms of latency or concurrency, over fully associative caches, but they often incur more cache misses. As the set size α decreases, the benefits increase, but the paging costs worsen. In this paper we characterize the performance of an α-way set-associative LRU cache of total size k, as a function of α = α(k). We prove the following, assuming that sets are selected using a fully random hash function:
    - For α = ω(log k), the paging cost of an α-way set-associative LRU cache is within additive O(1) of that of a fully associative LRU cache of size (1 - o(1))k, with probability 1 - 1/poly(k), for all request sequences of length poly(k).
    - For α = o(log k), and for all c = O(1) and r = O(1), the paging cost of an α-way set-associative LRU cache is not within a factor c of that of a fully associative LRU cache of size k/r, for some request sequence of length O(k^{1.01}).
    - For α = ω(log k), if the hash function can be occasionally changed, the paging cost of an α-way set-associative LRU cache is within a factor 1 + o(1) of that of a fully associative LRU cache of size (1 - o(1))k, with probability 1 - 1/poly(k), for request sequences of arbitrary (e.g., super-polynomial) length.
    Some of our results generalize to other paging algorithms besides LRU, such as least-frequently used (LFU).
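    The object being analyzed can be simulated in a few lines: k/α sets of size α, the set for each item chosen by a hash function, and LRU eviction within each set. A minimal sketch, with the hash choice and class name ours:

```python
from collections import OrderedDict

class SetAssociativeLRU:
    """An α-way set-associative cache of total size k with per-set LRU eviction."""

    def __init__(self, k, alpha, hash_fn=hash):
        assert k % alpha == 0
        self.alpha = alpha
        self.num_sets = k // alpha
        self.hash_fn = hash_fn
        self.sets = [OrderedDict() for _ in range(self.num_sets)]
        self.misses = 0

    def request(self, item):
        s = self.sets[self.hash_fn(item) % self.num_sets]
        if item in s:
            s.move_to_end(item)          # hit: refresh LRU position within the set
        else:
            self.misses += 1
            if len(s) == self.alpha:     # set full: evict its least-recently-used item
                s.popitem(last=False)
            s[item] = True

cache = SetAssociativeLRU(k=8, alpha=2)
for x in [1, 2, 1, 3, 1, 2]:
    cache.request(x)
```

    Setting α = k recovers a fully associative LRU cache; the results above quantify how small α can be made (roughly log k) before this simulation starts paying noticeably more misses than the fully associative baseline.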

    No-Regret Online Prediction with Strategic Experts

    We study a generalization of the online binary prediction with expert advice framework where, at each round, the learner is allowed to pick m ≥ 1 experts from a pool of K experts, and the overall utility is a modular or submodular function of the chosen experts. We focus on the setting in which experts act strategically and aim to maximize their influence on the algorithm's predictions by potentially misreporting their beliefs about the events. Among others, this setting finds applications in forecasting competitions where the learner seeks not only to make predictions by aggregating different forecasters but also to rank them according to their relative performance. Our goal is to design algorithms that satisfy the following two requirements: 1) Incentive-compatible: incentivize the experts to report their beliefs truthfully, and 2) No-regret: achieve sublinear regret with respect to the true beliefs of the best fixed set of m experts in hindsight. Prior works have studied this framework when m = 1 and provided incentive-compatible no-regret algorithms for the problem. We first show that a simple reduction of our problem to the m = 1 setting is neither efficient nor effective. Then, we provide algorithms that utilize the specific structure of the utility functions to achieve the two desired goals.
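    For readers unfamiliar with the underlying framework, a standard exponential-weights (Hedge) learner that selects the m currently best-weighted experts each round looks like the sketch below. This deterministic baseline is only illustrative: it is not the paper's algorithm and, in particular, makes no attempt at incentive compatibility. The function name and learning rate are ours:

```python
import math

def hedge_top_m(K, m, T, losses, eta=0.1):
    """Maintain one weight per expert; each round select the m experts with
    the highest weights, then apply the exponential-weights update using the
    observed per-expert losses for that round."""
    w = [1.0] * K
    picks = []
    for t in range(T):
        chosen = sorted(range(K), key=lambda i: -w[i])[:m]
        picks.append(chosen)
        for i in range(K):
            w[i] *= math.exp(-eta * losses[t][i])   # penalize lossy experts
    return picks

# expert 0 is always right, expert 1 always wrong: the learner sticks with 0
picks = hedge_top_m(K=2, m=1, T=2, losses=[[0.0, 1.0], [0.0, 1.0]])
```

    The paper's contribution is precisely that such updates must be modified when experts can misreport: a strategic expert might distort its announced beliefs to inflate its weight, which is what the incentive-compatibility requirement rules out.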

    LIPIcs, Volume 274, ESA 2023, Complete Volume


    LIPIcs, Volume 244, ESA 2022, Complete Volume


    New benchmarking techniques in resource allocation problems: theory and applications in cloud systems

    Motivated by different e-commerce applications, such as allocating virtual machines to servers and online ad placement, we study new models that aim to capture previously unstudied tensions faced by decision-makers. In online/sequential models, future information is often unavailable to decision-makers, e.g., the exact demand of a product for next week. Sometimes these unknowns have regularity, and decision-makers can fit random models; other times, decision-makers must be prepared for any possible outcome. In practice, several solutions are based on classical models that do not fully consider these unknowns, partly because of present technical limitations. Exploring new models with adequate sources of uncertainty could be beneficial for both the theory and the practice of decision-making. For example, cloud companies such as Amazon WS face highly unpredictable demands for resources. New management planning that considers these tensions has improved capacity and cut costs for cloud providers. As a result, cloud companies can now offer new services at lower prices, benefiting thousands of users.
    In this thesis, we study three different models, each motivated by an application in cloud computing or online advertising. From a technical standpoint, we apply either worst-case analysis with limited information from the system or adaptive analysis with stochastic results learned after making an irrevocable decision. A central aspect of this work is dynamic benchmarks, as opposed to static or offline ones. Static and offline viewpoints are too conservative and have limited interpretation in some dynamic settings. A dynamic criterion, such as the value of an optimal sequential policy, allows comparisons with the best that one could do in dynamic scenarios. Another aspect of this work is multi-objective criteria in dynamic settings, where two or more competing goals must be satisfied under an uncertain future. We tackle the challenges introduced by these new perspectives with fresh theoretical analyses, drawing inspiration from linear and nonlinear optimization and stochastic processes.