
    Dynamic Maxflow via Dynamic Interior Point Methods

    In this paper we provide an algorithm for maintaining a $(1-\epsilon)$-approximate maximum flow in a dynamic, capacitated graph undergoing edge additions. Over a sequence of $m$ additions to an $n$-node graph where every edge has capacity $O(\mathrm{poly}(m))$, our algorithm runs in time $\widehat{O}(m\sqrt{n}\cdot\epsilon^{-1})$. To obtain this result, we design dynamic data structures for the more general problem of detecting when the minimum cost circulation in a dynamic graph undergoing edge additions attains value at most $F$ (exactly), for a given threshold $F$. Over a sequence of $m$ additions to an $n$-node graph where every edge has capacity $O(\mathrm{poly}(m))$ and cost $O(\mathrm{poly}(m))$, we solve this thresholded minimum cost flow problem in time $\widehat{O}(m\sqrt{n})$. Both of our algorithms succeed with high probability against an adaptive adversary. We obtain these results by dynamizing the recent interior point method used to obtain an almost-linear time algorithm for minimum cost flow (Chen, Kyng, Liu, Peng, Probst Gutenberg, Sachdeva 2022), and by introducing a new dynamic data structure for maintaining minimum ratio cycles in an undirected graph that succeeds with high probability against adaptive adversaries.
    Comment: 30 pages
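
    The thresholded formulation may look indirect, so the following minimal Python sketch shows how such a threshold test can drive a $(1-\epsilon)$-approximation of the maximum flow value by searching a geometric grid of candidates. The function threshold_oracle is a hypothetical stand-in for the paper's dynamic data structure (via the standard reduction in which a cost $-1$ sink-to-source edge makes the minimum cost circulation attain value at most $-F$ exactly when the maximum flow is at least $F$); the search itself is an illustration, not the paper's algorithm.

        def approx_maxflow_value(threshold_oracle, U, m, eps):
            # threshold_oracle(F) answers "is the maximum flow value >= F?",
            # here an abstract black box standing in for the paper's
            # thresholded minimum cost circulation test.
            #
            # Flow values lie in [0, U * m] when every capacity is at most U,
            # so testing the geometric grid F = (1 + eps)^k suffices: the
            # largest passing grid point is within a (1 + eps) factor of the
            # optimum, hence a (1 - eps)-approximation of it.
            best = 0.0
            F = 1.0
            while F <= U * m:
                if not threshold_oracle(F):
                    break  # thresholds are monotone: every larger F also fails
                best = F
                F *= 1.0 + eps
            return best

        # Toy usage: pretend the true maximum flow value is 37.
        print(approx_maxflow_value(lambda F: F <= 37, U=10, m=20, eps=0.1))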

    Controlling Fairness and Bias in Dynamic Learning-to-Rank

    Rankings are the primary interface through which many online platforms match users to items (e.g. news, products, music, video). In these two-sided markets, not only do the users draw utility from the rankings, but the rankings also determine the utility (e.g. exposure, revenue) for the item providers (e.g. publishers, sellers, artists, studios). It has already been noted that myopically optimizing utility to the users, as done by virtually all learning-to-rank algorithms, can be unfair to the item providers. We therefore present a learning-to-rank approach that explicitly enforces merit-based fairness guarantees for groups of items (e.g. articles by the same publisher, tracks by the same artist). In particular, we propose a learning algorithm that ensures notions of amortized group fairness while simultaneously learning the ranking function from implicit feedback data. The algorithm takes the form of a controller that integrates unbiased estimators for both fairness and utility, dynamically adapting both as more data becomes available. In addition to its rigorous theoretical foundation and convergence guarantees, we find empirically that the algorithm is highly practical and robust.
    Comment: First two authors contributed equally. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 2020.
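
    As a concrete, if simplified, illustration of the controller idea, the Python sketch below boosts items from groups whose accumulated exposure lags behind their merit, steering amortized exposure toward merit proportionality. The function name, the specific error term, and the gain lam are illustrative assumptions, not the paper's exact estimators.

        import numpy as np

        def controller_rank(relevance, group, exposure, merit, lam=0.01):
            # relevance : unbiased utility estimate per item
            # group     : group id per item (ints 0..G-1)
            # exposure  : accumulated exposure per group so far
            # merit     : accumulated merit per group (e.g. summed relevance)
            # lam       : controller gain trading off utility against fairness
            #
            # Each group's exposure-to-merit ratio is compared with the best
            # ratio; under-exposed groups receive a positive score boost.
            ratio = exposure / np.maximum(merit, 1e-9)
            err = ratio.max() - ratio          # >= 0, larger for lagging groups
            scores = relevance + lam * err[group]
            return np.argsort(-scores)         # item indices in ranking order

        # Toy usage: two groups, group 1 currently under-exposed,
        # so its items are promoted despite lower relevance.
        rel = np.array([0.9, 0.8, 0.7, 0.6])
        grp = np.array([0, 0, 1, 1])
        print(controller_rank(rel, grp,
                              exposure=np.array([5.0, 1.0]),
                              merit=np.array([3.0, 3.0]),
                              lam=0.5))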

    Online Multistage Subset Maximization Problems

    Numerous combinatorial optimization problems (knapsack, maximum-weight matching, etc.) can be expressed as subset maximization problems: one is given a ground set $N=\{1,\dots,n\}$, a collection $\mathcal{F} \subseteq 2^N$ of subsets thereof such that the empty set is in $\mathcal{F}$, and an objective (profit) function $p: \mathcal{F} \to \mathbb{R}_+$. The task is to choose a set $S \in \mathcal{F}$ that maximizes $p(S)$. We consider the multistage version (Eisenstat et al., Gupta et al., both ICALP 2014) of such problems: the profit function $p_t$ (and possibly the set of feasible solutions $\mathcal{F}_t$) may change over time. Since in many applications changing the solution is costly, the task becomes to find a sequence of solutions that optimizes the trade-off between good per-time solutions and stable solutions, taking into account an additional similarity bonus. As the similarity measure for two consecutive solutions, we consider either the size of the intersection of the two solutions or the difference between $n$ and the Hamming distance between the two characteristic vectors. We study multistage subset maximization problems in the online setting, that is, $p_t$ (along with possibly $\mathcal{F}_t$) only arrives one by one and, upon such an arrival, the online algorithm has to output the corresponding solution without knowledge of the future. We develop general techniques for online multistage subset maximization and thereby characterize those models (given by the type of data evolution and the type of similarity measure) that admit a constant-competitive online algorithm. When no constant competitive ratio is possible, we employ lookahead to circumvent this issue. When a constant competitive ratio is possible, we provide almost matching lower and upper bounds on the best achievable one.
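
    To make the trade-off concrete, here is a minimal Python sketch of the value of a solution sequence under the two similarity measures just described; the function and its signature are illustrative, not taken from the paper.

        def multistage_value(solutions, profits, n, measure="intersection"):
            # solutions : list of sets S_1, ..., S_T (each assumed feasible)
            # profits   : list of functions p_t, one per stage
            # n         : size of the ground set {1, ..., n}
            # measure   : "intersection" -> bonus |S_{t-1} & S_t|
            #             "hamming"      -> bonus n - |S_{t-1} ^ S_t|
            total = sum(p(S) for p, S in zip(profits, solutions))
            for prev, cur in zip(solutions, solutions[1:]):
                if measure == "intersection":
                    total += len(prev & cur)
                else:
                    # symmetric difference size equals the Hamming distance
                    # between the two characteristic vectors
                    total += n - len(prev ^ cur)
            return total

        # Toy usage: 3 stages over ground set {1, ..., 5}; the profit at
        # stage t is t times the set size, and stability earns a bonus.
        sols = [{1, 2}, {1, 2, 3}, {3, 4}]
        profs = [lambda S, t=t: t * len(S) for t in (1, 2, 3)]
        print(multistage_value(sols, profs, n=5))  # 14 profit + 3 bonus = 17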