Expander Decomposition in Dynamic Streams
In this paper we initiate the study of expander decompositions of a graph G = (V, E) in the streaming model of computation. The goal is to find a partition P of the vertices V such that the subgraphs of G induced by the clusters C ∈ P are good expanders, while the number of intercluster edges is small. Expander decompositions are classically constructed by recursively applying balanced sparse cuts to the input graph. In this paper we give the first implementation of such a recursive sparsest cut process using small space in the dynamic streaming model.
Our main algorithmic tool is a new type of cut sparsifier that we refer to as a power cut sparsifier: it preserves cuts in any given vertex-induced subgraph (or, any cluster in a fixed partition of V) to within a (δ, ε)-multiplicative/additive error with high probability. The power cut sparsifier uses Õ(n/ε²) space and edges, which we show is asymptotically tight up to polylogarithmic factors in n for constant δ.
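The classical recursive construction referred to above can be sketched offline as follows. The brute-force sparsest-cut search and all names here are ours, for illustration on tiny graphs only; the paper's contribution is carrying out this process in small space over a dynamic stream, which this sketch does not attempt.

```python
from itertools import combinations

def cut_size(edges, side, other):
    """Number of edges crossing between side and other."""
    return sum(1 for (u, v) in edges
               if (u in side and v in other) or (u in other and v in side))

def sparsest_cut(vertices, edges):
    """Brute-force sparsest cut of the induced subgraph (tiny graphs only)."""
    vs = sorted(vertices)
    sub = [(u, v) for (u, v) in edges if u in vertices and v in vertices]
    best_phi, best_side = float("inf"), None
    for r in range(1, len(vs) // 2 + 1):
        for side in map(set, combinations(vs, r)):
            other = vertices - side
            phi = cut_size(sub, side, other) / min(len(side), len(other))
            if phi < best_phi:
                best_phi, best_side = phi, side
    return best_phi, best_side

def expander_decompose(vertices, edges, phi):
    """Recursively split along sparse cuts until every cluster is a phi-expander."""
    if len(vertices) <= 1:
        return [vertices]
    val, side = sparsest_cut(vertices, edges)
    if val >= phi:  # no cut sparser than phi: cluster is an expander, keep it
        return [vertices]
    return (expander_decompose(side, edges, phi)
            + expander_decompose(vertices - side, edges, phi))
```

On two triangles joined by a single edge, the bridge is the sparsest cut, so the decomposition returns the two triangles as clusters with one intercluster edge.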
Spiking Neural Networks Through the Lens of Streaming Algorithms
We initiate the study of biological neural networks from the perspective of
streaming algorithms. Like computers, human brains suffer from memory
limitations which pose a significant obstacle when processing large scale and
dynamically changing data. In computer science, these challenges are captured
by the well-known streaming model, which can be traced back to Munro and
Paterson '78 and has had significant impact in theory and beyond. In the
classical streaming setting, one must compute some function of a stream of
updates, given restricted single-pass access
to the stream. The primary complexity measure is the space used by the
algorithm.
We take the first steps towards understanding the connection between
streaming and neural algorithms. On the upper bound side, we design neural
algorithms based on known streaming algorithms for fundamental tasks, including
distinct elements, approximate median, heavy hitters, and more. The number of
neurons in our neural solutions almost matches the space bounds of the
corresponding streaming algorithms. As a general algorithmic primitive, we show
how to implement the important streaming technique of linear sketching
efficiently in spiking neural networks. On the lower bound side, we give a
generic reduction, showing that any space-efficient spiking neural network can
be simulated by a space-efficient streaming algorithm. This reduction lets us
translate streaming-space lower bounds into nearly matching neural-space lower
bounds, establishing a close connection between these two models. Comment: To appear in DISC'20 (shortened abstract).
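Linear sketching, the primitive highlighted above, maintains a random linear function of the input frequency vector; linearity is what makes deletions in a dynamic stream easy to support. A minimal sketch in this style is the classic AMS F2 estimator (parameters and names here are ours):

```python
import random

random.seed(0)
n, k = 1000, 200  # universe size, number of sketch rows

# Random +/-1 sign matrix, stored explicitly for clarity (real streaming
# implementations would use small-seed hash functions instead).
signs = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(k)]

sketch = [0.0] * k  # the sketch is a linear function of the frequency vector

def update(item, delta):
    """Linear update: insertions (delta=+1) and deletions (delta=-1)."""
    for r in range(k):
        sketch[r] += signs[r][item] * delta

def estimate_f2():
    """AMS estimator: mean of squared sketch coordinates approximates
    F2, the sum of squared item frequencies."""
    return sum(v * v for v in sketch) / k
```

Because the sketch is linear, an insertion followed by a deletion of the same item returns it exactly to its previous state, which is the property the neural simulation of sketching needs to preserve.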
Efficient Algorithms for Certifying Lower Bounds on the Discrepancy of Random Matrices
We initiate the study of the algorithmic problem of certifying lower bounds
on the discrepancy of random matrices: given an input matrix M, output a value
that is a lower bound on disc(M) = min_{x in {±1}^n} ||Mx||_∞ for every M, but
is close to the typical value of disc(M) with high probability over
the choice of a random M. This problem is important because of its
connections to conjecturally-hard average-case problems such as
negatively-spiked PCA, the number-balancing problem and refuting random
constraint satisfaction problems. We give the first polynomial-time algorithms
with non-trivial guarantees for two main settings. First, when the entries of
M are i.i.d. standard Gaussians, it is known that disc(M) is exponentially
small with high probability. Our algorithm certifies an exponentially small,
but non-trivial, lower bound on disc(M) with high probability. As an
application, this formally refutes a conjecture of Bandeira, Kunisky, and Wein
on the computational hardness of the detection problem in the negatively-spiked
Wishart model. Second, we consider the integer partitioning problem: given n
uniformly random b-bit integers a_1, ..., a_n, certify the non-existence
of a perfect partition, i.e. certify that |<a, x>| >= 1 for all x in {±1}^n.
Under the scaling b = κn, it is known that the probability of the existence
of a perfect partition undergoes a phase transition from 1 to 0 as κ crosses
1; our algorithm certifies the non-existence of perfect partitions for some
sufficiently large κ. We also give
efficient non-deterministic algorithms with significantly improved guarantees.
Our algorithms involve a reduction to the Shortest Vector Problem. Comment: ITCS 202
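The certification requirement (never output a bound that some input violates, while succeeding on typical random inputs) can be illustrated with a deliberately weak, parity-based certifier for the partitioning problem. This toy check is ours and is unrelated to the paper's SVP-based approach:

```python
from itertools import product

def certify_no_perfect_partition(a):
    """Sound for every input, useful only sometimes: for any signing
    x in {-1,+1}^n, <a, x> has the same parity as sum(a), so an odd
    total sum certifies |<a, x>| >= 1, i.e. no perfect partition."""
    return sum(a) % 2 == 1  # True = certified, False = "don't know"

def min_imbalance(a):
    """Brute-force ground truth for tiny instances: min |<a, x>|."""
    return min(abs(sum(s * v for s, v in zip(signs, a)))
               for signs in product((-1, 1), repeat=len(a)))
```

The certifier succeeds on only about half of random inputs (whenever the sum is odd), but it is never wrong, which is the defining soundness property shared by the paper's far stronger algorithms.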
Individual Fairness in Advertising Auctions Through Inverse Proportionality
Recent empirical work demonstrates that online advertisement can exhibit bias in the delivery of ads across users even when all advertisers bid in a non-discriminatory manner. We study the design of ad auctions that, given fair bids, are guaranteed to produce fair outcomes. Following the works of Dwork and Ilvento [2019] and Chawla et al. [2020], our goal is to design a truthful auction that satisfies "individual fairness" in its outcomes: informally speaking, users that are similar to each other should obtain similar allocations of ads. Within this framework we quantify the tradeoff between social welfare maximization and fairness.
This work makes two conceptual contributions. First, we express the fairness constraint as a kind of stability condition: any two users that are assigned multiplicatively similar values by all the advertisers must receive additively similar allocations for each advertiser. This value stability constraint is expressed as a function that maps the multiplicative distance between value vectors to the maximum allowable ℓ_∞ distance between the corresponding allocations. Standard auctions do not satisfy this kind of value stability.
Second, we introduce a new class of allocation algorithms called Inverse Proportional Allocation that achieve a near optimal tradeoff between fairness and social welfare for a broad and expressive class of value stability conditions. These allocation algorithms are truthful and prior-free, and achieve a constant factor approximation to the optimal (unconstrained) social welfare. In particular, the approximation ratio is independent of the number of advertisers in the system. In this respect, these allocation algorithms greatly surpass the guarantees achieved in previous work. We also extend our results to broader notions of fairness that we call subset fairness.
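To make the value-stability condition concrete, here is a small check (all names ours) using plain proportional allocation, which satisfies a weak form of stability with f(α) = α² - 1, rather than the paper's inverse proportional rule:

```python
def proportional_allocation(values):
    """Allocate each advertiser a share proportional to its value.
    If every value changes by at most a multiplicative factor alpha,
    each share x_i changes by at most a factor alpha**2 (numerator up
    by alpha, denominator down by alpha), hence by at most alpha**2 - 1
    additively: a weak value-stability guarantee."""
    total = sum(values)
    return [v / total for v in values]

def mult_distance(v1, v2):
    """Smallest alpha >= 1 with v1[i]/alpha <= v2[i] <= alpha*v1[i]."""
    return max(max(a / b, b / a) for a, b in zip(v1, v2))

def linf_distance(x1, x2):
    """l-infinity distance between two allocation vectors."""
    return max(abs(a - b) for a, b in zip(x1, x2))
```

Two value vectors within multiplicative distance α then receive allocations within α² - 1 of each other in ℓ_∞, which is the shape of constraint the abstract describes (the paper's stability functions and rules are more refined).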
Robust Algorithms Under Adversarial Injections
In this paper, we study streaming and online algorithms in the context of randomness in the input. For several problems, a random order of the input sequence - as opposed to the worst-case order - appears to be a necessary evil in order to prove satisfying guarantees. However, algorithmic techniques that work under this assumption tend to be vulnerable to even small changes in the distribution. For this reason, we propose a new adversarial injections model, in which the input is ordered randomly, but an adversary may inject misleading elements at arbitrary positions. We believe that studying algorithms under this much weaker assumption can lead to new insights and, in particular, more robust algorithms. We investigate two classical combinatorial-optimization problems in this model: maximum matching and cardinality-constrained monotone submodular function maximization. Our main technical contribution is a novel streaming algorithm for the latter that computes a 0.55-approximation. While the algorithm itself is clean and simple, an involved analysis shows that it emulates a subdivision of the input stream which can be used to greatly limit the power of the adversary.
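For context on the submodular side, the standard one-pass baseline (in the spirit of Badanidiyuru et al.'s sieve-streaming, not the paper's 0.55-approximation algorithm) keeps an arriving element whenever its marginal gain clears a threshold. Function names and the coverage example are ours:

```python
def threshold_stream(stream, k, tau, f):
    """Single pass: keep an element if its marginal gain is >= tau,
    until k elements are chosen. With tau around OPT/(2k), this is
    the core of the classic 1/2-approximation for monotone
    submodular f under a cardinality constraint."""
    chosen = []
    for e in stream:
        if len(chosen) < k and f(chosen + [e]) - f(chosen) >= tau:
            chosen.append(e)
    return chosen

def coverage(sets):
    """Monotone submodular example: size of the union of the sets."""
    return len(set().union(*sets)) if sets else 0
```

Such threshold rules are exactly the kind of technique whose analysis under random order breaks when an adversary injects a few misleading elements, which motivates the robustness model above.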
Canadian Traveller Problem with Predictions
In this work, we consider the k-Canadian Traveller Problem (k-CTP) under
the learning-augmented framework proposed by Lykouris & Vassilvitskii. k-CTP
is a generalization of the shortest path problem, and involves a traveller who
knows the entire graph in advance and wishes to find the shortest route from a
source vertex s to a destination vertex t, but discovers online that some
edges (up to k) are blocked once reaching them. A potentially imperfect
predictor gives us the number and the locations of the blocked edges.
We present a deterministic and a randomized online algorithm for the
learning-augmented -CTP that achieve a tradeoff between consistency (quality
of the solution when the prediction is correct) and robustness (quality of the
solution when there are errors in the prediction). Moreover, we prove a
matching lower bound for the deterministic case establishing that the tradeoff
between consistency and robustness is optimal, and show a lower bound for the
randomized algorithm. Finally, we prove several deterministic and randomized
lower bounds on the competitive ratio of k-CTP depending on the prediction
error, and complement them, in most cases, with matching upper bounds.
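To make the online setting concrete, here is a naive replanning traveller (not the paper's learning-augmented algorithm; it ignores predictions entirely): walk the current shortest path and recompute whenever an edge on it turns out to be blocked. All names are ours.

```python
import heapq

def shortest_path(graph, src, dst, blocked):
    """Dijkstra over the known-unblocked graph; returns the vertex path."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if frozenset((u, v)) in blocked:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def traverse(graph, src, dst, truly_blocked):
    """Walk toward dst, learning each blocked edge only on arrival at one
    of its endpoints, then replanning; returns total distance travelled."""
    known, pos, travelled = set(), src, 0.0
    while pos != dst:
        path = shortest_path(graph, pos, dst, known)
        for u, v in zip(path, path[1:]):
            e = frozenset((u, v))
            if e in truly_blocked:
                known.add(e)  # discovered on reaching u; replan from here
                break
            travelled += graph[u][v]
            pos = v
    return travelled
```

The gap between the distance travelled and the offline shortest path (which knows the blockages in advance) is what the competitive ratio measures; predictions of the blocked edges let an algorithm shrink this gap when they are accurate.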
Strategyproof Scheduling with Predictions
In their seminal paper that initiated the field of algorithmic mechanism design, Nisan and Ronen [Noam Nisan and Amir Ronen, 1999] studied the problem of designing strategyproof mechanisms for scheduling jobs on unrelated machines aiming to minimize the makespan. They provided a strategyproof mechanism that achieves an n-approximation and they made the bold conjecture that this is the best approximation achievable by any deterministic strategyproof scheduling mechanism. After more than two decades and several efforts, n remains the best known approximation, and very recent work by Christodoulou et al. [George Christodoulou et al., 2021] has been able to prove an Ω(√n) approximation lower bound for all deterministic strategyproof mechanisms. This strong negative result, however, heavily depends on the fact that the performance of these mechanisms is evaluated using worst-case analysis. To overcome such overly pessimistic, and often uninformative, worst-case bounds, a surge of recent work has focused on the "learning-augmented framework", whose goal is to leverage machine-learned predictions to obtain improved approximations when these predictions are accurate (consistency), while also achieving near-optimal worst-case approximations even when the predictions are arbitrarily wrong (robustness).
In this work, we study the classic strategic scheduling problem of Nisan and Ronen [Noam Nisan and Amir Ronen, 1999] using the learning-augmented framework and give a deterministic polynomial-time strategyproof mechanism that is 6-consistent and 2n-robust. We thus achieve the "best of both worlds": an O(1) consistency and an O(n) robustness that asymptotically matches the best-known approximation. We then extend this result to provide more general worst-case approximation guarantees as a function of the prediction error. Finally, we complement our positive results by showing that any 1-consistent deterministic strategyproof mechanism has unbounded robustness.
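Nisan and Ronen's n-approximate benchmark mechanism is the VCG ("min-work") mechanism: each job is assigned to the machine that reports the smallest processing time for it, with a second-price payment per job, which makes truthful reporting a dominant strategy. A minimal sketch (representation and names ours):

```python
def vcg_scheduling(times):
    """times[i][j] = reported time of machine i on job j.
    Each job goes to the machine reporting the smallest time for it,
    and that machine is paid the second-smallest report for the job
    (a per-job second-price auction), so truth-telling dominates."""
    n_machines, n_jobs = len(times), len(times[0])
    alloc = [[] for _ in range(n_machines)]
    pay = [0.0] * n_machines
    for j in range(n_jobs):
        order = sorted(range(n_machines), key=lambda i: times[i][j])
        winner, runner_up = order[0], order[1]
        alloc[winner].append(j)
        pay[winner] += times[runner_up][j]
    return alloc, pay
```

Because each job is auctioned independently, the makespan can be a factor n worse than optimal, which is exactly the n-approximation that the learning-augmented mechanism above improves upon when predictions are accurate.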
On Constructing Spanners from Random Gaussian Projections
Graph sketching is a powerful paradigm for analyzing graph structure via linear measurements introduced by Ahn, Guha, and McGregor (SODA'12) that has since found numerous applications in streaming, distributed computing, and massively parallel algorithms, among others. Graph sketching has proven to be quite successful for various problems such as connectivity, minimum spanning trees, edge or vertex connectivity, and cut or spectral sparsifiers. Yet, the problem of approximating the shortest-path metric of a graph, and specifically computing a spanner, is notably missing from the list of successes. This has turned the status of this fundamental problem into one of the most longstanding open questions in this area.
We present a partial explanation of this lack of success by proving a strong lower bound for a large family of graph sketching algorithms that encompasses prior work on spanners and many (but, importantly, not all) related cut-based problems mentioned above. Our lower bound matches the algorithmic bounds of the recent result of Filtser, Kapralov, and Nouri (SODA'21), up to lower order terms, for constructing spanners via the same graph sketching family. This establishes near-optimality of these bounds, at least restricted to this family of graph sketching techniques, and makes progress on a conjecture posed in this latter work.