    Lower Bounds for (Non-Monotone) Comparator Circuits

    Comparator circuits are a natural circuit model for studying the concept of bounded fan-out computations, which intuitively corresponds to whether or not a computational model can make "copies" of intermediate computational steps. Comparator circuits are believed to be weaker than general Boolean circuits, but they can simulate Branching Programs and Boolean formulas. In this paper we prove the first superlinear lower bounds in the general (non-monotone) version of this model for an explicitly defined function. More precisely, we prove that the n-bit Element Distinctness function requires Omega((n/log n)^(3/2))-size comparator circuits.
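
    For context, a comparator gate takes two wires and overwrites them with their minimum and maximum; over Boolean values this is simply AND and OR, so the model never "copies" an intermediate value onto a fresh wire. The sketch below is a minimal Python simulation of this (monotone) gate semantics with a made-up gate list; the non-monotone model studied in the paper additionally allows negations on wires.

        def eval_comparator_circuit(inputs, gates):
            """Simulate a (monotone) comparator circuit on Boolean wires.

            inputs: list of 0/1 wire values.
            gates:  sequence of (i, j) pairs; each gate overwrites wire i with
                    wires[i] AND wires[j] (the min) and wire j with
                    wires[i] OR wires[j] (the max), so intermediate values
                    are rearranged but never duplicated onto new wires."""
            wires = list(inputs)
            for i, j in gates:
                wires[i], wires[j] = wires[i] & wires[j], wires[i] | wires[j]
            return wires

        # Tiny illustration: two comparators "sort" three Boolean wires.
        print(eval_comparator_circuit([1, 0, 1], [(0, 1), (1, 2)]))  # [0, 1, 1]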

    Local-To-Global Agreement Expansion via the Variance Method

    Agreement expansion is concerned with set systems for which local assignments to the sets with almost perfect pairwise consistency (i.e., most overlapping pairs of sets agree on their intersections) imply the existence of a global assignment to the ground set (from which the sets are defined) that agrees with most of the local assignments. It is currently known that if a set system forms a two-sided or a partite high dimensional expander then agreement expansion follows. However, it was not known whether agreement expansion holds for one-sided high dimensional expanders. In this work we show that agreement expansion can be deduced for one-sided high dimensional expanders assuming that all the vertices' links (i.e., the neighborhoods of the vertices) are agreement expanders. Thus, for a one-sided high dimensional expander, agreement expansion of the large, complicated complex can be deduced from agreement expansion of its small, simple links. Using our result, we settle the open question of whether the well-studied Ramanujan complexes are agreement expanders. These complexes are neither partite nor two-sided high dimensional expanders. However, they are one-sided high dimensional expanders whose links are partite and hence are agreement expanders. Thus, our result implies that Ramanujan complexes are agreement expanders, answering the aforementioned open question affirmatively. The local-to-global agreement expansion that we prove is based on a variance method that we develop. We show that for a high dimensional expander, if we define a function on its top faces and consider its local averages over the links, then the variance of these local averages is much smaller than the global variance of the original function. This decrease in variance enables us to construct one global agreement function that ties together all the local agreement functions.
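
    As a schematic only (the quantifiers, constants, and precise distributions are spelled out in the paper, not here), the variance method compares the fluctuation of the link-wise averages of a function f on top faces against the fluctuation of f itself:

        \[
            X_v(f) := \mathbb{E}_{F \ni v}\bigl[f(F)\bigr],
            \qquad
            \operatorname{Var}_{v}\bigl[X_v(f)\bigr] \;\ll\; \operatorname{Var}_{F}\bigl[f(F)\bigr],
        \]

    i.e., averaging over the link of a random vertex v shrinks the variance, which is what allows the local agreement functions to be stitched into one global agreement function.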

    Fine-Grained Reductions and Quantum Speedups for Dynamic Programming

    This paper points to a connection between certain (classical) fine-grained reductions and the question: do quantum algorithms offer an advantage for problems whose (classical) best solution is via dynamic programming? A remarkable recent result of Ambainis et al. [SODA 2019] indicates that the answer is positive for some fundamental problems such as Set-Cover and Travelling Salesman. They design a quantum O^*(1.728^n) time algorithm, whereas the dynamic programming O^*(2^n) time algorithms are conjectured to be classically optimal. In this paper, fine-grained reductions are extracted from their algorithms, giving the first lower bounds for problems in P that are based on the intriguing Set-Cover Conjecture (SeCoCo) of Cygan et al. [CCC 2010]. In particular, SeCoCo implies: - a super-linear Omega(n^{1.08}) lower bound for 3-SUM on n integers, - an Omega(n^{k/c_k - epsilon}) lower bound for k-SUM on n integers and k-Clique on n-node graphs, for any integer k >= 3, where c_k <= log_2(k) + 1.4427. While far from being tight, these lower bounds are significantly stronger than what is known to follow from the Strong Exponential Time Hypothesis (SETH); the well-known n^{Omega(k)} ETH-based lower bounds for k-Clique and k-SUM are vacuous when k is constant. Going in the opposite direction, this paper observes that some "sequential" problems with previously known fine-grained reductions to a "parallelizable" core also enjoy quantum speedups over their classical dynamic programming solutions. Examples include RNA Folding and Least-Weight Subsequence.
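
    As a quick numerical aid, the exponent k/c_k in the stated k-SUM and k-Clique lower bound can be evaluated directly from the bound c_k <= log_2(k) + 1.4427 given above; the snippet below just tabulates that formula and is not a computation from the paper.

        import math

        # Lower bound on the exponent k/c_k implied by c_k <= log_2(k) + 1.4427,
        # so the stated SeCoCo-based bound is n^{k/c_k - eps}.
        # (The abstract states a separate, stronger n^{1.08} bound for 3-SUM.)
        for k in range(3, 9):
            c_k_upper = math.log2(k) + 1.4427
            print(f"k = {k}:  k/c_k >= {k / c_k_upper:.3f}")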

    Counting to Ten with Two Fingers: Compressed Counting with Spiking Neurons

    We consider the task of measuring time with probabilistic threshold gates implemented by bio-inspired spiking neurons. In the model of spiking neural networks, the network evolves in discrete rounds, where in each round neurons fire in pulses in response to a sufficiently high membrane potential. This potential is induced by spikes from neighboring neurons that fired in the previous round, which can have either an excitatory or inhibitory effect. Discovering the underlying mechanisms by which the brain perceives the duration of time is one of the largest open enigmas in computational neuroscience. To gain a better algorithmic understanding of these processes, we introduce the neural timer problem. In this problem, one is given a time parameter t, an input neuron x, and an output neuron y. It is then required to design a minimum-sized neural network (measured by the number of auxiliary neurons) in which every spike from x in a given round i makes the output y fire for the subsequent t consecutive rounds. We first consider a deterministic implementation of a neural timer and show that Theta(log t) (deterministic) threshold gates are both sufficient and necessary. This raises the question of whether randomness can be leveraged to reduce the number of neurons. We answer this question in the affirmative by considering neural timers with spiking neurons, where the neuron y is required to fire for t consecutive rounds with probability at least 1-delta and should stop firing after at most 2t rounds with probability 1-delta, for some input parameter delta in (0,1). Our key result is a construction of a neural timer with O(log log 1/delta) spiking neurons. Interestingly, this construction uses only one spiking neuron, while the remaining neurons can be deterministic threshold gates. We complement this construction with a matching lower bound of Omega(min{log log 1/delta, log t}) neurons. This provides the first separation between deterministic and randomized constructions in the setting of spiking neural networks. Finally, we demonstrate the usefulness of compressed counting networks for synchronizing neural networks. In the spirit of distributed synchronizers [Awerbuch-Peleg, FOCS'90], we provide a general transformation (or simulation) that can take any synchronized network solution and simulate it in an asynchronous setting (where edges have arbitrary response latencies) while incurring a small overhead w.r.t. the number of neurons and computation time.
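
    To see why Theta(log t) is the natural deterministic budget, note that a counter occupying ceil(log_2(t+1)) bits is enough state to keep the output active for exactly t rounds after a trigger. The toy round-by-round simulation below (plain Python with an explicit counter, not actual threshold gates and not the paper's construction) illustrates that counting argument.

        import math

        def neural_timer_rounds(t, spike_round, total_rounds):
            """Toy timer: after the input x spikes in round `spike_round`,
            the output y fires for the subsequent t consecutive rounds.
            The only state is a counter fitting in ceil(log2(t+1)) bits,
            which is the bookkeeping behind the Theta(log t) bound."""
            bits_needed = math.ceil(math.log2(t + 1))
            counter, fired = 0, []
            for r in range(total_rounds):
                if counter > 0:          # y fires while the counter is live
                    fired.append(r)
                    counter -= 1
                if r == spike_round:     # x spikes, arming the timer
                    counter = t
            return bits_needed, fired

        print(neural_timer_rounds(t=5, spike_round=2, total_rounds=12))
        # (3, [3, 4, 5, 6, 7]) -> y fires for t = 5 consecutive rounds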

    Spiking Neural Networks Through the Lens of Streaming Algorithms

    We initiate the study of biological neural networks from the perspective of streaming algorithms. Like computers, human brains suffer from memory limitations which pose a significant obstacle when processing large-scale and dynamically changing data. In computer science, these challenges are captured by the well-known streaming model, which can be traced back to Munro and Paterson '78 and has had significant impact in theory and beyond. In the classical streaming setting, one must compute some function f of a stream of updates S = {u_1, ..., u_m}, given restricted single-pass access to the stream. The primary complexity measure is the space used by the algorithm. We take the first steps towards understanding the connection between streaming and neural algorithms. On the upper bound side, we design neural algorithms based on known streaming algorithms for fundamental tasks, including distinct elements, approximate median, heavy hitters, and more. The number of neurons in our neural solutions almost matches the space bounds of the corresponding streaming algorithms. As a general algorithmic primitive, we show how to implement the important streaming technique of linear sketching efficiently in spiking neural networks. On the lower bound side, we give a generic reduction, showing that any space-efficient spiking neural network can be simulated by a space-efficient streaming algorithm. This reduction lets us translate streaming-space lower bounds into nearly matching neural-space lower bounds, establishing a close connection between these two models.
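
    For readers unfamiliar with the streaming primitive mentioned above, a linear sketch maintains a small linear function of the stream's (implicit) frequency vector. The Count-Min sketch below is one standard classical example related to heavy hitters; it is the textbook data structure, not the paper's spiking-neuron implementation.

        import random

        class CountMinSketch:
            """Classical Count-Min sketch: a small linear sketch of the
            frequency vector. Estimates never undercount and are accurate
            up to a small additive error with good probability."""
            def __init__(self, width=64, depth=4, seed=0):
                rng = random.Random(seed)
                self.width, self.depth = width, depth
                self.salts = [rng.getrandbits(32) for _ in range(depth)]
                self.table = [[0] * width for _ in range(depth)]

            def _cell(self, row, item):
                return hash((self.salts[row], item)) % self.width

            def update(self, item, count=1):
                for r in range(self.depth):
                    self.table[r][self._cell(r, item)] += count

            def estimate(self, item):
                return min(self.table[r][self._cell(r, item)]
                           for r in range(self.depth))

        cms = CountMinSketch()
        for x in ["a", "b", "a", "c", "a"]:
            cms.update(x)
        print(cms.estimate("a"))  # 3 (overestimates are possible on heavier streams)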

    Robust Algorithms Under Adversarial Injections

    In this paper, we study streaming and online algorithms in the context of randomness in the input. For several problems, a random order of the input sequence (as opposed to the worst-case order) appears to be a necessary evil in order to prove satisfying guarantees. However, algorithmic techniques that work under this assumption tend to be vulnerable to even small changes in the distribution. For this reason, we propose a new adversarial-injections model, in which the input is ordered randomly, but an adversary may inject misleading elements at arbitrary positions. We believe that studying algorithms under this much weaker assumption can lead to new insights and, in particular, more robust algorithms. We investigate two classical combinatorial-optimization problems in this model: maximum matching and cardinality-constrained monotone submodular function maximization. Our main technical contribution is a novel streaming algorithm for the latter that computes a 0.55-approximation. While the algorithm itself is clean and simple, an involved analysis shows that it emulates a subdivision of the input stream which can be used to greatly limit the power of the adversary.
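
    The input model can be pictured concretely: the honest instance arrives in uniformly random order, and the adversary decides where its injected elements are spliced in. The hypothetical generator below only illustrates that model (with the adversary's choices fixed up front rather than adaptively); it is not an algorithm from the paper.

        import random

        def adversarial_injection_stream(real_elements, injected, positions, seed=None):
            """Random-order stream of the honest elements with adversarial
            elements inserted at adversary-chosen positions of the final stream."""
            rng = random.Random(seed)
            stream = list(real_elements)
            rng.shuffle(stream)                             # random order of honest input
            for pos, elem in sorted(zip(positions, injected)):
                stream.insert(min(pos, len(stream)), elem)  # adversarial placement
            return stream

        print(adversarial_injection_stream([1, 2, 3, 4], ["x", "y"], [0, 3], seed=7))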

    Preference-Informed Fairness

    We study notions of fairness in decision-making systems when individuals have diverse preferences over the possible outcomes of the decisions. Our starting point is the seminal work of Dwork et al., which introduced a notion of individual fairness (IF): given a task-specific similarity metric, every pair of individuals who are similarly qualified according to the metric should receive similar outcomes. We show that when individuals have diverse preferences over outcomes, requiring IF may unintentionally lead to less-preferred outcomes for the very individuals that IF aims to protect. A natural alternative to IF is the classic fair-division notion of envy-freeness (EF): no individual should prefer another individual's outcome over their own. Although EF allows for solutions where all individuals receive a highly preferred outcome, EF may also be overly restrictive. For instance, if many individuals agree on the best outcome, then if any individual receives this outcome, they all must receive it, regardless of each individual's underlying qualifications for the outcome. We introduce and study a new notion of preference-informed individual fairness (PIIF) that is a relaxation of both individual fairness and envy-freeness. At a high level, PIIF requires that outcomes satisfy IF-style constraints, but allows for deviations provided they are in line with individuals' preferences. We show that PIIF can permit outcomes that are more favorable to individuals than any IF solution, while providing considerably more flexibility to the decision-maker than EF. In addition, we show how to efficiently optimize any convex objective over the outcomes subject to PIIF for a rich class of individual preferences. Finally, we demonstrate the broad applicability of the PIIF framework by extending our definitions and algorithms to the multiple-task targeted advertising setting introduced by Dwork and Ilvento.
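
    Envy-freeness, as used above, is easy to state operationally: no individual values another individual's outcome strictly more than their own. The minimal checker below works on a toy deterministic assignment with made-up names and utilities; the paper itself works with distributions over outcomes and with the metric-based IF constraints as well.

        def is_envy_free(outcome, utility):
            """outcome[i] is the outcome assigned to individual i;
            utility[i][o] is how much individual i values outcome o.
            Envy-freeness: nobody strictly prefers someone else's outcome."""
            people = list(outcome)
            return all(
                utility[i][outcome[i]] >= utility[i][outcome[j]]
                for i in people for j in people
            )

        # Toy example with two individuals and two ad outcomes.
        outcome = {"alice": "ad_A", "bob": "ad_B"}
        utility = {"alice": {"ad_A": 1.0, "ad_B": 0.2},
                   "bob":   {"ad_A": 0.4, "ad_B": 0.9}}
        print(is_envy_free(outcome, utility))  # True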

    On the Pseudo-Deterministic Query Complexity of NP Search Problems

    We study pseudo-deterministic query complexity: randomized query algorithms that are required to output the same answer with high probability on all inputs. We prove Ω(√n) lower bounds on the pseudo-deterministic complexity of a large family of search problems based on unsatisfiable random CNF instances, and also for the promise problem (FIND1) of finding a 1 in a vector in which at least half the entries are ones. This gives an exponential separation between randomized query complexity and pseudo-deterministic complexity, which is tight in the quantum setting. As applications, we partially solve a related combinatorial coloring problem, and we separate random tree-like Resolution from its pseudo-deterministic version. In contrast to our lower bound, we show, surprisingly, that in the zero-error, average-case setting, the three notions (deterministic, randomized, pseudo-deterministic) collapse.
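
    The separation can be appreciated on FIND1 itself: since at least half the coordinates are 1, plain random sampling succeeds after O(log 1/delta) probes, but different runs will typically return different indices, which is exactly what pseudo-determinism forbids. The sketch below shows this ordinary randomized strategy (not a pseudo-deterministic one).

        import math
        import random

        def find_one(query, n, delta, seed=0):
            """Randomized FIND1: each uniform probe hits a 1 with probability
            at least 1/2, so ceil(log2(1/delta)) probes fail with probability
            at most delta. Different runs may return different indices, the
            behaviour that the pseudo-deterministic requirement rules out."""
            rng = random.Random(seed)
            probes = max(1, math.ceil(math.log2(1 / delta)))
            for _ in range(probes):
                i = rng.randrange(n)
                if query(i) == 1:
                    return i
            return None  # failure probability at most delta

        vec = [1, 0, 1, 1, 0, 1, 0, 1]
        print(find_one(lambda i: vec[i], n=len(vec), delta=0.01))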

    PCPs and Instance Compression from a Cryptographic Lens

    Modern cryptography fundamentally relies on the assumption that the adversary trying to break the scheme is computationally bounded. This assumption lets us construct cryptographic protocols and primitives that are known to be impossible otherwise. In this work we explore the effect of bounding the adversary's power in other information-theoretic proof systems and show how to use this assumption to bypass impossibility results. We first consider the question of constructing succinct PCPs. These are PCPs whose length is polynomial only in the length of the original NP witness (in contrast to standard PCPs, whose length is proportional to the non-deterministic verification time). Unfortunately, succinct PCPs are known to be impossible to construct under standard complexity assumptions. Assuming the sub-exponential hardness of the learning with errors (LWE) problem, we construct succinct probabilistically checkable arguments, or PCAs (Zimand 2001, Kalai and Raz 2009), which are PCPs in which soundness is guaranteed against efficiently generated false proofs. Our PCA construction works for every NP relation that can be verified by a small-depth circuit (e.g., SAT, Clique, TSP, etc.) and, in contrast to prior work, is publicly verifiable and has constant query complexity. Curiously, we also show, as a proof of concept, that such publicly verifiable PCAs can be used to derive hardness-of-approximation results. Second, we consider the notion of instance compression (Harnik and Naor, 2006). An instance compression scheme lets one compress, for example, a CNF formula φ on m variables and n ≫ m clauses to a new formula φ' with only poly(m) clauses, so that φ is satisfiable if and only if φ' is satisfiable. Instance compression has been shown to be closely related to succinct PCPs and is similarly highly unlikely to exist. We introduce a computational analog of instance compression in which we require that if φ is unsatisfiable then φ' is effectively unsatisfiable, in the sense that it is computationally infeasible to find a satisfying assignment for φ' (although such an assignment may exist). Assuming the same sub-exponential LWE assumption, we construct such computational instance compression schemes for every bounded-depth NP relation. As an application, this lets one compress k formulas φ_1, ..., φ_k into a single short formula φ that is effectively satisfiable if and only if at least one of the original formulas was satisfiable.
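
    Spelled out schematically (an informal paraphrase of the definitions above, not the paper's formal statements), the two notions of compression differ only in what is demanded when φ is unsatisfiable:

        \[
            \text{IC:}\quad |\varphi'| \le \mathrm{poly}(m), \qquad
            \varphi \in \mathrm{SAT} \iff \varphi' \in \mathrm{SAT};
        \]
        \[
            \text{computational IC:}\quad |\varphi'| \le \mathrm{poly}(m), \qquad
            \varphi \in \mathrm{SAT} \Rightarrow \varphi' \in \mathrm{SAT}, \qquad
            \varphi \notin \mathrm{SAT} \Rightarrow
            \text{no efficient algorithm finds a satisfying assignment of } \varphi'.
        \]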