
    The Computational Lens: from Quantum Physics to Neuroscience

    Two transformative waves of computing have redefined the way we approach science. The first wave came with the birth of the digital computer, which enabled scientists to numerically simulate their models and analyze massive datasets. This technological breakthrough led to the emergence of many sub-disciplines bearing the prefix "computational" in their names. We are currently in the midst of the second wave, marked by remarkable advances in artificial intelligence. From predicting protein structures to classifying galaxies, the scope of its applications is vast, and more surely await on the horizon. While these two waves influence scientific methodology at the instrumental level, in this dissertation I will present the computational lens in science, aimed at the conceptual level. Specifically, the central thesis posits that computation serves as a convenient and mechanistic language for understanding and analyzing information-processing systems, offering the advantages of composability and modularity. This dissertation begins with an illustration of the blueprint of the computational lens, supported by a review of relevant previous work. Subsequently, I present my own work in quantum physics and neuroscience as concrete examples. In the concluding chapter, I contemplate the potential of applying the computational lens across various scientific fields in a way that can provide significant domain insights, and discuss potential future directions.
    Comment: PhD thesis, Harvard University, Cambridge, Massachusetts, USA. 2023. Some chapters report joint work.

    Tracking the l_2 Norm with Constant Update Time

    The l_2 tracking problem is the task of obtaining a streaming algorithm that, given access to a stream of items a_1, a_2, a_3, ... from a universe [n], outputs at each time t an estimate of the l_2 norm of the frequency vector f^{(t)} in R^n (where f^{(t)}_i is the number of occurrences of item i in the stream up to time t). Previous work [Braverman-Chestnut-Ivkin-Nelson-Wang-Woodruff, PODS 2017] gave a streaming algorithm with the optimal space of O(epsilon^{-2} log(1/delta)) words and O(epsilon^{-2} log(1/delta)) update time to obtain an epsilon-accurate estimate with probability at least 1-delta. We give the first algorithm that achieves an update time of O(log(1/delta)), independent of the accuracy parameter epsilon, together with the nearly optimal space of O(epsilon^{-2} log(1/delta)) words. Our algorithm is obtained using the Count Sketch of [Charikar-Chen-Farach-Colton, ICALP 2002].
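    To make the role of the Count Sketch concrete, below is a minimal Python sketch of how a Count Sketch table can be used to estimate the l_2 norm of the frequency vector. It is an illustrative baseline only: it uses Python's built-in hashing rather than the 4-wise independent hash families the analysis requires, and it does not implement the paper's constant-update-time tracking algorithm; the class and parameter names are assumptions made for this example.

import random
import statistics
from collections import Counter

class CountSketchL2:
    """Minimal Count Sketch used as an l_2 (second moment) estimator.
    Illustrative sketch only, not the algorithm from the paper."""

    def __init__(self, width, depth, seed=0):
        rng = random.Random(seed)
        self.width = width
        self.depth = depth
        # Per-row hash seeds; real implementations would use 4-wise
        # independent hash families instead of Python's built-in hash.
        self.row_seeds = [rng.getrandbits(64) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _bucket_and_sign(self, row, item):
        h = hash((self.row_seeds[row], item))
        return h % self.width, 1 if (h >> 20) & 1 else -1

    def update(self, item, count=1):
        # O(depth) work per stream item.
        for r in range(self.depth):
            b, s = self._bucket_and_sign(r, item)
            self.table[r][b] += s * count

    def l2_estimate(self):
        # Each row's sum of squared counters estimates ||f||_2^2;
        # the median over rows controls the failure probability.
        per_row = [sum(c * c for c in row) for row in self.table]
        return statistics.median(per_row) ** 0.5

# Usage sketch: feed a stream and compare against the exact l_2 norm.
rng = random.Random(1)
stream = [rng.randrange(100) for _ in range(5000)]
cs = CountSketchL2(width=512, depth=7)
for item in stream:
    cs.update(item)
freq = Counter(stream)
exact = sum(v * v for v in freq.values()) ** 0.5
print(exact, cs.l2_estimate())   # the two values should be close

    The width controls the accuracy of each row's estimate and the depth controls the failure probability; the paper's contribution is decoupling the per-item update cost from the accuracy parameter, which this naive version does not do.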

    On the Algorithmic Power of Spiking Neural Networks

    Spiking Neural Networks (SNNs) are mathematical models in neuroscience that describe the dynamics of a set of neurons interacting with each other by firing instantaneous signals, a.k.a. spikes. Interestingly, a recent advance in neuroscience [Barrett-Denève-Machens, NIPS 2013] showed that the neurons' firing rate, i.e., the average number of spikes fired per unit of time, can be characterized by the optimal solution of a quadratic program defined by the parameters of the dynamics. This indicated that SNNs potentially have the computational power to solve non-trivial quadratic programs. However, the results were justified empirically without rigorous analysis. We put this into the context of natural algorithms and aim to investigate the algorithmic power of SNNs. In particular, we emphasize giving rigorous asymptotic analysis of the performance of SNNs in solving optimization problems. To enable a theoretical study, we first identify a simplified SNN model that is tractable for analysis. Next, we confirm the empirical observation in the work of Barrett et al. by giving an upper bound on the convergence rate of SNNs in solving the quadratic program. Further, we observe that in the case where there are infinitely many optimal solutions, the SNN tends to converge to the one with smaller l_1 norm. We confirm this finding by showing that SNNs can solve the l_1 minimization problem under some regularity conditions. Our main technical insight is a dual view of the SNN dynamics, under which the SNN can be viewed as a new natural primal-dual algorithm for the l_1 minimization problem. We believe that the dual view is of independent interest and may find interesting interpretations in neuroscience.
    Comment: To appear in ITCS 2019.
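    As a companion to the abstract, here is a toy numerical sketch of a two-sided integrate-and-fire network whose empirical firing rate is read out as a candidate solution of an l_1-type problem. The discrete-time update rule, threshold, step size, and readout below are illustrative assumptions made for this sketch and are not taken from the paper; see the paper for the actual simplified model and its analysis.

import numpy as np

def simulate_snn(A, b, eta=1.0, dt=0.01, T=500.0):
    """Toy Euler-style simulation of a two-sided integrate-and-fire
    network, loosely in the spirit of the simplified model discussed in
    the abstract. All parameter choices here are illustrative."""
    n = A.shape[1]
    charge = A.T @ b            # constant external charging
    inhibit = A.T @ A           # mutual inhibition through the Gram matrix
    u = np.zeros(n)             # membrane potentials
    spike_sum = np.zeros(n)     # cumulative signed spike count
    steps = int(T / dt)
    for _ in range(steps):
        u += dt * charge
        # Two-sided spiking rule: fire +1 / -1 when the potential crosses
        # the threshold, then feed the spikes back as inhibition.
        s = np.where(u > eta, 1.0, np.where(u < -eta, -1.0, 0.0))
        u -= inhibit @ s
        spike_sum += s
    return spike_sum / T        # empirical firing rate as the readout

# Small demo on an underdetermined system with a sparse planted solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
A /= np.linalg.norm(A, axis=0)              # unit-norm columns
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.5, -2.0, 0.7]
b = A @ x_true
x_hat = simulate_snn(A, b)
print(np.round(x_hat[[3, 17, 41]], 2))      # rates roughly tracking the planted coefficients

    The readout divides the cumulative signed spike count by the elapsed time, so the printed entries are empirical firing rates; how closely they track the planted sparse coefficients depends on the threshold, step size, and simulation length.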

    Hardness vs Randomness for Bounded Depth Arithmetic Circuits

    In this paper, we study the question of hardness-randomness tradeoffs for bounded depth arithmetic circuits. We show that if there is a family of explicit polynomials {f_n}, where f_n is of degree O(log^2 n / log^2 log n) in n variables, such that f_n cannot be computed by a depth-Delta arithmetic circuit of size poly(n), then there is a deterministic sub-exponential time algorithm for polynomial identity testing of arithmetic circuits of depth Delta-5. This is incomparable to a beautiful result of Dvir et al. [SICOMP, 2009], who showed that super-polynomial lower bounds for depth-Delta circuits for any explicit family of polynomials (of potentially high degree) imply sub-exponential time deterministic PIT for depth-(Delta-5) circuits of bounded individual degree. Thus, we remove the "bounded individual degree" condition in the work of Dvir et al. at the cost of strengthening the hardness assumption to hold for polynomials of low degree. The key technical ingredient of our proof is the following property of roots of polynomials computable by a bounded depth arithmetic circuit: if f(x_1, x_2, ..., x_n) and P(x_1, x_2, ..., x_n, y) are polynomials of degree d and r respectively, such that P can be computed by a circuit of size s and depth Delta and P(x_1, x_2, ..., x_n, f) is identically zero, then f can be computed by a circuit of size poly(n, s, r, d^{O(sqrt{d})}) and depth Delta + 3. In comparison, Dvir et al. showed that f can be computed by a circuit of depth Delta + 3 and size poly(n, s, r, d^{t}), where t is the degree of P in y. Thus, the size upper bound in the work of Dvir et al. is non-trivial when t is small but d could be large, whereas our size upper bound is non-trivial when d is small but t could be large.
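    For readers less familiar with PIT, the sketch below illustrates the randomized baseline that results like this one aim to derandomize: black-box polynomial identity testing via the Schwartz-Zippel lemma. The function name and interface are invented for this illustration; the paper's contribution is a deterministic sub-exponential time algorithm under a hardness assumption, which this randomized routine does not provide.

import random

def polynomial_identity_test(p, q, num_vars, degree_bound, trials=20):
    """Randomized black-box PIT via the Schwartz-Zippel lemma: evaluate both
    polynomials at random points drawn from a range much larger than the
    degree. If the polynomials differ, a single random evaluation exposes the
    difference except with probability at most degree_bound / field_size."""
    field_size = 100 * degree_bound
    for _ in range(trials):
        point = [random.randrange(field_size) for _ in range(num_vars)]
        if p(*point) != q(*point):
            return False          # witnesses that the polynomials differ
    return True                   # equal, except with tiny failure probability

# Example: check that (x + y)^2 and x^2 + 2xy + y^2 agree as polynomials.
lhs = lambda x, y: (x + y) ** 2
rhs = lambda x, y: x * x + 2 * x * y + y * y
print(polynomial_identity_test(lhs, rhs, num_vars=2, degree_bound=2))  # True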

    Spoofing Linear Cross-Entropy Benchmarking in Shallow Quantum Circuits

    The linear cross-entropy benchmark (Linear XEB) has been used as a test for procedures simulating quantum circuits. Given a quantum circuit C with n inputs and outputs and a purported simulator whose output is distributed according to a distribution p over {0,1}^n, the linear XEB fidelity of the simulator is F_C(p) = 2^n E_{x ~ p}[q_C(x)] - 1, where q_C(x) is the probability that x is output from the distribution C|0^n>. A trivial simulator (e.g., the uniform distribution) satisfies F_C(p) = 0, while Google's noisy quantum simulation of a 53-qubit circuit C achieved a fidelity value of (2.24 ± 0.21) x 10^{-3} (Arute et al., Nature'19). In this work we give a classical randomized algorithm that, for a given circuit C of depth d with Haar random 2-qubit gates, achieves in expectation a fidelity value of Omega((n/L) * 15^{-d}) in running time poly(n, 2^L). Here L is the size of the light cone of C: the maximum number of input bits that each output bit depends on. In particular, we obtain a polynomial-time algorithm that achieves a large fidelity of omega(1) for depth O(sqrt(log n)) two-dimensional circuits. To our knowledge, this is the first such result for two-dimensional circuits of super-constant depth. Our results can be considered as evidence that fooling the linear XEB test might be easier than achieving a full simulation of the quantum circuit.
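    The fidelity formula quoted above is straightforward to compute once the ideal output probabilities of the circuit are available. Below is a minimal Python sketch of that estimator, using a randomly generated probability vector as a stand-in for the true q_C (a placeholder assumption, since actually simulating a circuit is beside the point here).

import numpy as np

def linear_xeb_fidelity(samples, ideal_probs, n_qubits):
    """Estimate F_C(p) = 2^n * E_{x~p}[q_C(x)] - 1 from samples drawn from a
    (real or spoofing) simulator, given the ideal output probabilities q_C.
    `ideal_probs[x]` is the probability of bitstring x (encoded as an integer)
    under measurement of C|0^n>; in practice it comes from classical simulation."""
    q = np.asarray([ideal_probs[x] for x in samples])
    return (2 ** n_qubits) * q.mean() - 1.0

# Toy check on 3 qubits: uniform samples score about 0, while sampling from
# the ideal distribution itself scores noticeably positive.
rng = np.random.default_rng(0)
n = 3
ideal = rng.dirichlet(np.ones(2 ** n))      # stand-in for the true q_C
uniform_samples = rng.integers(0, 2 ** n, size=20000)
ideal_samples = rng.choice(2 ** n, size=20000, p=ideal)
print(linear_xeb_fidelity(uniform_samples, ideal, n))   # close to 0
print(linear_xeb_fidelity(ideal_samples, ideal, n))     # noticeably positive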