
    Tracking the l_2 Norm with Constant Update Time

    The l_2 tracking problem is the task of obtaining a streaming algorithm that, given access to a stream of items a_1, a_2, a_3, ... from a universe [n], outputs at each time t an estimate of the l_2 norm of the frequency vector f^{(t)} in R^n (where f^{(t)}_i is the number of occurrences of item i in the stream up to time t). Previous work [Braverman-Chestnut-Ivkin-Nelson-Wang-Woodruff, PODS 2017] gave a streaming algorithm using the optimal O(epsilon^{-2} log(1/delta)) words of space and O(epsilon^{-2} log(1/delta)) update time to obtain an epsilon-accurate estimate with probability at least 1-delta. We give the first algorithm that achieves an update time of O(log(1/delta)), independent of the accuracy parameter epsilon, together with nearly optimal space of O(epsilon^{-2} log(1/delta)) words. Our algorithm is obtained using the Count Sketch of [Charikar-Chen-Farach-Colton, ICALP 2002].
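
    As a rough illustration of the Count Sketch primitive that the result above builds on (this is the standard Count Sketch F2 estimator, not the paper's tracking algorithm), the Python sketch below maintains a table with O(log(1/delta)) rows and O(epsilon^{-2}) columns, does O(log(1/delta)) hash evaluations per update independently of epsilon, and estimates the l_2 norm as the median over rows of each row's sum of squared counters. The hash construction via Python's built-in hash is a simplification for illustration only.

    import random
    import statistics

    class CountSketchF2:
        """Count Sketch with a median-of-rows estimate of ||f||_2.

        Illustrative sketch: width ~ O(1/eps^2), rows ~ O(log(1/delta)).
        Real implementations would use pairwise/4-wise independent hashes.
        """

        def __init__(self, width, rows, seed=0):
            rng = random.Random(seed)
            self.width = width
            self.table = [[0] * width for _ in range(rows)]
            self.salts = [rng.getrandbits(64) for _ in range(rows)]

        def _bucket_and_sign(self, j, item):
            # Derive a bucket index and a +/-1 sign for row j.
            h = hash((self.salts[j], item))
            return (h >> 1) % self.width, 1 if h & 1 else -1

        def update(self, item, count=1):
            # O(rows) work per stream item, independent of the accuracy eps.
            for j in range(len(self.salts)):
                b, s = self._bucket_and_sign(j, item)
                self.table[j][b] += s * count

        def l2_estimate(self):
            # Each row's sum of squared counters estimates ||f||_2^2;
            # the median over rows boosts the success probability.
            return statistics.median(
                sum(c * c for c in row) for row in self.table) ** 0.5

    A stream would be processed by calling update(a_t) on each arriving item and querying l2_estimate() at any time t.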

    The Computational Lens: from Quantum Physics to Neuroscience

    Two transformative waves of computing have redefined the way we approach science. The first wave came with the birth of the digital computer, which enabled scientists to numerically simulate their models and analyze massive datasets. This technological breakthrough led to the emergence of many sub-disciplines bearing the prefix "computational" in their names. Currently, we are in the midst of the second wave, marked by the remarkable advancements in artificial intelligence. From predicting protein structures to classifying galaxies, the scope of its applications is vast, and there can only be more awaiting us on the horizon. While these two waves influence scientific methodology at the instrumental level, in this dissertation I will present the computational lens in science, aimed at the conceptual level. Specifically, the central thesis posits that computation serves as a convenient and mechanistic language for understanding and analyzing information processing systems, offering the advantages of composability and modularity. This dissertation begins with an illustration of the blueprint of the computational lens, supported by a review of relevant previous work. Subsequently, I will present my own works in quantum physics and neuroscience as concrete examples. In the concluding chapter, I will contemplate the potential of applying the computational lens across various scientific fields, in a way that can provide significant domain insights, and discuss potential future directions. Comment: PhD thesis, Harvard University, Cambridge, Massachusetts, USA, 2023. Some chapters report joint work.

    Spoofing Linear Cross-Entropy Benchmarking in Shallow Quantum Circuits

    The linear cross-entropy benchmark (linear XEB) has been used as a test for procedures simulating quantum circuits. Given a quantum circuit $C$ with $n$ inputs and outputs and a purported simulator whose output is distributed according to a distribution $p$ over $\{0,1\}^n$, the linear XEB fidelity of the simulator is $\mathcal{F}_{C}(p) = 2^n \mathbb{E}_{x \sim p} q_C(x) - 1$, where $q_C(x)$ is the probability that $x$ is output from the distribution $C|0^n\rangle$. A trivial simulator (e.g., the uniform distribution) satisfies $\mathcal{F}_C(p) = 0$, while Google's noisy quantum simulation of a 53-qubit circuit $C$ achieved a fidelity value of $(2.24 \pm 0.21) \times 10^{-3}$ (Arute et al., Nature'19). In this work we give a classical randomized algorithm that, for a given circuit $C$ of depth $d$ with Haar random 2-qubit gates, achieves in expectation a fidelity value of $\Omega(\tfrac{n}{L} \cdot 15^{-d})$ in running time $\textsf{poly}(n, 2^L)$. Here $L$ is the size of the \emph{light cone} of $C$: the maximum number of input bits that each output bit depends on. In particular, we obtain a polynomial-time algorithm that achieves large fidelity of $\omega(1)$ for depth $O(\sqrt{\log n})$ two-dimensional circuits. To our knowledge, this is the first such result for two-dimensional circuits of super-constant depth. Our results can be considered as evidence that fooling the linear XEB test might be easier than achieving a full simulation of the quantum circuit.
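
    To make the fidelity definition above concrete, here is a small Python sketch that estimates $\mathcal{F}_C(p) = 2^n \mathbb{E}_{x \sim p} q_C(x) - 1$ from samples. The toy distribution q_C below is an arbitrary stand-in for a circuit's ideal output probabilities, chosen only so the example is self-contained; it is not a real circuit.

    import random

    def linear_xeb_fidelity(samples, q_C, n):
        # Empirical linear XEB fidelity: 2^n * E_{x~p}[q_C(x)] - 1,
        # with the expectation replaced by the sample mean.
        return (2 ** n) * sum(q_C[x] for x in samples) / len(samples) - 1

    # Toy stand-in for the ideal output probabilities |<x|C|0^n>|^2.
    n, trials = 4, 200_000
    rng = random.Random(0)
    w = [rng.random() for _ in range(2 ** n)]
    q_C = [v / sum(w) for v in w]

    # A trivial (uniform) simulator scores ~0, while sampling from q_C
    # itself scores 2^n * sum_x q_C(x)^2 - 1 > 0.
    uniform_samples = [rng.randrange(2 ** n) for _ in range(trials)]
    ideal_samples = rng.choices(range(2 ** n), weights=q_C, k=trials)
    print(linear_xeb_fidelity(uniform_samples, q_C, n))  # close to 0
    print(linear_xeb_fidelity(ideal_samples, q_C, n))    # strictly positive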

    Hardness vs Randomness for Bounded Depth Arithmetic Circuits

    In this paper, we study the question of hardness-randomness tradeoffs for bounded depth arithmetic circuits. We show that if there is a family of explicit polynomials {f_n}, where f_n is of degree O(log^2 n / log^2 log n) in n variables such that f_n cannot be computed by depth Delta arithmetic circuits of size poly(n), then there is a deterministic sub-exponential time algorithm for polynomial identity testing of arithmetic circuits of depth Delta-5. This is incomparable to a beautiful result of Dvir et al. [SICOMP, 2009], who showed that super-polynomial lower bounds for depth Delta circuits for any explicit family of polynomials (of potentially high degree) imply sub-exponential time deterministic PIT for depth Delta-5 circuits of bounded individual degree. Thus, we remove the "bounded individual degree" condition in the work of Dvir et al. at the cost of strengthening the hardness assumption to hold for polynomials of low degree. The key technical ingredient of our proof is the following property of roots of polynomials computable by a bounded depth arithmetic circuit: if f(x_1, x_2, ..., x_n) and P(x_1, x_2, ..., x_n, y) are polynomials of degree d and r respectively, such that P can be computed by a circuit of size s and depth Delta and P(x_1, x_2, ..., x_n, f) equiv 0, then f can be computed by a circuit of size poly(n, s, r, d^{O(sqrt{d})}) and depth Delta + 3. In comparison, Dvir et al. showed that f can be computed by a circuit of depth Delta + 3 and size poly(n, s, r, d^{t}), where t is the degree of P in y. Thus, the size upper bound in the work of Dvir et al. is non-trivial when t is small but d could be large, whereas our size upper bound is non-trivial when d is small, but t could be large.
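
    For reference, the root-extraction property stated in the abstract can be written in display form (this is only a restatement of the claim above, with s, r, d, Delta as defined there):

    \[
    P(x_1,\dots,x_n,f) \equiv 0 \;\Longrightarrow\; f \text{ is computable in size } \mathrm{poly}\big(n, s, r, d^{O(\sqrt{d})}\big) \text{ and depth } \Delta + 3,
    \]
    whereas the corresponding bound of Dvir et al. is size \(\mathrm{poly}(n, s, r, d^{t})\) at the same depth, where \(t = \deg_y P\).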

    Plasma-type gelsolin in subarachnoid hemorrhage: novel biomarker today, therapeutic target tomorrow?

    There is growing interest in the potential neuroprotective properties of gelsolin. In particular, plasma-type gelsolin (pGSN) can ameliorate the deleterious inflammatory response by scavenging pro-inflammatory signals such as actin and lipopolysaccharide. In a recent issue of Critical Care, Pan and colleagues report an important association between pGSN and subarachnoid hemorrhage (SAH) disease severity, and find pGSN to be a novel and promising biomarker for SAH clinical outcome. Previous research shows that pGSN may be actively degraded by neurovascular proteases such as matrix metalloproteinases in the cerebrospinal fluid of SAH patients. Taken together, these results suggest that pGSN is not only a novel marker of SAH clinical outcome, but may also play an active mechanistic role in SAH and potentially serve as a future therapeutic target.