Tracking the l_2 Norm with Constant Update Time
The l_2 tracking problem is the task of obtaining a streaming algorithm that, given access to a stream of items a_1, a_2, a_3, ... from a universe [n], outputs at each time t an estimate of the l_2 norm of the frequency vector f^{(t)} in R^n (where f^{(t)}_i is the number of occurrences of item i in the stream up to time t). Previous work [Braverman-Chestnut-Ivkin-Nelson-Wang-Woodruff, PODS 2017] gave a streaming algorithm using the optimal space of O(epsilon^{-2}log(1/delta)) words and O(epsilon^{-2}log(1/delta)) update time to obtain an epsilon-accurate estimate with probability at least 1-delta. We give the first algorithm that achieves an update time of O(log(1/delta)), independent of the accuracy parameter epsilon, together with nearly optimal space of O(epsilon^{-2}log(1/delta)) words. Our algorithm is obtained using the Count Sketch of [Charikar-Chen-Farach-Colton, ICALP 2002].
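As a rough illustration of the cited primitive, the following minimal Python sketch maintains a Count Sketch and estimates ||f^{(t)}||_2 as the median over rows of each row's squared-counter sum. The class name, hash construction, and parameter choices are ours for illustration; this is the standard O(rows)-per-update variant, not the paper's constant-update-time refinement.

import random
import statistics

class CountSketchL2:
    """Hypothetical minimal Count Sketch for streaming l_2 estimation."""

    def __init__(self, rows=5, buckets=256, seed=0):
        # rows ~ O(log(1/delta)) controls the failure probability;
        # buckets ~ O(epsilon^{-2}) controls the accuracy.
        rng = random.Random(seed)
        self.rows, self.buckets = rows, buckets
        # Per-row salts stand in for independent hash functions.
        self.salts = [(rng.randrange(1 << 30), rng.randrange(1 << 30))
                      for _ in range(rows)]
        self.table = [[0] * buckets for _ in range(rows)]

    def update(self, item):
        # One signed bucket update per row, so O(rows) work per stream item.
        for r, (bucket_salt, sign_salt) in enumerate(self.salts):
            j = hash((bucket_salt, item)) % self.buckets
            sgn = 1 if hash((sign_salt, item)) % 2 == 0 else -1
            self.table[r][j] += sgn

    def estimate_l2(self):
        # Each row's sum of squared counters is an unbiased estimator of
        # ||f||_2^2; the median over rows boosts the success probability.
        return statistics.median(
            sum(c * c for c in row) for row in self.table) ** 0.5

sketch = CountSketchL2()
for item in [1, 2, 2, 3, 3, 3]:
    sketch.update(item)
print(sketch.estimate_l2())  # roughly sqrt(1 + 4 + 9) ~= 3.74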
The Computational Lens: from Quantum Physics to Neuroscience
Two transformative waves of computing have redefined the way we approach
science. The first wave came with the birth of the digital computer, which
enabled scientists to numerically simulate their models and analyze massive
datasets. This technological breakthrough led to the emergence of many
sub-disciplines bearing the prefix "computational" in their names. Currently,
we are in the midst of the second wave, marked by the remarkable advancements
in artificial intelligence. From predicting protein structures to classifying
galaxies, the scope of its applications is vast, and there can only be more
awaiting us on the horizon.
While these two waves influence scientific methodology at the instrumental
level, in this dissertation I will present the computational lens in science,
aimed at the conceptual level. Specifically, the central thesis posits that
computation serves as a convenient and mechanistic language for understanding
and analyzing information processing systems, offering the advantages of
composability and modularity.
This dissertation begins with an illustration of the blueprint of the
computational lens, supported by a review of relevant previous work.
Subsequently, I will present my own works in quantum physics and neuroscience
as concrete examples. In the concluding chapter, I will contemplate the
potential of applying the computational lens across various scientific fields,
in a way that can provide significant domain insights, and discuss potential
future directions.
Comment: PhD thesis, Harvard University, Cambridge, Massachusetts, USA, 2023. Some chapters report joint work.
Spoofing Linear Cross-Entropy Benchmarking in Shallow Quantum Circuits
The linear cross-entropy benchmark (Linear XEB) has been used as a test for
procedures simulating quantum circuits. Given a quantum circuit C with n
inputs and outputs and a purported simulator whose output is distributed
according to a distribution p over {0,1}^n, the linear XEB fidelity of the
simulator is F_C(p) = 2^n E_{x ~ p}[q_C(x)] - 1, where q_C(x) is the
probability that x is output from the distribution C|0^n>. A trivial
simulator (e.g., the uniform distribution) satisfies F_C(p) = 0, while
Google's noisy quantum simulation of a 53-qubit circuit C achieved a
fidelity value of (2.24 ± 0.21) x 10^{-3} (Arute et al., Nature'19).
In this work we give a classical randomized algorithm that for a given
circuit C of depth d with Haar random 2-qubit gates achieves in expectation
a fidelity value of Omega((n/L) * 15^{-d}) in running time poly(n) * 2^{O(L)}.
Here L is the size of the \emph{light cone} of C: the maximum number of
input bits that each output bit depends on. In particular, we obtain a
polynomial-time algorithm that achieves large fidelity of omega(1) for
depth O(sqrt(log n)) two-dimensional circuits. To our knowledge, this is
the first such result for two-dimensional circuits of super-constant depth.
Our results can be considered as evidence that fooling the linear XEB test
might be easier than achieving a full simulation of the quantum circuit.
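To make the benchmark concrete, here is a small Python sketch that computes the empirical linear XEB fidelity F_C(p) = 2^n E_{x ~ p}[q_C(x)] - 1 from a simulator's samples. Function and variable names are ours, and the dense ideal-probability array is an illustrative simplification; real benchmarks only estimate q_C on the sampled strings.

import numpy as np

def linear_xeb_fidelity(samples, ideal_probs):
    # samples: sampled bitstrings encoded as integers in [0, 2^n);
    # ideal_probs: length-2^n array with q_C(x) = |<x|C|0^n>|^2.
    ideal_probs = np.asarray(ideal_probs, dtype=float)
    n_qubits = int(np.log2(ideal_probs.size))
    mean_q = ideal_probs[np.asarray(samples, dtype=np.int64)].mean()
    return (2.0 ** n_qubits) * mean_q - 1.0

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(2 ** 5))             # stand-in for q_C, 5 qubits
print(linear_xeb_fidelity(rng.integers(0, 2 ** 5, 10_000), probs))     # ~ 0
print(linear_xeb_fidelity(rng.choice(2 ** 5, 10_000, p=probs), probs)) # > 0

The uniform ("trivial") sampler scores about 0, while sampling from q_C itself scores about 2^n * sum_x q_C(x)^2 - 1 > 0, matching the normalization above.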
Hardness vs Randomness for Bounded Depth Arithmetic Circuits
In this paper, we study the question of hardness-randomness tradeoffs for bounded depth arithmetic circuits. We show that if there is a family of explicit polynomials {f_n}, where f_n is of degree O(log^2 n / log^2 log n) in n variables, such that f_n cannot be computed by depth Delta arithmetic circuits of size poly(n), then there is a deterministic sub-exponential time algorithm for polynomial identity testing of arithmetic circuits of depth Delta-5.
This is incomparable to a beautiful result of Dvir et al. [SICOMP, 2009], where they showed that super-polynomial lower bounds for depth Delta circuits for any explicit family of polynomials (of potentially high degree) imply sub-exponential time deterministic PIT for depth Delta-5 circuits of bounded individual degree. Thus, we remove the "bounded individual degree" condition in the work of Dvir et al. at the cost of strengthening the hardness assumption to hold for polynomials of low degree.
The key technical ingredient of our proof is the following property of roots of polynomials computable by a bounded depth arithmetic circuit: if f(x_1, x_2, ..., x_n) and P(x_1, x_2, ..., x_n, y) are polynomials of degree d and r respectively, such that P can be computed by a circuit of size s and depth Delta and P(x_1, x_2, ..., x_n, f) equiv 0, then f can be computed by a circuit of size poly(n, s, r, d^{O(sqrt{d})}) and depth Delta + 3. In comparison, Dvir et al. showed that f can be computed by a circuit of depth Delta + 3 and size poly(n, s, r, d^{t}), where t is the degree of P in y. Thus, the size upper bound in the work of Dvir et al. is non-trivial when t is small but d could be large, whereas our size upper bound is non-trivial when d is small but t could be large.
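For context on what is being derandomized, the standard randomized PIT baseline is the Schwartz-Zippel test; a minimal Python sketch follows (the black-box interface and names are ours for illustration).

import random

def probably_identically_zero(poly, n_vars, prime=2**61 - 1, trials=20):
    # Schwartz-Zippel: a nonzero polynomial of degree d vanishes at a random
    # point of F_p^n with probability at most d/p, so any nonzero evaluation
    # is a definite witness and repeated zero evaluations give high confidence.
    for _ in range(trials):
        point = [random.randrange(prime) for _ in range(n_vars)]
        if poly(point) % prime != 0:
            return False   # witness: the polynomial is definitely nonzero
    return True            # identically zero with high probability

# (x + y)^2 - x^2 - 2xy - y^2 is identically zero; x*y - y is not.
print(probably_identically_zero(
    lambda v: (v[0] + v[1])**2 - v[0]**2 - 2*v[0]*v[1] - v[1]**2, 2))  # True
print(probably_identically_zero(lambda v: v[0]*v[1] - v[1], 2))       # False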
Plasma-type gelsolin in subarachnoid hemorrhage: novel biomarker today, therapeutic target tomorrow?
There is growing interest in the potential neuroprotective properties of gelsolin. In particular, plasma-type gelsolin (pGSN) can ameliorate the deleterious inflammatory response by scavenging pro-inflammatory signals such as actin and lipopolysaccharide. In a recent issue of Critical Care, Pan and colleagues report an important association between pGSN and subarachnoid hemorrhage (SAH) disease severity, and find pGSN to be a novel and promising biomarker for SAH clinical outcome. Previous research shows that pGSN may be actively degraded by neurovascular proteases such as matrix metalloproteinases in the cerebrospinal fluid of SAH patients. Taken together, these results suggest that pGSN is not only a novel marker of SAH clinical outcome, but may also play an active mechanistic role in SAH, and could potentially serve as a future therapeutic target.