Pattern recognition in the nucleation kinetics of non-equilibrium self-assembly
Inspired by biology’s most sophisticated computer, the brain, neural networks constitute a profound reformulation of computational principles. Analogous high-dimensional, highly interconnected computational architectures also arise within information-processing molecular systems inside living cells, such as signal transduction cascades and genetic regulatory networks. Might collective modes analogous to neural computation be found more broadly in other physical and chemical processes, even those that ostensibly play non-information-processing roles? Here we examine nucleation during self-assembly of multicomponent structures, showing that high-dimensional patterns of concentrations can be discriminated and classified in a manner similar to neural network computation. Specifically, we design a set of 917 DNA tiles that can self-assemble in three alternative ways such that competitive nucleation depends sensitively on the extent of colocalization of high-concentration tiles within the three structures. The system was trained in silico to classify a set of 18 grayscale 30 × 30 pixel images into three categories. Experimentally, fluorescence and atomic force microscopy measurements during and after a 150-hour anneal established that all trained images were correctly classified, whereas a test set of image variations probed the robustness of the results. Although slow compared to previous biochemical neural networks, our approach is compact, robust and scalable. Our findings suggest that ubiquitous physical phenomena, such as nucleation, may hold powerful information-processing capabilities when they occur within high-dimensional multicomponent systems.
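The winner-take-all flavor of classification by competitive nucleation can be caricatured in a few lines: represent each target structure by the set of tile indices it uses, score an input concentration pattern by how much concentration falls on each structure's tiles, and let the highest-scoring structure "win", as its nucleation would outpace the others. This is purely an illustrative analogy, not the paper's thermodynamic model; the class names, tile sets, and concentrations below are invented.

```python
# Toy winner-take-all classifier mimicking competitive nucleation:
# the class whose tiles are most co-localized with high concentrations wins.
def classify(concentrations, class_tiles):
    # overlap score: total concentration landing on each class's tiles
    scores = {name: sum(concentrations[t] for t in tiles)
              for name, tiles in class_tiles.items()}
    return max(scores, key=scores.get)

# hypothetical example: three "structures" over nine tile species
classes = {"A": {0, 1, 2}, "B": {3, 4, 5}, "C": {6, 7, 8}}
pattern = [0.9, 0.8, 0.7, 0.1, 0.2, 0.1, 0.0, 0.1, 0.2]  # high on A's tiles
```

Calling `classify(pattern, classes)` picks the structure with the largest overlap score, here "A".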
Classical and quantum algorithms for scaling problems
This thesis is concerned with scaling problems, which have a plethora of connections to different areas of mathematics, physics and computer science. Although many structural aspects of these problems are understood by now, we only know how to solve them efficiently in special cases.

We give new algorithms for non-commutative scaling problems with complexity guarantees that match the prior state of the art. To this end, we extend the well-known (self-concordance based) interior-point method (IPM) framework to Riemannian manifolds, motivated by its success in the commutative setting. Moreover, the IPM framework does not obviously suffer from the same obstructions to efficiency as previous methods. It also yields the first high-precision algorithms for other natural geometric problems in non-positive curvature.

For the (commutative) problems of matrix scaling and balancing, we show that quantum algorithms can outperform the (already very efficient) state-of-the-art classical algorithms. Their time complexity can be sublinear in the input size; in certain parameter regimes they are also optimal, whereas in others we show no quantum speedup over the classical methods is possible. Along the way, we provide improvements over the long-standing state of the art for searching for all marked elements in a list, and computing the sum of a list of numbers.

We identify a new application in the context of tensor networks for quantum many-body physics. We define a computable canonical form for uniform projected entangled pair states (as the solution to a scaling problem), circumventing previously known undecidability results. We also show, by characterizing the invariant polynomials, that the canonical form is determined by evaluating the tensor network contractions on networks of bounded size.
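For intuition on the commutative case, the classical Sinkhorn iteration already solves the most basic matrix-scaling problem: alternately normalizing the rows and columns of a positive matrix converges to a doubly stochastic scaling D1·A·D2. This is a minimal sketch of that textbook baseline, not the quantum or interior-point algorithms developed in the thesis.

```python
# Sinkhorn iteration for matrix scaling: repeatedly normalize rows, then
# columns, of a positive matrix; the iterates approach a doubly
# stochastic matrix (all row and column sums equal to 1).
def sinkhorn(A, iters=500):
    n = len(A)
    B = [row[:] for row in A]
    for _ in range(iters):
        for i in range(n):                          # normalize rows
            s = sum(B[i])
            B[i] = [x / s for x in B[i]]
        for j in range(n):                          # normalize columns
            s = sum(B[i][j] for i in range(n))
            for i in range(n):
                B[i][j] /= s
    return B

B = sinkhorn([[1.0, 2.0], [3.0, 4.0]])
```

For strictly positive input the iteration converges; detecting (non-)convergence for matrices with zero entries is where the structural theory of scaling begins.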
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Slopes of modular forms and geometry of eigencurves
Under a stronger genericity condition, we prove the local analogue of the ghost conjecture of Bergdall and Pollack. As applications, we deduce in this case (a) a folklore conjecture of Breuil--Buzzard--Emerton on the crystalline slopes of Kisin's crystabelian deformation spaces, (b) Gouvea's conjecture on slopes of modular forms, and (c) the finiteness of irreducible components of the eigencurve. In addition, applying combinatorial arguments by Bergdall and Pollack, and by Ren, we deduce as corollaries, in the reducible and strongly generic case, (d) the Gouvea--Mazur conjecture, (e) a variant of Gouvea's conjecture on slope distributions, and (f) a refined version of Coleman's spectral halo conjecture.
Comment: 97 pages; comments are welcome.
A note on the computational complexity of the moment-SOS hierarchy for polynomial optimization
The moment-sum-of-squares (moment-SOS) hierarchy is one of the most
celebrated and widely applied methods for approximating the minimum of an
n-variate polynomial over a feasible region defined by polynomial
(in)equalities. A key feature of the hierarchy is that, at a fixed level, it
can be formulated as a semidefinite program of size polynomial in the number of
variables n. Although this suggests that it may therefore be computed in
polynomial time, this is not necessarily the case. Indeed, as O'Donnell (2017)
and later Raghavendra & Weitz (2017) show, there exist examples where the
sos-representations used in the hierarchy have exponential bit-complexity. We
study the computational complexity of the moment-SOS hierarchy, complementing
and expanding upon earlier work of Raghavendra & Weitz (2017). In particular,
we establish algebraic and geometric conditions under which polynomial-time
computation is guaranteed to be possible.
Comment: 10 pages.
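The flavor of the hierarchy's first level can be seen in a univariate toy (ours, not an example from the paper): to bound p(x) = x² + 2x + 2 from below, find the largest λ such that p − λ is a sum of squares. In the monomial basis (1, x), p − λ is SOS exactly when the Gram matrix [[2 − λ, 1], [1, 1]] is positive semidefinite, which for a 2×2 matrix reduces to nonnegative diagonal and determinant.

```python
# Level-1 sum-of-squares lower bound for p(x) = x^2 + 2x + 2.
# p - lam = [1, x] Q [1, x]^T with Q = [[2 - lam, 1], [1, 1]];
# Q is PSD iff 2 - lam >= 0 and det Q = (2 - lam) - 1 >= 0, i.e. lam <= 1.
def is_psd_2x2(a, b, c):                  # matrix [[a, b], [b, c]]
    return a >= 0 and c >= 0 and a * c - b * b >= 0

def sos_bound(lo=-10.0, hi=10.0, steps=60):
    # binary search for the largest lam with p - lam SOS
    for _ in range(steps):
        mid = (lo + hi) / 2
        if is_psd_2x2(2 - mid, 1, 1):
            lo = mid
        else:
            hi = mid
    return lo

lam = sos_bound()
```

Here the SOS bound is tight: lam ≈ 1 = p(−1), the true minimum. In general each level is a semidefinite program, and the bit-complexity subtleties discussed above concern the size of exact certificates, not this numerical picture.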
Exploiting Neighborhood Interference with Low Order Interactions under Unit Randomized Design
Network interference, where the outcome of an individual is affected by the
treatment assignment of those in their social network, is pervasive in
real-world settings. However, it poses a challenge to estimating causal
effects. We consider the task of estimating the total treatment effect (TTE),
or the difference between the average outcomes of the population when everyone
is treated versus when no one is, under network interference. Under a Bernoulli
randomized design, we provide an unbiased estimator for the TTE when network
interference effects are constrained to low order interactions among neighbors
of an individual. We make no assumptions on the graph other than bounded
degree, allowing for well-connected networks that may not be easily clustered.
We derive a bound on the variance of our estimator and show in simulated
experiments that it performs well compared with standard estimators for the
TTE. We also derive a minimax lower bound on the mean squared error of our
estimator which suggests that the difficulty of estimation can be characterized
by the degree of interactions in the potential outcomes model. We also prove
that our estimator is asymptotically normal under boundedness conditions on the
network degree and potential outcomes model. Central to our contribution is a
new framework for balancing model flexibility and statistical complexity as
captured by this low order interactions structure.
Comment: 42 pages including citations and appendix, 2 figures (total of 12 subfigures).
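A toy simulation makes the setting concrete. Under assumed potential outcomes that depend only on a unit's own treatment and its neighbors' treatments, the Horvitz-Thompson estimator with full-neighborhood exposure is unbiased for the TTE under a Bernoulli design. This is a standard textbook baseline for illustration only, not the paper's low-order-interactions estimator; the ring graph and outcome coefficients are invented.

```python
# Exact unbiasedness check on a small ring: enumerate all Bernoulli(1/2)
# assignments and average the Horvitz-Thompson estimator with
# full-closed-neighborhood exposure.
from itertools import product

n, p = 8, 0.5
nbrs = [((i - 1) % n, (i + 1) % n) for i in range(n)]   # ring graph

def outcome(i, z):
    # hypothetical model: own treatment plus neighbors' average treatment
    a, b = nbrs[i]
    return 1.0 + 2.0 * z[i] + 1.5 * (z[a] + z[b]) / 2

# total treatment effect: everyone treated vs. no one treated
tte = sum(outcome(i, (1,) * n) - outcome(i, (0,) * n) for i in range(n)) / n

def ht_estimate(z):
    est = 0.0
    for i in range(n):
        a, b = nbrs[i]
        treated = z[i] and z[a] and z[b]          # closed nbhd fully treated
        control = not (z[i] or z[a] or z[b])      # closed nbhd fully control
        est += outcome(i, z) * (treated / p**3 - control / (1 - p)**3)
    return est / n

# exact design expectation over all 2^n assignments
exp_ht = sum(ht_estimate(z) for z in product((0, 1), repeat=n)) / 2**n
```

The expectation matches the TTE exactly, but the estimator's weights blow up with degree; exploiting low-order interaction structure, as in the paper, is what tames the variance.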
Polynomial Identity Testing and the Ideal Proof System: PIT is in NP if and only if IPS can be p-simulated by a Cook-Reckhow proof system
The Ideal Proof System (IPS) of Grochow & Pitassi (FOCS 2014, J. ACM, 2018)
is an algebraic proof system that uses algebraic circuits to refute the
solvability of unsatisfiable systems of polynomial equations. One potential
drawback of IPS is that verifying an IPS proof is only known to be doable using
Polynomial Identity Testing (PIT), which is solvable by a randomized algorithm,
but whose derandomization, even into NSUBEXP, is equivalent to strong lower
bounds. However, the circuits that are used in IPS proofs are not arbitrary,
and it is conceivable that one could get around general PIT by leveraging some
structure in these circuits. This proposal may be even more tempting when IPS
is used as a proof system for Boolean Unsatisfiability, where the equations
themselves have additional structure.
Our main result is that, on the contrary, one cannot get around PIT as above:
we show that IPS, even as a proof system for Boolean Unsatisfiability, can be
p-simulated by a deterministically verifiable (Cook-Reckhow) proof system if
and only if PIT is in NP. We use our main result to propose a potentially new
approach to derandomizing PIT into NP.
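The randomized algorithm for PIT referred to above is the classical Schwartz-Zippel test: evaluate the circuit at a random point of a large field, since a nonzero polynomial of degree d vanishes at a uniformly random point with probability at most d/|F|. A minimal sketch, with two hypothetical example circuits:

```python
# Schwartz-Zippel polynomial identity testing over the prime field F_P.
import random

P = (1 << 31) - 1  # Mersenne prime

def probably_zero(circuit, nvars, trials=10, rng=random.Random(0)):
    for _ in range(trials):
        # sample nonzero coordinates uniformly from F_P
        point = [rng.randrange(1, P) for _ in range(nvars)]
        if circuit(*point) % P != 0:
            return False          # a witness point: certainly nonzero
    return True                   # identically zero with high probability

zero_pi  = lambda x, y: (x + y) ** 2 - x * x - 2 * x * y - y * y  # == 0
nonzero_pi = lambda x, y: (x + y) ** 2 - x * x - y * y            # == 2xy
```

A "nonzero" answer is always correct; only the "zero" answer is probabilistic, which is exactly why putting PIT in NP requires a short certificate rather than random evaluations.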
Space-Efficient Parameterized Algorithms on Graphs of Low Shrubdepth
Dynamic programming on various graph decompositions is one of the most
fundamental techniques used in parameterized complexity. Unfortunately, even if
we consider concepts as simple as path or tree decompositions, such dynamic
programming uses space that is exponential in the decomposition's width, and
there are good reasons to believe that this is necessary. However, it has been
shown that in graphs of low treedepth it is possible to design algorithms which
achieve polynomial space complexity without requiring worse time complexity
than their counterparts working on tree decompositions of bounded width. Here,
treedepth is a graph parameter that, intuitively speaking, takes into account
both the depth and the width of a tree decomposition of the graph, rather than
the width alone.
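The depth-space connection can be seen in miniature with ordinary dynamic programming on a tree, a far simpler setting than the decompositions above and purely our illustration: solving Maximum Independent Set by recursion keeps one constant-size table per node on the current root-to-leaf path, so working space scales with the tree's depth rather than with a decomposition's width.

```python
# Maximum independent set on a rooted tree by bottom-up DP.
# Each node contributes a pair (best size excluding v, best size including v);
# only the tables along the active recursion path are live at once.
def max_independent_set(children, root=0):
    def dp(v):
        out_v, in_v = 0, 1
        for c in children.get(v, []):
            c_out, c_in = dp(c)
            out_v += max(c_out, c_in)   # child free to be in or out
            in_v += c_out               # v included forces children out
        return out_v, in_v
    return max(dp(root))

# a path 0-1-2-3-4: the optimum picks {0, 2, 4}
tree = {0: [1], 1: [2], 2: [3], 3: [4]}
```

For a shallow tree the live tables stay few, which is the intuition behind trading exponential-in-width tables for depth-bounded recursion.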
Motivated by the above, we consider graphs that admit clique expressions with bounded depth and label count, or equivalently, graphs of low shrubdepth (sd). Here, sd is a bounded-depth analogue of cliquewidth, in the same way as treedepth (td) is a bounded-depth analogue of treewidth. We show that also in this setting, bounding the depth of the decomposition is a deciding factor for improving the space complexity. Precisely, we prove that on n-vertex graphs equipped with a tree-model (a decomposition notion underlying sd) of depth d and using k labels, we can solve Independent Set, Max Cut, and, via a randomized algorithm, Dominating Set, within time bounds parameterized by d and k and with space complexity polynomial in the input. We also establish a lower bound, conditional on a certain assumption about the complexity of Longest Common Subsequence, which shows that at least in the case of Independent Set the exponent of the parametric factor in the time complexity cannot remain bounded if one wishes to keep the space complexity polynomial.
Comment: Conference version to appear at the European Symposium on Algorithms (ESA 2023).
A top-down approach to algebraic renormalization in regularity structures based on multi-indices
We provide an algebraic framework to describe renormalization in regularity structures based on multi-indices for a large class of semi-linear stochastic PDEs. This framework is "top-down", in the sense that we postulate the form of the counterterm and use the renormalized equation to build a canonical smooth model for it. The core of the construction is a generalization of the Hopf algebra of derivations in [LOT23], which is extended beyond the structure group to describe the model equation via an exponential map: this allows us to implement a renormalization procedure which resembles the preparation-map approach in our context.
Comment: 65 pages.
Non-perturbative topological string theory on compact Calabi-Yau 3-folds
We obtain analytic and numerical results for the non-perturbative amplitudes
of topological string theory on arbitrary, compact Calabi-Yau manifolds. Our
approach is based on the theory of resurgence and extends previous special
results to the more general case. In particular, we obtain explicit
trans-series solutions of the holomorphic anomaly equations. Our results
predict the all orders, large genus asymptotics of the topological string free
energies, which we test in detail against high genus perturbative series
obtained recently in the compact case. We also provide additional evidence that
the Stokes constants appearing in the resurgent structure are closely related
to integer BPS invariants.
Comment: 85 pages, 16 figures, 15 tables.