Finite-Temperature Dynamics and Thermal Intraband Magnon Scattering in Haldane Spin-One Chains
The antiferromagnetic spin-one chain is arguably one of the most
fundamental quantum many-body systems, with symmetry-protected topological
order in the ground state. Here, we present results for its dynamical spin
structure factor at finite temperatures, based on a combination of exact
numerical diagonalization, matrix-product-state calculations and quantum Monte
Carlo simulations. Open finite chains exhibit a sub-gap band in the thermal
spectral functions, indicative of localized edge-states. Moreover, we observe
the thermal activation of a distinct low-energy continuum contribution to the
spin spectral function with an enhanced spectral weight at low momenta and its
upper threshold. This emerging thermal spectral feature of the Haldane spin-one
chain is shown to result from intra-band magnon scattering due to the thermal
population of the single-magnon branch, which features a large bandwidth-to-gap
ratio. These findings are discussed with respect to possible future studies on
spin-one chain compounds based on inelastic neutron scattering.
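To make the central quantity concrete, a finite-temperature dynamical spin structure factor can be computed by full exact diagonalization for a very short open spin-one chain. The following is a toy sketch, not the paper's method: the Heisenberg coupling J, the chain length, and the use of sine modes as momentum states for the open chain are illustrative assumptions.

```python
import numpy as np

# Spin-1 operators (3x3)
Sz = np.diag([1.0, 0.0, -1.0])
Sp = np.sqrt(2) * np.diag([1.0, 1.0], k=1)   # S^+
Sm = Sp.T                                     # S^-
Sx = (Sp + Sm) / 2
Sy = (Sp - Sm) / 2j

def site_op(op, i, N):
    """Embed a single-site operator op at site i of an N-site chain."""
    mats = [np.eye(3)] * N
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

N, J, beta = 4, 1.0, 2.0   # toy chain length, coupling, inverse temperature
H = sum(J * (site_op(Sx, i, N) @ site_op(Sx, i + 1, N)
             + site_op(Sy, i, N) @ site_op(Sy, i + 1, N)
             + site_op(Sz, i, N) @ site_op(Sz, i + 1, N)).real
        for i in range(N - 1))

E, V = np.linalg.eigh(H)
w = np.exp(-beta * (E - E[0]))
Z = w.sum()

# S^z(q) in open-chain sine modes, then thermal spectral weights:
# S(q, w) ~ (1/Z) sum_{m,n} e^{-beta E_m} |<n|S^z_q|m>|^2 delta(w - (E_n - E_m))
q = np.pi / (N + 1)
Szq = sum(np.sin(q * (i + 1)) * site_op(Sz, i, N) for i in range(N))
M = V.T @ Szq @ V                            # matrix elements <n|S^z_q|m>
weights = (w[None, :] / Z) * np.abs(M) ** 2  # thermal weight of m -> n
omegas = E[:, None] - E[None, :]             # transition energies E_n - E_m
```

Binning the pairs (omegas, weights) with a small Lorentzian broadening gives the thermal spectral function at momentum q; the thermally activated transitions between excited states are exactly the entries with m > 0.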
A Framework for Searching in Graphs in the Presence of Errors
We consider a problem of searching for an unknown target vertex t in a (possibly edge-weighted) graph. Each vertex-query points to a vertex v and the response either admits that v is the target or provides any neighbor s of v that lies on a shortest path from v to t. This model has been introduced for trees by Onak and Parys [FOCS 2006] and for general graphs by Emamjomeh-Zadeh et al. [STOC 2016]. In the latter, the authors provide algorithms for the error-less case and for the independent noise model (where each query independently receives an erroneous answer with known probability p<1/2 and a correct one with probability 1-p).
We study this problem under both adversarial errors and independent noise. First, we show an algorithm that needs at most (log_2 n)/(1 - H(r)) queries in the case of adversarial errors, where the adversary's error rate is bounded by a known constant r < 1/2 and H denotes the binary entropy function. Our algorithm is in fact a simplification of previous work, and our refinement lies in invoking an amortization argument. We then show that our algorithm, coupled with a Chernoff bound argument, yields an algorithm for the independent noise model that is both simpler and has asymptotically better query complexity than that of Emamjomeh-Zadeh et al. [STOC 2016].
Our approach has a wide range of applications. First, it improves and simplifies the Robust Interactive Learning framework proposed by Emamjomeh-Zadeh and Kempe [NIPS 2017]. Secondly, by performing an analogous analysis for edge-queries (where a query to an edge e returns its endpoint that is closer to the target), we recover as a special case a noisy binary search algorithm that is asymptotically optimal, matching the complexity of Feige et al. [SIAM J. Comput. 1994]. Thirdly, we improve upon and simplify an algorithm for searching unbounded domains due to Aslam and Dhagat [STOC 1991].
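The adversarial query bound (log_2 n)/(1 - H(r)) is easy to evaluate numerically. The following sketch illustrates the bound only, not the search algorithm itself; the function names are ours.

```python
import math

def binary_entropy(r):
    """H(r) = -r log2 r - (1 - r) log2 (1 - r), with H(0) = H(1) = 0."""
    if r in (0.0, 1.0):
        return 0.0
    return -r * math.log2(r) - (1 - r) * math.log2(1 - r)

def adversarial_query_bound(n, r):
    """Upper bound (log2 n)/(1 - H(r)) on the number of vertex-queries
    when at most an r-fraction of answers may be adversarial, r < 1/2."""
    assert 0 <= r < 0.5
    return math.log2(n) / (1 - binary_entropy(r))

# Error-free search over 1024 vertices needs log2(1024) = 10 queries;
# with a 10% adversarial error rate the budget grows by 1/(1 - H(0.1)).
print(adversarial_query_bound(1024, 0.0))   # 10.0
print(adversarial_query_bound(1024, 0.1))   # ~18.83
```

Note that the bound degenerates as r approaches 1/2, since H(r) tends to 1, consistent with the requirement r < 1/2.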
SOS Degree Reduction with Applications to Clustering and Robust Moment Estimation
Understanding the exact complexity and giving strong algorithmic guarantees for estimation and inference problems is one of the key challenges currently faced in computer science, especially in the high-dimensional setting. Over the past few years a flurry of work has surfaced which harnesses results and techniques previously applied to classical problems in theoretical computer science to attack these challenges. One particularly promising candidate is the use of the so-called sum-of-squares hierarchy which is a powerful tool predominantly used in the study of approximation algorithms.
It turns out that a great deal of the essential facts underlying many estimation problems, e.g., distributional assumptions, can be phrased as polynomial (in-)equalities. The sum-of-squares hierarchy is built on so-called sum-of-squares proofs which allow one to reason about these inequalities. Furthermore, they can be algorithmically exploited to yield efficient algorithms, i.e., running in time at most polynomial in the size of the input. The exact running time is mostly determined by the degree used in these inequalities and hence, reducing this as much as possible is key to finding fast algorithms.
In this thesis, we develop a general approach to significantly reduce the degree of sum-of-squares proofs by introducing new variables. To illustrate the power of our approach, we use it to speed up previous algorithms based on sum-of-squares for two important estimation problems, clustering and robust moment estimation. The resulting algorithms have significantly faster running times than the previous state-of-the-art. We further show that some of our guarantees are information-theoretically optimal for the cases we consider. Moreover, we give an improved version of a quite general theorem by Raghavendra et al. allowing us to easily turn the existence of sum-of-squares proofs into efficient estimation algorithms. Roughly speaking, given a sample of n points in dimension d, our algorithms can exploit order-l moments in time d^O(l) * n^O(1), whereas a naive implementation requires time (d * n)^O(l). Since for the aforementioned applications the typical sample size is d^Θ(l), our framework improves running times from d^O(l^2) to d^O(l). We hope that our approach can inform the design of future algorithms based on sum-of-squares.
Optimal SQ Lower Bounds for Learning Halfspaces with Massart Noise
We give tight statistical query (SQ) lower bounds for learning halfspaces in the presence of Massart noise. In particular, suppose that all labels are corrupted with probability at most \eta. We show that for arbitrary \eta in [0,1/2] every SQ algorithm achieving misclassification error better than \eta requires queries of superpolynomial accuracy or at least a superpolynomial number of queries. Further, this continues to hold even if the information-theoretically optimal error OPT is as small as exp(−log^c(d)), where d is the dimension and 0 < c < 1 is an arbitrary absolute constant, and an overwhelming fraction of examples are noiseless. Our lower bound matches known polynomial time algorithms, which are also implementable in the SQ framework. Previously, such lower bounds only ruled out algorithms achieving error OPT+\epsilon or error better than \Omega(\eta) or, if \eta is close to 1/2, error \eta - o(1), where the term o(1) is constant in d but tends to 0 as \eta approaches 1/2. As a consequence, we also show that achieving misclassification error better than 1/2 in the (A,\alpha)-Tsybakov model is SQ-hard for A constant and \alpha bounded away from 1.
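For concreteness, the Massart noise model can be simulated as follows: each label of a hidden halfspace is flipped independently with a point-dependent probability eta(x) <= eta. This is a hypothetical setup for illustration only; the particular choice of eta(x), concentrated near the decision boundary, is ours.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, eta = 5, 2000, 0.2      # dimension, sample size, Massart noise bound

w = rng.normal(size=d)
w /= np.linalg.norm(w)        # hidden halfspace x -> sign(<w, x>)

X = rng.normal(size=(n, d))
clean = np.sign(X @ w)

# Massart noise: flip each label independently with probability
# eta(x) <= eta; here eta(x) is (arbitrarily) largest near the boundary.
flip_prob = eta * np.exp(-np.abs(X @ w))
flips = rng.random(n) < flip_prob
labels = np.where(flips, -clean, clean)

# The true halfspace still misclassifies at most an ~eta fraction.
err = np.mean(labels != clean)
```

The hardness result above says that, given such (X, labels), no SQ algorithm with polynomial resources can beat misclassification error \eta, even though the target halfspace w itself achieves error well below \eta here.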
Fast algorithm for overcomplete order-3 tensor decomposition
We develop the first fast spectral algorithm to decompose a random third-order tensor of rank up to O(d^{3/2}/polylog(d)). Our algorithm involves only simple linear algebra operations and can recover all components in time O(d^{6.05}) under the current matrix multiplication time. Prior to this work, comparable guarantees could only be achieved via sum-of-squares [Ma, Shi, Steurer 2016]. In contrast, fast algorithms [Hopkins, Schramm, Shi, Steurer 2016] could only decompose tensors of rank at most O(d^{4/3}/polylog(d)). Our algorithmic result rests on two key ingredients: a clean lifting of the third-order tensor to a sixth-order tensor, which can be expressed in the language of tensor networks, and a careful decomposition of the tensor network into a sequence of rectangular matrix multiplications, which allows for a fast implementation of the algorithm.
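As background for the "simple linear algebra" theme, the classical undercomplete case (rank r <= d) can already be solved by Jennrich-style simultaneous diagonalization. The sketch below is that textbook baseline, not the paper's overcomplete algorithm, and assumes an exact symmetric rank-r tensor with generic components.

```python
import numpy as np

rng = np.random.default_rng(2)
d, r = 8, 5                        # dimension and rank, r <= d (undercomplete)
A = rng.normal(size=(d, r))        # true components a_1, ..., a_r (columns)

# Symmetric order-3 tensor T = sum_k a_k (x) a_k (x) a_k
T = np.einsum('ik,jk,lk->ijl', A, A, A)

# Jennrich: contract the last mode with two random vectors ...
u, v = rng.normal(size=d), rng.normal(size=d)
Tu = np.einsum('ijl,l->ij', T, u)  # = A diag(A^T u) A^T
Tv = np.einsum('ijl,l->ij', T, v)  # = A diag(A^T v) A^T

# ... then the eigenvectors of Tu pinv(Tv) are the components (up to scale),
# since Tu pinv(Tv) = A diag((A^T u)/(A^T v)) A^+ with distinct eigenvalues.
vals, vecs = np.linalg.eig(Tu @ np.linalg.pinv(Tv))
recovered = np.real(vecs[:, np.argsort(-np.abs(vals))[:r]])
```

This approach breaks down in the overcomplete regime r > d targeted by the abstract, which is where the lifting to a sixth-order tensor and its tensor-network contraction become necessary.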