
    Divide-and-Conquer Method for Instanton Rate Theory

    Ring-polymer instanton theory has been developed to simulate the quantum dynamics of molecular systems at low temperatures. Chemical reaction rates can be obtained by locating the dominant tunneling pathway and analyzing fluctuations around it. In the standard method, calculating the fluctuation terms involves the diagonalization of a large matrix, which can be infeasible for large systems with a high number of ring-polymer beads. Here we present a method for computing the instanton fluctuations with a large reduction in computational scaling. This method is applied to three reactions described by fitted, analytic and on-the-fly ab initio potential-energy surfaces and is shown to be numerically stable for the calculation of thermal reaction rates even at very low temperature.
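    As a rough illustration of the computational point (not the paper's divide-and-conquer scheme), the sketch below builds a toy ring-polymer-style Hessian and compares two ways of obtaining the log-determinant that enters a fluctuation prefactor: full diagonalization versus an LU-based slogdet that never forms the spectrum explicitly. The matrix construction, sizes and spring constant are all assumptions made purely for illustration; both routes shown here still scale cubically, the point being only that the determinant, rather than the full spectrum, is the quantity actually needed.

```python
# Illustrative sketch only (not the paper's divide-and-conquer algorithm):
# the fluctuation prefactor needs det(H), which an LU factorization
# (np.linalg.slogdet) provides without explicitly diagonalizing H.
import numpy as np

def ring_polymer_hessian(n_beads, n_dof, spring_k=1.0, seed=None):
    """Toy cyclic ring-polymer-style Hessian: symmetric positive-definite
    physical blocks on the diagonal plus harmonic spring couplings between
    neighbouring beads (all numerical values are made up)."""
    rng = np.random.default_rng(seed)
    dim = n_beads * n_dof
    H = np.zeros((dim, dim))
    for b in range(n_beads):
        block = rng.standard_normal((n_dof, n_dof))
        block = block @ block.T + n_dof * np.eye(n_dof)   # positive definite
        s = slice(b * n_dof, (b + 1) * n_dof)
        H[s, s] += block + 2.0 * spring_k * np.eye(n_dof)
        nb = (b + 1) % n_beads
        t = slice(nb * n_dof, (nb + 1) * n_dof)
        H[s, t] -= spring_k * np.eye(n_dof)
        H[t, s] -= spring_k * np.eye(n_dof)
    return H

H = ring_polymer_hessian(n_beads=64, n_dof=6, seed=0)

# Route 1: full diagonalization, expensive in time and memory for large beads/dof.
logdet_eig = np.sum(np.log(np.linalg.eigvalsh(H)))

# Route 2: LU-based log-determinant, no explicit eigenvalues needed.
sign, logdet_lu = np.linalg.slogdet(H)

print(logdet_eig, logdet_lu)   # agree to numerical precision
```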

    Stable Recovery Of Sparse Vectors From Random Sinusoidal Feature Maps

    Random sinusoidal features are a popular approach for speeding up kernel-based inference on large datasets. Prior to the inference stage, the approach performs dimensionality reduction by first multiplying each data vector by a random Gaussian matrix and then computing an element-wise sinusoid. Theoretical analysis shows that collecting a sufficient number of such features can be reliably used for subsequent inference in kernel classification and regression. In this work, we demonstrate that with a mild increase in the dimension of the embedding, it is also possible to reconstruct the data vector from such random sinusoidal features, provided that the underlying data is sparse enough. In particular, we propose a numerically stable algorithm for reconstructing the data vector given the nonlinear features, and analyze its sample complexity. Our algorithm can be extended to other types of structured inverse problems, such as demixing a pair of sparse (but incoherent) vectors. We support the efficacy of our approach via numerical experiments.
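    A minimal sketch of the kind of pipeline described above, assuming the feature map y = sin(Ax) with a Gaussian A and a sparse input of small norm. The reconstruction step shown here (arcsin linearization on the principal branch followed by a plain orthogonal matching pursuit) is an illustrative stand-in rather than the authors' algorithm, and all dimensions and scalings are assumptions.

```python
# Illustrative sketch (not the authors' algorithm): map a sparse vector through
# y = sin(A x) with Gaussian A, then recover x by inverting the sinusoid on its
# principal branch and running a simple orthogonal matching pursuit (OMP).
import numpy as np

rng = np.random.default_rng(1)

n, m, k = 200, 60, 5              # ambient dim, feature count, sparsity (assumed)
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = 0.05 * rng.standard_normal(k)   # small amplitudes keep |A @ x| < pi/2

A = rng.standard_normal((m, n))   # random Gaussian matrix
y = np.sin(A @ x)                 # element-wise sinusoidal feature map

# Linearize: if every entry of A @ x lies in (-pi/2, pi/2), arcsin(y) == A @ x.
z = np.arcsin(y)

def omp(A, z, k):
    """Greedy OMP: pick the column most correlated with the residual, k times."""
    residual, idx = z.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], z, rcond=None)
        residual = z - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, z, k)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```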

    Detectability thresholds and optimal algorithms for community structure in dynamic networks

    We study the fundamental limits on learning latent community structure in dynamic networks. Specifically, we study dynamic stochastic block models where nodes change their community membership over time, but where edges are generated independently at each time step. In this setting (which is a special case of several existing models), we derive the detectability threshold exactly, as a function of the rate of change and the strength of the communities. Below this threshold, we claim that no algorithm can identify the communities better than chance. We then give two algorithms that are optimal in the sense that they succeed all the way down to this limit. The first uses belief propagation (BP), which gives asymptotically optimal accuracy, and the second is a fast spectral clustering algorithm based on linearizing the BP equations. We verify our analytic and algorithmic results via numerical simulation, and close with a brief discussion of extensions and open questions.
    Comment: 9 pages, 3 figures
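    To make the model concrete, the sketch below draws a two-community dynamic stochastic block model in which each node resamples its label with a small probability per step and edges are redrawn independently at every snapshot, then labels nodes with a naive single-snapshot spectral step. The parameter values and the spectral baseline are illustrative assumptions; they are not the BP or linearized-BP algorithms from the paper.

```python
# Toy dynamic SBM (illustrative parameters, not taken from the paper): two groups,
# each node resamples its label uniformly with probability eta at every step, and
# edges are redrawn independently at each snapshot. Communities are then guessed
# from the top eigenvector of one snapshot's mean-centered adjacency matrix.
import numpy as np

rng = np.random.default_rng(2)
n, T, eta = 400, 5, 0.1           # nodes, snapshots, label-change rate (assumed)
c_in, c_out = 30.0, 5.0           # average within/between-group degrees (assumed)
p_in, p_out = c_in / n, c_out / n

labels = rng.integers(0, 2, size=n)          # initial community assignment
snapshots, label_history = [], []
for t in range(T):
    # Each node independently resamples its community with probability eta.
    flip = rng.random(n) < eta
    labels = np.where(flip, rng.integers(0, 2, size=n), labels)
    label_history.append(labels.copy())

    same = labels[:, None] == labels[None, :]
    P = np.where(same, p_in, p_out)
    A = (rng.random((n, n)) < P).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                               # symmetric, no self-loops
    snapshots.append(A)

# Naive baseline: sign of the top eigenvector of one mean-centered snapshot.
A0 = snapshots[-1]
B = A0 - A0.mean()                            # crude removal of the mean degree
eigvals, eigvecs = np.linalg.eigh(B)
guess = (eigvecs[:, -1] > 0).astype(int)

truth = label_history[-1]
overlap = max(np.mean(guess == truth), np.mean(guess != truth))
print("fraction of nodes labelled correctly:", overlap)
```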