
    Metropolis Integration Schemes for Self-Adjoint Diffusions

    We present explicit methods for simulating diffusions whose generator is self-adjoint with respect to a known (but possibly not normalizable) density. These methods exploit this property by combining an optimized Runge-Kutta algorithm with a Metropolis-Hastings Monte Carlo scheme. The resulting numerical integration scheme is shown to be weakly accurate at finite noise and to gain higher-order accuracy in the small-noise limit. It also makes it possible to avoid explicitly computing certain terms in the equation, such as the divergence of the mobility tensor, which can be tedious to calculate. Finally, the scheme is shown to be ergodic with respect to the exact equilibrium probability distribution of the diffusion when it exists. These results are illustrated on several examples, including a Brownian dynamics simulation of DNA in a solvent. In this example, the proposed scheme accurately computes dynamics at time step sizes an order of magnitude (or more) larger than those permitted by commonly used explicit predictor-corrector schemes.

    Comment: 54 pages, 8 figures. To appear in MM.
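The Metropolization idea this abstract describes can be sketched with the simplest member of the family, the standard Metropolis-adjusted Langevin algorithm (MALA) on a 1D Gaussian target. This is a generic illustration, not the paper's optimized Runge-Kutta proposal; target, step size, and chain length are all assumptions:

```python
import math
import random

def mala_step(x, grad_log_pi, log_pi, h, rng):
    """One Metropolis-adjusted Langevin step for dX = grad log pi(X) dt + sqrt(2) dW."""
    # Euler-Maruyama proposal
    y = x + h * grad_log_pi(x) + math.sqrt(2.0 * h) * rng.gauss(0.0, 1.0)

    def log_q(b, a):
        # log density (up to a constant) of proposing b from a
        mean = a + h * grad_log_pi(a)
        return -((b - mean) ** 2) / (4.0 * h)

    # Metropolis-Hastings acceptance ratio
    log_alpha = log_pi(y) - log_pi(x) + log_q(x, y) - log_q(y, x)
    if math.log(rng.random()) < log_alpha:
        return y, True
    return x, False  # reject: staying put is what preserves the exact equilibrium

# Hypothetical target: standard Gaussian, pi(x) proportional to exp(-x^2/2)
log_pi = lambda x: -0.5 * x * x
grad_log_pi = lambda x: -x

rng = random.Random(0)
x, accepts, samples = 0.0, 0, []
for _ in range(20000):
    x, ok = mala_step(x, grad_log_pi, log_pi, h=0.5, rng=rng)
    accepts += ok
    samples.append(x)

mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))
```

The accept/reject step is the part that, as the abstract notes, makes the chain ergodic with respect to the exact equilibrium distribution even though the proposal alone is only an approximate integrator.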

    Static Data Structure Lower Bounds Imply Rigidity

    We show that static data structure lower bounds in the group (linear) model imply semi-explicit lower bounds on matrix rigidity. In particular, we prove that an explicit lower bound of $t \geq \omega(\log^2 n)$ on the cell-probe complexity of linear data structures in the group model, even against arbitrarily small linear space ($s = (1+\varepsilon)n$), would already imply a semi-explicit ($\mathbf{P}^{NP}$) construction of rigid matrices with significantly better parameters than the current state of the art (Alon, Panigrahy and Yekhanin, 2009). Our results further assert that polynomial ($t \geq n^{\delta}$) data structure lower bounds against near-optimal space would imply super-linear circuit lower bounds for log-depth linear circuits (a four-decade open question). In the succinct space regime ($s = n + o(n)$), we show that any improvement on current cell-probe lower bounds in the linear model would also imply new rigidity bounds. Our results rely on a new connection between the "inner" and "outer" dimensions of a matrix (Paturi and Pudlák, 2006), and on a new reduction from worst-case to average-case rigidity, which is of independent interest.
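For readers unfamiliar with rigidity, the definition can be illustrated by brute force on tiny matrices over GF(2): $R_A(r)$ is the minimum number of entries one must change in $A$ to bring its rank down to at most $r$. This toy sketch is purely illustrative and unrelated to the paper's reductions:

```python
from itertools import combinations

def rank_gf2(rows, n):
    """Rank over GF(2); each row is an n-bit integer bitmask."""
    basis = [0] * n  # basis[i] holds a vector with leading bit i, or 0
    rank = 0
    for r in rows:
        cur = r
        for i in reversed(range(n)):
            if (cur >> i) & 1:
                if basis[i]:
                    cur ^= basis[i]  # eliminate leading bit
                else:
                    basis[i] = cur
                    rank += 1
                    break
    return rank

def rigidity_gf2(rows, n, r):
    """Smallest number of entry flips bringing rank(A) <= r (brute force)."""
    positions = [(i, j) for i in range(n) for j in range(n)]
    for k in range(n * n + 1):
        for flips in combinations(positions, k):
            changed = list(rows)
            for (i, j) in flips:
                changed[i] ^= 1 << j  # flip entry (i, j) over GF(2)
            if rank_gf2(changed, n) <= r:
                return k
    return n * n

# Example: the 3x3 identity matrix
I3 = [0b001, 0b010, 0b100]
print(rigidity_gf2(I3, 3, 3), rigidity_gf2(I3, 3, 2), rigidity_gf2(I3, 3, 1))
```

The point of the paper's results is that matrices where this quantity stays large for high target ranks are easy to find by counting but notoriously hard to construct explicitly; the brute-force search above is of course exponential and only feasible for toy sizes.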

    LL(1) Parsing with Derivatives and Zippers

    In this paper, we present an efficient, functional, and formally verified parsing algorithm for LL(1) context-free expressions based on the concept of derivatives of formal languages. Parsing with derivatives is an elegant parsing technique which, in the general case, suffers from cubic worst-case time complexity and slow performance in practice. We specialise the parsing with derivatives algorithm to LL(1) context-free expressions, where alternatives can be chosen given a single token of lookahead. We formalise the notion of LL(1) expressions and show how to efficiently check the LL(1) property. Next, we present a novel linear-time parsing with derivatives algorithm for LL(1) expressions operating on a zipper-inspired data structure. We prove the algorithm correct in Coq and present an implementation as a parser combinators framework in Scala, with enumeration and pretty-printing capabilities.

    Comment: Appeared at PLDI'20 under the title "Zippy LL(1) Parsing with Derivatives".
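The derivative idea underlying the paper can be sketched in its simplest, regular-expression form. This is a naive Python toy, not the paper's verified LL(1)/zipper algorithm; it exhibits exactly the expression blow-up that the specialised algorithm avoids:

```python
# Language expressions as tagged tuples
EMPTY = ('empty',)  # rejects every string
EPS = ('eps',)      # accepts only the empty string

def char(c): return ('char', c)
def alt(a, b): return ('alt', a, b)
def seq(a, b): return ('seq', a, b)
def star(a): return ('star', a)

def nullable(e):
    """Does e accept the empty string?"""
    tag = e[0]
    if tag == 'eps' or tag == 'star': return True
    if tag == 'empty' or tag == 'char': return False
    if tag == 'alt': return nullable(e[1]) or nullable(e[2])
    if tag == 'seq': return nullable(e[1]) and nullable(e[2])

def deriv(e, c):
    """Brzozowski derivative: the language of suffixes of e after reading c."""
    tag = e[0]
    if tag in ('empty', 'eps'): return EMPTY
    if tag == 'char': return EPS if e[1] == c else EMPTY
    if tag == 'alt': return alt(deriv(e[1], c), deriv(e[2], c))
    if tag == 'seq':
        d = seq(deriv(e[1], c), e[2])
        # if the first part can be empty, c may also start the second part
        return alt(d, deriv(e[2], c)) if nullable(e[1]) else d
    if tag == 'star': return seq(deriv(e[1], c), e)

def matches(e, s):
    for c in s:
        e = deriv(e, c)
    return nullable(e)

ab_star = star(seq(char('a'), char('b')))  # the language (ab)*
print(matches(ab_star, ''), matches(ab_star, 'abab'), matches(ab_star, 'aba'))
```

Each derivative here can grow the expression, which is the source of the poor worst-case behaviour mentioned in the abstract; the paper's contribution is that for LL(1) expressions, a zipper-based representation makes each step constant-time, giving linear-time parsing overall.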

    Requirements Completeness:

    A process for determining requirements completeness is developed. The method comprises three steps: (1) defining the problem to be solved by identifying and quantifying all system interfaces associated with the system development, operational, and maintenance concepts; (2) producing the requirements by analyzing the system interfaces to determine requirements under all conditions; and (3) verifying requirements completeness using the method of complementary antecedents (Carson 1995). The process allows one to demonstrate that the requirements are complete for the associated mission (problem) statement(s).
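The coverage aspect of steps (1) and (2) can be illustrated with a toy check: treat the problem space as the cross product of identified interfaces and lifecycle conditions, and flag any pair no requirement addresses. This is a hypothetical illustration only (all names are invented), not Carson's method of complementary antecedents:

```python
# Hypothetical system interfaces and lifecycle conditions (step 1)
interfaces = {"power", "telemetry"}
conditions = {"development", "operation", "maintenance"}

# Hypothetical requirement set: each entry records the (interface, condition)
# pair that a produced requirement addresses (step 2)
requirements = {
    ("power", "development"), ("power", "operation"), ("power", "maintenance"),
    ("telemetry", "development"), ("telemetry", "operation"),
}

# Completeness check: every interface must be covered under every condition
needed = {(i, c) for i in interfaces for c in conditions}
missing = sorted(needed - requirements)
print(missing)  # any uncovered (interface, condition) pairs
```

Here the check exposes a gap (telemetry under maintenance); the paper's actual verification step goes further, using complementary antecedents to argue completeness rather than just enumerating coverage.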

    Security Bounds for the NIST Codebook-based

    The NIST codebook-based deterministic random bit generators are analyzed in the context of being indistinguishable from random. Upper and lower bounds based on the probability of distinguishing the output are proven. These bounds imply that the security of the designs is bounded by the codebook width, or, more precisely, by the property that the codebooks act like a random permutation, rather than by their underlying security parameter or key length. This paper concludes that these designs fail to support security parameters larger than the codebook width.
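The intuition behind a codebook-width bound can be sketched numerically with the standard birthday approximation: a random permutation never repeats an output block under one key, while a truly random source does, so a collision-watching distinguisher gains advantage at roughly the collision probability of random b-bit blocks. The constants below are illustrative, not the paper's exact bounds:

```python
import math

def collision_prob(q, block_bits):
    """Birthday approximation: P[some repeated block among q uniform
    block_bits-bit blocks] ~ 1 - exp(-q(q-1) / 2^(block_bits+1))."""
    return 1.0 - math.exp(-q * (q - 1) / 2.0 ** (block_bits + 1))

# For a 128-bit codebook (e.g. a 128-bit block cipher), the distinguishing
# advantage reaches a constant near q = 2^64 outputs, no matter how long
# the underlying key is -- which is the abstract's point.
for q_log in (32, 60, 64):
    print(q_log, collision_prob(2 ** q_log, 128))
```

Note that the key length never appears in the bound: a 256-bit key cannot push the distinguishing advantage below what the 128-bit codebook width already allows, which is why the designs cannot support security parameters larger than that width.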