46 research outputs found

    Convergence Thresholds of Newton's Method for Monotone Polynomial Equations

    Monotone systems of polynomial equations (MSPEs) are systems of fixed-point equations X_1 = f_1(X_1, ..., X_n), ..., X_n = f_n(X_1, ..., X_n) where each f_i is a polynomial with positive real coefficients. The question of computing the least non-negative solution of a given MSPE X = f(X) arises naturally in the analysis of stochastic models such as stochastic context-free grammars, probabilistic pushdown automata, and back-button processes. Etessami and Yannakakis have recently adapted Newton's iterative method to MSPEs. In a previous paper we proved the existence of a threshold k_f for strongly connected MSPEs such that after k_f iterations of Newton's method each new iteration computes at least 1 new bit of the solution. However, the proof was purely existential. In this paper we give an upper bound for k_f as a function of the minimal component of the least fixed point mu f of f(X). Using this result we show that k_f is at most single exponential resp. linear for strongly connected MSPEs derived from probabilistic pushdown automata resp. from back-button processes. Further, we prove the existence of a threshold for arbitrary MSPEs after which each new iteration computes at least 1/(w 2^h) new bits of the solution, where w and h are the width and height of the DAG of strongly connected components. Comment: version 2 deposited February 29, after the end of the STACS conference. Two minor mistakes corrected.
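
    To make the iteration concrete, here is a minimal Python sketch (not from the paper) of Newton's method applied to a one-variable MSPE; the polynomial f(X) = 0.6 X^2 + 0.4 and all names are made-up illustrations of the scheme x_{k+1} = x_k + (f(x_k) - x_k) / (1 - f'(x_k)) started at 0.

```python
# Hypothetical one-variable MSPE X = f(X) with f(X) = 0.6*X**2 + 0.4,
# e.g. the termination probability of a simple branching process.
# Its least non-negative solution is 2/3; the iterates approach it from below.

def newton_mspe(f, df, iterations=10, x=0.0):
    """Newton's method for X = f(X): x <- x + (f(x) - x) / (1 - f'(x)), starting at 0."""
    for _ in range(iterations):
        x = x + (f(x) - x) / (1.0 - df(x))
    return x

f = lambda x: 0.6 * x * x + 0.4   # polynomial with positive coefficients
df = lambda x: 1.2 * x            # its derivative

approx = newton_mspe(f, df, iterations=8)
print(approx, abs(approx - 2 / 3))  # converges quickly towards the least solution 2/3
```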

    Computing the Least Fixed Point of Positive Polynomial Systems

    We consider equation systems of the form X_1 = f_1(X_1, ..., X_n), ..., X_n = f_n(X_1, ..., X_n) where f_1, ..., f_n are polynomials with positive real coefficients. In vector form we denote such an equation system by X = f(X) and call f a system of positive polynomials (SPP for short). Equation systems of this kind appear naturally in the analysis of stochastic models like stochastic context-free grammars (with numerous applications to natural language processing and computational biology), probabilistic programs with procedures, web-surfing models with back buttons, and branching processes. The least nonnegative solution mu f of an SPP equation X = f(X) is of central interest for these models. Etessami and Yannakakis have suggested a particular version of Newton's method to approximate mu f. We extend a result of Etessami and Yannakakis and show that Newton's method starting at 0 always converges to mu f. We obtain lower bounds on the convergence speed of the method. For so-called strongly connected SPPs we prove the existence of a threshold k_f such that for every i >= 0 the (k_f + i)-th iteration of Newton's method has at least i valid bits of mu f. The proof yields an explicit bound for k_f depending only on syntactic parameters of f. We further show that for arbitrary SPP equations Newton's method still converges linearly: there are k_f >= 0 and alpha_f > 0 such that for every i >= 0 the (k_f + alpha_f i)-th iteration of Newton's method has at least i valid bits of mu f. The proof yields an explicit bound for alpha_f; the bound is exponential in the number of equations, but we also show that it is essentially optimal. Constructing a bound for k_f is still an open problem. Finally, we also provide a geometric interpretation of Newton's method for SPPs. Comment: This is a technical report that goes along with an article to appear in SIAM Journal on Computing.
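
    As a hedged illustration of the iteration discussed above (not code from the report), the Python sketch below runs x_{k+1} = x_k + (I - J_f(x_k))^{-1} (f(x_k) - x_k) starting at 0 on a small two-variable SPP; the coefficients and names are invented for the example.

```python
# Newton's method for a made-up two-variable SPP X = f(X); the least fixed
# point of this particular system is (1, 1), approached from below.
import numpy as np

def f(x):
    x1, x2 = x
    return np.array([0.4 * x1 * x2 + 0.6,
                     0.3 * x1**2 + 0.4 * x2 + 0.3])

def jacobian(x):
    x1, x2 = x
    return np.array([[0.4 * x2, 0.4 * x1],
                     [0.6 * x1, 0.4]])

x = np.zeros(2)  # Newton's method for SPPs starts at 0
for _ in range(10):
    x = x + np.linalg.solve(np.eye(2) - jacobian(x), f(x) - x)

print(x)  # approximation of the least fixed point mu f
```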

    Computing the Longest Common Prefix of a Context-free Language in Polynomial Time

    We present two structural results concerning the longest common prefixes of non-empty languages. First, we show that the longest common prefix of the language generated by a context-free grammar of size N equals the longest common prefix of the language generated by the same grammar when the heights of the derivation trees are bounded by 4N. Second, we show that each non-empty language L has a representative subset of at most three elements which behaves like L w.r.t. the longest common prefix as well as w.r.t. longest common prefixes of L after unions or concatenations with arbitrary other languages. From that, we conclude that the longest common prefix, and thus the longest common suffix, of a context-free language can be computed in polynomial time.
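
    For reference only, the Python sketch below computes the longest common prefix of a finite set of words; it illustrates the lcp operation the abstract is about, not the paper's polynomial-time algorithm for context-free languages.

```python
# Longest common prefix of a finite, non-empty collection of words.
def longest_common_prefix(words):
    shortest = min(words, key=len)
    for i, ch in enumerate(shortest):
        if any(w[i] != ch for w in words):
            return shortest[:i]
    return shortest

print(longest_common_prefix(["interleave", "internal", "interval"]))  # "inter"
```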

    Runtime Monitoring DNN-Based Perception

    Deep neural networks (DNNs) are instrumental in realizing complex perception systems. As many of these applications are safety-critical by design, engineering rigor is required to ensure that the functional insufficiency of the DNN-based perception is not the source of harm. In addition to conventional static verification and testing techniques employed during the design phase, there is a need for runtime verification techniques that can detect critical events, diagnose issues, and even enforce requirements. This tutorial aims to provide readers with a glimpse of techniques proposed in the literature. We start with classical methods proposed in the machine learning community, then highlight a few techniques proposed by the formal methods community. While we surely can observe similarities in the design of monitors, how the decision boundaries are created varies between the two communities. We conclude by highlighting the need to rigorously design monitors, where data availability outside the operational domain plays an important role.
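
    One simple monitoring idea in this space records the range of feature activations observed on in-distribution data and flags runtime inputs whose activations leave that range. The Python sketch below is a minimal, hypothetical illustration of such an interval monitor; the class name, the tolerance parameter, and the random data are assumptions, not an API from the tutorial.

```python
# Hypothetical interval ("box") monitor over feature activations.
import numpy as np

class IntervalMonitor:
    def fit(self, features):
        """features: (num_samples, num_dims) activations from in-distribution data."""
        self.low = features.min(axis=0)
        self.high = features.max(axis=0)

    def check(self, feature, tolerance=0.0):
        """True if a runtime activation vector stays inside the recorded box."""
        return bool(np.all(feature >= self.low - tolerance) and
                    np.all(feature <= self.high + tolerance))

# Made-up data standing in for penultimate-layer activations:
rng = np.random.default_rng(0)
monitor = IntervalMonitor()
monitor.fit(rng.normal(size=(1000, 8)))
print(monitor.check(rng.normal(size=8)))         # likely True: looks in-distribution
print(monitor.check(rng.normal(size=8) + 10.0))  # False: far outside the recorded box
```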

    Accuracy, recording interference, and articulatory quality of headsets for ultrasound recordings

    In this paper we evaluate the accuracy, recording interference, and articulatory quality of two different ultrasound probe stabilization headsets: a metallic Ultrasound Stabilisation Headset (USH) and UltraFit, a recently developed headset that is 3D printed in Nylon. To evaluate accuracy, we recorded three native speakers of German with different head sizes using an optical marker tracking system that provides sub-millimeter tracking accuracy (NaturalPoint OptiTrack Expression). The speakers had to read C1V1C2V1/2 non-words (to diminish lexical influences) in three conditions: wearing the USH headset, wearing the UltraFit headset, and without a headset. To estimate the relative headset movement, we measured the movement between tracked points on the probe, headset, and speaker's nose. By also tracking visual marker points on the speaker's lip and chin, we compared the movement of the outer articulators with and without a headset and, thereby, measured how the headsets interfere with the articulatory space of the speaker. Additionally, we computed the differences in tongue profiles at the acoustic midpoint of V1 under the three conditions and evaluated the articulatory recording quality with a distance index and an area index. In the final evaluation, we also compared formant measurements of recordings with and without headsets. With this objective evaluation we provide a systematic analysis of different headsets for Ultrasound Tongue Imaging (UTI) and also contribute to the discussion of using UTI stabilization headsets for recording natural speech. We show that both headsets have a similar accuracy, with the USH performing slightly better overall but introducing the largest error for one speaker, and that the UltraFit headset shows more flexibility during recordings. Each headset influences the lip opening differently. Concerning the tongue movement, there are no significant differences between different sessions, showing the stability of both headsets during the recordings. Acoustic analysis of formant differences in vowels revealed that the USH headset has a larger influence on formant production than the UltraFit headset.
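
    The distance and area indices mentioned above are the authors' own metrics; as a rough, hypothetical stand-in, the Python sketch below computes a mean nearest-neighbour distance between two tongue contours represented as arrays of (x, y) points. The contours themselves are fabricated for illustration.

```python
# Hypothetical distance index: mean nearest-neighbour distance (e.g. in mm)
# from each point of one contour to the other contour.
import numpy as np

def distance_index(contour_a, contour_b):
    """contour_a, contour_b: arrays of shape (n, 2) and (m, 2) with (x, y) points."""
    diffs = contour_a[:, None, :] - contour_b[None, :, :]
    nearest = np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)
    return nearest.mean()

# Two made-up contours, the second shifted upward by 1.2 mm:
a = np.column_stack([np.linspace(0, 50, 30),
                     10 + 5 * np.sin(np.linspace(0, 3, 30))])
b = a + np.array([0.0, 1.2])
print(distance_index(a, b))  # roughly 1.2
```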

    Parikh's Theorem: A simple and direct automaton construction

    Parikh's theorem states that the Parikh image of a context-free language is semilinear or, equivalently, that every context-free language has the same Parikh image as some regular language. We present a very simple construction that, given a context-free grammar, produces a finite automaton recognizing such a regular language. Comment: 12 pages, 3 figures.
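
    For readers unfamiliar with the terminology, the short Python sketch below only illustrates what a Parikh image is (the vector of symbol counts of a word); it is not the automaton construction presented in the paper.

```python
# Parikh image of a word over a fixed alphabet: counts of each symbol, ignoring order.
from collections import Counter

def parikh_image(word, alphabet):
    counts = Counter(word)
    return tuple(counts[symbol] for symbol in alphabet)

# "abba" and "baab" have the same Parikh image over the alphabet (a, b):
print(parikh_image("abba", "ab"))  # (2, 2)
print(parikh_image("baab", "ab"))  # (2, 2)
```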