
    Lipschitz Robustness of Finite-state Transducers

    We investigate the problem of checking whether a finite-state transducer is robust to uncertainty in its input. Our notion of robustness is based on the analytic notion of Lipschitz continuity: a transducer is K-(Lipschitz) robust if the perturbation in its output is at most K times the perturbation in its input. We quantify input and output perturbation using similarity functions. We show that K-robustness is undecidable even for deterministic transducers. We identify a class of functional transducers which admits a polynomial-time automata-theoretic decision procedure for K-robustness. This class includes Mealy machines and functional letter-to-letter transducers. We also study K-robustness of nondeterministic transducers. Since a nondeterministic transducer generates a set of output words for each input word, we quantify output perturbation using set-similarity functions. We show that K-robustness of nondeterministic transducers is undecidable, even for letter-to-letter transducers. We identify a class of set-similarity functions that admits decidable K-robustness of letter-to-letter transducers.
    Comment: In FSTTCS 201
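    The K-robustness condition can be made concrete with a small sketch: a toy Mealy machine checked by brute force over bounded-length inputs, using Hamming distance as the similarity function. The machine, the length bound, and the choice of Hamming distance are illustrative assumptions; the paper's actual decision procedure is automata-theoretic, not enumerative.

```python
from itertools import product

# Hypothetical toy Mealy machine (not from the paper):
# delta[(state, symbol)] -> (next_state, output_symbol).
# This machine emits the parity of 'b's seen so far, so one flipped
# input letter can change every later output letter.
delta = {
    ("q0", "a"): ("q0", "0"),
    ("q0", "b"): ("q1", "1"),
    ("q1", "a"): ("q1", "1"),
    ("q1", "b"): ("q0", "0"),
}

def run(word, start="q0"):
    """Run the Mealy machine and return its output word."""
    state, out = start, []
    for c in word:
        state, o = delta[(state, c)]
        out.append(o)
    return "".join(out)

def hamming(u, v):
    """Hamming distance on equal-length words: our similarity function."""
    return sum(a != b for a, b in zip(u, v))

def is_k_robust(k, max_len=6, alphabet="ab"):
    """Brute-force check that output perturbation <= k * input perturbation
    for all input pairs up to max_len. A finite approximation only: it can
    refute K-robustness but never prove it for all inputs."""
    for n in range(1, max_len + 1):
        for u in product(alphabet, repeat=n):
            for v in product(alphabet, repeat=n):
                du = hamming(u, v)
                if du and hamming(run(u), run(v)) > k * du:
                    return False
    return True
```

    For this machine, flipping one letter ("aa" vs "ba") changes both output letters, so it is not 1-robust, while any bound k at least as large as the length cap trivially passes the finite check.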

    Counterfactual Sensitivity and Robustness

    Researchers frequently make parametric assumptions about the distribution of unobservables when formulating structural models. Such assumptions are typically motivated by computational convenience rather than economic theory and are often untestable. Counterfactuals can be particularly sensitive to such assumptions, threatening the credibility of structural modeling exercises. To address this issue, we leverage insights from the literature on ambiguity and model uncertainty to propose a tractable econometric framework for characterizing the sensitivity of counterfactuals with respect to a researcher's assumptions about the distribution of unobservables in a class of structural models. In particular, we show how to construct the smallest and largest values of the counterfactual as the distribution of unobservables spans nonparametric neighborhoods of the researcher's assumed specification while other `structural' features of the model, e.g. equilibrium conditions, are maintained. Our methods are computationally simple to implement, with the nuisance distribution effectively profiled out via a low-dimensional convex program. Our procedure delivers sharp bounds for the identified set of counterfactuals (i.e. without parametric assumptions about the distribution of unobservables) as the neighborhoods become large. Over small neighborhoods, we relate our procedure to a measure of local sensitivity, which is further characterized using an influence function representation. We provide a suitable sampling theory for plug-in estimators and apply our procedure to models of strategic interaction and dynamic discrete choice.
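    One way to see the "nuisance distribution profiled out via a low-dimensional convex program" idea is a sketch for a Kullback-Leibler neighborhood around an assumed discrete distribution, where the worst-case expectation has a classical one-dimensional dual. The KL choice, the discrete support, and all numbers below are assumptions for illustration; the paper's framework handles more general neighborhoods and maintained structural features.

```python
import math

def kl_worst_case(values, probs, delta, lower=False):
    """Upper (or, with lower=True, lower) bound on E[g(U)] as the
    distribution of U ranges over a KL ball of radius delta around the
    assumed discrete distribution (values, probs). Uses the standard
    one-dimensional dual:
        sup_{KL(P||P0) <= delta} E_P[g] = min_{t>0} t*log E_P0[exp(g/t)] + t*delta
    Names and numbers are illustrative, not the paper's estimator."""
    g = [-v for v in values] if lower else list(values)

    def dual(t):
        m = max(x / t for x in g)  # log-sum-exp trick for stability
        lse = m + math.log(sum(p * math.exp(x / t - m)
                               for x, p in zip(g, probs)))
        return t * (lse + delta)

    # Crude minimisation of the convex dual over a log-spaced grid of t.
    ts = [10 ** (k / 20) for k in range(-80, 81)]
    val = min(dual(t) for t in ts)
    return -val if lower else val
```

    With delta = 0 the bound collapses to the assumed expectation; as delta grows, the interval between the lower and upper bounds widens toward the support's extremes, mirroring the paper's small-neighborhood to identified-set continuum.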

    Poverty, Inequality and Stochastic Dominance, Theory and Practice: The Case of Burkina Faso

    In this paper we provide a set of rules that can be used to check poverty or inequality dominance using discrete data. Existing theoretical rules assume continuity in incomes or in percentiles of the population. In reality, with the usual household surveys, this continuity does not exist. However, such a discontinuity can be exploited to test for stochastic dominance. This paper also proposes stochastic dominance conditions that check for the statistical robustness of the inferred rankings. The methodology of this paper is illustrated using Burkina Faso's household surveys for the years 1994 and 1998.
    Keywords: Stochastic Dominance, Poverty, Inequality
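    The basic dominance check on discrete data can be sketched directly from empirical CDFs. The income samples below are made up, and this is only the deterministic first-order condition; the paper's contribution additionally covers the statistical robustness of the inferred ranking.

```python
def ecdf(sample, x):
    """Empirical CDF of a discrete income sample at point x."""
    return sum(v <= x for v in sample) / len(sample)

def first_order_dominates(a, b):
    """True if the distribution of sample `a` (weakly) first-order
    stochastically dominates that of `b`: F_a(x) <= F_b(x) at every
    observed income. With discrete data it suffices to check the
    finitely many points of the pooled support."""
    support = sorted(set(a) | set(b))
    return all(ecdf(a, x) <= ecdf(b, x) for x in support)
```

    Because the data are discrete, the pooled support is finite, so the check is exact rather than an approximation over a continuum of incomes.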

    Proving Expected Sensitivity of Probabilistic Programs with Randomized Variable-Dependent Termination Time

    The notion of program sensitivity (a.k.a. Lipschitz continuity) specifies that changes in the program input result in proportional changes to the program output. For probabilistic programs the notion is naturally extended to expected sensitivity. A previous approach develops a relational program-logic framework for proving expected sensitivity of probabilistic while loops where the number of iterations is fixed and bounded. In this work, we consider probabilistic while loops where the number of iterations is not fixed, but randomized and dependent on the initial input values. We present a sound approach for proving expected sensitivity of such programs. Our approach is martingale-based and can be automated through existing martingale-synthesis algorithms. Furthermore, it is compositional for sequential composition of while loops under a mild side condition. We demonstrate the effectiveness of our approach on several classical examples, including the gambler's ruin, stochastic hybrid systems, and stochastic gradient descent. We also present experimental results showing that our automated approach can handle various probabilistic programs from the literature.
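    The gambler's ruin example has exactly the shape described above: the (random) number of loop iterations depends on the initial capital. The sketch below estimates the expected gap in termination time between two nearby inputs under a coupling that feeds both runs the same coin flips, which is the same coupling idea the relational proofs exploit. All parameters are illustrative, and this is a Monte-Carlo estimate, not the paper's martingale-based proof method.

```python
import random

def ruin_steps(capital, target, rng):
    """Rounds of a fair unit-stake gamble until the capital hits 0 or
    `target`. The termination time is randomized and depends on the
    initial capital, as in the paper's motivating example."""
    steps = 0
    while 0 < capital < target:
        capital += 1 if rng.random() < 0.5 else -1
        steps += 1
    return steps

def expected_step_gap(x1, x2, target=10, trials=500, seed=0):
    """Monte-Carlo estimate of E|T(x1) - T(x2)| under a coupling:
    both runs see the identical coin-flip sequence in each trial."""
    total = 0
    for t in range(trials):
        r1 = random.Random(seed * 1_000_003 + t)
        r2 = random.Random(seed * 1_000_003 + t)  # same flips: a coupling
        total += abs(ruin_steps(x1, target, r1) - ruin_steps(x2, target, r2))
    return total / trials
```

    Under the coupling, identical inputs yield identical runs (gap exactly zero), while inputs one apart stay one apart in capital until one run absorbs, so the expected gap stays bounded rather than blowing up with the random termination time.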

    Quantitative Robustness Analysis of Quantum Programs (Extended Version)

    Quantum computation is a topic of significant recent interest, with practical advances coming from both research and industry. A major challenge in quantum programming is dealing with errors (quantum noise) during execution. Because quantum resources (e.g., qubits) are scarce, classical error correction techniques applied at the level of the architecture are currently cost-prohibitive. But while this reality means that quantum programs are almost certain to have errors, no principled means yet exists for reasoning about erroneous behavior. This paper attempts to fill this gap by developing a semantics for erroneous quantum while-programs, as well as a logic for reasoning about them. This logic permits proving a property we have identified, called ε-robustness, which characterizes the possible "distance" between an ideal program and an erroneous one. We have proved the logic sound and shown its utility on several case studies, notably: (1) analyzing the robustness of noisy versions of the quantum Bernoulli factory (QBF) and quantum walk (QW); (2) demonstrating the (in)effectiveness of different error correction schemes on single-qubit errors; and (3) analyzing the robustness of a fault-tolerant version of QBF.
    Comment: 34 pages, LaTeX; v2: fixed typo
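    The "distance between an ideal program and an erroneous one" can be illustrated on a single qubit: run an ideal one-gate program and a noisy version of it, then compare the output density matrices in trace distance. The gate, the bit-flip noise channel, and the probability below are toy assumptions, not the paper's case studies or its particular metric on programs.

```python
import math

X = [[0, 1], [1, 0]]  # Pauli-X: our entire ideal "program"

def mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dag(A):
    """Conjugate transpose of a 2x2 matrix."""
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def apply_gate(U, rho):
    """Unitary evolution of a density matrix: U rho U^dagger."""
    return mul(mul(U, rho), dag(U))

def noisy_program(rho, p):
    """Ideal X gate followed by bit-flip noise: with probability p an
    unwanted extra X fires (a toy model of quantum noise)."""
    out = apply_gate(X, rho)
    flipped = apply_gate(X, out)
    return [[(1 - p) * out[i][j] + p * flipped[i][j] for j in range(2)]
            for i in range(2)]

def trace_distance(rho, sigma):
    """Trace distance of 2x2 density matrices, via the closed-form
    eigenvalues t +/- g of the Hermitian difference."""
    a = rho[0][0] - sigma[0][0]
    d = rho[1][1] - sigma[1][1]
    b = rho[0][1] - sigma[0][1]
    t = (a + d).real / 2
    g = math.sqrt(((a - d).real / 2) ** 2 + abs(b) ** 2)
    return (abs(t + g) + abs(t - g)) / 2
```

    Starting from |0><0|, the noisy program's output is within trace distance p of the ideal one, so in this toy setting the program is p-robust in the sense sketched above.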

    Control Barrier Function Based Quadratic Programs for Safety Critical Systems

    Safety-critical systems involve tight coupling between potentially conflicting control objectives and safety constraints. As a means of creating a formal framework for controlling systems of this form, and with a view toward automotive applications, this paper develops a methodology that allows safety conditions -- expressed as control barrier functions -- to be unified with performance objectives -- expressed as control Lyapunov functions -- in the context of real-time optimization-based controllers. Safety conditions are specified in terms of forward invariance of a set, and are verified via two novel generalizations of barrier functions; in each case, the existence of a barrier function satisfying Lyapunov-like conditions implies forward invariance of the set, and the relationship between these two classes of barrier functions is characterized. In addition, each of these formulations yields a notion of control barrier function (CBF), providing inequality constraints on the control input that, when satisfied, again imply forward invariance of the set. Through these constructions, CBFs can naturally be unified with control Lyapunov functions (CLFs) in the context of a quadratic program (QP); this allows for the achievement of control objectives (represented by CLFs) subject to conditions on the admissible states of the system (represented by CBFs). The mediation of safety and performance through a QP is demonstrated on adaptive cruise control and lane keeping, two automotive control problems that present both safety and performance considerations coupled with actuator bounds.
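    The CBF-QP idea reduces, in the simplest scalar case, to a safety filter with a closed-form solution: minimally override the desired control so the barrier condition holds. The single-integrator dynamics, barrier h(x) = x, and class-K choice alpha*h below are toy assumptions, far simpler than the paper's ACC and lane-keeping designs with CLFs and actuator bounds.

```python
def cbf_qp_control(x, u_des, alpha=1.0):
    """Closed-form solution of the scalar safety-filter QP
        min_u (u - u_des)^2   s.t.   hdot + alpha*h >= 0
    for the toy system xdot = u with barrier h(x) = x (keep x >= 0).
    The constraint becomes u >= -alpha*x, so the QP is a projection."""
    return max(u_des, -alpha * x)

def simulate(x0, u_des, dt=0.01, steps=500, alpha=1.0):
    """Forward-Euler rollout: the filter only intervenes near the
    boundary of the safe set, otherwise it passes u_des through."""
    x = x0
    for _ in range(steps):
        x += dt * cbf_qp_control(x, u_des, alpha)
    return x
```

    When the desired input drives the state toward the boundary (u_des < 0 here), the filter bends the trajectory so x decays toward zero without crossing it; when u_des is already safe, the QP constraint is inactive and the performance objective is untouched, which is exactly the mediation the abstract describes.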