
    Ioco theory for probabilistic automata

    Model-based testing (MBT) is a well-known technology that allows for automatic test case generation, execution and evaluation. To test non-functional properties, a number of MBT frameworks have been developed to test systems with real-time, continuous behaviour, symbolic data and quantitative system aspects. Notably, many of these frameworks are based on Tretmans' classical input/output conformance (ioco) framework. However, a model-based test theory handling probabilistic behaviour does not exist yet. Probability plays a role in many different systems: unreliable communication channels, randomized algorithms and communication protocols, service level agreements pinning down up-time percentages, etc. Therefore, a probabilistic test theory is of great practical importance. We present the ingredients for a probabilistic variant of ioco, define the pioco relation, show that it conservatively extends ioco, and define the concepts of test case, execution and evaluation.
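
    As a rough illustration of the underlying conformance idea (a simplified sketch, not code from the paper; the trace-to-outputs encoding, the state names and the outputs are all assumptions), classical ioco can be approximated as a subset check on the outputs allowed after each specified trace; the probabilistic variant additionally compares the probabilities attached to those outputs.

        # Simplified ioco-style check: models are encoded as {trace: set of enabled outputs}.
        # Real ioco additionally handles quiescence and requires input-enabled implementations.
        def ioco(impl, spec):
            return all(impl.get(trace, set()) <= outs for trace, outs in spec.items())

        spec     = {("coin",): {"coffee", "tea"}}   # after a coin, coffee or tea may appear
        impl_ok  = {("coin",): {"coffee"}}          # only ever serves coffee: conforms
        impl_bad = {("coin",): {"beer"}}            # serves an unspecified output: fails

        print(ioco(impl_ok, spec), ioco(impl_bad, spec))   # True False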

    Understanding the Functional Central Limit Theorems with Some Applications to Unit Root Testing with Structural Change

    This paper analyzes and employs two versions of the Functional Central Limit Theorem within the framework of a unit root with a structural break. Initial attention is focused on the probabilistic structure of the time series to be considered. Later, attention is placed on the asymptotic theory for nonstationary time series proposed by Phillips (1987a), which is applied by Perron (1989) to study the effects of an (assumed) exogenous structural break on the power of the augmented Dickey-Fuller test, and by Zivot and Andrews (1992) to criticize the exogeneity assumption and propose a method for estimating an endogenous breakpoint. A systematic method for dealing with efficiency issues is introduced by Perron and Rodríguez (2003), which extends the Generalized Least Squares detrending approach due to Elliott, Rothenberg, and Stock (1996).
    Keywords: Hypothesis Testing, Unit Root, Structural Break, Functional Central Limit Theorem, Weak Convergence, Wiener Process, Ornstein-Uhlenbeck Process
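
    For orientation, the functional central limit theorem invoked throughout this literature can be stated in its standard textbook form (not copied from the paper): for a zero-mean, weakly dependent innovation sequence u_t with long-run variance \sigma^2,

        \[ X_T(r) \;=\; \frac{1}{\sigma\sqrt{T}} \sum_{t=1}^{\lfloor Tr \rfloor} u_t \;\Rightarrow\; W(r), \qquad r \in [0,1], \]

    where \Rightarrow denotes weak convergence of the partial-sum process and W is a standard Wiener process. Under a local-to-unity (near-integrated) alternative the limit is an Ornstein-Uhlenbeck process instead, which is how the OU process in the keyword list enters the unit root analysis.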

    Model-based testing of probabilistic systems

    This work presents an executable model-based testing framework for probabilistic systems with non-determinism. We provide algorithms to automatically generate, execute and evaluate test cases from a probabilistic requirements specification. The framework connects input/output conformance theory with hypothesis testing: our algorithms handle functional correctness, while statistical methods assess whether the frequencies observed during the test process correspond to the probabilities specified in the requirements. At the core of our work lies the conformance relation for probabilistic input/output conformance, enabling us to pin down exactly when an implementation should pass a test case. We establish the correctness of our framework with respect to this relation in the form of soundness and completeness: soundness states that a correct implementation indeed passes a test suite, while completeness states that the framework is powerful enough to discover every deviation from a specification up to arbitrary precision for a sufficiently large sample size. The underlying models are probabilistic automata that allow invisible internal progress. We incorporate divergent systems into our framework by phrasing four rules that each well-formed system needs to adhere to. This enables us to treat divergence as the absence of output, or quiescence, which is a well-studied formalism in model-based testing. Lastly, we illustrate the application of our framework on three case studies.
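
    The statistical side of such a framework can be illustrated with a generic goodness-of-fit acceptance check (a sketch under assumed names, not the paper's actual algorithm or acceptance criterion): functional verdicts are handled separately, and the observed output frequencies are compared against the specified probabilities.

        # Generic frequency check: accept if a Pearson chi-square statistic stays below
        # the critical value at significance level alpha.
        from scipy.stats import chi2

        def frequencies_match(observed_counts, spec_probs, alpha=0.05):
            n = sum(observed_counts.values())
            stat = sum((observed_counts.get(o, 0) - n * p) ** 2 / (n * p)
                       for o, p in spec_probs.items())
            return stat <= chi2.ppf(1 - alpha, len(spec_probs) - 1)

        spec = {"coffee": 0.7, "tea": 0.3}    # specified output probabilities (assumed)
        obs  = {"coffee": 690, "tea": 310}    # frequencies observed over 1000 test runs
        print(frequencies_match(obs, spec))   # True: observations within tolerance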

    A uniform framework for modelling nondeterministic, probabilistic, stochastic, or mixed processes and their behavioral equivalences

    Labeled transition systems are typically used as behavioral models of concurrent processes, and the labeled transitions define a one-step state-to-state reachability relation. This model can be generalized by modifying the transition relation to associate a state reachability distribution, rather than a single target state, with any pair of source state and transition label. The state reachability distribution is a function mapping each possible target state to a value that expresses the degree of one-step reachability of that state. Values are taken from a preordered set equipped with a minimum that denotes unreachability. By selecting suitable preordered sets, the resulting model, called ULTraS from Uniform Labeled Transition System, can be specialized to capture well-known models of fully nondeterministic processes (LTS), fully probabilistic processes (ADTMC), fully stochastic processes (ACTMC), and nondeterministic and probabilistic (MDP) or nondeterministic and stochastic (CTMDP) processes. This uniform treatment of different behavioral models extends to behavioral equivalences. These can be defined on ULTraS by relying on appropriate measure functions that express the degree of reachability of a set of states when performing single-step or multi-step computations. It is shown that the specializations of bisimulation, trace, and testing equivalences for the different classes of ULTraS coincide with the behavioral equivalences defined in the literature over traditional models.
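
    The generalized transition relation is easy to picture concretely (an illustrative sketch, not from the paper; the dictionary encoding and state names are assumptions): one transition function, parameterized by the value domain, covers the nondeterministic, probabilistic and stochastic cases.

        # A ULTraS-style transition maps (state, label) to a reachability distribution:
        # a dict from target states to values in a domain whose minimum means "unreachable".
        lts  = {("s0", "a"): {"s1": True, "s2": True}}   # domain: booleans -> plain LTS
        dtmc = {("s0", "a"): {"s1": 0.25, "s2": 0.75}}   # domain: [0,1], summing to 1 -> probabilistic
        ctmc = {("s0", "a"): {"s1": 2.0, "s2": 0.5}}     # domain: rates >= 0 -> stochastic

        def reachable(trans, state, label):
            """Target states whose degree of reachability is above the domain's minimum."""
            return {t for t, v in trans.get((state, label), {}).items() if v}

        print(reachable(lts, "s0", "a"), reachable(dtmc, "s0", "a"))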

    Testing axioms for Quantum Mechanics on Probabilistic toy-theories

    In Ref. [1] one of the authors proposed postulates for axiomatizing Quantum Mechanics as a "fair operational framework", namely regarding the theory as a set of rules that allow the experimenter to predict future events on the basis of suitable tests, having local control and low experimental complexity. In addition to causality, the following postulates have been considered: PFAITH (existence of a pure preparationally faithful state) and FAITHE (existence of a faithful effect). These postulates have exhibited an unexpected theoretical power, excluding all known nonquantum probabilistic theories. Later, in Ref. [2], in addition to causality and PFAITH, the postulates LDISCR (local discriminability) and PURIFY (purifiability of all states) have been considered, narrowing the probabilistic theory to something very close to Quantum Mechanics. In the present paper we test the above postulates on some nonquantum probabilistic models. The first model, "the two-box world", is an extension of the Popescu-Rohrlich model, which achieves the greatest violation of the CHSH inequality compatible with the no-signaling principle. The second model, "the two-clock world", is actually a full class of models, all having a disk as the convex set of states for the local system. One of them corresponds to "the two-rebit world", namely qubits with real Hilbert space. The third model, "the spin-factor", is a sort of n-dimensional generalization of the clock. Finally, the last model is "the classical probabilistic theory". We see how each model violates some of the proposed postulates, when and how teleportation can be achieved, and we analyze other interesting connections between these postulate violations, along with deep relations between the local and the non-local structures of the probabilistic theory.
    Comment: Submitted to QIP Special Issue on Foundations of Quantum Information
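
    For reference, the Popescu-Rohrlich correlations behind the "two-box world" take the standard form (a textbook definition, not quoted from the paper): for binary inputs x, y and binary outputs a, b,

        \[ P(a, b \mid x, y) \;=\; \begin{cases} 1/2 & \text{if } a \oplus b = x\, y, \\ 0 & \text{otherwise,} \end{cases} \]

    which is no-signaling and attains the algebraic maximum S = 4 of the CHSH expression, beyond both the quantum (Tsirelson) bound 2\sqrt{2} and the local-realistic bound 2.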

    Proofs of randomized algorithms in Coq

    Randomized algorithms are widely used for efficiently finding approximate solutions to complex problems, for instance primality testing, and for obtaining good average behavior. Proving properties of such algorithms requires subtle reasoning on both algorithmic and probabilistic aspects of programs. Thus, providing tools for the mechanization of reasoning is an important issue. This paper presents a new method for proving properties of randomized algorithms in a proof assistant based on higher-order logic. It is based on the monadic interpretation of randomized programs as probabilistic distributions. It requires neither the definition of an operational semantics for the language nor the development of a complex formalization of measure theory. Instead it uses functional and algebraic properties of the unit interval. Using this model, we show the validity of general rules for estimating the probability that a randomized algorithm satisfies specified properties. This approach addresses only discrete distributions and gives rules for analysing general recursive functions. We apply this theory to the formal proof of a program implementing a Bernoulli distribution from a coin flip and to the (partial) termination of several programs. All the theories and results presented in this paper have been fully formalized and proved in the Coq proof assistant.
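
    The monadic interpretation mentioned here can be sketched in a few lines (a generic construction for discrete distributions, not the Coq development itself): a randomized program denotes a finite distribution, and sequencing is the bind of the distribution monad.

        # A discrete distribution is a dict from outcomes to probabilities.
        from collections import defaultdict

        def unit(x):
            return {x: 1.0}                       # the deterministic (Dirac) distribution

        def bind(dist, f):
            out = defaultdict(float)              # push each outcome through the continuation f
            for x, p in dist.items():
                for y, q in f(x).items():
                    out[y] += p * q
            return dict(out)

        coin = {True: 0.5, False: 0.5}            # a fair coin flip

        # probability that two independent flips both come up heads: 1/4
        both_heads = bind(coin, lambda a: bind(coin, lambda b: unit(a and b)))
        print(both_heads[True])                   # 0.25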

    Uniform Labeled Transition Systems for Nondeterministic, Probabilistic, and Stochastic Process Calculi

    Labeled transition systems are typically used to represent the behavior of nondeterministic processes, with labeled transitions defining a one-step state-to-state reachability relation. This model has recently been made more general by modifying the transition relation in such a way that it associates with any source state and transition label a reachability distribution, i.e., a function mapping each possible target state to a value of some domain that expresses the degree of one-step reachability of that target state. In this extended abstract, we show how the resulting model, called ULTraS from Uniform Labeled Transition System, can be naturally used to give semantics to a fully nondeterministic, a fully probabilistic, and a fully stochastic variant of a CSP-like process language.
    Comment: In Proceedings PACO 2011, arXiv:1108.145