
    A New Proof Rule for Almost-Sure Termination

    An important question for a probabilistic program is whether the probability mass of all its diverging runs is zero, that is, whether it terminates "almost surely". Proving that can be hard, and this paper presents a new method for doing so; it is expressed in a program logic, and so applies directly to source code. The programs may contain both probabilistic and demonic choice, and the probabilistic choices may depend on the current state. Like other researchers, we use variant functions (a.k.a. "super-martingales") that are real-valued and probabilistically might decrease on each loop iteration; but our key innovation is that the amount as well as the probability of the decrease are parametric. We prove the soundness of the new rule, indicate where its applicability goes beyond existing rules, and explain its connection to classical results on denumerable (non-demonic) Markov chains. Comment: V1 to appear in PoPL18. This version collects some existing text into new example subsection 5.5, adds a new example 5.6, and makes further remarks about uncountable branching. The new example 5.6 relates to work on lexicographic termination methods, also to appear in PoPL18 [Agrawal et al., 2018].
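    The setting can be illustrated on a toy example (an illustration only, not the paper's proof rule): the symmetric random walk on the non-negative integers terminates almost surely even though the natural variant V(s) = s has zero expected decrease per step, which is exactly the regime where the probability and amount of decrease, rather than the expectation, carry the argument. A quick simulation (function names and the step cap are ours):

```python
import random

def run(start, rng, cap=10_000):
    """One trajectory of a symmetric random walk absorbed at 0.
    Returns the number of steps taken, or None if the cap is hit."""
    s = start
    for steps in range(cap):
        if s == 0:
            return steps
        s += 1 if rng.random() < 0.5 else -1
    return None

# The variant V(s) = s decreases by 1 with probability 1/2 on every
# iteration; almost all runs are eventually absorbed, although the
# expected absorption time from any s > 0 is infinite.
rng = random.Random(0)
runs = [run(3, rng) for _ in range(1000)]
terminated = sum(r is not None for r in runs)
print(f"{terminated}/1000 runs absorbed within 10,000 steps")
```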

    Modelling and analysis of Markov reward automata

    Costs and rewards are important ingredients for many types of systems, modelling critical aspects like energy consumption, task completion, repair costs, and memory usage. This paper introduces Markov reward automata, an extension of Markov automata that allows the modelling of systems incorporating rewards (or costs) in addition to nondeterminism, discrete probabilistic choice and continuous stochastic timing. Rewards come in two flavours: action rewards, acquired instantaneously when taking a transition; and state rewards, acquired while residing in a state. We present algorithms to optimise three reward functions: the expected cumulative reward until a goal is reached, the expected cumulative reward until a certain time bound, and the long-run average reward. We have implemented these algorithms in the SCOOP/IMCA tool chain and show their feasibility via several case studies.
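    For intuition, the first of these objectives can be sketched in a much simpler setting than Markov reward automata: a discrete-time Markov chain with state rewards only (no nondeterminism, no action rewards, no continuous timing, so this is not the paper's algorithm itself). The expected cumulative reward until a goal state satisfies a fixed-point equation that a plain value iteration solves. All names and numbers below are our own illustrative choices:

```python
import numpy as np

# A toy discrete-time Markov reward chain. States 0 and 1 are transient;
# state 2 is the absorbing goal.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.0, 0.0, 1.0]])
state_reward = np.array([2.0, 1.0, 0.0])  # reward collected per step in a state
goal = 2

def expected_reward_to_goal(P, r, goal, iters=10_000, tol=1e-12):
    """Fixed-point iteration for v(s) = r(s) + sum_t P[s, t] * v(t),
    with v(goal) pinned to 0: the expected cumulative reward until
    the goal is reached."""
    v = np.zeros(len(r))
    for _ in range(iters):
        v_new = r + P @ v
        v_new[goal] = 0.0
        if np.max(np.abs(v_new - v)) < tol:
            break
        v = v_new
    return v

v = expected_reward_to_goal(P, state_reward, goal)
```

Since every transient state reaches the goal with positive probability, the iteration is a contraction on the transient part and converges; solving the two linear equations by hand gives v(0) = 110/17 and v(1) = 70/17.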

    Dynamically consistent Choquet random walk and real investments

    In the real investments literature, the investigated cash flow is assumed to follow some known stochastic process (e.g. Brownian motion) and the criterion to decide between investments is the discounted utility of their cash flows. However, for most new investments the investor may be ambiguous about the representation of uncertainty. In order to take such ambiguity into account, we refer to a discounted Choquet expected utility in our model. In such a setting some problems must be dealt with, notably dynamic consistency; here it is obtained in a recursive model through a weakened version of the axiom. Mimicking the Brownian motion as the limit of a random walk for the investment payoff process, we describe the latter as a binomial tree with capacities instead of exact probabilities on its branches and show what its properties are in the limit. We show that most results in the real investments literature are tractable in this enlarged setting but leave more room to ambiguity, as both the mean and the variance of the underlying stochastic process are modified in our ambiguous model.
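    The tree construction can be sketched in code (our own illustrative names): each branch carries a capacity rather than a probability, with nu_up + nu_down possibly below 1, the slack representing ambiguity, and values are rolled back by backward induction using the two-outcome Choquet integral:

```python
def choquet_pair(v_up, v_down, nu_up, nu_down):
    """Choquet expectation over a binary branch: the better outcome gets
    its own capacity as weight, the worse one gets the complement."""
    if v_up >= v_down:
        return v_up * nu_up + v_down * (1 - nu_up)
    return v_down * nu_down + v_up * (1 - nu_down)

def binomial_choquet_value(s0, u, d, n, payoff, nu_up, nu_down):
    """Backward induction on a recombining binomial tree with capacities
    instead of exact probabilities on the branches."""
    values = [payoff(s0 * u**k * d**(n - k)) for k in range(n + 1)]
    for step in range(n, 0, -1):
        values = [choquet_pair(values[k + 1], values[k], nu_up, nu_down)
                  for k in range(step)]
    return values[0]
```

With nu_up = nu_down = 0.5 the capacities are an exact probability and the ordinary expected value is recovered; with nu_up = nu_down = 0.4 the remaining 0.2 of mass is ambiguity, and for an increasing payoff the Choquet value drops below the expectation.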


    Bayesian Gaussian Process Models: PAC-Bayesian Generalisation Error Bounds and Sparse Approximations

    Non-parametric models and techniques enjoy a growing popularity in the field of machine learning, and among these Bayesian inference for Gaussian process (GP) models has recently received significant attention. We feel that GP priors should be part of the standard toolbox for constructing models relevant to machine learning in the same way as parametric linear models are, and the results in this thesis help to remove some obstacles on the way towards this goal. In the first main chapter, we provide a distribution-free finite sample bound on the difference between generalisation and empirical (training) error for GP classification methods. While the general theorem (the PAC-Bayesian bound) is not new, we give a much simplified and somewhat generalised derivation and point out the underlying core technique (convex duality) explicitly. Furthermore, the application to GP models is novel (to our knowledge). A central feature of this bound is that its quality depends crucially on task knowledge being encoded faithfully in the model and prior distributions, so there is a mutual benefit between a sharp theoretical guarantee and empirically well-established statistical practices. Extensive simulations on real-world classification tasks indicate an impressive tightness of the bound, in spite of the fact that many previous bounds for related kernel machines fail to give non-trivial guarantees in this practically relevant regime. In the second main chapter, sparse approximations are developed to address the problem of the unfavourable scaling of most GP techniques with large training sets. Due to its high importance in practice, this problem has received a lot of attention recently. We demonstrate the tractability and usefulness of simple greedy forward selection with information-theoretic criteria previously used in active learning (or sequential design) and develop generic schemes for automatic model selection with many (hyper)parameters. We suggest two new generic schemes and evaluate some of their variants on large real-world classification and regression tasks. These schemes and their underlying principles (which are clearly stated and analysed) can be applied to obtain sparse approximations for a wide regime of GP models far beyond the special cases we studied here.
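    The shape of greedy forward selection is easy to sketch. The selection score below, largest current posterior variance, is a simple stand-in for the information-theoretic criteria used in the thesis, and all names (`rbf`, `greedy_select`) and parameter values are our own illustrative choices:

```python
import numpy as np

def rbf(X1, X2, lengthscale=0.2):
    """Squared-exponential (RBF) kernel matrix between two point sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def greedy_select(X, m, noise=0.1, lengthscale=0.2):
    """Greedy forward selection of m active points: at each round pick
    the point with the largest GP posterior variance given the current
    active set, i.e. the point the active set explains worst."""
    active = []
    K = rbf(X, X, lengthscale)
    for _ in range(m):
        if not active:
            var = np.diag(K).copy()
        else:
            Kaa = K[np.ix_(active, active)] + noise * np.eye(len(active))
            Kna = K[:, active]
            # Posterior variance: diag(K) - diag(Kna Kaa^{-1} Kna^T)
            var = np.diag(K) - np.einsum(
                'ij,ji->i', Kna, np.linalg.solve(Kaa, Kna.T))
        var[active] = -np.inf  # never re-select an active point
        active.append(int(np.argmax(var)))
    return active

X = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
active = greedy_select(X, 5)
```

Each round costs one linear solve against the active-set kernel matrix, so the scheme scales with the active-set size rather than the full training-set size, which is the point of sparse approximation.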

    Structural characterization of decomposition in rate-insensitive stochastic Petri nets

    This paper focuses on stochastic Petri nets that have an equilibrium distribution that is a product form over the number of tokens at the places. We formulate a decomposition result for the class of nets that have a product-form solution irrespective of the values of the transition rates. These nets were algebraically characterized by Haddad et al. as $S\Pi^2$ nets. By providing an intuitive interpretation of this algebraic characterization, and associating state machines to sets of $T$-invariants, we obtain a one-to-one correspondence between the marking of the original places and the places of the added state machines. This enables us to show that the subclass of stochastic Petri nets under study can be decomposed into subnets that are identified by sets of its $T$-invariants.
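    For concreteness, a $T$-invariant of a net with incidence matrix C is a non-negative vector x with C x = 0: a multiset of transition firings that reproduces the marking it started from. On a small cyclic net the invariant can be read off from the nullspace (an illustrative example of the notion, not of the $S\Pi^2$ characterization itself):

```python
import numpy as np

# Incidence matrix C (places x transitions) of a 3-place cycle:
# t1 moves a token p1 -> p2, t2 moves it p2 -> p3, t3 moves it p3 -> p1.
C = np.array([[-1.0,  0.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 0.0,  1.0, -1.0]])

# The nullspace of C via SVD: the right-singular vector for the ~0
# singular value spans it, since C has rank 2.
_, s, Vt = np.linalg.svd(C)
null_vec = Vt[-1]
x = null_vec / null_vec[0]  # rescale: firing each transition once
```

Here x = (1, 1, 1): firing t1, t2, t3 once each returns the token to p1, which is the cyclic structure the state-machine decomposition in the paper exploits.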
