72,211 research outputs found

    Enhanced optimization of high order concentrated matrix-exponential distributions


    Exponential families of mixed Poisson distributions

    If I = (I₁,…,I_d) is a random variable on [0,∞)^d with distribution μ(dλ₁,…,dλ_d), the mixed Poisson distribution MP(μ) on ℕ^d is the distribution of (N₁(I₁),…,N_d(I_d)), where N₁,…,N_d are ordinary independent Poisson processes which are also independent of I. The paper proves that if F is a natural exponential family on [0,∞)^d, then MP(F) is also a natural exponential family if and only if a generating probability of F is the distribution of v₀ + v₁Y₁ + ⋯ + v_qY_q for some q ≤ d, for vectors v₀,…,v_q of [0,∞)^d with disjoint supports, and for independent standard real gamma random variables Y₁,…,Y_q
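    The two-stage construction of MP(μ) described above can be sketched as follows. This is a minimal illustration, not code from the paper: the function names and the particular mixing distribution (a d = 2 case of the theorem's form, with hypothetical vectors v₀, v₁ of disjoint support) are assumptions made for the example.

    ```python
    import numpy as np

    def sample_mixed_poisson(sample_mixing, n, rng):
        """Draw n samples from MP(mu): first draw intensity vectors I ~ mu,
        then draw independent Poisson counts with those intensities."""
        intensities = sample_mixing(n, rng)      # shape (n, d)
        return rng.poisson(intensities)          # shape (n, d), integer counts

    def gamma_mixing(n, rng):
        """Hypothetical mixing law of the theorem's form: I = v0 + v1 * Y1,
        with Y1 a standard gamma variable and v0, v1 of disjoint supports."""
        v0 = np.array([0.5, 0.0])
        v1 = np.array([0.0, 2.0])
        y1 = rng.gamma(shape=1.5, scale=1.0, size=(n, 1))
        return v0 + v1 * y1

    rng = np.random.default_rng(0)
    counts = sample_mixed_poisson(gamma_mixing, 10_000, rng)
    ```

    With this mixing law the first coordinate is ordinary Poisson(0.5), while the second is gamma-mixed and hence overdispersed.
    
    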

    Reduction of Markov chains with two-time-scale state transitions

    In this paper, we consider a general class of two-time-scale Markov chains whose transition rate matrix depends on a parameter λ > 0. We assume that some transition rates of the Markov chain tend to infinity as λ → ∞. We divide the state space of the Markov chain X into a fast state space and a slow state space and define a reduced chain Y on the slow state space. Our main result is that the distribution of the original chain X converges in total variation distance to that of the reduced chain Y, uniformly in time t, as λ → ∞. Comment: 30 pages, 3 figures; Stochastics: An International Journal of Probability and Stochastic Processes, 201
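    The reduction can be illustrated numerically on a toy chain. This is a sketch under assumptions of my own, not the paper's construction: a 3-state chain where states 0 and 1 are exchanged at fast rates proportional to λ, and the reduced chain replaces them with a single slow state leaking at the rate averaged over the fast equilibrium.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Illustrative parameters (not from the paper)
    a = b = 1.0          # fast exchange rates 0 <-> 1, multiplied by lam
    c, d = 0.5, 2.0      # slow escape rates 0 -> 2 and 1 -> 2

    def lumped_dist(lam, t):
        """Distribution of the full chain X at time t, started in state 0,
        lumped over the fast states {0, 1}."""
        Q = np.array([
            [-(lam * a + c), lam * a,        c  ],
            [lam * b,        -(lam * b + d), d  ],
            [0.0,            0.0,            0.0],
        ])
        p = expm(Q * t)[0]               # row 0 = distribution from state 0
        return np.array([p[0] + p[1], p[2]])

    def reduced_dist(t):
        """Reduced chain Y: fast states replaced by their equilibrium mixture
        pi = (b, a)/(a + b), giving a single state leaking at the averaged rate."""
        pi0, pi1 = b / (a + b), a / (a + b)
        r = pi0 * c + pi1 * d
        return np.array([np.exp(-r * t), 1.0 - np.exp(-r * t)])

    def tv(lam, t=1.0):
        """Total variation distance between lumped X and reduced Y at time t."""
        return 0.5 * np.abs(lumped_dist(lam, t) - reduced_dist(t)).sum()
    ```

    Increasing λ makes the total variation distance shrink, consistent with the paper's convergence result.
    
    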

    Monte Carlo Implementation of Gaussian Process Models for Bayesian Regression and Classification

    Gaussian processes are a natural way of defining prior distributions over functions of one or more input variables. In a simple nonparametric regression problem, where such a function gives the mean of a Gaussian distribution for an observed response, a Gaussian process model can easily be implemented using matrix computations that are feasible for datasets of up to about a thousand cases. Hyperparameters that define the covariance function of the Gaussian process can be sampled using Markov chain methods. Regression models where the noise has a t distribution, and logistic or probit models for classification applications, can be implemented by also sampling the latent values underlying the observations. Software is now available that implements these methods using covariance functions with hierarchical parameterizations. Models defined in this way can discover high-level properties of the data, such as which inputs are relevant to predicting the response.
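    The matrix computations mentioned above can be sketched for the simple regression case. This is a minimal illustration with the covariance hyperparameters fixed by hand; in the paper's setting they would instead be sampled by Markov chain methods, and the toy data here is an assumption of the example.

    ```python
    import numpy as np

    def rbf_kernel(x1, x2, scale=1.0, length=1.0):
        """Squared-exponential covariance between two sets of 1-d inputs."""
        diff = x1[:, None] - x2[None, :]
        return scale**2 * np.exp(-0.5 * (diff / length) ** 2)

    # Toy data: noisy observations of sin(x)
    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 5.0, 20)
    y = np.sin(x) + 0.1 * rng.standard_normal(20)

    noise = 0.1
    K = rbf_kernel(x, x) + noise**2 * np.eye(len(x))
    L = np.linalg.cholesky(K)                  # O(n^3): feasible up to ~1000 cases
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

    x_star = np.linspace(0.0, 5.0, 100)        # prediction grid
    K_star = rbf_kernel(x_star, x)
    mean = K_star @ alpha                      # posterior mean of the function
    v = np.linalg.solve(L, K_star.T)
    var = rbf_kernel(x_star, x_star).diagonal() - np.sum(v**2, axis=0)  # posterior variance
    ```

    The Cholesky factorization is what keeps the exact computation feasible at the dataset sizes the abstract mentions; beyond that, approximate methods are typically needed.
    
    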