
    Explicit computations for some Markov modulated counting processes

    In this paper we present elementary computations for some Markov modulated counting processes, also called counting processes with regime switching. Regime switching has become an increasingly popular concept in many branches of science. In finance, for instance, the background process can be identified with the 'state of the economy', to which asset prices react, or with the varying default rate of an obligor. The key feature of the counting processes in this paper is that their intensity processes are functions of a finite state Markov chain. Such processes can be used to model default events of companies. Many quantities of interest, like conditional characteristic functions, can be derived from conditional probabilities, which can, in principle, be computed analytically. We also study limit results for models with rapid switching, obtained by inflating the intensity matrix of the Markov chain by a factor tending to infinity. The paper is largely expository in nature, with a didactic flavor.
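
    As an illustration of the setup described in this abstract (not the paper's own computations), the sketch below simulates a Markov modulated Poisson process: a finite-state background chain with generator Q, and a counting process whose rate lam[x] depends on the current state x. All parameter values (Q, lam, T) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper): a 2-state background chain.
Q = np.array([[-1.0, 1.0],        # generator (intensity matrix) of the Markov chain
              [ 2.0, -2.0]])
lam = np.array([0.5, 3.0])        # counting-process intensity in each state
T = 10.0                          # time horizon

def simulate_mmpp(Q, lam, T, x0=0):
    """Simulate a Markov modulated Poisson process on [0, T].

    Returns the background chain's jump times with their states,
    and the event times of the modulated counting process.
    """
    t, x = 0.0, x0
    chain, events = [(0.0, x0)], []
    while t < T:
        rate_out = -Q[x, x]                       # sojourn rate of the current state
        t_next = min(t + rng.exponential(1.0 / rate_out), T)
        # Events arrive at constant rate lam[x] while the chain sits in state x.
        n = rng.poisson(lam[x] * (t_next - t))
        events.extend(np.sort(rng.uniform(t, t_next, n)))
        t = t_next
        if t < T:
            probs = Q[x].copy(); probs[x] = 0.0; probs /= rate_out
            x = int(rng.choice(len(lam), p=probs))  # jump to a new state
            chain.append((t, x))
    return chain, np.array(events)

chain, events = simulate_mmpp(Q, lam, T)
print(len(events), "events;", len(chain), "chain segments")
```

    Passing c*Q for a large constant c reproduces the rapid-switching regime mentioned above, in which the intensity matrix is inflated by a factor tending to infinity.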

    Rescaling, thinning or complementing? On goodness-of-fit procedures for point process models and Generalized Linear Models

    Generalized Linear Models (GLMs) are an increasingly popular framework for modeling neural spike trains. They have been linked to the theory of stochastic point processes, and researchers have used this relation to assess goodness-of-fit with methods from point-process theory, e.g. the time-rescaling theorem. However, high neural firing rates or coarse discretization lead to a breakdown of the assumptions necessary for this connection. Here, we show how goodness-of-fit tests from point-process theory can still be applied to GLMs by constructing equivalent surrogate point processes out of time-series observations. Furthermore, two additional tests based on thinning and complementing point processes are introduced. They augment the instruments available for checking the adequacy of point-process models as well as discretized models. Comment: 9 pages, to appear in NIPS 2010 (Neural Information Processing Systems), corrected missing reference.
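
    A minimal sketch of the classical time-rescaling check referred to above (not the surrogate-process constructions introduced in the paper): given a fitted conditional intensity evaluated on a fine grid and the observed spike times, the rescaled interspike intervals should be Exp(1) under a correct model, so 1 - exp(-tau) should be uniform, which a KS test can assess. The grid, intensity, and spike arrays below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import kstest

def time_rescaling_ks(spike_times, t_grid, intensity):
    """Kolmogorov-Smirnov goodness-of-fit via the time-rescaling theorem.

    spike_times : observed event times (sorted)
    t_grid      : fine time grid on which the fitted intensity is evaluated
    intensity   : fitted conditional intensity lambda(t) on t_grid
    """
    # Integrated intensity Lambda(t) = integral of lambda up to t (trapezoid rule).
    Lam = np.concatenate(([0.0],
                          np.cumsum(np.diff(t_grid) * 0.5 * (intensity[1:] + intensity[:-1]))))
    Lam_at_spikes = np.interp(spike_times, t_grid, Lam)
    taus = np.diff(Lam_at_spikes)        # rescaled interspike intervals, ~ Exp(1) under H0
    u = 1.0 - np.exp(-taus)              # ~ Uniform(0, 1) under H0
    return kstest(u, "uniform")

# Illustrative use: a constant-rate (5 Hz) model checked against simulated spikes.
rng = np.random.default_rng(1)
t_grid = np.linspace(0.0, 100.0, 10_001)
intensity = np.full_like(t_grid, 5.0)
spikes = np.sort(rng.uniform(0.0, 100.0, 500))   # roughly a rate-5 Poisson train on [0, 100]
print(time_rescaling_ks(spikes, t_grid, intensity))
```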

    A Definition of Non-Stationary Bandits

    Although non-stationary bandit learning has attracted much recent attention, we have yet to identify a formal definition of non-stationarity that can consistently distinguish non-stationary bandits from stationary ones. Prior work has characterized non-stationary bandits as bandits for which the reward distribution changes over time. We demonstrate that this definition can ambiguously classify the same bandit as both stationary and non-stationary; this ambiguity arises from the existing definition's dependence on the latent sequence of reward distributions. Moreover, the definition has given rise to two widely used notions of regret: the dynamic regret and the weak regret. These notions are not indicative of qualitative agent performance in some bandits. Additionally, this definition of non-stationary bandits has led to the design of agents that explore excessively. We introduce a formal definition of non-stationary bandits that resolves these issues. Our new definition provides a unified approach, applicable seamlessly to both Bayesian and frequentist formulations of bandits. Furthermore, our definition ensures consistent classification of two bandits offering agents indistinguishable experiences, categorizing them as either both stationary or both non-stationary. This advancement provides a more robust framework for non-stationary bandit learning.
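
    For reference, the two regret notions mentioned above are commonly written as follows; these are the standard textbook formulations, not formulas quoted from the paper. Here $A_t$ denotes the action chosen at round $t$ and $r_t(a)$ the reward of action $a$ at round $t$.

```latex
% Dynamic regret: compare to the best action at every round.
\mathrm{Regret}_{\mathrm{dyn}}(T)
  \;=\; \sum_{t=1}^{T}\Big(\max_{a}\,\mathbb{E}\big[r_t(a)\big] \;-\; \mathbb{E}\big[r_t(A_t)\big]\Big)

% Weak regret: compare to the single best fixed action in hindsight.
\mathrm{Regret}_{\mathrm{weak}}(T)
  \;=\; \max_{a}\,\mathbb{E}\!\left[\sum_{t=1}^{T} r_t(a)\right]
  \;-\; \mathbb{E}\!\left[\sum_{t=1}^{T} r_t(A_t)\right]
```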

    Unsupervised empirical Bayesian multiple testing with external covariates

    In an empirical Bayesian setting, we provide a new multiple testing method, useful when an additional covariate is available that influences the probability of each null hypothesis being true. We measure the posterior significance of each test conditionally on the covariate and the data, leading to greater power. Using covariate-based prior information in an unsupervised fashion, we produce a list of significant hypotheses which differs in length and order from the list obtained by methods that do not take covariate information into account. Covariate-modulated posterior probabilities of each null hypothesis are estimated using a fast approximate algorithm. The new method is applied to expression quantitative trait loci (eQTL) data. Comment: Published at http://dx.doi.org/10.1214/08-AOAS158 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
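
    As a hedged illustration of the idea of covariate-modulated posterior null probabilities (a generic two-group model, not the paper's estimation algorithm): with a covariate-dependent prior pi0(x) and null/alternative densities f0 and f1, the posterior probability that hypothesis i is null given its statistic z_i and covariate x_i is pi0(x_i) f0(z_i) / (pi0(x_i) f0(z_i) + (1 - pi0(x_i)) f1(z_i)). The logistic link and the normal alternative below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import expit

def covariate_modulated_posterior_null(z, x, beta0=2.0, beta1=-1.0, alt_sd=3.0):
    """Posterior null probabilities under a simple two-group model.

    pi0(x) : prior probability the null is true, modulated by the covariate x
             through a logistic link (beta0, beta1 are illustrative, not estimated).
    f0, f1 : N(0, 1) null density and a wider N(0, alt_sd^2) alternative density.
    """
    pi0 = expit(beta0 + beta1 * x)           # covariate-modulated prior null probability
    f0 = norm.pdf(z, loc=0.0, scale=1.0)     # density under the null
    f1 = norm.pdf(z, loc=0.0, scale=alt_sd)  # density under the alternative
    return pi0 * f0 / (pi0 * f0 + (1.0 - pi0) * f1)

# Ranking hypotheses by this posterior yields a covariate-aware significance list.
rng = np.random.default_rng(2)
z = rng.normal(size=1000)
x = rng.uniform(0.0, 1.0, size=1000)
post_null = covariate_modulated_posterior_null(z, x)
print("hypotheses with posterior null probability < 0.2:", int((post_null < 0.2).sum()))
```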

    Non-Stationary Bandit Learning via Predictive Sampling

    Thompson sampling has proven effective across a wide range of stationary bandit environments. However, as we demonstrate in this paper, it can perform poorly when applied to non-stationary environments. We show that such failures are attributable to the fact that, when exploring, the algorithm does not differentiate actions based on how quickly the information acquired loses its usefulness due to non-stationarity. Building on this insight, we propose predictive sampling, an algorithm that deprioritizes acquiring information that quickly loses usefulness. A theoretical guarantee on the performance of predictive sampling is established through a Bayesian regret bound. We provide versions of predictive sampling for which computations tractably scale to complex bandit environments of practical interest. Through numerical simulations, we demonstrate that predictive sampling outperforms Thompson sampling in all non-stationary environments examined.
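
    For context, the sketch below implements only the standard Thompson sampling baseline for a Bernoulli bandit that the abstract compares against; it is not the paper's predictive sampling algorithm, whose sampling step additionally deprioritizes information that quickly loses usefulness. The environment probabilities, horizon, and Beta(1,1) priors are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def thompson_bernoulli(true_probs, horizon=1000):
    """Standard Thompson sampling for a stationary Bernoulli bandit (Beta(1,1) priors).

    The non-stationary failure mode discussed above arises when `true_probs` drifts
    over time while the posterior keeps aggregating stale evidence.
    """
    k = len(true_probs)
    alpha = np.ones(k)                    # Beta posterior parameters: successes + 1
    beta = np.ones(k)                     # failures + 1
    total_reward = 0.0
    for _ in range(horizon):
        theta = rng.beta(alpha, beta)     # one posterior sample per arm
        a = int(np.argmax(theta))         # play the arm with the largest sample
        r = float(rng.random() < true_probs[a])
        alpha[a] += r
        beta[a] += 1.0 - r
        total_reward += r
    return total_reward

print(thompson_bernoulli([0.3, 0.5, 0.7]))
```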

    An Exact Auxiliary Variable Gibbs Sampler for a Class of Diffusions

    Stochastic differential equations (SDEs) or diffusions are continuous-valued, continuous-time stochastic processes widely used in the applied and mathematical sciences. Simulating paths from these processes is usually an intractable problem and typically involves time-discretization approximations. We propose an exact Markov chain Monte Carlo sampling algorithm that involves no such time-discretization error. Our sampler is applicable to the problem of prior simulation from an SDE, posterior simulation conditioned on noisy observations, as well as parameter inference given noisy observations. Our work recasts an existing rejection sampling algorithm for a class of diffusions as a latent variable model, and then derives an auxiliary variable Gibbs sampling algorithm that targets the associated joint distribution. At a high level, the resulting algorithm involves two steps: simulating a random grid of times from an inhomogeneous Poisson process, and updating the SDE trajectory conditioned on this grid. Our work allows the vast collection of Monte Carlo sampling algorithms from the Gaussian process literature to be brought to bear on applications involving diffusions. We study our method on synthetic and real datasets, where we demonstrate superior performance over competing methods. Comment: 37 pages, 13 figures.
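
    A minimal sketch of the first of the two steps mentioned above, simulating a random grid of times from an inhomogeneous Poisson process, here by standard thinning against a constant dominating rate; the intensity function and its upper bound are illustrative assumptions, and the conditional update of the SDE trajectory is not shown.

```python
import numpy as np

rng = np.random.default_rng(4)

def inhomogeneous_poisson_grid(intensity, lam_max, T):
    """Sample event times of an inhomogeneous Poisson process on [0, T] by thinning.

    intensity : callable t -> lambda(t), with lambda(t) <= lam_max on [0, T]
    lam_max   : dominating constant rate used to propose candidate times
    """
    n = rng.poisson(lam_max * T)                  # candidate count from the dominating process
    candidates = np.sort(rng.uniform(0.0, T, n))
    keep = rng.random(n) < intensity(candidates) / lam_max   # accept with prob lambda(t)/lam_max
    return candidates[keep]

# Illustrative intensity: a smoothly varying rate bounded above by 5.
grid = inhomogeneous_poisson_grid(lambda t: 3.0 + 2.0 * np.sin(t), lam_max=5.0, T=10.0)
print(len(grid), "grid times")
```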

    Lower Bounds on Exponential Moments of the Quadratic Error in Parameter Estimation

    Considering the problem of risk-sensitive parameter estimation, we propose a fairly wide family of lower bounds on the exponential moments of the quadratic error, in both the Bayesian and the non-Bayesian regime. This family of bounds, which is based on a change of measures, offers considerable freedom in the choice of the reference measure, and our efforts are devoted to exploring this freedom to a certain extent. Our focus is mostly on signal models that are relevant to communication problems, namely, models of a parameter-dependent signal (modulated signal) corrupted by additive white Gaussian noise, but the proposed methodology is also applicable to other types of parametric families, such as models of linear systems driven by random input signals (white noise, in most cases), and others. In addition to the well-known motivations for the risk-sensitive cost function (i.e., the exponential quadratic cost function), most notably its robustness to model uncertainty, we also view this cost function as a tool for studying fundamental limits concerning the tail behavior of the estimation error. Another interesting aspect, which we demonstrate in a certain parametric model, is that the risk-sensitive cost function may exhibit phase transitions, owing to some analogies with statistical mechanics. Comment: 28 pages; 4 figures; submitted for publication.
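
    For concreteness, the risk-sensitive (exponential quadratic) cost referred to above is commonly written as below, with rho > 0 a risk-sensitivity parameter; this is the standard form of the criterion, not a formula quoted from the paper. The second line records the elementary Chernoff-type bound that links this exponential moment to the tail behavior of the estimation error.

```latex
% Risk-sensitive cost: exponential moment of the quadratic estimation error.
R_\rho(\hat{\theta}) \;=\; \mathbb{E}\!\left[ e^{\rho\,(\hat{\theta}-\theta)^2} \right],
\qquad \rho > 0 .

% Markov/Chernoff bound relating the exponential moment to the error tail.
\Pr\!\big( |\hat{\theta}-\theta| \ge \varepsilon \big)
  \;\le\; e^{-\rho \varepsilon^2}\, R_\rho(\hat{\theta}) .
```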