    Perturbation realization, potentials, and sensitivity analysis of Markov processes

    Abstract—Two fundamental concepts and quantities, realization factors and performance potentials, are introduced for Markov processes. The relations among these two quantities and the group inverse of the infinitesimal generator are studied. It is shown that the sensitivity of the steady-state performance with respect to changes in the infinitesimal generator can be easily calculated using any of these three quantities, and that these quantities can be estimated by analyzing a single sample path of a Markov process. Based on these results, algorithms for estimating performance sensitivities from a single sample path of a Markov process can be proposed. The potentials in this paper are defined through realization factors and are shown to be the same as those defined by Poisson equations. The results provide a uniform framework of perturbation realization for infinitesimal perturbation analysis (IPA) and non-IPA approaches to the sensitivity analysis of steady-state performance; they also provide a theoretical background for the PA algorithms developed in recent years. Index Terms—Perturbation analysis, Poisson equations, sample-path analysis
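    The sensitivity formula summarized above can be illustrated numerically. The following is a minimal sketch, not the paper's single-sample-path estimators: for a finite-state Markov process with infinitesimal generator A, steady-state distribution pi, and reward vector f, the potentials g are obtained from the Poisson equation, and the derivative of the steady-state performance eta = pi f along a generator perturbation dA with zero row sums is pi dA g. The 3-state generator, rewards, and perturbation direction below are invented for illustration.

```python
import numpy as np

def stationary(A):
    """Stationary distribution pi of a generator A: pi A = 0, pi @ 1 = 1."""
    n = A.shape[0]
    M = np.vstack([A.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(M, b, rcond=None)[0]

# Irreducible 3-state generator (rows sum to zero) and state rewards.
A = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])
f = np.array([1.0, 5.0, 2.0])

pi = stationary(A)
eta = pi @ f                                  # steady-state performance
e = np.ones_like(f)

# Potentials g from the Poisson equation A g = -(f - eta * e); the matrix
# (e pi^T - A) is nonsingular for an irreducible chain, which fixes one
# normalization of g (here pi @ g = eta).
g = np.linalg.solve(np.outer(e, pi) - A, f)

# Sensitivity of eta along a generator perturbation dA with zero row sums:
# d(eta)/d(delta) = pi @ dA @ g at delta = 0.
dA = np.array([[-1.0, 1.0, 0.0],
               [ 0.0, 0.0, 0.0],
               [ 0.0, 0.0, 0.0]])
deriv = pi @ dA @ g

# Finite-difference check of the derivative formula.
delta = 1e-6
eta_pert = stationary(A + delta * dA) @ f
print(deriv, (eta_pert - eta) / delta)
```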

    Newton based Stochastic Optimization using q-Gaussian Smoothed Functional Algorithms

    We present the first q-Gaussian smoothed functional (SF) estimator of the Hessian and the first Newton-based stochastic optimization algorithm that estimates both the Hessian and the gradient of the objective function using q-Gaussian perturbations. Our algorithm requires only two system simulations per update epoch (regardless of the parameter dimension) and estimates both the gradient and the Hessian from these. We also present a proof of convergence of the proposed algorithm. In a related recent work (Ghoshdastidar et al., 2013), we presented gradient SF algorithms based on q-Gaussian perturbations. Our work extends prior work on smoothed functional algorithms by generalizing the class of perturbation distributions, since most distributions reported in the literature for which SF algorithms are known to work turn out to be special cases of the q-Gaussian distribution. Besides studying the convergence properties of our algorithm analytically, we also show results of several numerical simulations on a model of a queueing network that illustrate the significance of the proposed method. In particular, we observe that our algorithm performs better in most cases, over a wide range of q-values, in comparison to Newton SF algorithms with Gaussian (Bhatnagar, 2007) and Cauchy perturbations, as well as the gradient q-Gaussian SF algorithms (Ghoshdastidar et al., 2013). Comment: This is a longer version of the paper with the same title accepted in Automatic
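    As a point of reference for the two-simulation structure mentioned above, here is a minimal sketch of a smoothed functional gradient step with plain Gaussian perturbations, i.e. the q -> 1 special case rather than the paper's q-Gaussian Newton algorithm; the toy objective, smoothing width, and step sizes are illustrative placeholders.

```python
import numpy as np

def sf_gradient_estimate(J, theta, beta, rng):
    """One two-simulation SF gradient estimate with a Gaussian perturbation."""
    delta = rng.standard_normal(theta.shape)
    # Two evaluations of J per update, independent of the dimension of theta.
    return (delta / beta) * (J(theta + beta * delta) - J(theta))

# Toy smooth objective standing in for a simulated queueing-network cost.
J = lambda th: float(np.sum((th - 1.0) ** 2))

rng = np.random.default_rng(0)
theta = np.zeros(5)
for k in range(5000):
    a_k = 1.0 / (k + 10)                      # diminishing step sizes
    theta -= a_k * sf_gradient_estimate(J, theta, beta=0.1, rng=rng)
print(theta)                                  # noisy iterate near the minimizer (all ones)
```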

    Efficient Sensitivity Analysis for Parametric Robust Markov Chains

    We provide a novel method for sensitivity analysis of parametric robust Markov chains. These models incorporate parameters and sets of probability distributions to alleviate the often unrealistic assumption that precise probabilities are available. We measure sensitivity in terms of partial derivatives of measures such as the expected reward with respect to the uncertain transition probabilities. As our main contribution, we present an efficient method to compute these partial derivatives. To scale our approach to models with thousands of parameters, we present an extension of this method that selects the subset of k parameters with the highest partial derivatives. Our methods are based on linear programming and on differentiating these programs around a given value of the parameters. The experiments show the applicability of our approach on models with over a million states and thousands of parameters. Moreover, we embed the results within an iterative learning scheme that profits from having access to a dedicated sensitivity analysis.
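    As a hedged illustration of the quantity being computed, without the paper's robust uncertainty sets or linear-programming machinery, the sketch below differentiates the expected reward of a plain parametric Markov chain with respect to a scalar parameter by implicit differentiation of the linear reward equations; the chain, rewards, and parameterization are invented for illustration.

```python
import numpy as np

def expected_reward_and_derivative(p):
    # Transient-to-transient block of a parametric Markov chain with one
    # absorbing state; the remaining probability mass in each row goes to
    # the absorbing state, so rows of the full chain sum to one.
    P = np.array([[0.1 + p, 0.2     ],
                  [0.3,     0.4 - p ]])
    dP = np.array([[ 1.0, 0.0],
                   [ 0.0, -1.0]])          # elementwise d/dp of the block
    r = np.array([1.0, 2.0])               # one-step rewards in the transient states
    I = np.eye(2)
    # Expected accumulated reward x solves (I - P) x = r; differentiating
    # this identity in p gives (I - P) dx = dP @ x.
    x = np.linalg.solve(I - P, r)
    dx = np.linalg.solve(I - P, dP @ x)    # partial derivative w.r.t. p
    return x, dx

x, dx = expected_reward_and_derivative(0.2)
h = 1e-6
x_h, _ = expected_reward_and_derivative(0.2 + h)
print(dx, (x_h - x) / h)                   # finite-difference check
```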

    A comprehensive literature classification of simulation optimisation methods

    Simulation Optimization (SO) provides a structured approach to system design and configuration when analytical expressions for input/output relationships are unavailable. Several excellent surveys have been written on this topic, but each concentrates on only a few classification criteria. This paper presents a literature survey that applies the full set of classification criteria, organising SO techniques according to problem characteristics such as the shape of the response surface (global versus local optimization), the objective function (single or multiple objectives), and the parameter space (discrete or continuous parameters). The survey focuses specifically on SO problems that involve a single performance measure. Keywords: Simulation Optimization, classification methods, literature survey

    Exact likelihood computation for nonlinear DSGE models with heteroskedastic innovations

    Phenomena such as the Great Moderation have increased the attention of macroeconomists towards models in which shock processes are not (log-)normal. This paper studies a class of discrete-time rational expectations models where the variance of exogenous innovations is subject to stochastic regime shifts. We first show that, up to a second-order approximation using perturbation methods, regime switching in the variances has an impact only on the intercept coefficients of the decision rules. We then demonstrate how to derive the exact model likelihood for the second-order approximation of the solution when there are as many shocks as observable variables. We illustrate the applicability of the proposed solution and estimation methods in the case of a small DSGE model. JEL Classification: E0, C63. Keywords: DSGE models, regime switching, second-order approximation, time-varying volatility
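    As a hedged illustration of the intercept result, using notation from the standard perturbation literature rather than the paper itself, a second-order accurate decision rule for the controls y_t under a variance regime s_t can be written as

```latex
% Sketch: second-order decision rule when the innovation variance depends on
% a regime s_t; notation is generic and not taken verbatim from the paper.
y_t \approx \bar{y}
      + g_x\,\hat{x}_{t-1}
      + g_u\,u_t
      + \tfrac{1}{2}\,g_{xx}\,(\hat{x}_{t-1}\otimes\hat{x}_{t-1})
      + g_{xu}\,(\hat{x}_{t-1}\otimes u_t)
      + \tfrac{1}{2}\,g_{uu}\,(u_t\otimes u_t)
      + \tfrac{1}{2}\,g_{\sigma\sigma}\,\sigma^{2}(s_t)
```

    Only the last term, the uncertainty correction involving sigma^2(s_t), depends on the variance regime; the slope and curvature coefficients are regime-independent, which is the sense in which regime switching affects only the intercept coefficients of the decision rules.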