
    Bayesian Nonstationary Spatial Modeling for Very Large Datasets

    With the proliferation of modern high-resolution measuring instruments mounted on satellites, planes, ground-based vehicles and monitoring stations, a need has arisen for statistical methods suitable for the analysis of large spatial datasets observed on large spatial domains. Statistical analyses of such datasets pose two main challenges: First, traditional spatial-statistical techniques are often unable to handle large numbers of observations in a computationally feasible way. Second, for large and heterogeneous spatial domains, it is often not appropriate to assume that a process of interest is stationary over the entire domain. We address the first challenge by using a model combining a low-rank component, which allows for flexible modeling of medium-to-long-range dependence via a set of spatial basis functions, with a tapered remainder component, which allows for modeling of local dependence using a compactly supported covariance function. Addressing the second challenge, we propose two extensions to this model that result in increased flexibility: First, the model is parameterized based on a nonstationary Matérn covariance, whose parameters vary smoothly across space. Second, in our fully Bayesian model, all components and parameters are considered random, including the number, locations, and shapes of the basis functions used in the low-rank component. Using simulated data and a real-world dataset of high-resolution soil measurements, we show that both extensions can result in substantial improvements over the current state-of-the-art. (Comment: 16 pages, 2 color figures)
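
    As a rough illustration of the "low-rank plus tapered remainder" covariance structure described above, the sketch below builds a covariance matrix from a handful of Gaussian-bump basis functions plus a compactly supported (tapered) short-range Matérn component. The basis functions, smoothness and taper range are assumptions for illustration, not the paper's specification.

```python
import numpy as np

def matern_32(d, range_):
    """Matérn covariance with smoothness 3/2 (an assumed, convenient choice)."""
    a = np.sqrt(3.0) * d / range_
    return (1.0 + a) * np.exp(-a)

def wendland_taper(d, taper_range):
    """Compactly supported Wendland-type taper; exactly zero beyond taper_range."""
    r = np.clip(d / taper_range, 0.0, 1.0)
    return (1.0 - r) ** 4 * (4.0 * r + 1.0)

# 1-D locations and a few Gaussian-bump basis functions for the low-rank part
s = np.linspace(0.0, 10.0, 200)                         # observation locations
knots = np.linspace(1.0, 9.0, 5)                        # basis-function centres
S = np.exp(-0.5 * ((s[:, None] - knots[None, :]) / 1.5) ** 2)  # n x r basis matrix
K = np.eye(len(knots))                                  # covariance of the basis weights

# tapered remainder: short-range Matérn multiplied elementwise by the taper
D = np.abs(s[:, None] - s[None, :])
remainder = matern_32(D, range_=0.5) * wendland_taper(D, taper_range=1.0)

Sigma = S @ K @ S.T + remainder                         # low-rank + tapered covariance
print(Sigma.shape, bool(np.all(np.linalg.eigvalsh(Sigma) > -1e-8)))
```

    The Schur product of the short-range Matérn with the taper keeps the remainder sparse and positive definite, which is what makes the local component cheap to work with for large n.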

    A two level feedback system design to provide regulation reserve

    Demand side management has gained increasing importance as the penetration of renewable energy grows. Based on a Markov jump process model of a group of thermostatic loads, this paper proposes a two-level feedback system design between the independent system operator (ISO) and the regulation service provider such that two objectives are achieved: (1) the ISO can optimally dispatch regulation signals to multiple providers in real time in order to reduce the requirement for expensive spinning reserves, and (2) each regulation provider can control its thermostatic loads to respond to the ISO signal. It is also shown that the amount of regulation service that can be provided is implicitly restricted by a few fundamental parameters of the provider itself, such as the allowable set-point choice and its thermal constant. An interesting finding is that the provider's ability to deliver a large amount of long-term accumulated regulation and its ability to track short-term signals restrict each other. Simulation results are presented to verify and illustrate the performance of the proposed framework.
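
    The aggregate behaviour at the heart of this kind of scheme can be illustrated with a crude simulation of a population of thermostatically controlled loads. This is a minimal sketch, not the paper's Markov jump process model; all numbers (ambient temperature, thermal constants, set point, deadband) are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000                      # number of thermostatic (air-conditioning) loads
steps, dt = 600, 1.0 / 60.0   # one-minute steps over ten hours
theta_a = 32.0                # ambient temperature (deg C)
R, C, P = 2.0, 10.0, 14.0     # thermal resistance (C/kW), capacitance (kWh/C), rated power (kW)
setpoint, deadband = 20.0, 1.0

temp = rng.uniform(setpoint - deadband / 2, setpoint + deadband / 2, N)
on = rng.random(N) < 0.5
agg_power = np.zeros(steps)

for t in range(steps):
    # first-order thermal dynamics: drift toward ambient, cooled while ON
    temp += dt / (R * C) * (theta_a - temp - on * R * P)
    # hysteretic thermostat switching at the deadband edges (the jump behaviour)
    on = np.where(temp > setpoint + deadband / 2, True,
                  np.where(temp < setpoint - deadband / 2, False, on))
    agg_power[t] = on.sum() * P

print("mean aggregate power (kW):", round(agg_power.mean(), 1))
```

    Widening the deadband or shifting the set point changes how much the aggregate power can be moved up or down, which is the intuition behind the paper's observation that set-point choice and thermal constants bound the available regulation capacity.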

    Probabilistic Forecasts of Volatility and its Risk Premia

    The object of this paper is to produce distributional forecasts of physical volatility and its associated risk premia using a non-Gaussian, non-linear state space approach. Option and spot market information on the unobserved variance process is captured by using dual 'model-free' variance measures to define a bivariate observation equation in the state space model. The premium for diffusive variance risk is defined as linear in the latent variance (in the usual fashion) whilst the premium for jump variance risk is specified as a conditionally deterministic dynamic process, driven by a function of past measurements. The inferential approach adopted is Bayesian, implemented via a Markov chain Monte Carlo algorithm that caters for the multiple sources of non-linearity in the model and the bivariate measure. The method is applied to empirical spot and option price data for the S&P500 index over the 1999 to 2008 period, with conclusions drawn about investors' required compensation for variance risk during the recent financial turmoil. The accuracy of the probabilistic forecasts of the observable variance measures is demonstrated, and compared with that of forecasts yielded by more standard time series models. To illustrate the benefits of the approach, the posterior distribution is augmented by information on daily returns to produce Value at Risk predictions, as well as being used to yield forecasts of the prices of derivatives on volatility itself. Linking the variance risk premia to the risk aversion parameter in a representative agent model, probabilistic forecasts of relative risk aversion are also produced.
    Keywords: Volatility Forecasting; Non-linear State Space Models; Non-parametric Variance Measures; Bayesian Markov Chain Monte Carlo; VIX Futures; Risk Aversion.
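
    As a rough illustration of the bivariate observation structure (one latent variance process observed through a spot-based measure and an option-implied measure that embeds a premium), here is a minimal simulation sketch. The dynamics, noise scales and premium value are assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1000
kappa, theta, sigma_v = 0.05, 0.02, 0.02  # assumed mean reversion, long-run variance, vol-of-var
lam = 0.3                                  # assumed (linear) variance risk premium

v = np.empty(T)        # latent "physical" variance
v[0] = theta
for t in range(1, T):
    # square-root-type dynamics, truncated at a small floor to stay positive
    v[t] = max(v[t - 1] + kappa * (theta - v[t - 1])
               + sigma_v * np.sqrt(v[t - 1]) * rng.normal(), 1e-6)

# two noisy "model-free" measures of the same latent variance
rv  = v * np.exp(0.20 * rng.normal(size=T))              # spot-based, realized-variance-type
ivx = (1 + lam) * v * np.exp(0.10 * rng.normal(size=T))  # option-implied, carries the premium

print("average implied/realized ratio:", round(float((ivx / rv).mean()), 2))
```

    The persistent gap between the two measures is what identifies the risk premium; the paper's MCMC scheme infers the latent variance and the premium dynamics jointly from both measurement series.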

    A multi-resolution, non-parametric, Bayesian framework for identification of spatially-varying model parameters

    This paper proposes a hierarchical, multi-resolution framework for the identification of model parameters and their spatial variability from noisy measurements of the response or output. Such parameters are frequently encountered in PDE-based models and correspond to quantities such as density or pressure fields, elasto-plastic moduli and internal variables in solid mechanics, conductivity fields in heat diffusion problems, and permeability fields in fluid flow through porous media. The proposed model has all the advantages of traditional Bayesian formulations, such as the ability to produce measures of confidence for the inferences made and to provide not only predictive estimates but also quantitative measures of the predictive uncertainty. In contrast to existing approaches, it utilizes a parsimonious, non-parametric formulation that favors sparse representations and whose complexity can be determined from the data. The proposed framework is non-intrusive and makes use of a sequence of forward solvers operating at various resolutions. As a result, inexpensive, coarse solvers are used to identify the most salient features of the unknown field(s), which are subsequently enriched by invoking solvers operating at finer resolutions. This leads to significant computational savings, particularly in problems involving computationally demanding forward models, as well as to improvements in accuracy. The method is built on a novel, adaptive Sequential Monte Carlo sampling scheme that is embarrassingly parallelizable and circumvents the slow-mixing issues encountered in Markov chain Monte Carlo schemes.
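
    The coarse-to-fine idea can be sketched with a toy Sequential Monte Carlo pass: particles drawn from the prior are reweighted first under a cheap, low-resolution forward model and then under a finer one. The forward models, observation and noise level below are invented for illustration; this is not the paper's adaptive scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

def forward_coarse(theta):        # cheap, low-resolution solver (assumed)
    return theta ** 2

def forward_fine(theta):          # expensive, high-resolution solver (assumed)
    return theta ** 2 + 0.1 * np.sin(5.0 * theta)

y_obs, noise_sd = 4.1, 0.2        # synthetic observation and its noise level

n = 5000
particles = rng.uniform(0.0, 3.0, n)          # prior samples of the unknown parameter
w = np.ones(n) / n

for solver in (forward_coarse, forward_fine): # coarse-to-fine reweighting
    loglik = -0.5 * ((y_obs - solver(particles)) / noise_sd) ** 2
    w = w * np.exp(loglik - loglik.max())
    w /= w.sum()
    # multinomial resampling keeps particles in high-likelihood regions
    idx = rng.choice(n, size=n, p=w)
    particles, w = particles[idx], np.ones(n) / n

print("posterior mean estimate:", round(float(particles.mean()), 3))
```

    The coarse pass discards most of the prior mass before the expensive solver is ever called, which is the source of the computational savings the abstract refers to; each reweighting is an independent evaluation per particle, hence embarrassingly parallel.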

    Parameter estimation for stochastic hybrid model applied to urban traffic flow estimation

    This study proposes a novel data-based approach for estimating the parameters of a stochastic hybrid model describing the traffic flow in an urban traffic network with signalized intersections. The model represents the evolution of the traffic flow rate, i.e. the number of vehicles passing a given location per time unit. This flow rate is described using a mode-dependent first-order autoregressive (AR) stochastic process. The parameters of the AR process take different values depending on the mode of traffic operation (free flowing, congested or faulty), making this a hybrid stochastic process. Mode switching occurs according to a first-order Markov chain. The study proposes an expectation-maximization (EM) technique for estimating the transition matrix of this Markovian mode process and the parameters of the AR models for each mode. The technique is applied to actual traffic flow data from the city of Jakarta, Indonesia. The resulting model is validated using smoothed inference algorithms and an online particle filter. The authors also develop an EM parameter estimation that, in combination with a time-window shift technique, can be useful and practical for periodically updating the parameters of the hybrid model, leading to an adaptive traffic flow state estimator.
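
    A minimal simulation of the model class described above (a first-order Markov chain over modes, with a mode-dependent AR(1) flow rate) might look as follows; the transition matrix, AR coefficients and flow levels are illustrative assumptions, not values estimated from the Jakarta data.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 500

# transition matrix over modes: 0 = free flowing, 1 = congested, 2 = faulty
P = np.array([[0.95, 0.04, 0.01],
              [0.05, 0.94, 0.01],
              [0.10, 0.10, 0.80]])
phi   = np.array([0.6, 0.9, 0.0])    # AR coefficient per mode
mu    = np.array([40.0, 10.0, 0.0])  # mean flow rate per mode (vehicles per time unit)
sigma = np.array([3.0, 2.0, 0.5])    # noise scale per mode

mode = np.zeros(T, dtype=int)
flow = np.zeros(T)
flow[0] = mu[0]
for t in range(1, T):
    mode[t] = rng.choice(3, p=P[mode[t - 1]])            # first-order Markov mode switching
    m = mode[t]
    flow[t] = mu[m] + phi[m] * (flow[t - 1] - mu[m]) + sigma[m] * rng.normal()

print("fraction of time per mode:", np.bincount(mode, minlength=3) / T)
```

    In the paper the mode sequence is unobserved; the EM procedure alternates between inferring the mode probabilities from the flow measurements and re-estimating the transition matrix and the per-mode AR parameters.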

    Learning and Designing Stochastic Processes from Logical Constraints

    Stochastic processes offer a flexible mathematical formalism to model and reason about systems. Most analysis tools, however, start from the premise that models are fully specified, so that any parameters controlling the system's dynamics must be known exactly. As this is seldom the case, many methods have been devised over the last decade to infer (learn) such parameters from observations of the state of the system. In this paper, we depart from this approach by assuming that our observations are qualitative properties encoded as the satisfaction of linear temporal logic formulae, as opposed to quantitative observations of the state of the system. An important feature of this approach is that it naturally unifies the system identification and system design problems, where the properties, instead of being observations, represent requirements to be satisfied. We develop a principled statistical estimation procedure based on maximising the likelihood of the system's parameters, using recent ideas from statistical machine learning. We demonstrate the efficacy and broad applicability of our method on a range of simple but non-trivial examples, including rumour spreading in social networks and hybrid models of gene regulation.
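
    To make the idea of learning from qualitative observations concrete, here is a toy sketch: each observation records only whether a simulated rumour reached half of the population by a deadline, and a spreading-rate parameter is chosen by maximising the Bernoulli likelihood of those true/false labels, with the satisfaction probability estimated by simulation. The rumour model and the brute-force grid search are assumptions for illustration; the paper's smoothed maximum-likelihood machinery is more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(4)
N, deadline = 50, 30                 # population size and time horizon (assumptions)

def property_satisfied(rate):
    """Simulate a rumour spread; True if at least N/2 people are informed by the deadline."""
    informed = 1
    for _ in range(deadline):
        new = rng.binomial(N - informed, 1.0 - np.exp(-rate * informed / N))
        informed += new
    return informed >= N // 2

true_rate = 0.25
observations = np.array([property_satisfied(true_rate) for _ in range(200)])  # qualitative data

grid = np.linspace(0.05, 0.6, 12)
loglik = []
for r in grid:
    sims = np.array([property_satisfied(r) for _ in range(300)])
    p = np.clip(sims.mean(), 1e-3, 1 - 1e-3)   # Monte Carlo estimate of satisfaction probability
    loglik.append(observations.sum() * np.log(p) + (~observations).sum() * np.log(1 - p))

print("estimated spreading rate:", round(float(grid[int(np.argmax(loglik))]), 2))
```

    Read in the design direction, the same machinery answers a different question: instead of fitting the rate to observed labels, one searches for the rate that makes the required property hold with high probability.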