
    Modeling and Forecasting of Realized Covariance Matrices of Asset Returns using State-Space Models

    This thesis comprises three self-contained essays on the modeling and prediction of realized covariance matrices of asset returns using state-space models.

    Variational methods in simultaneous optimum interpolation and initialization

    The duality between optimum interpolation and variational objective analysis is reviewed. This duality is used to set up a variational approach to objective analysis which uses prior information concerning the atmospheric spectral energy distribution in the variational problem. In the wind analysis example, the wind field is partitioned into divergent and nondivergent parts, and a control parameter governing the relative energy in the two parts is estimated from the observational data being analyzed by generalized cross validation, along with a bandwidth parameter. A variational approach to combining objective analysis and initialization in a single step is proposed. In a simple example of this approach, data, forecast, and prior information concerning the atmospheric energy distribution are combined into a single variational problem. This problem has (at least) one bandwidth parameter, one partitioning parameter governing the relative energy in fast and slow modes, and one parameter governing the relative weight to be given to observational and forecast data.
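
    A schematic form of the single-step variational problem described above (the notation is illustrative, not taken from the thesis) combines an observation misfit, a forecast misfit, and a spectral-energy prior:

    J(x) = (y - Hx)^T R^{-1} (y - Hx) + \gamma \, (x - x_f)^T B^{-1} (x - x_f) + \lambda \, \| P_\theta \, x \|^2 ,

    where y denotes the observations with observation operator H, x_f the forecast, \gamma the relative weight given to observational versus forecast data, \theta the partitioning parameter governing the relative energy in fast and slow modes, and \lambda the bandwidth parameter; (\gamma, \theta, \lambda) would be estimated from the data by generalized cross validation.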

    Point spread function approximation of high rank Hessians with locally supported non-negative integral kernels

    We present an efficient matrix-free point spread function (PSF) method for approximating operators that have locally supported non-negative integral kernels. The method computes impulse responses of the operator at scattered points, and interpolates these impulse responses to approximate integral kernel entries. Impulse responses are computed by applying the operator to Dirac comb batches of point sources, which are chosen by solving an ellipsoid packing problem. Evaluation of kernel entries allows us to construct a hierarchical matrix (H-matrix) approximation of the operator. Further matrix computations are performed with H-matrix methods. We use the method to build preconditioners for the Hessian operator in two inverse problems governed by partial differential equations (PDEs): inversion for the basal friction coefficient in an ice sheet flow problem and for the initial condition in an advective-diffusive transport problem. While for many ill-posed inverse problems the Hessian of the data misfit term exhibits a low-rank structure, and hence a low-rank approximation is suitable, for many problems of practical interest the numerical rank of the Hessian is still large. But Hessian impulse responses typically become more local as the numerical rank increases, which benefits the PSF method. Numerical results reveal that the PSF preconditioner clusters the spectrum of the preconditioned Hessian near one, yielding roughly 5x-10x reductions in the required number of PDE solves, as compared to regularization preconditioning and no preconditioning. We also present a numerical study of the influence of various parameters (that control the shape of the impulse responses) on the effectiveness of the advection-diffusion Hessian approximation. The results show that the PSF-based preconditioners are able to form good approximations of high-rank Hessians using a small number of operator applications.
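
    A toy sketch of the impulse-response step (not the authors' implementation; the blur operator, grid size, window half-width, and source spacing below are illustrative assumptions): the operator is applied once to a Dirac comb of well-separated point sources, and each impulse response is then read off from a local window around its source, which is justified when the kernel is locally supported.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def apply_operator(x):
        # Stand-in for a matrix-free operator whose integral kernel is
        # non-negative and locally supported (here: a Gaussian blur on a 2-D grid).
        return gaussian_filter(x, sigma=2.0)

    n = 128               # grid size (assumption)
    half = 12             # half-width of the local kernel support (assumption)
    spacing = 4 * half    # source separation, large enough that responses do not overlap

    # One batch of point sources (a Dirac comb) on a coarse sub-lattice.
    comb = np.zeros((n, n))
    sources = [(i, j) for i in range(half, n - half, spacing)
                      for j in range(half, n - half, spacing)]
    for i, j in sources:
        comb[i, j] = 1.0

    # A single operator application yields all impulse responses in the batch.
    response = apply_operator(comb)

    # Read each impulse response off a local window around its source; these
    # kernel samples would then be interpolated to approximate arbitrary kernel
    # entries and assembled into a hierarchical (H-) matrix.
    impulse_responses = {
        (i, j): response[i - half:i + half + 1, j - half:j + half + 1]
        for i, j in sources
    }

    In the PDE-constrained settings of the abstract, apply_operator would instead be the Hessian-vector product, so each batch costs a small, fixed number of linearized PDE solves regardless of how many impulse responses it contains.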

    Convergence and Error Propagation Results on a Linear Iterative Unfolding Method

    Unfolding problems often arise in the context of statistical data analysis. Such problems occur when the probability distribution of a physical quantity is to be measured, but it is randomized (smeared) by some well understood process, such as a non-ideal detector response or a well described physical phenomenon. In such a case the original probability distribution of interest is said to be folded by a known response function. The reconstruction of the original probability distribution from the measured one is called unfolding. Technically, this involves evaluating the unbounded inverse of an integral operator over the space of L^1 functions, which is known to be an ill-posed problem. For the pertinent regularized operator inversion, we propose a linear iterative formula and provide a proof of convergence in a probability theory context. Furthermore, we provide formulae for error estimates at a finite iteration stopping order, which are of utmost importance in practical applications: the approximation error, the propagated statistical error, and the propagated systematic error can be quantified. The arguments are based on the Riesz-Thorin theorem mapping the original L^1 problem to L^2 space, and subsequent application of ordinary L^2 spectral theory of operators. A library implementation of the algorithm in C, along with the corresponding error propagation, is also provided. A numerical example illustrates the method in operation. Comment: 27 pages, 1 figure.
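
    The abstract does not state the iteration itself; a generic linear scheme of this type (a Neumann-series, equivalently Richardson-style, iteration, given here purely as an illustration) approximates f = A^{-1} g, where A is the known folding (response) operator and g the measured distribution, by

    u_0 = g, \qquad u_{k+1} = u_k + (g - A u_k) = \sum_{n=0}^{k+1} (I - A)^n g ,

    which converges to A^{-1} g whenever \| I - A \| < 1. At a finite stopping order N, the deviation u_N - f splits into a truncation term (I - A)^{N+1} f plus the statistical and systematic errors of g propagated through the linear partial-sum operator, mirroring the three error contributions listed above.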

    A Geometric Variational Approach to Bayesian Inference

    We propose a novel Riemannian geometric framework for variational inference in Bayesian models based on the nonparametric Fisher-Rao metric on the manifold of probability density functions. Under the square-root density representation, the manifold can be identified with the positive orthant of the unit hypersphere in L2, and the Fisher-Rao metric reduces to the standard L2 metric. Exploiting this Riemannian structure, we formulate the task of approximating the posterior distribution as a variational problem on the hypersphere based on the alpha-divergence. Compared to approaches based on the Kullback-Leibler divergence, this provides a tighter lower bound on the marginal distribution, together with a corresponding upper bound that is unavailable with those approaches. We propose a novel gradient-based algorithm for the variational problem based on Fréchet derivative operators motivated by the geometry of the Hilbert sphere, and examine its properties. Through simulations and real-data applications, we demonstrate the utility of the proposed geometric framework and algorithm on several Bayesian models.
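
    The geometric setup can be written compactly (these are standard facts about the square-root representation; constant factors in the metric depend on convention): for a density p, set \psi = \sqrt{p}, so that

    \| \psi \|_{L^2}^2 = \int \psi^2 \, dx = 1 ,

    i.e. \psi lies on the positive orthant of the unit sphere in L^2. Under this map the Fisher-Rao metric pulls back to (a multiple of) the L^2 metric, and the geodesic distance between two densities is, up to a constant, the arc length \cos^{-1} \langle \sqrt{p_1}, \sqrt{p_2} \rangle_{L^2}. The variational problem described above is then posed over such points on the hypersphere using the alpha-divergence.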