117 research outputs found

    Matrix product representation and synthesis for random vectors: Insight from statistical physics

    Inspired by modern out-of-equilibrium statistical physics models, a matrix product based framework permits the formal definition of random vectors (and random time series) whose desired joint distributions are prescribed a priori. Its key feature is that it preserves the writing of the joint distribution as the simple product structure it has under independence, while inputting controlled dependencies amongst components: this is obtained by replacing the product of distributions by a product of matrices of distributions. The statistical properties stemming from this construction are studied theoretically: the landscape of the attainable dependence structures is thoroughly depicted, and a stationarity condition for time series is notably obtained. Remapping this framework onto that of Hidden Markov Models enables an efficient and accurate practical synthesis procedure. A design procedure is also described, permitting the tuning of model parameters to attain targeted properties. Well-chosen pedagogical examples of time series and multivariate vectors illustrate the power and versatility of the proposed approach and show how targeted statistical properties can actually be prescribed.
    Comment: 10 pages, 4 figures, submitted to IEEE Transactions on Signal Processing
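    The core idea can be sketched concretely: replace each scalar factor p(x_i) of an independent joint distribution by a small matrix A(x_i), and define the joint probability through a trace of the matrix product. A minimal NumPy illustration for three binary components follows; the specific matrices are hypothetical choices for demonstration, not taken from the paper:

```python
import numpy as np

# Matrix product construction for a vector of binary variables:
# P(x_1, ..., x_n) is proportional to tr(A(x_1) A(x_2) ... A(x_n)).
# With 1x1 "matrices" A(x) = p(x), this reduces to the usual
# independent product of marginals.
A = {0: np.array([[0.6, 0.1], [0.1, 0.2]]),   # hypothetical matrices of
     1: np.array([[0.2, 0.1], [0.1, 0.6]])}   # (unnormalized) probabilities

def weight(xs, A):
    """Unnormalized matrix-product weight of one configuration."""
    M = np.eye(len(A[0]))
    for x in xs:
        M = M @ A[x]
    return np.trace(M)

configs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
weights = np.array([weight(cfg, A) for cfg in configs])
probs = weights / weights.sum()               # normalized joint distribution

# The components are dependent even though the joint keeps a
# product structure: P(x1=0, x2=0) differs from P(x1=0) * P(x2=0).
p0 = sum(p for p, cfg in zip(probs, configs) if cfg[0] == 0)
p00 = sum(p for p, cfg in zip(probs, configs) if cfg[:2] == (0, 0))
```

    With diagonal-dominant matrices like these, neighbouring components become positively correlated, which is exactly the controlled dependence the construction is designed to input.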

    Type I and Type II Fractional Brownian Motions: a Reconsideration

    The so-called type I and type II fractional Brownian motions are limit distributions associated with the fractional integration model in which pre-sample shocks are either included in the lag structure, or suppressed. There can be substantial differences between the distributions of these two processes and of functionals derived from them, so that it becomes an important issue to decide which model to use as a basis for inference. Alternative methods for simulating the type I case are contrasted, and for models close to the nonstationarity boundary, truncating infinite sums is shown to result in a significant distortion of the distribution. A simple simulation method that overcomes this problem is described and implemented. The approach also has implications for the estimation of type I ARFIMA models, and a new conditional ML estimator is proposed, using the annual Nile minima series for illustration.
    Keywords: fractional Brownian motion, long memory, ARFIMA, simulation
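    The type II construction (pre-sample shocks suppressed) is the easier of the two to write down: the series at time t is a finite moving average of only the in-sample shocks, with the standard fractional-integration weights. A minimal sketch, not the authors' estimator or their corrected type I simulator, assuming the usual MA(∞) expansion of (1−L)^(−d):

```python
import numpy as np

def arfima_weights(d, n):
    """MA weights psi_j of (1-L)^(-d), psi_j = Gamma(j+d)/(Gamma(d)Gamma(j+1)),
    computed stably via psi_0 = 1, psi_j = psi_{j-1} * (j - 1 + d) / j."""
    psi = np.empty(n)
    psi[0] = 1.0
    for j in range(1, n):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    return psi

def type2_fi(d, n, rng):
    """Type II fractionally integrated series: pre-sample shocks are
    suppressed, so x_t = sum_{j=0}^{t} psi_j e_{t-j}."""
    e = rng.standard_normal(n)
    psi = arfima_weights(d, n)
    return np.array([psi[: t + 1] @ e[t::-1] for t in range(n)])

rng = np.random.default_rng(0)
x = type2_fi(d=0.4, n=500, rng=rng)
# Suitably rescaled partial sums of x converge to type II fBm with
# Hurst index H = d + 1/2; the type I variant would also need the
# infinite pre-sample sum, whose naive truncation the paper shows
# to distort the distribution near the nonstationarity boundary.
```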

    Smoothing Windows for the Synthesis of Gaussian Stationary Random Fields Using Circulant Matrix Embedding

    When generating Gaussian stationary random fields, a standard method based on circulant matrix embedding usually fails because some of the associated eigenvalues are negative. The eigenvalues can be shown to be nonnegative in the limit of increasing sample size; computationally feasible sample sizes, however, rarely lead to nonnegative eigenvalues. Another solution is to extend the covariance function of interest suitably, so that the eigenvalues of the embedded circulant matrix become nonnegative in theory. Though such extensions have been found for a number of examples of stationary fields, that method depends on nontrivial constructions in specific cases. In this work, the embedded circulant matrix is smoothed at the boundary by using a cutoff window or overlapping windows over a transition region. The windows are not specific to particular examples of stationary fields. The resulting method modifies the standard circulant embedding and is easy to use. It is shown that this straightforward approach works for many examples of interest, with the overlapping windows performing consistently better. The method even outperforms covariance extension in cases where the extension yields nonnegative eigenvalues in theory, in the sense that the transition region is considerably smaller. The Matlab code implementing the method is publicly available at www.hermir.org.
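    To make the failure mode concrete, here is a minimal sketch of the standard (unsmoothed) one-dimensional circulant embedding that the smoothing windows modify; the window construction itself is not reproduced. The embedding diagonalizes via the FFT, and synthesis is exact whenever all eigenvalues are nonnegative:

```python
import numpy as np

def circulant_embedding_1d(cov, n, rng):
    """Exact synthesis of a stationary Gaussian series from its
    autocovariance r(0), ..., r(n-1) by circulant matrix embedding.
    Raises when the embedding has significantly negative eigenvalues,
    which is the failure mode the smoothing windows are designed to cure."""
    r = np.array([cov(k) for k in range(n)])
    # Embed r into a symmetric circulant of size m = 2(n-1):
    # first row is r(0..n-1) followed by r(n-2..1).
    c = np.concatenate([r, r[-2:0:-1]])
    lam = np.fft.fft(c).real                 # eigenvalues of the circulant
    if lam.min() < -1e-10 * lam.max():
        raise ValueError("embedding is not nonnegative definite")
    lam = np.clip(lam, 0.0, None)            # clip tiny negative round-off
    m = len(c)
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    w = np.fft.fft(np.sqrt(lam / m) * z)
    return w.real[:n]                        # Gaussian sample with covariance r

# Example: exponential covariance, a classic case where the plain
# embedding already has nonnegative eigenvalues.
rng = np.random.default_rng(1)
x = circulant_embedding_1d(lambda k: np.exp(-0.3 * k), n=256, rng=rng)
```

    The imaginary part of w yields a second, independent sample for free; in two dimensions the same idea applies with block-circulant matrices and the 2-D FFT.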

    Studies in Multidimensional Stochastic Processes: Multivariate Long-Range Dependence and Synthesis of Gaussian Random Fields

    This thesis is concerned with the study of multidimensional stochastic processes with special dependence structures. It comprises three parts. The first two parts concern multivariate long-range dependent time series. These are stationary multivariate time series exhibiting long-range dependence in the sense that the impact of past values of the series on future ones dies out slowly with increasing lag. In contrast to the univariate case, where long-range dependent time series are well understood and applied across a number of research areas such as Economics, Finance, Computer Networks, Physics, Climate Sciences and many others, the study of multivariate long-range dependent time series has not yet matured. This thesis sets proper theoretical foundations for such series and examines their statistical inference under novel models. The third part of the thesis is concerned with two-dimensional stationary Gaussian random fields. In particular, a fast algorithm is proposed for exact synthesis of such fields based on convex optimization and is shown to outperform existing approaches.
    Doctor of Philosophy
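    The "dies out slowly with increasing lag" property can be made quantitative with the standard univariate example of fractional Gaussian noise, whose autocovariance decays as a non-summable power law for Hurst index H > 1/2. This illustration is standard background, not taken from the thesis itself:

```python
import numpy as np

def fgn_autocov(k, H):
    """Autocovariance of fractional Gaussian noise at lag k (unit variance):
    gamma(k) = 0.5 * (|k+1|^(2H) - 2|k|^(2H) + |k-1|^(2H))."""
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H)
                  + np.abs(k - 1) ** (2 * H))

H = 0.8
lags = np.arange(1, 10001)
g = fgn_autocov(lags, H)
# Long-range dependence: gamma(k) ~ H(2H-1) k^(2H-2), a power law so
# slow that sum_k gamma(k) diverges when H > 1/2.
ratio = g[-1] / (H * (2 * H - 1) * lags[-1] ** (2 * H - 2))
```

    At H = 1/2 the autocovariance vanishes at all nonzero lags (white noise); the multivariate theory developed in the thesis generalizes this scalar picture to matrix-valued autocovariances with possibly different memory parameters per component.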

    Stochastic interpolation of sparsely sampled time series by a superstatistical random process and its synthesis in Fourier and wavelet space

    We present a novel method for stochastic interpolation of sparsely sampled time signals based on a superstatistical random process generated from a multivariate Gaussian scale mixture. In comparison to other stochastic interpolation methods such as Gaussian process regression, our method possesses strong multifractal properties and is thus applicable to a broad range of real-world time series, e.g. from solar wind or atmospheric turbulence. Furthermore, we provide a sampling algorithm in terms of a mixing procedure that consists of generating a (1+1)-dimensional field u(t, ξ), where each Gaussian component u_ξ(t) is synthesized with identical underlying noise but a different covariance function C_ξ(t, s) parameterized by a log-normally distributed parameter ξ. Due to the Gaussianity of each component u_ξ(t), we can exploit standard sampling algorithms such as Fourier or wavelet methods and, most importantly, methods to constrain the process on the sparse measurement points. The scale mixture u(t) is then initialized by assigning each point in time t a ξ(t), and therefore a specific value from u(t, ξ), where the time-dependent parameter ξ(t) follows a log-normal process with a correlation time scale that is large compared to the correlation time of u(t, ξ). We juxtapose Fourier and wavelet methods and show that a multiwavelet-based hierarchical approximation of the interpolating paths, which produces a sparse covariance structure, provides an adequate method to locally interpolate large and sparse datasets.
    Comment: 25 pages, 14 figures
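    The mixing procedure described above can be sketched in simplified form: build every Gaussian component u_ξ(t) from one shared noise realization filtered at a ξ-dependent scale, draw a slowly varying log-normal ξ(t), and read the mixture off the field along that path. The Gaussian filter kernel and all parameter values below are hypothetical stand-ins for C_ξ(t, s), and the crucial constraining on measurement points is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_xi = 512, 16

# Common underlying white noise shared by every Gaussian component.
noise = rng.standard_normal(n)

# Each component u_xi(t): the same noise filtered with a xi-dependent
# unit-energy Gaussian kernel (a stand-in for C_xi(t, s)).
xis = np.exp(np.linspace(-1.0, 1.0, n_xi))     # grid of scale values
t = np.arange(n)
components = np.empty((n_xi, n))
for i, xi in enumerate(xis):
    kernel = np.exp(-0.5 * ((t - n // 2) / xi) ** 2)
    kernel /= np.sqrt((kernel ** 2).sum())
    components[i] = np.convolve(noise, kernel, mode="same")

# Slowly varying log-normal mixing parameter xi(t): smooth white noise
# over a long window (correlation time >> that of the components),
# then exponentiate.
window = np.ones(64) / 64
log_xi = np.convolve(rng.standard_normal(n), window, mode="same")
xi_t = np.exp(log_xi)

# Scale mixture: at each time t, take the component whose xi is
# nearest xi(t), i.e. u(t) = u(t, xi(t)).
idx = np.abs(np.log(xis)[None, :] - log_xi[:, None]).argmin(axis=1)
u = components[idx, np.arange(n)]
```

    Because each slice u_ξ(t) is Gaussian, it could equally be synthesized by the Fourier or wavelet methods the paper compares, and conditioned on sparse observations before mixing.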