Matrix Infinitely Divisible Series: Tail Inequalities and Applications in Optimization
In this paper, we study tail inequalities of the largest eigenvalue of a
matrix infinitely divisible (i.d.) series, which is a finite sum of fixed
matrices weighted by i.d. random variables. We obtain several types of tail
inequalities, including Bennett-type and Bernstein-type inequalities. This
allows us to further bound the expectation of the spectral norm of a matrix
i.d. series. Moreover, by developing a new lower bound for the function that
appears in the Bennett-type inequality, we derive a tighter tail inequality
for the largest eigenvalue of the matrix i.d. series than the Bernstein-type
inequality when the matrix dimension is high. The resulting lower-bound
function is of independent interest and can improve any Bennett-type
concentration inequality involving that function. The
class of i.d. probability distributions is large and includes Gaussian and
Poisson distributions, among many others. Therefore, our results encompass the
existing work \cite{tropp2012user} on matrix Gaussian series as a special case.
Lastly, we show that the tail inequalities of a matrix i.d. series have
applications in several optimization problems including the chance constrained
optimization problem and the quadratic optimization problem with orthogonality
constraints.
Comment: Comments welcome
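The Bennett-type function referred to above was lost in extraction and is not displayed in the abstract. For orientation, the classical scalar Bennett inequality, whose matrix analogue is the setting of this paper, reads as follows (the notation below is the standard one, not necessarily the authors'):

```latex
% Scalar Bennett inequality: X_1, ..., X_n independent, centered,
% |X_i| <= b almost surely, and v = sum_i E[X_i^2]. Then for t >= 0:
\[
  \mathbb{P}\Big(\sum_{i=1}^{n} X_i \ge t\Big)
  \le \exp\!\Big(-\frac{v}{b^2}\, h\big(\tfrac{bt}{v}\big)\Big),
  \qquad h(u) = (1+u)\log(1+u) - u .
\]
```

Any lower bound on $h$ yields a closed-form tail bound; for instance, the classical bound $h(u) \ge u^2/(2 + 2u/3)$ recovers the Bernstein-type inequality, which is the mechanism that a sharper lower-bound function improves upon.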
Autoregressive Kernels For Time Series
We propose in this work a new family of kernels for variable-length time
series. Our work builds upon the vector autoregressive (VAR) model for
multivariate stochastic processes: given a multivariate time series x, we
consider the likelihood function p_{\theta}(x) of different parameters \theta
in the VAR model as features to describe x. To compare two time series x and
x', we form the product of their features p_{\theta}(x) p_{\theta}(x') which is
integrated out with respect to \theta using a matrix normal-inverse-Wishart prior. Among
other properties, this kernel can be easily computed when the dimension d of
the time series is much larger than the lengths of the considered time series x
and x'. It can also be generalized to time series taking values in arbitrary
state spaces, as long as the state space itself is endowed with a kernel
\kappa. In that case, the kernel between x and x' is a function of the Gram
matrices produced by \kappa on observations and subsequences of observations
enumerated in x and x'. We describe a computationally efficient implementation
of this generalization that uses low-rank matrix factorization techniques.
These kernels are compared to other known kernels using a set of benchmark
classification tasks carried out with support vector machines.
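To make the construction concrete, here is a minimal sketch for scalar AR(1) models with Gaussian noise and a discretized uniform prior over a grid of coefficients. This simplifies the paper's setting (a VAR model with a matrix normal-inverse-Wishart prior, integrated in closed form) to a one-dimensional numerical quadrature; the function names and the coefficient grid are illustrative assumptions.

```python
import numpy as np

def ar1_loglik(x, theta, sigma=1.0):
    """Gaussian AR(1) log-likelihood of x under coefficient theta:
    the residuals x_t - theta * x_{t-1} are modeled as N(0, sigma^2)."""
    resid = x[1:] - theta * x[:-1]
    n = len(resid)
    return -0.5 * n * np.log(2 * np.pi * sigma**2) - 0.5 * np.sum(resid**2) / sigma**2

def ar_kernel(x, y, thetas=np.linspace(-0.9, 0.9, 61)):
    """Discretized version of k(x, y) = integral of p_theta(x) p_theta(y)
    over a uniform prior on the grid of AR coefficients `thetas`."""
    lx = np.array([ar1_loglik(x, t) for t in thetas])
    ly = np.array([ar1_loglik(y, t) for t in thetas])
    s = lx + ly                      # log of the product of likelihood features
    m = s.max()                      # log-sum-exp shift for numerical stability
    return np.exp(m) * np.mean(np.exp(s - m))
```

By construction the kernel is symmetric and positive, since it is an inner product of likelihood features; the closed-form marginalization in the paper replaces this grid average with an exact integral.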
Convergence of the structure function of a Multifractal Random Walk in a mixed asymptotic setting
Some asymptotic properties of a Brownian motion in multifractal time, also
called a multifractal random walk, are established. We show the almost sure
convergence of its structure function, an issue directly
connected to the scale invariance and multifractal property of the sample
paths. We place ourselves in a mixed asymptotic setting where both the
observation length and the sampling frequency may go together to infinity at
different rates. The results we obtain are similar to the ones that were given
by Ossiander and Waymire and Bacry \emph{et al.} in the simpler framework of
Mandelbrot cascades.
Comment: 29 pages, 3 figures
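The abstract's formula for the structure function was lost in extraction. A standard empirical definition (the notation here is assumed, not taken from the paper) for a process $X$ sampled at step $\Delta$ over $[0, T]$ is:

```latex
% Empirical structure function of order q, with N = T / \Delta increments:
\[
  S_N(q,\Delta) \;=\; \sum_{k=1}^{N} \big| X(k\Delta) - X((k-1)\Delta) \big|^{q} .
\]
% Multifractal scaling means E[S_N(q,\Delta)] behaves like
% N \, \Delta^{\zeta(q)} for a nonlinear exponent \zeta(q).
```

The "mixed asymptotic" setting described above corresponds to letting the observation length $T$ tend to infinity while the sampling step $\Delta$ tends to zero, jointly and at possibly different rates.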
Parametric estimation of the driving L\'evy process of multivariate CARMA processes from discrete observations
We consider the parametric estimation of the driving L\'evy process of a
multivariate continuous-time autoregressive moving average (MCARMA) process,
which is observed on an equidistant discrete-time grid. Beginning with a
new state space representation, we develop a method to recover the driving
L\'evy process exactly from a continuous record of the observed MCARMA process.
We use tools from numerical analysis and the theory of infinitely divisible
distributions to extend this result to allow for the approximate recovery of
unit increments of the driving L\'evy process from discrete-time observations
of the MCARMA process. We show that, if the sampling interval is chosen in
dependence on the length of the observation horizon so that it converges to
zero sufficiently fast as the horizon tends to infinity, then any suitable
generalized method of moments estimator based on this reconstructed sample of
unit increments has the same asymptotic distribution as the one based on the
true increments and is, in particular, asymptotically normally distributed.
Comment: 38 pages, four figures; to appear in Journal of Multivariate Analysis