
    Multivariate GARCH estimation via a Bregman-proximal trust-region method

    The estimation of multivariate GARCH time series models is a difficult task, mainly because of the significant overparameterization of the problem, usually referred to as the "curse of dimensionality". For example, in the case of the VEC family, the number of parameters in the model grows as a polynomial of order four in the dimension of the problem. Moreover, these parameters are subject to convoluted nonlinear constraints needed to ensure, for instance, the existence of stationary solutions and the positive semidefiniteness of the conditional covariance matrices used in the model design. So far, this problem has been addressed in the literature only in low-dimensional cases under strong parsimony constraints. In this paper we propose a general formulation of the estimation problem in any dimension and develop a Bregman-proximal trust-region method for its solution. The Bregman-proximal approach allows us to handle the constraints in an efficient and natural way by staying in the primal space, while the trust-region mechanism stabilizes and speeds up the scheme. Preliminary computational experiments are presented and confirm the good performance of the proposed approach.
    Comment: 35 pages, 5 figures
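    To make the quartic growth concrete, here is a minimal sketch (not from the paper) that counts the free parameters of an unrestricted VEC(1,1) specification, vech(H_t) = c + A vech(ε_{t-1} ε_{t-1}') + B vech(H_{t-1}); the function name is illustrative and the count assumes this standard form, which the abstract does not spell out.

```python
# Minimal sketch (assumption, not the paper's code): parameter count of an
# unrestricted VEC(1,1) model, vech(H_t) = c + A vech(e_{t-1} e_{t-1}') + B vech(H_{t-1}).

def vec_garch_param_count(d: int) -> int:
    """Free parameters of a VEC(1,1) model for a d-dimensional series."""
    n = d * (d + 1) // 2       # length of vech(H_t)
    return n + 2 * n * n       # intercept c plus the two n-by-n matrices A and B

for d in (2, 5, 10, 20):
    print(f"d = {d:2d}: {vec_garch_param_count(d):,} parameters")
# d = 2 gives 21 parameters, d = 20 already gives 88,410: roughly d**4 / 2,
# the order-four polynomial growth mentioned above.
```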

    A Riemannian Trust Region Method for the Canonical Tensor Rank Approximation Problem

    The canonical tensor rank approximation problem (TAP) consists of approximating a real-valued tensor by one of low canonical rank. This is a challenging non-linear, non-convex, constrained optimization problem whose constraint set forms a non-smooth semi-algebraic set. We introduce a Riemannian Gauss-Newton method with trust region for solving small-scale, dense TAPs. The novelty of our approach is threefold. First, we parametrize the constraint set as the Cartesian product of Segre manifolds, thereby formulating the TAP as a Riemannian optimization problem, and we argue why this parametrization is among the theoretically best possible. Second, an original ST-HOSVD-based retraction operator is proposed. Third, we introduce a hot restart mechanism that efficiently detects when the optimization process is tending toward an ill-conditioned tensor rank decomposition and that often yields a quick escape path from such spurious decompositions. Numerical experiments show improvements of up to three orders of magnitude in the expected time to compute a successful solution over existing state-of-the-art methods.
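    The Riemannian Gauss-Newton method with ST-HOSVD retraction described above is not reproduced here; as a hedged, minimal baseline, the sketch below solves a small third-order TAP with plain alternating least squares (a different, standard technique), just to make the sum-of-rank-one CP structure and the least-squares objective concrete. Function names and the random test tensor are illustrative.

```python
import numpy as np

def unfold(T, mode):
    """Mode unfolding of a third-order tensor (rows indexed by `mode`)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product of two factor matrices."""
    r = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, r)

def cp_als(T, rank, iters=200, seed=0):
    """Plain ALS for a rank-`rank` CP approximation of a third-order tensor T."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(iters):
        # Each update solves a linear least-squares problem for one factor.
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C)).T
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C)).T
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B)).T
    approx = np.einsum('ir,jr,kr->ijk', A, B, C)   # sum of rank-one terms
    rel_err = np.linalg.norm(T - approx) / np.linalg.norm(T)
    return (A, B, C), rel_err

# Usage: recover an exact rank-3 tensor (relative error is typically near zero).
rng = np.random.default_rng(1)
T = np.einsum('ir,jr,kr->ijk', *(rng.standard_normal((8, 3)) for _ in range(3)))
_, rel_err = cp_als(T, rank=3)
print(rel_err)
```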