
    Higher-order principal component analysis for the approximation of tensors in tree-based low-rank formats

    This paper is concerned with the approximation of tensors using tree-based tensor formats, which are tensor networks whose graphs are dimension partition trees. We consider Hilbert tensor spaces of multivariate functions defined on a product set equipped with a probability measure. This includes the case of multidimensional arrays corresponding to finite product sets. We propose and analyse an algorithm for the construction of an approximation using only point evaluations of a multivariate function, or evaluations of some entries of a multidimensional array. The algorithm is a variant of higher-order singular value decomposition which constructs a hierarchy of subspaces associated with the different nodes of the tree and a corresponding hierarchy of interpolation operators. Optimal subspaces are estimated using empirical principal component analysis of interpolations of partial random evaluations of the function. The algorithm is able to provide an approximation in any tree-based format with either a prescribed rank or a prescribed relative error, with a number of evaluations of the order of the storage complexity of the approximation format. Under some assumptions on the estimation of principal components, we prove that the algorithm provides either a quasi-optimal approximation with a given rank, or an approximation satisfying the prescribed relative error, up to constants depending on the tree and the properties of interpolation operators. The analysis takes into account the discretization errors for the approximation of infinite-dimensional tensors. Several numerical examples illustrate the main results and the behavior of the algorithm for the approximation of high-dimensional functions using hierarchical Tucker or tensor-train formats, and the approximation of univariate functions using tensorization.
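The subspace-estimation step described above can be sketched in isolation. The following is a minimal illustration, not the paper's full algorithm: for a hypothetical smooth trivariate function, the principal subspace attached to one variable is estimated by empirical PCA of random partial evaluations on a fixed grid.

```python
import numpy as np

# Hypothetical smooth trivariate function on [0,1]^3 (illustration only).
def f(x, y, z):
    return np.exp(-(x + 2.0 * y + 3.0 * z)) + np.sin(np.pi * x * z)

rng = np.random.default_rng(0)
n = 20                                  # grid size in the x variable
grid = np.linspace(0.0, 1.0, n)

# Empirical PCA for the subspace attached to the node {x}: draw random
# samples of the complementary variables (y, z) and evaluate f on the
# whole x-grid for each sample.
m = 50                                  # number of random partial evaluations
yz = rng.random((m, 2))
snapshots = np.array([f(grid, y, z) for y, z in yz])   # shape (m, n)

# Principal components of the snapshots estimate an optimal subspace;
# truncate at a prescribed relative error tol.
tol = 1e-10
U, s, _ = np.linalg.svd(snapshots.T, full_matrices=False)
energy = np.cumsum(s**2)
energy /= energy[-1]
r = int(np.searchsorted(energy, 1.0 - tol**2)) + 1
basis = U[:, :r]                        # orthonormal basis of the x-subspace
print("estimated rank:", basis.shape[1])
```

In the paper the partial evaluations are combined with interpolation operators at each tree node; here the truncation tolerance `tol` and the sample count `m` are arbitrary illustrative choices.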

    Indirect supervision of hedge funds

    Many risks associated with hedge funds can be addressed through indirect measures aimed at the hedge funds’ counterparties and creditors, nearly all of which are regulated banks and securities firms. Hence, we consider here how indirect supervision has been made more effective over time and how we should continue to make it more effective in practice. The theoretical usefulness of hedge funds in making markets more efficient and more stable is undisputed but does not always materialize in practice. In order to preserve market efficiency and financial stability, we therefore need to increase incentives for effective and long-lasting market discipline. Doing nothing is simply not an option, given the growth of the hedge fund industry and the fact that hedge funds often act no differently from other financial institutions, whose history has shown they warrant scrutiny by financial watchdogs. Risk management needs to keep pace continuously with financial innovation. This is a challenge for the indirect supervision of hedge funds but also an argument for the pragmatism of this approach. In order to be able to press banks to put enough emphasis on sound risk management, international cooperation is required. Without an international level playing field, short-term competitive pressure between banks would most likely derail our efforts. This is a strong and welcome incentive for regulators to be efficient. In addition, cooperation between banking and securities supervisors should continue to allow indirect supervision to be strengthened and updated as the characteristics of the hedge fund business evolve over time. The first mitigant against the risks associated with hedge funds, for any single institution, is robust internal risk management systems.
Hence, specific attention is warranted as regards access by banks to more comprehensive information on their Highly Leveraged Institution (HLI) counterparties, better incorporation of counterparties’ transparency and credit quality into collateral policies, effective improvements in the measurement of complex-product exposures (due account being taken of model risks), and enhancements to stress testing (in particular liquidity stress testing). In addition, indirect supervision needs to be leveraged by an improvement in hedge funds’ broad transparency to the market. Stress tests, indeed, should enable banks to capture their full exposure to a sufficiently broad range of adverse conditions, including not only their direct exposure to a particular hedge fund but also their overall exposure to market dislocations that might be associated with the failure of one or several hedge funds (second-round effects). A second mitigant is efficient oversight, in particular by banking supervisors, of the trading relations that hedge funds have with their counterparties. In this respect, Pillar 2 of Basel II (namely the supervisory review process, which will deal with all banking risks beyond those covered by Pillar 1 regulatory capital charges) will incorporate some of the risks specifically concentrated in hedge fund exposures, i.e. liquidity risk, concentration risk, tail risk, and model risk. It also now seems critical to check that banks’ internal information systems are capable of capturing the full range of exposures to hedge funds. Finally, banks are required by supervisors to hold regulatory capital as a buffer in relation to the risks they take. This capital adequacy requirement forms the third line of defence against the risks that a financial institution assumes today when dealing with hedge funds.
Last but not least, micro- and macro-prudential targets converge when banking supervisors press each individual institution for more comprehensive stress tests and the related risk management actions, including against second-round effects, i.e. against systemic instability.

    Low-rank approximate inverse for preconditioning tensor-structured linear systems

    In this paper, we propose an algorithm for the construction of low-rank approximations of the inverse of an operator given in low-rank tensor format. The construction relies on an updated greedy algorithm for the minimization of a suitable distance to the inverse operator. It provides a sequence of approximations that are defined as the projections of the inverse operator in an increasing sequence of linear subspaces of operators. These subspaces are obtained by the tensorization of bases of operators that are constructed from successive rank-one corrections. In order to handle high-order tensors, approximate projections are computed in low-rank Hierarchical Tucker subsets of the successive subspaces of operators. Some desired properties such as symmetry or sparsity can be imposed on the approximate inverse operator during the correction step, where an optimal rank-one correction is searched as the tensor product of operators with the desired properties. Numerical examples illustrate the ability of this algorithm to provide efficient preconditioners for linear systems in tensor format that improve the convergence of iterative solvers and also the quality of the resulting low-rank approximations of the solution.
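The idea of a low-rank (here, low Kronecker-rank) approximate inverse can be illustrated on a toy problem. The sketch below is not the paper's matrix-free greedy algorithm: for a small Kronecker-sum operator it forms the dense inverse explicitly and extracts its best Kronecker-rank-k approximations via Van Loan's rearrangement/SVD trick, which is only feasible at toy sizes.

```python
import numpy as np

n = 8
# Small Kronecker-sum operator A = T ⊗ I + I ⊗ T (2D Laplacian-like, SPD).
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A = np.kron(T, np.eye(n)) + np.kron(np.eye(n), T)
Ainv = np.linalg.inv(A)        # dense inverse, feasible only at this toy size

def rearrange(M, n1, n2):
    # Van Loan rearrangement: ||M - X⊗Y||_F = ||rearrange(M) - vec(X)vec(Y)^T||_F
    blocks = M.reshape(n1, n2, n1, n2)
    return blocks.transpose(0, 2, 1, 3).reshape(n1 * n1, n2 * n2)

R = rearrange(Ainv, n, n)
U, s, Vt = np.linalg.svd(R, full_matrices=False)

def kron_approx(k):
    # Kronecker-rank-k approximate inverse P = sum_i s_i X_i ⊗ Y_i.
    P = np.zeros_like(Ainv)
    for i in range(k):
        X = U[:, i].reshape(n, n)
        Y = Vt[i].reshape(n, n)
        P += s[i] * np.kron(X, Y)
    return P

for k in (1, 3, 5):
    P = kron_approx(k)
    print(k, np.linalg.norm(np.eye(n * n) - P @ A, ord="fro"))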

    A tensor approximation method based on ideal minimal residual formulations for the solution of high-dimensional problems

    In this paper, we propose a method for the approximation of the solution of high-dimensional weakly coercive problems formulated in tensor spaces using low-rank approximation formats. The method can be seen as a perturbation of a minimal residual method with residual norm corresponding to the error in a specified solution norm. We introduce and analyze an iterative algorithm that is able to provide a controlled approximation of the optimal approximation of the solution in a given low-rank subset, without any a priori information on this solution. We also introduce a weak greedy algorithm which uses this perturbed minimal residual method for the computation of successive greedy corrections in small tensor subsets. We prove its convergence under some conditions on the parameters of the algorithm. The residual norm can be designed such that the resulting low-rank approximations are quasi-optimal with respect to particular norms of interest, thus yielding goal-oriented order reduction strategies for the approximation of high-dimensional problems. The proposed numerical method is applied to the solution of a stochastic partial differential equation which is discretized using standard Galerkin methods in tensor product spaces.
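A bare-bones sketch of the greedy-correction idea, stripped of the paper's ideal residual norm: for a Kronecker-sum system, successive rank-one corrections are computed by alternating least squares on the plain Euclidean residual. All sizes and iteration counts below are arbitrary illustrative choices.

```python
import numpy as np

n = 16
rng = np.random.default_rng(0)
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A = np.kron(T, np.eye(n)) + np.kron(np.eye(n), T)    # SPD Kronecker-sum operator
b = rng.standard_normal(n * n)

x = np.zeros(n * n)
residuals = [np.linalg.norm(b - A @ x)]
for step in range(5):                    # successive greedy rank-one corrections
    r = b - A @ x
    u = rng.standard_normal(n)
    v = rng.standard_normal(n)
    for _ in range(10):                  # ALS sweeps for min_{u,v} ||r - A(u ⊗ v)||
        Mu = A @ np.kron(np.eye(n), v.reshape(n, 1))   # linear map u -> A(u ⊗ v)
        u = np.linalg.lstsq(Mu, r, rcond=None)[0]
        Mv = A @ np.kron(u.reshape(n, 1), np.eye(n))   # linear map v -> A(u ⊗ v)
        v = np.linalg.lstsq(Mv, r, rcond=None)[0]
    x = x + np.kron(u, v)
    residuals.append(np.linalg.norm(b - A @ x))
print([f"{res:.3e}" for res in residuals])
```

Each least-squares solve can only decrease the residual (the zero correction is always feasible), so the residual history is non-increasing; the paper's contribution is to replace this Euclidean residual norm by a perturbed ideal one so that quasi-optimality holds in a norm of interest.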

    Geometric Structures in Tensor Representations (Final Release)

    The main goal of this paper is to study the geometric structures associated with the representation of tensors in subspace based formats. To do this we use a property of the so-called minimal subspaces which allows us to describe the tensor representation by means of a rooted tree. By using the tree structure and the dimensions of the associated minimal subspaces, we introduce, in the underlying algebraic tensor space, the set of tensors in a tree-based format with either bounded or fixed tree-based rank. This class contains the Tucker format and the Hierarchical Tucker format (including the Tensor Train format). In particular, we show that the set of tensors in the tree-based format with bounded (respectively, fixed) tree-based rank of an algebraic tensor product of normed vector spaces is an analytic Banach manifold. Indeed, the manifold geometry for the set of tensors with fixed tree-based rank is induced by a fibre bundle structure and the manifold geometry for the set of tensors with bounded tree-based rank is given by a finite union of connected components. In order to describe the relationship between these manifolds and the natural ambient space, we introduce the definition of topological tensor spaces in the tree-based format. We prove under natural conditions that any tensor of the topological tensor space under consideration admits best approximations in the manifold of tensors in the tree-based format with bounded tree-based rank. In this framework, we also show that the tangent (Banach) space at a given tensor is a complemented subspace in the natural ambient tensor Banach space and hence the set of tensors in the tree-based format with bounded (respectively, fixed) tree-based rank is an immersed submanifold. This fact allows us to extend the Dirac-Frenkel variational principle in the framework of topological tensor spaces.
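In the finite-dimensional case, the tree-based ranks in question are ranks of matricizations of the tensor. A small illustration (not taken from the paper): a 4-way array is built from a random tensor-train representation with ranks at most 2, and the ranks at the interior nodes of the linear (TT) tree are recovered from matricizations.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
# A 4-way tensor with tensor-train ranks at most 2, built from random cores.
G1 = rng.standard_normal((n, 2))
G2 = rng.standard_normal((2, n, 2))
G3 = rng.standard_normal((2, n, 2))
G4 = rng.standard_normal((2, n))
X = np.einsum('ia,ajb,bkc,cl->ijkl', G1, G2, G3, G4)

def alpha_rank(X, alpha):
    """Rank of the matricization grouping the modes in `alpha` as rows."""
    rest = [k for k in range(X.ndim) if k not in alpha]
    M = X.transpose(list(alpha) + rest)
    M = M.reshape(int(np.prod([X.shape[k] for k in alpha])), -1)
    return int(np.linalg.matrix_rank(M))

# Ranks at the interior nodes {1}, {1,2}, {1,2,3} of the linear (TT) tree.
ranks = [alpha_rank(X, a) for a in ([0], [0, 1], [0, 1, 2])]
print(ranks)
```

Each node of the dimension partition tree corresponds to a subset of modes, and its tree-based rank is the rank of the corresponding matricization; by construction every rank here is bounded by the core size 2.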

    Principal bundle structure of matrix manifolds

    In this paper, we introduce a new geometric description of the manifolds of matrices of fixed rank. The starting point is a geometric description of the Grassmann manifold $\mathbb{G}_r(\mathbb{R}^k)$ of linear subspaces of dimension $r<k$ in $\mathbb{R}^k$ which avoids the use of equivalence classes. The set $\mathbb{G}_r(\mathbb{R}^k)$ is equipped with an atlas which provides it with the structure of an analytic manifold modelled on $\mathbb{R}^{(k-r)\times r}$. Then we define an atlas for the set $\mathcal{M}_r(\mathbb{R}^{k \times r})$ of full-rank matrices and prove that the resulting manifold is an analytic principal bundle with base $\mathbb{G}_r(\mathbb{R}^k)$ and typical fibre $\mathrm{GL}_r$, the general linear group of invertible matrices in $\mathbb{R}^{r\times r}$. Finally, we define an atlas for the set $\mathcal{M}_r(\mathbb{R}^{n \times m})$ of non-full-rank matrices and prove that the resulting manifold is an analytic principal bundle with base $\mathbb{G}_r(\mathbb{R}^n) \times \mathbb{G}_r(\mathbb{R}^m)$ and typical fibre $\mathrm{GL}_r$. The atlas of $\mathcal{M}_r(\mathbb{R}^{n \times m})$ is indexed on the manifold itself, which allows a natural definition of a neighbourhood for a given matrix, this neighbourhood being proved to possess the structure of a Lie group. Moreover, the set $\mathcal{M}_r(\mathbb{R}^{n \times m})$ equipped with the topology induced by the atlas is proven to be an embedded submanifold of the matrix space $\mathbb{R}^{n \times m}$ equipped with the subspace topology. The proposed geometric description then results in a description of the matrix space $\mathbb{R}^{n \times m}$, seen as the union of manifolds $\mathcal{M}_r(\mathbb{R}^{n \times m})$, as an analytic manifold equipped with a topology for which the matrix rank is a continuous map.
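A standard local parametrization of the rank-r matrices (a generic textbook chart, not the specific atlas constructed in the paper) makes the manifold structure concrete: wherever the leading r×r block of a rank-r matrix is invertible, the trailing block is determined by the other three, so those three blocks serve as local coordinates.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, r = 5, 4, 2

# Rank-r matrix whose leading r x r block is the identity (hence invertible).
L = np.vstack([np.eye(r), rng.standard_normal((n - r, r))])
R = np.hstack([np.eye(r), rng.standard_normal((r, m - r))])
M = L @ R

M11, M12 = M[:r, :r], M[:r, r:]
M21, M22 = M[r:, :r], M[r:, r:]

# In the chart where M11 is invertible, (M11, M12, M21) are local
# coordinates and the remaining block satisfies the Schur-type relation
# M22 = M21 @ M11^{-1} @ M12.
M22_rec = M21 @ np.linalg.solve(M11, M12)
print(np.allclose(M22, M22_rec))
```

Counting parameters in this chart, $r^2 + r(m-r) + (n-r)r$, recovers the familiar dimension $(n+m-r)r$ of the rank-$r$ manifold in $\mathbb{R}^{n\times m}$.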