
    Higher-order principal component analysis for the approximation of tensors in tree-based low-rank formats

    This paper is concerned with the approximation of tensors using tree-based tensor formats, which are tensor networks whose graphs are dimension partition trees. We consider Hilbert tensor spaces of multivariate functions defined on a product set equipped with a probability measure. This includes the case of multidimensional arrays corresponding to finite product sets. We propose and analyse an algorithm for the construction of an approximation using only point evaluations of a multivariate function, or evaluations of some entries of a multidimensional array. The algorithm is a variant of higher-order singular value decomposition which constructs a hierarchy of subspaces associated with the different nodes of the tree and a corresponding hierarchy of interpolation operators. Optimal subspaces are estimated using empirical principal component analysis of interpolations of partial random evaluations of the function. The algorithm is able to provide an approximation in any tree-based format with either a prescribed rank or a prescribed relative error, with a number of evaluations of the order of the storage complexity of the approximation format. Under some assumptions on the estimation of principal components, we prove that the algorithm provides either a quasi-optimal approximation with a given rank, or an approximation satisfying the prescribed relative error, up to constants depending on the tree and the properties of interpolation operators. The analysis takes into account the discretization errors for the approximation of infinite-dimensional tensors. Several numerical examples illustrate the main results and the behavior of the algorithm for the approximation of high-dimensional functions using hierarchical Tucker or tensor train formats, and the approximation of univariate functions using tensorization.
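The classical truncated higher-order SVD at the core of the algorithm can be sketched as follows. This is a minimal dense-tensor version (not the paper's sample-based variant with interpolation operators); all function and variable names are illustrative:

```python
import numpy as np

def hosvd(tensor, ranks):
    """Truncated higher-order SVD: factor matrices from the SVDs of the
    mode-k unfoldings, then a core tensor by orthogonal projection."""
    factors = []
    for mode, r in enumerate(ranks):
        # mode-k unfolding of the original tensor
        unfolding = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(u[:, :r])
    core = tensor
    for mode, u in enumerate(factors):
        # multilinear contraction: core = tensor x_1 U1^T x_2 U2^T ...
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def reconstruct(core, factors):
    out = core
    for mode, u in enumerate(factors):
        out = np.moveaxis(np.tensordot(u, np.moveaxis(out, mode, 0), axes=1), 0, mode)
    return out

rng = np.random.default_rng(0)
# build a 10x10x10 tensor with exact multilinear rank (3, 3, 3)
g = rng.standard_normal((3, 3, 3))
us = [np.linalg.qr(rng.standard_normal((10, 3)))[0] for _ in range(3)]
t = reconstruct(g, us)

core, factors = hosvd(t, (3, 3, 3))
err = np.linalg.norm(t - reconstruct(core, factors)) / np.linalg.norm(t)
print(f"relative error: {err:.2e}")
```

Since the input has exact multilinear rank (3, 3, 3), the truncated HOSVD recovers it up to roundoff; for general tensors it is quasi-optimal with a known constant.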

    Indirect supervision of hedge funds.

    Many risks associated with hedge funds can be addressed through indirect measures aimed at the hedge funds’ counterparties and creditors, nearly all of which are regulated banks and securities firms. Hence, we consider here how indirect supervision has been made more effective over time and how we should continue to make it more effective in practice. The theoretical usefulness of hedge funds in making markets more efficient and more stable is undisputed but does not always materialize in practice. In order to preserve market efficiency and financial stability, we therefore need to increase incentives for an effective and long-lasting market discipline. Doing nothing is simply not an option given the growth of the hedge fund industry and the fact that hedge funds often act no differently from other financial institutions, whose history has shown they warrant scrutiny by financial watchdogs. Risk management needs to continuously keep pace with financial innovation. This is a challenge for the indirect supervision of hedge funds but also a support to the pragmatism of this approach. In order to be able to press banks to put enough emphasis on sound risk management, international cooperation is required. Without an international level playing field, short-term competitive pressure between banks would most likely derail our efforts. This is a strong and welcome incentive for regulators to be efficient. In addition, the cooperation between banking and securities supervisors should continue to allow indirect supervision to be strengthened and updated as characteristics of the hedge fund business evolve over time. The first mitigant against the risks associated, for any single institution, with hedge funds is robust internal risk management systems.
Hence, specific attention is warranted regarding banks’ access to more comprehensive information on their highly leveraged institution (HLI) counterparties, better incorporation of counterparties’ transparency and credit quality into collateral policies, effective improvements in the measurement of exposures to complex products (with due account taken of model risk), and enhancements to stress testing (in particular liquidity stress testing). In addition, indirect supervision needs to be leveraged by an improvement in hedge funds’ broad transparency to the market. Stress tests, indeed, should enable banks to capture their full exposure to a sufficiently broad range of adverse conditions, including not only their direct exposure to a particular hedge fund but also their overall exposure to market dislocations that might be associated with the failure of one or several hedge funds (second-round effects). A second mitigant is an efficient oversight, in particular by banking supervisors, of the trading relations that hedge funds have with their counterparties. In this respect, Pillar 2 of Basel II (namely the supervisory review process, which will deal with all banking risks beyond those covered by Pillar 1 regulatory capital charges) will incorporate some of the risks specifically concentrated in hedge fund exposures, i.e. liquidity risk, concentration risk, tail risk, and model risk. It also now seems critical to check that banks’ internal information systems are capable of capturing the full range of exposures to hedge funds. Finally, banks are required by supervisors to hold regulatory capital as a buffer in relation to the risks they take. This capital adequacy requirement forms the third line of defence against the risks that a financial institution assumes today when dealing with hedge funds.
Last but not least, micro- and macro-prudential targets converge when banking supervisors press each individual institution for more comprehensive stress tests and the related risk management actions, including against second-round effects, i.e. against systemic instability.

    Geometric Structures in Tensor Representations (Final Release)

    The main goal of this paper is to study the geometric structures associated with the representation of tensors in subspace-based formats. To do this we use a property of the so-called minimal subspaces which allows us to describe the tensor representation by means of a rooted tree. By using the tree structure and the dimensions of the associated minimal subspaces, we introduce, in the underlying algebraic tensor space, the set of tensors in a tree-based format with either bounded or fixed tree-based rank. This class contains the Tucker format and the Hierarchical Tucker format (including the Tensor Train format). In particular, we show that the set of tensors in the tree-based format with bounded (respectively, fixed) tree-based rank of an algebraic tensor product of normed vector spaces is an analytic Banach manifold. Indeed, the manifold geometry for the set of tensors with fixed tree-based rank is induced by a fibre bundle structure and the manifold geometry for the set of tensors with bounded tree-based rank is given by a finite union of connected components. In order to describe the relationship between these manifolds and the natural ambient space, we introduce the definition of topological tensor spaces in the tree-based format. We prove under natural conditions that any tensor of the topological tensor space under consideration admits best approximations in the manifold of tensors in the tree-based format with bounded tree-based rank. In this framework, we also show that the tangent (Banach) space at a given tensor is a complemented subspace in the natural ambient tensor Banach space and hence the set of tensors in the tree-based format with bounded (respectively, fixed) tree-based rank is an immersed submanifold. This fact allows us to extend the Dirac-Frenkel variational principle in the framework of topological tensor spaces.
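A tree-based rank assigns to each node α of the dimension tree the rank of the corresponding matricization of the tensor. A minimal numpy sketch of this notion (the helper name `alpha_rank` is ours) for a third-order tensor:

```python
import numpy as np

def alpha_rank(tensor, alpha, tol=1e-10):
    """Rank of the matricization that groups the modes in `alpha` as rows
    and the complementary modes as columns."""
    d = tensor.ndim
    rest = [m for m in range(d) if m not in alpha]
    mat = np.transpose(tensor, list(alpha) + rest)
    mat = mat.reshape(int(np.prod([tensor.shape[m] for m in alpha])), -1)
    return np.linalg.matrix_rank(mat, tol=tol)

rng = np.random.default_rng(1)
# a sum of two generic elementary (rank-one) tensors
a = sum(np.einsum('i,j,k->ijk', *[rng.standard_normal(6) for _ in range(3)])
        for _ in range(2))

# ranks at the nodes {0}, {1}, {2}, {0,1} of a dimension partition tree
ranks = {tuple(al): alpha_rank(a, al) for al in ([0], [1], [2], [0, 1])}
print(ranks)
```

For a generic sum of two elementary tensors every such matricization has rank 2, so the tensor lies in the tree-based format with tree-based rank bounded by 2 for any dimension partition tree.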

    Projection based model order reduction methods for the estimation of vector-valued variables of interest

    We propose and compare goal-oriented projection based model order reduction methods for the estimation of vector-valued functionals of the solution of parameter-dependent equations. The first projection method is a generalization of the classical primal-dual method to the case of vector-valued variables of interest. We highlight the role played by three reduced spaces: the approximation space and the test space associated to the primal variable, and the approximation space associated to the dual variable. Then we propose a Petrov-Galerkin projection method based on a saddle point problem involving an approximation space for the primal variable and an approximation space for an auxiliary variable. A goal-oriented choice of the latter space, defined as the sum of two spaces, allows us to improve the approximation of the variable of interest compared to a primal-dual method using the same reduced spaces. Then, for both approaches, we derive computable error estimates for the approximations of the variable of interest and we propose greedy algorithms for the goal-oriented construction of reduced spaces. The performance of the algorithms is illustrated on numerical examples and compared to standard (non goal-oriented) algorithms.

    Principal bundle structure of matrix manifolds

    In this paper, we introduce a new geometric description of the manifolds of matrices of fixed rank. The starting point is a geometric description of the Grassmann manifold $\mathbb{G}_r(\mathbb{R}^k)$ of linear subspaces of dimension $r<k$ in $\mathbb{R}^k$ which avoids the use of equivalence classes. The set $\mathbb{G}_r(\mathbb{R}^k)$ is equipped with an atlas which provides it with the structure of an analytic manifold modelled on $\mathbb{R}^{(k-r)\times r}$. Then we define an atlas for the set $\mathcal{M}_r(\mathbb{R}^{k \times r})$ of full rank matrices and prove that the resulting manifold is an analytic principal bundle with base $\mathbb{G}_r(\mathbb{R}^k)$ and typical fibre $\mathrm{GL}_r$, the general linear group of invertible matrices in $\mathbb{R}^{r\times r}$. Finally, we define an atlas for the set $\mathcal{M}_r(\mathbb{R}^{n \times m})$ of non-full rank matrices and prove that the resulting manifold is an analytic principal bundle with base $\mathbb{G}_r(\mathbb{R}^n) \times \mathbb{G}_r(\mathbb{R}^m)$ and typical fibre $\mathrm{GL}_r$. The atlas of $\mathcal{M}_r(\mathbb{R}^{n \times m})$ is indexed on the manifold itself, which allows a natural definition of a neighbourhood for a given matrix, this neighbourhood being proved to possess the structure of a Lie group. Moreover, the set $\mathcal{M}_r(\mathbb{R}^{n \times m})$ equipped with the topology induced by the atlas is proven to be an embedded submanifold of the matrix space $\mathbb{R}^{n \times m}$ equipped with the subspace topology. The proposed geometric description then results in a description of the matrix space $\mathbb{R}^{n \times m}$, seen as the union of the manifolds $\mathcal{M}_r(\mathbb{R}^{n \times m})$, as an analytic manifold equipped with a topology for which the matrix rank is a continuous map.
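The principal bundle structure reflects the non-uniqueness of rank-$r$ factorizations: acting on a factorization by an element of $\mathrm{GL}_r$ (a point of the fibre) leaves both the matrix and the pair of column/row spaces (points of the Grassmannians) unchanged. A minimal numpy illustration, with names of our choosing:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, r = 8, 6, 3
A = rng.standard_normal((n, r))          # generic, hence full column rank
B = rng.standard_normal((m, r))
X = A @ B.T                              # a rank-r matrix in R^{n x m}

G = rng.standard_normal((r, r))          # generic, hence invertible: an element of GL_r
X2 = (A @ G) @ (B @ np.linalg.inv(G).T).T   # acting along the fibre
print(np.linalg.matrix_rank(X), np.allclose(X, X2))

# the column space, i.e. the point of the Grassmannian, is also unchanged:
Q1 = np.linalg.qr(A)[0]
Q2 = np.linalg.qr(A @ G)[0]
print(np.allclose(Q1 @ Q1.T, Q2 @ Q2.T))  # same orthogonal projector
```

Identifying subspaces with their orthogonal projectors, as in the last check, is one standard way to represent Grassmannian points without equivalence classes.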

    Tensor-based numerical method for stochastic homogenisation

    This paper addresses the complexity reduction of stochastic homogenisation of a class of random materials for a stationary diffusion equation. A cost-efficient approximation of the correctors is built using a method designed to exploit quasi-periodicity. Accuracy and cost reduction are investigated for local perturbations or small transformations of periodic materials as well as for materials with no periodicity but a mesoscopic structure, for which the limitations of the method are shown. Finally, for materials outside the scope of this method, we propose to use the approximation of homogenised quantities as control variates for variance reduction of a more accurate and costly Monte Carlo estimator (using a multi-fidelity Monte Carlo method). The resulting cost reduction is illustrated in a numerical experiment with a control variate from weakly stochastic homogenisation for comparison, and the limits of this variance reduction technique are tested on materials without periodicity or mesoscopic structure.
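The control-variate mechanism in the last step can be sketched generically: a cheap, correlated surrogate with known (or cheaply computable) expectation stands in for the approximate homogenised quantity, and is used to reduce the variance of the costly estimator. All names and the toy model below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 10_000
x = rng.standard_normal(N)
expensive = np.exp(x)                 # stands in for the accurate, costly quantity
cheap = 1.0 + x + 0.5 * x**2          # correlated surrogate with known mean
cheap_mean = 1.5                      # E[1 + x + x^2/2] for x ~ N(0, 1)

# control-variate estimator: E[f] ~ mean(f - c * (g - E[g])), with the
# variance-optimal coefficient c = Cov(f, g) / Var(g) estimated from samples
c = np.cov(expensive, cheap)[0, 1] / np.var(cheap)
cv = expensive - c * (cheap - cheap_mean)
print(np.var(expensive), np.var(cv))  # the control variate reduces the variance
```

The estimator remains (essentially) unbiased, and the variance reduction factor is $1/(1-\rho^2)$ where $\rho$ is the correlation between the two quantities, which is why a good homogenised surrogate pays off.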

    Stochastic spectral methods and model reduction for the propagation of parametric uncertainties in physical models

    Uncertainty quantification appears as an essential avenue for improving the predictability of physical models.

    Dynamical model reduction method for solving parameter-dependent dynamical systems

    We propose a projection-based model order reduction method for the solution of parameter-dependent dynamical systems. The proposed method relies on the construction of time-dependent reduced spaces generated from evaluations of the solution of the full-order model at some selected parameter values. The approximation obtained by Galerkin projection is the solution of a reduced dynamical system with a modified flux which takes into account the time dependency of the reduced spaces. An a posteriori error estimate is derived and a greedy algorithm using this error estimate is proposed for the adaptive selection of parameter values. The resulting method can be interpreted as a dynamical low-rank approximation method with a subspace point of view and a uniform control of the error over the parameter set.
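The snapshot-then-project workflow can be sketched for a linear system; this minimal version uses a single time-independent POD basis rather than the paper's time-dependent reduced spaces, and every name is illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 40, 4
# parameter-dependent linear dynamics u'(t) = A(mu) u(t)
A0 = -np.eye(n) + 0.05 * rng.standard_normal((n, n))
A1 = 0.05 * rng.standard_normal((n, n))
A = lambda mu: A0 + mu * A1
u0 = rng.standard_normal(n)

def solve(rhs, y0, dt=1e-3, T=1.0):
    """Explicit Euler time stepping, for illustration only."""
    y, ys = y0.copy(), [y0.copy()]
    for _ in range(int(T / dt)):
        y = y + dt * rhs(y)
        ys.append(y.copy())
    return np.array(ys)

# snapshots of the full-order model at a few selected parameter values
snaps = np.hstack([solve(lambda y: A(mu) @ y, u0).T for mu in (0.0, 0.5, 1.0)])
V = np.linalg.svd(snaps, full_matrices=False)[0][:, :k]   # POD reduced basis

# Galerkin-reduced dynamical system at a new parameter value
mu = 0.7
Ar = V.T @ A(mu) @ V
ur = solve(lambda y: Ar @ y, V.T @ u0)
u_full = solve(lambda y: A(mu) @ y, u0)
err = np.linalg.norm(u_full[-1] - V @ ur[-1]) / np.linalg.norm(u_full[-1])
print(f"relative error at final time: {err:.2e}")
```

With time-dependent spaces, as in the paper, the reduced flux gains an extra term accounting for the motion of the subspace, which is what distinguishes the method from plain POD-Galerkin.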

    Low-rank approximate inverse for preconditioning tensor-structured linear systems

    In this paper, we propose an algorithm for the construction of low-rank approximations of the inverse of an operator given in low-rank tensor format. The construction relies on an updated greedy algorithm for the minimization of a suitable distance to the inverse operator. It provides a sequence of approximations that are defined as the projections of the inverse operator in an increasing sequence of linear subspaces of operators. These subspaces are obtained by the tensorization of bases of operators that are constructed from successive rank-one corrections. In order to handle high-order tensors, approximate projections are computed in low-rank Hierarchical Tucker subsets of the successive subspaces of operators. Some desired properties such as symmetry or sparsity can be imposed on the approximate inverse operator during the correction step, where an optimal rank-one correction is searched as the tensor product of operators with the desired properties. Numerical examples illustrate the ability of this algorithm to provide efficient preconditioners for linear systems in tensor format that improve the convergence of iterative solvers and also the quality of the resulting low-rank approximations of the solution.
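The value of a low-rank approximate inverse as a preconditioner can be illustrated with a deflation-style sketch: the inverse is captured exactly on a few troublesome eigen-directions through a low-rank correction of the identity. This is not the paper's greedy tensor-format algorithm; all names are ours:

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 100, 10
# an SPD matrix with a cluster of very small eigenvalues (ill-conditioned)
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
eigs = np.concatenate([np.geomspace(1e-4, 1e-2, k), np.linspace(1.0, 10.0, n - k)])
A = Q @ np.diag(eigs) @ Q.T

# approximate inverse = identity + rank-k correction: invert A exactly on the
# k worst eigen-directions, act as the identity on the rest
w, U = np.linalg.eigh(A)               # eigenvalues in ascending order
Uk = U[:, :k]                          # eigenvectors of the k smallest eigenvalues
P = np.eye(n) + Uk @ np.diag(1.0 / w[:k] - 1.0) @ Uk.T

print(np.linalg.cond(A), np.linalg.cond(P @ A))   # conditioning improves sharply
```

Only the $n \times k$ factor and $k$ scalars need to be stored, which is the point of low-rank approximate inverses; the paper's algorithm builds such approximations greedily in tensor formats without access to a full eigendecomposition.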