Gradient-based dimension reduction of multivariate vector-valued functions

Abstract

Multivariate functions encountered in high-dimensional uncertainty quantification problems often vary along a few dominant directions in the input parameter space. We propose a gradient-based method for detecting these directions and using them to construct ridge approximations of such functions, in a setting where the functions are vector-valued (e.g., taking values in R^n). The methodology consists of minimizing an upper bound on the approximation error, obtained by subspace Poincaré inequalities. We provide a thorough mathematical analysis in the case where the parameter space is equipped with a Gaussian probability measure. The resulting method generalizes the notion of active subspaces associated with scalar-valued functions. A numerical illustration shows that using gradients of the function yields effective dimension reduction. We also show how the choice of norm on the codomain of the function affects the function's low-dimensional approximation.
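The construction described above can be sketched numerically. The following is a minimal, hypothetical illustration (not the authors' code): for a vector-valued f : R^d -> R^n with Jacobian J(x), it estimates the matrix E[J(x)^T R J(x)] by Monte Carlo over the Gaussian input measure, where R is an assumed symmetric positive-definite matrix encoding the choice of norm on the codomain, and takes the leading eigenvectors as the dominant directions. The function name `dominant_subspace` and all sample sizes are illustrative assumptions.

```python
import numpy as np

def dominant_subspace(jac, X, R, r):
    """Estimate the r leading directions for a vector-valued function.

    jac : callable returning the n-by-d Jacobian of f at a point x
    X   : (N, d) array of samples from the Gaussian input measure
    R   : (n, n) SPD matrix defining the norm on the codomain
    r   : target subspace dimension
    """
    d = X.shape[1]
    H = np.zeros((d, d))
    for x in X:
        J = jac(x)           # n-by-d Jacobian at x
        H += J.T @ R @ J     # accumulate the Monte Carlo sum
    H /= len(X)              # estimate of E[J^T R J]
    # Symmetric eigendecomposition; reorder eigenvalues descending
    eigvals, eigvecs = np.linalg.eigh(H)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:r]], eigvals[order]

# Toy check: f(x) = A x with A of rank 2, so exactly two directions matter
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2)) @ rng.standard_normal((2, 6))
X = rng.standard_normal((200, 6))
U, spectrum = dominant_subspace(lambda x: A, X, np.eye(3), r=2)
print(spectrum)  # the spectrum drops sharply after the first two eigenvalues
```

A sharp decay in the eigenvalues of H indicates that a low-dimensional ridge approximation g(U^T x) can be accurate; choosing a different R reweights which output components drive the detected directions.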
