8 research outputs found

    Computing the homology of basic semialgebraic sets in weak exponential time

    We describe and analyze an algorithm for computing the homology (Betti numbers and torsion coefficients) of basic semialgebraic sets that works in weak exponential time. That is, outside a set of exponentially small measure in the space of data, the cost of the algorithm is exponential in the size of the data. All algorithms previously proposed for this problem have a complexity that is doubly exponential (and this is so for almost all data).
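    One way to read the "weak exponential time" claim (a paraphrase of the sentence above; the constants and the measure below are illustrative assumptions, not taken from the paper): for data $a$ of size $N$ drawn from a probability measure $\mu$ on the space of inputs, there exist constants $c, c' > 0$ such that
    \[
      \mu\bigl(\{a : \mathrm{cost}(a) > 2^{N^{c}}\}\bigr) \le 2^{-N^{c'}},
    \]
    whereas the previously known algorithms incur doubly exponential cost, on the order of $2^{2^{N^{O(1)}}}$, for almost all data $a$.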

    Nonlinear dimension reduction for surrogate modeling using gradient information

    We introduce a method for the nonlinear dimension reduction of a high-dimensional function $u:\mathbb{R}^d\rightarrow\mathbb{R}$, $d\gg1$. Our objective is to identify a nonlinear feature map $g:\mathbb{R}^d\rightarrow\mathbb{R}^m$, with a prescribed intermediate dimension $m\ll d$, so that $u$ can be well approximated by $f\circ g$ for some profile function $f:\mathbb{R}^m\rightarrow\mathbb{R}$. We propose to build the feature map by aligning the Jacobian $\nabla g$ with the gradient $\nabla u$, and we theoretically analyze the properties of the resulting $g$. Once $g$ is built, we construct $f$ by solving a gradient-enhanced least squares problem. Our practical algorithm makes use of a sample $\{x^{(i)},u(x^{(i)}),\nabla u(x^{(i)})\}_{i=1}^N$ and builds both $g$ and $f$ on adaptive downward-closed polynomial spaces, using cross validation to avoid overfitting. We numerically evaluate the performance of our algorithm across different benchmarks, and explore the impact of the intermediate dimension $m$. We show that building a nonlinear feature map $g$ can permit more accurate approximation of $u$ than a linear $g$, for the same input data set.
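
    For intuition, here is a minimal sketch (in Python, assuming numpy) of the simplest instance of this idea: a linear feature map $g(x)=V^\top x$ obtained by aligning its Jacobian with the sampled gradients (the top eigenvectors of the empirical gradient covariance), followed by an ordinary least squares polynomial fit for $f$. This is a stand-in, not the paper's method: the test function, sample sizes, and fixed degree-2 profile basis are illustrative assumptions, and the paper's nonlinear $g$, adaptive downward-closed spaces, and gradient-enhanced fit are not reproduced here.

        import numpy as np

        # Hedged sketch: linear feature map aligned with sampled gradients.
        # The test function u, the sample sizes, and the degree-2 profile
        # basis are illustrative assumptions, not the paper's choices.

        rng = np.random.default_rng(0)
        d, m, N = 20, 2, 500

        # Illustrative target: u depends on x only through two linear features.
        A = rng.standard_normal((d, 2))

        def u(X):
            return np.sin(X @ A[:, 0]) + (X @ A[:, 1]) ** 2

        def grad_u(X):
            return (np.cos(X @ A[:, 0])[:, None] * A[:, 0]
                    + 2.0 * (X @ A[:, 1])[:, None] * A[:, 1])

        X = rng.standard_normal((N, d))   # sample {x^(i)}
        G = grad_u(X)                     # sample {grad u(x^(i))}

        # Align a linear g(x) = V^T x with the gradients: take the top-m
        # eigenvectors of the empirical gradient covariance (1/N) sum g g^T.
        C = G.T @ G / N
        _, eigvecs = np.linalg.eigh(C)    # eigenvalues in ascending order
        V = eigvecs[:, ::-1][:, :m]       # top-m directions

        Z = X @ V                         # features z = g(x)

        # Plain least squares fit of a degree-2 polynomial profile f(z)
        # (stand-in for the gradient-enhanced, adaptive fit in the paper).
        def design(Z):
            cols = [np.ones(len(Z))]
            cols += [Z[:, j] for j in range(m)]
            cols += [Z[:, i] * Z[:, j] for i in range(m) for j in range(i, m)]
            return np.column_stack(cols)

        coef, *_ = np.linalg.lstsq(design(Z), u(X), rcond=None)

        # Held-out relative error of the surrogate f(g(x)).
        Xt = rng.standard_normal((1000, d))
        pred = design(Xt @ V) @ coef
        print("relative test error:",
              np.linalg.norm(pred - u(Xt)) / np.linalg.norm(u(Xt)))

    The point of the sketch is the alignment step: the top eigenvectors of $\frac{1}{N}\sum_i \nabla u(x^{(i)})\,\nabla u(x^{(i)})^\top$ are the directions along which $u$ varies most on average, which is exactly the linear-$g$ baseline the abstract compares against.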
