Computing the homology of basic semialgebraic sets in weak exponential time
We describe and analyze an algorithm for computing the homology (Betti
numbers and torsion coefficients) of basic semialgebraic sets, which works in
weak exponential time. That is, outside a set of exponentially small measure in
the space of data, the cost of the algorithm is exponential in the size of the
data. All algorithms previously proposed for this problem have a complexity
which is doubly exponential (and this is so for almost all data).
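As a hedged, informal reading of the weak exponential time claim (the precise data space, measure, and constants are the paper's and are not reproduced here), the statement has roughly the shape

\[
\operatorname{vol}\bigl\{\, a \in \mathcal{D}_N \;:\; \operatorname{cost}(a) > 2^{\,N^{c}} \,\bigr\} \;\le\; 2^{-N}
\qquad \text{for some constant } c \ge 1 \text{ and all input sizes } N,
\]

where $\mathcal{D}_N$ denotes the (normalized) space of inputs of size $N$ and $\operatorname{cost}(a)$ is the running time on input $a$: outside an exponentially small exclusion set the algorithm is singly exponential, whereas the earlier algorithms are doubly exponential for almost all data.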
Nonlinear dimension reduction for surrogate modeling using gradient information
We introduce a method for the nonlinear dimension reduction of a
high-dimensional function $u:\mathbb{R}^d\to\mathbb{R}$, $d\gg 1$. Our
objective is to identify a nonlinear feature map
$g:\mathbb{R}^d\to\mathbb{R}^m$, with a prescribed intermediate
dimension $m\ll d$, so that $u$ can be well approximated by $f\circ g$ for some
profile function $f:\mathbb{R}^m\to\mathbb{R}$. We propose to build the
feature map $g$ by aligning its Jacobian $\nabla g$ with the gradient $\nabla u$,
and we theoretically analyze the properties of the resulting $g$. Once $g$ is
built, we construct $f$ by solving a gradient-enhanced least squares problem.
Our practical algorithm makes use of a sample of input points with the
corresponding function values and gradients, and builds both $g$ and $f$ on
adaptive downward-closed polynomial spaces, using cross validation to avoid
overfitting. We numerically evaluate the performance of our algorithm across
different benchmarks, and explore the impact of the intermediate dimension $m$.
We show that building a nonlinear feature map $g$ can permit more accurate
approximation of $u$ than a linear $g$, for the same input data set.
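To make the overall structure of this approach concrete, here is a minimal sketch of the *linear* special case of gradient-based dimension reduction, i.e. the linear $g$ baseline the abstract compares against, not the authors' nonlinear algorithm: $g$ is taken from the dominant eigenvectors of the average outer product of sampled gradients, and the profile $f$ is fit by ordinary least squares on fixed total-degree polynomial features rather than by the gradient-enhanced fit on adaptive downward-closed spaces. The function names, the toy test function, and the feature construction are illustrative assumptions.

```python
# Hedged sketch of gradient-based dimension reduction in the linear case:
# g(x) = V.T @ x, with V spanning the directions in which the sampled
# gradients of u are largest on average; f is a least-squares polynomial.
from itertools import combinations_with_replacement

import numpy as np


def build_linear_feature_map(grads, m):
    """Top-m eigenvectors of the gradient second-moment matrix."""
    H = grads.T @ grads / len(grads)        # (d, d) average outer product
    eigvals, eigvecs = np.linalg.eigh(H)    # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :m]          # keep the m dominant directions


def fit_profile(Z, u_vals, degree=2):
    """Fit a total-degree polynomial profile f(z) ~ u on z = g(x)."""
    def features(Z):
        cols = [np.ones(len(Z))]
        for k in range(1, degree + 1):
            for idx in combinations_with_replacement(range(Z.shape[1]), k):
                cols.append(np.prod(Z[:, list(idx)], axis=1))
        return np.column_stack(cols)

    coef, *_ = np.linalg.lstsq(features(Z), u_vals, rcond=None)
    return lambda Znew: features(Znew) @ coef


# Toy usage: u(x) = (w . x)^2 varies along a single direction, so m = 1
# and a quadratic profile recover it essentially exactly.
rng = np.random.default_rng(0)
d, m, N = 10, 1, 200
w = rng.standard_normal(d)
w /= np.linalg.norm(w)
X = rng.standard_normal((N, d))
u_vals = (X @ w) ** 2
grads = 2.0 * (X @ w)[:, None] * w          # exact gradients of u
V = build_linear_feature_map(grads, m)
f = fit_profile(X @ V, u_vals)
print("training RMSE:", np.sqrt(np.mean((f(X @ V) - u_vals) ** 2)))
```

The point of the comparison in the abstract is that a nonlinear $g$ can capture variation that no single linear projection of this kind can, for the same budget of sampled values and gradients.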