    Greedy kernel methods for accelerating implicit integrators for parametric ODEs

    We present a novel acceleration method for the solution of parametric ODEs by single-step implicit solvers, based on greedy kernel surrogate models. In an offline phase, a set of trajectories is precomputed with a high-accuracy ODE solver for a selected set of parameter samples and used to train a kernel model that predicts the next point of the trajectory as a function of the current one. This model is cheap to evaluate, and in an online phase it provides, for new parameter samples, a good initialization point for the nonlinear solver of the implicit integrator. The accuracy of the surrogate translates into a reduction of the number of iterations required for the solver to converge, and thus into an overall speedup of the full simulation. Notably, the acceleration comes at no cost in accuracy, since the ODE solver is still used to guarantee the required precision. Although the method can be applied to a large variety of solvers and different ODEs, we present in detail its use with the Implicit Euler method for the solution of the Burgers equation, which proves to be a meaningful test case for demonstrating the method's features.
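    As a rough illustration of the offline/online idea, the sketch below applies it to a scalar parametric ODE with an implicit Euler step solved by Newton iteration: trajectories are precomputed offline for a few parameter samples, a plain kernel ridge model (standing in here for the paper's greedy kernel method) is fit to predict the next state from the current one, and online the surrogate prediction seeds the Newton iteration for a new parameter value. The toy ODE and all function names are illustrative assumptions, not taken from the paper.

# Minimal sketch: kernel-surrogate-accelerated implicit Euler for dy/dt = -a*y^3.
# The surrogate is plain Gaussian kernel ridge regression, a simplified stand-in
# for the greedy kernel model used in the paper.
import numpy as np

def f(y, a):               # right-hand side of the parametric ODE
    return -a * y**3

def df(y, a):              # derivative of f w.r.t. y, used by Newton's method
    return -3.0 * a * y**2

def implicit_euler_step(y_n, h, a, y_init, tol=1e-12, max_iter=50):
    """Solve y = y_n + h*f(y; a) by Newton iteration, starting from y_init."""
    y, iters = y_init, 0
    for _ in range(max_iter):
        g = y - y_n - h * f(y, a)
        if abs(g) < tol:
            break
        y -= g / (1.0 - h * df(y, a))
        iters += 1
    return y, iters

def gauss_kernel(X, Y, gamma=10.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Offline phase: precompute trajectories for sampled parameters, fit the surrogate.
h, n_steps, y0 = 0.05, 40, 1.0
train_params = np.linspace(0.5, 2.0, 8)
X_train, Y_train = [], []
for a in train_params:
    y = y0
    for _ in range(n_steps):
        y_next, _ = implicit_euler_step(y, h, a, y_init=y)   # plain initial guess
        X_train.append([y, a])
        Y_train.append(y_next)
        y = y_next
X_train, Y_train = np.array(X_train), np.array(Y_train)
K = gauss_kernel(X_train, X_train)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(K)), Y_train)  # kernel ridge fit

def surrogate(y, a):
    k = gauss_kernel(np.array([[y, a]]), X_train)            # (1, n_train)
    return (k @ alpha)[0]

# Online phase: new parameter; the surrogate provides the Newton starting point.
a_new, y = 1.3, y0
plain_iters = accel_iters = 0
for _ in range(n_steps):
    _, it0 = implicit_euler_step(y, h, a_new, y_init=y)                       # baseline
    y_next, it1 = implicit_euler_step(y, h, a_new, y_init=surrogate(y, a_new))
    plain_iters += it0
    accel_iters += it1
    y = y_next
print(f"Newton iterations: baseline={plain_iters}, surrogate-initialized={accel_iters}")

    Because the surrogate only supplies the initial guess, the Newton iteration still converges to the implicit Euler solution to the requested tolerance, which is why the accuracy of the simulation is unaffected.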

    The ROMES method for statistical modeling of reduced-order-model error

    This work presents a technique for statistically modeling the errors introduced by reduced-order models. The method employs Gaussian-process regression to construct a mapping from a small number of computationally inexpensive 'error indicators' to a distribution over the true error. The variance of this distribution can be interpreted as the (epistemic) uncertainty introduced by the reduced-order model. To model normed errors, the method employs existing rigorous error bounds and residual norms as indicators; numerical experiments show that the method leads to a near-optimal expected effectivity, in contrast to typical error bounds. To model errors in general outputs, the method uses dual-weighted residuals, which are amenable to uncertainty control, as indicators. Experiments illustrate that correcting the reduced-order-model output with this surrogate can improve prediction accuracy by an order of magnitude; this contrasts with existing 'multifidelity correction' approaches, which often fail for reduced-order models and suffer from the curse of dimensionality. The proposed error surrogates also lead to a notion of 'probabilistic rigor', i.e., the surrogate bounds the error with a specified probability.
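    The core regression step can be sketched with scikit-learn's Gaussian-process tools: fit a GP from a cheap indicator (here a synthetic log residual norm) to the true reduced-order-model error, then query its predictive mean and standard deviation for new points, reading the spread as the epistemic uncertainty. The synthetic data and variable names below are illustrative placeholders, not the paper's benchmarks.

# Minimal sketch of the indicator-to-error regression, assuming scikit-learn.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Training set: (indicator, true error) pairs, e.g. collected offline for a
# sample of parameters where both ROM and full-order solutions are available.
log_rho = rng.uniform(-6, -1, size=40)                  # log residual norms (indicator)
log_err = 0.9 * log_rho + 0.3 * rng.normal(size=40)     # correlated log errors

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1),
    normalize_y=True,
)
gp.fit(log_rho.reshape(-1, 1), log_err)

# Online: for a new parameter only the cheap indicator is computed; the GP
# returns a distribution over the (log) error rather than a point estimate.
new_rho = np.array([[-3.5]])
mean, std = gp.predict(new_rho, return_std=True)
print(f"predicted log-error: {mean[0]:.2f} +/- {2 * std[0]:.2f} (approx. 95% band)")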

    Linear Time Feature Selection for Regularized Least-Squares

    We propose a novel algorithm for greedy forward feature selection for regularized least-squares (RLS) regression and classification, also known as the least-squares support vector machine or ridge regression. The algorithm, which we call greedy RLS, starts from the empty feature set and, on each iteration, adds the feature whose addition yields the best leave-one-out cross-validation performance. Our method is considerably faster than previously proposed ones, since its time complexity is linear in the number of training examples, the number of features in the original data set, and the desired size of the set of selected features. As a side effect, we therefore obtain a new training algorithm for learning sparse linear RLS predictors that can be used for large-scale learning. This speed is made possible by matrix-calculus-based shortcuts for leave-one-out computation and feature addition. We experimentally demonstrate the scalability of our algorithm and its ability to find good-quality feature sets.
    Comment: 17 pages, 15 figures
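    A simplified sketch of the selection criterion is shown below. It scores candidate features by the closed-form leave-one-out error of ridge regression and refits from scratch at each step, so it does not reproduce the paper's linear-time matrix-calculus updates; it only illustrates what greedy RLS optimizes. The helper names and toy data are illustrative assumptions.

# Minimal sketch: greedy forward feature selection for ridge regression,
# scored by leave-one-out MSE via the hat-matrix identity e_i / (1 - H_ii).
import numpy as np

def loo_mse(X, y, lam):
    """Leave-one-out MSE of ridge regression via the hat-matrix diagonal."""
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    residuals = y - H @ y
    loo_res = residuals / (1.0 - np.diag(H))
    return float(np.mean(loo_res ** 2))

def greedy_select(X, y, k, lam=1.0):
    """Add features one at a time, each time picking the one with the best LOO error."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        scores = [(loo_mse(X[:, selected + [j]], y, lam), j) for j in remaining]
        _, best_j = min(scores)
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Toy usage: 200 examples, 30 features, only three of which carry signal.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 30))
y = X[:, 3] - 2.0 * X[:, 7] + 0.5 * X[:, 11] + 0.1 * rng.normal(size=200)
print("selected features:", greedy_select(X, y, k=3))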