Efficient implementation of symplectic implicit Runge-Kutta schemes with simplified Newton iterations
We are concerned with the efficient implementation of symplectic implicit
Runge-Kutta (IRK) methods applied to systems of (not necessarily Hamiltonian)
ordinary differential equations by means of Newton-like iterations. We pay
particular attention to symmetric symplectic IRK schemes (such as collocation
methods with Gaussian nodes). For an s-stage IRK scheme used to integrate a
d-dimensional system of ordinary differential equations, the application of
simplified versions of Newton iterations requires solving at each step several
linear systems (one per iteration) with the same sd × sd real
coefficient matrix. We propose rewriting such sd-dimensional linear systems
as equivalent (s+1)d-dimensional systems that can be solved by performing
the LU decompositions of ⌈s/2⌉ real matrices of size 2d × 2d. We
present a C implementation (based on Newton-like iterations) of Runge-Kutta
collocation methods with Gaussian nodes that makes use of such a rewriting of
the linear systems and takes special care in reducing the effect of
round-off errors. We report some numerical experiments that demonstrate the
reduced round-off error propagation of our implementation.
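For orientation, the linear systems in question arise from the standard
simplified Newton iteration for IRK schemes; the following is a textbook
sketch in assumed notation, not an excerpt from the paper. For y' = f(y), the
stage values satisfy Y_i = y_n + h \sum_j a_{ij} f(Y_j); freezing the Jacobian
at J = f'(y_n) yields, at every iteration k, a linear system with one fixed
coefficient matrix,

\[
\bigl(I_{sd} - h\,(A \otimes J)\bigr)\,\Delta^{(k)} = r^{(k)},
\qquad
Y^{(k+1)} = Y^{(k)} + \Delta^{(k)},
\]

where A = (a_{ij}) is the s × s Runge-Kutta matrix and r^{(k)} stacks the stage
residuals, so a single LU factorization of the sd × sd matrix can in principle
be reused across all iterations of a step.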
Chebyshev interpolation for functions with endpoint singularities via exponential and double-exponential transforms
We present five theorems concerning the asymptotic convergence rates of Chebyshev interpolation applied to functions transplanted to either a semi-infinite or an infinite interval under exponential or double-exponential transformations. This strategy is useful for approximating and computing with functions that are analytic apart from endpoint singularities. The use of Chebyshev polynomials instead of the more commonly used cardinal sinc or Fourier interpolants is important because it enables one to apply maps to semi-infinite intervals for functions which have only a single endpoint singularity. In such cases, this leads to significantly improved convergence rates.
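As a minimal sketch of this transform-then-interpolate strategy (the target
function, the truncation point L, and the use of NumPy's Chebyshev utilities
are all illustrative choices here, not taken from the paper): the change of
variables x = exp(-t) pushes an endpoint singularity at x = 0 off to
t = +infinity, where the transplanted function is smooth, so ordinary
Chebyshev interpolation applies on a truncated interval.

import numpy as np
from numpy.polynomial import chebyshev as cheb

# Illustrative target: smooth on (0, 1] but with a square-root singularity at x = 0.
f = lambda x: np.sqrt(x) * np.cos(x)

# Exponential transform x = exp(-t): the singularity moves to t = +inf.
# Truncate the semi-infinite interval at t = L (an arbitrary choice here).
L, n = 30.0, 120

# Chebyshev points of the second kind on [0, L].
t = 0.5 * L * (1.0 - np.cos(np.pi * np.arange(n + 1) / n))
g = f(np.exp(-t))  # transplanted function, smooth in t

# Interpolate g on [0, L], mapping to [-1, 1] for the Chebyshev basis.
coef = cheb.chebfit(2.0 * t / L - 1.0, g, n)

def f_approx(x):
    # Valid for x in [exp(-L), 1]; outside that range chebval extrapolates.
    return cheb.chebval(2.0 * (-np.log(x)) / L - 1.0, coef)

xs = np.linspace(np.exp(-L), 1.0, 2000)
print("max error:", np.max(np.abs(f_approx(xs) - f(xs))))

The same idea with a double-exponential map, say x = exp(-c * sinh(t)),
concentrates even more resolution near the singular endpoint, which is the
second family of transforms the abstract refers to.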
Robustness Verification of Support Vector Machines
We study the problem of formally verifying the robustness to adversarial
examples of support vector machines (SVMs), a major machine learning model for
classification and regression tasks. Following a recent stream of works on
formal robustness verification of (deep) neural networks, our approach relies
on a sound abstract version of a given SVM classifier to be used for checking
its robustness. This methodology is parametric on a given numerical abstraction
of real values and, analogously to the case of neural networks, needs neither
abstract least upper bounds nor widening operators on this abstraction. The
standard interval domain provides a simple instantiation of our abstraction
technique, which is enhanced with the domain of reduced affine forms, an
efficient abstraction of the zonotope abstract domain. This robustness
verification technique has been fully implemented and experimentally evaluated
on SVMs based on linear and nonlinear (polynomial and radial basis function)
kernels, which have been trained on the popular MNIST dataset of images and on
the recent and more challenging Fashion-MNIST dataset. The experimental results
of our prototype SVM robustness verifier are encouraging: this automated
verification is fast and scalable, and it shows high percentages of provable
robustness on the MNIST test set, significantly higher than the analogous
provable robustness of neural networks.
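To make the interval-domain instantiation concrete, here is a minimal sketch
for the simplest case only: a linear-kernel SVM under an L-infinity
perturbation ball (the function name and data are hypothetical, and the
verifier described above additionally handles polynomial and RBF kernels and
the reduced-affine-form domain).

import numpy as np

def provably_robust_linear_svm(w, b, x0, eps):
    """Interval-domain check that sign(w . x + b) is constant over the
    box {x : ||x - x0||_inf <= eps}. For a linear score the interval
    bound is exact: the score ranges over [c - r, c + r]."""
    c = float(np.dot(w, x0)) + b
    r = eps * float(np.abs(w).sum())  # max deviation of w . x over the box
    return c - r > 0.0 or c + r < 0.0  # True => classification provably stable

# Hypothetical usage on a toy two-feature classifier.
w, b = np.array([1.5, -2.0]), 0.25
x0 = np.array([1.0, 0.2])
print(provably_robust_linear_svm(w, b, x0, eps=0.1))  # True

For nonlinear kernels the score is no longer linear in the input, so plain
interval bounds become over-approximate; that looseness is what tighter
domains such as reduced affine forms are meant to reduce.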