Error estimates of a Fourier integrator for the cubic Schrödinger equation at low regularity
We present a new filtered low-regularity Fourier integrator for the cubic
nonlinear Schrödinger equation based on recent time discretization and
filtering techniques. For this new scheme, we perform a rigorous error analysis
and establish better convergence rates at low regularity than previously
known for classical schemes. In our error estimates, we combine
the better local error properties of the new scheme with a stability analysis
based on general discrete Strichartz-type estimates. The latter allow us to
handle a much rougher class of solutions, as the error analysis can be
carried out directly at the level of $L^2$, compared to classical results
in dimension $d$, which are limited to higher-order (sufficiently smooth)
Sobolev spaces $H^s$ with $s > d/2$. In particular, we are able to establish
a global error estimate in $L^2$ for $H^1$ solutions which is roughly of
order $\tau^{\frac{1}{2}+\frac{5-d}{12}}$ in dimension $d \le 3$ ($\tau$
denoting the time discretization parameter). This breaks the "natural order
barrier" of $\tau^{s/2}$ for $H^s$ solutions which holds for classical
numerical schemes (even in combination with suitable filter functions).
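The "natural order barrier" invoked above can be written schematically as follows (a sketch only; the precise norms, constants, and assumptions are those of the paper):

```latex
% Classical time integrators applied to H^s data, t_n = n\tau on a fixed [0, T]:
% the convergence order in L^2 degrades to s/2 and cannot be improved in general.
\max_{0 \le t_n \le T} \bigl\| u^n - u(t_n) \bigr\|_{L^2}
  \le C\, \tau^{s/2},
\qquad u \in C\bigl([0, T]; H^s\bigr),
```

where $u^n$ denotes the fully discrete approximation at time $t_n$.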
A Fourier integrator for the cubic nonlinear Schrödinger equation with rough initial data
Standard numerical integrators suffer from an order reduction when applied to
nonlinear Schrödinger equations with low-regularity initial data. For
example, standard Strang splitting requires the boundedness of the solution
in $H^{r+4}$ in order to be second-order convergent in $H^r$, i.e., it requires
the boundedness of four additional derivatives of the solution. We present a
new type of integrator that is based on the variation-of-constants formula and
makes use of certain resonance based approximations in Fourier space. The
latter can be efficiently evaluated by fast Fourier methods. For second-order
convergence, the new integrator requires two additional derivatives of the
solution in one space dimension, and three derivatives in higher space
dimensions. Numerical examples illustrating our convergence results are
included. These examples demonstrate the clear advantage of the Fourier
integrator over standard Strang splitting for initial data with low regularity.
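For concreteness, here is a minimal numpy sketch of the classical Strang splitting baseline mentioned above for the periodic cubic equation $i\partial_t u = -\partial_{xx} u + |u|^2 u$, with the linear flow evaluated exactly by FFT. This is the comparison scheme, not the resonance-based integrator itself; grid size, step size, and initial datum are arbitrary choices for illustration:

```python
import numpy as np

def strang_step(u, dt, k2):
    """One Strang-splitting step for i u_t = -u_xx + |u|^2 u on a periodic grid.

    Both substeps are exact flows: the nonlinear part is a pointwise phase
    rotation (|u| is conserved along it), and the linear part is a Fourier
    multiplier evaluated by FFT.
    """
    u = np.exp(-0.5j * dt * np.abs(u) ** 2) * u              # nonlinear half-step
    u = np.fft.ifft(np.exp(-1j * dt * k2) * np.fft.fft(u))   # exact linear step
    u = np.exp(-0.5j * dt * np.abs(u) ** 2) * u              # nonlinear half-step
    return u

# Periodic grid on [0, 2*pi) with integer wave numbers
N = 256
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)   # 0, 1, ..., N/2-1, -N/2, ..., -1
k2 = k ** 2

u = np.exp(1j * np.cos(x))         # smooth illustrative initial datum
mass0 = np.mean(np.abs(u) ** 2)    # discrete L^2 mass

dt = 1e-3
for _ in range(100):
    u = strang_step(u, dt, k2)

mass = np.mean(np.abs(u) ** 2)
print(abs(mass - mass0))
```

Both substeps are exact flows, so the splitting conserves the discrete mass up to round-off; the order reduction discussed above concerns temporal accuracy for rough data, not this conservation property.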
mlr3spatiotempcv: Spatiotemporal resampling methods for machine learning in R
Spatial and spatiotemporal machine-learning models require a suitable
framework for their model assessment, model selection, and hyperparameter
tuning, in order to avoid error estimation bias and over-fitting. This
contribution reviews the state-of-the-art in spatial and spatiotemporal
cross-validation, and introduces the {R} package {mlr3spatiotempcv} as an
extension package of the machine-learning framework {mlr3}. Currently various
{R} packages implementing different spatiotemporal partitioning strategies
exist: {blockCV}, {CAST}, {skmeans} and {sperrorest}. The goal of
{mlr3spatiotempcv} is to gather the available spatiotemporal resampling methods
in {R} and make them available to users through a simple and common interface.
This is made possible by integrating the package directly into the {mlr3}
machine-learning framework, which already has support for generic
non-spatiotemporal resampling methods such as random partitioning. One
advantage is the use of a consistent nomenclature in an overarching
machine-learning toolkit instead of a varying package-specific syntax, making
it easier for users to choose from a variety of spatiotemporal resampling
methods. This package avoids giving recommendations which method to use in
practice as this decision depends on the predictive task at hand, the
autocorrelation within the data, and the spatial structure of the sampling
design or geographic objects being studied.
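The package itself is an R extension of {mlr3}, but the core idea of block-based spatial resampling can be sketched language-agnostically. Below is a small Python illustration (not the {mlr3spatiotempcv} API; function and parameter names are invented for this example) that tiles the study area into rectangular blocks and assigns whole blocks to folds, so that spatially close, autocorrelated observations never end up on both sides of a train/test split:

```python
import numpy as np

def spatial_block_folds(coords, n_splits=5, n_blocks=4):
    """Tile the bounding box into n_blocks x n_blocks rectangles and deal
    whole blocks out to folds round-robin. Observations from the same block
    are never split across training and test sets."""
    x, y = coords[:, 0], coords[:, 1]
    # Block index of every point along each axis (0 .. n_blocks - 1)
    bx = np.minimum((n_blocks * (x - x.min()) / np.ptp(x)).astype(int), n_blocks - 1)
    by = np.minimum((n_blocks * (y - y.min()) / np.ptp(y)).astype(int), n_blocks - 1)
    block = bx * n_blocks + by                       # flat block id per point
    # Round-robin assignment of blocks to folds
    fold_of_block = {b: i % n_splits for i, b in enumerate(np.unique(block))}
    folds = np.array([fold_of_block[b] for b in block])
    for k in range(n_splits):
        test = folds == k
        yield np.where(~test)[0], np.where(test)[0]  # (train idx, test idx)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(500, 2))          # toy point locations
splits = list(spatial_block_folds(coords))
```

In {mlr3spatiotempcv} the same idea is exposed through the framework's common resampling interface, which is exactly the uniform nomenclature the abstract describes.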
Low-regularity exponential-type integrators for semilinear Schrödinger equations
We introduce low regularity exponential-type integrators for nonlinear Schrödinger equations for which first-order convergence only requires the boundedness of one additional derivative of the solution. More precisely, we will prove first-order convergence in $H^r$ for solutions in $H^{r+1}$ ($r > d/2$) of the derived schemes. This allows us to impose lower regularity assumptions on the data than, for instance, required for classical splitting or exponential integrator schemes. For one-dimensional quadratic Schrödinger equations we can even prove first-order convergence without any loss of regularity. Numerical experiments underline the favorable error behavior of the newly introduced exponential-type integrators for low regularity solutions compared to classical splitting and exponential integration schemes.
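Integrators of this type start from the variation-of-constants (Duhamel) formula; for the model cubic equation $i\partial_t u = -\Delta u + \mu|u|^2 u$ with $\mu = \pm 1$ it reads (a standard identity, stated here for orientation):

```latex
u(t_n + \tau) = \mathrm{e}^{\mathrm{i}\tau\Delta} u(t_n)
  - \mathrm{i}\mu \int_0^{\tau} \mathrm{e}^{\mathrm{i}(\tau - s)\Delta}
    \bigl( |u(t_n + s)|^2\, u(t_n + s) \bigr)\, \mathrm{d}s.
```

Roughly speaking, the low-regularity schemes approximate this oscillatory integral by resolving the dominant frequency interactions exactly in Fourier space rather than by Taylor expansion, which is where the weaker regularity requirements originate.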
Performance evaluation and hyperparameter tuning of statistical and machine-learning models using spatial data
Machine-learning algorithms have gained popularity in recent years in the
field of ecological modeling due to their promising results in predictive
performance of classification problems. While the application of such
algorithms has been greatly simplified in recent years due to their
well-documented integration in commonly used statistical programming languages
such as R, there are several practical challenges in the field of ecological
modeling related to unbiased performance estimation, optimization of algorithms
using hyperparameter tuning and spatial autocorrelation. We address these
issues in the comparison of several widely used machine-learning algorithms
such as Boosted Regression Trees (BRT), weighted k-Nearest Neighbor (WKNN), Random
Forest (RF) and Support Vector Machine (SVM) to traditional parametric
algorithms such as logistic regression (GLM) and semi-parametric ones like
generalized additive models (GAM). Different nested cross-validation methods
including hyperparameter tuning methods are used to evaluate model performances
with the aim of obtaining bias-reduced performance estimates. As a case study, the
spatial distribution of the forest disease Diplodia sapinea in the Basque Country
in Spain is investigated using common environmental variables such as
temperature, precipitation, soil or lithology as predictors. Results show that
GAM and RF (mean AUROC estimates 0.708 and 0.699) outperform all other methods
in predictive accuracy. The effect of hyperparameter tuning saturates at around
50 iterations for this data set. The AUROC differences between the bias-reduced
(spatial cross-validation) and overoptimistic (non-spatial cross-validation)
performance estimates of the GAM and RF are 0.167 (24%) and 0.213 (30%),
respectively. It is recommended to also use spatial partitioning for
cross-validation-based hyperparameter tuning of spatial data.
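The nested setup described above, with spatial partitioning in both the outer performance-estimation loop and the inner tuning loop, can be sketched as follows in Python on synthetic data. This is a conceptual illustration with an invented toy task and a plain numpy k-NN, not the models or data from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic spatial task: the class depends on location, so nearby points
# are correlated and random splits would leak information
coords = rng.uniform(0, 10, size=(300, 2))
y = (coords[:, 0] + rng.normal(0, 1.0, 300) > 5).astype(int)
X = coords  # in this toy example the predictors are the coordinates

# Spatial grouping: 3x3 blocks; whole blocks go to the same fold
block = (coords[:, 0] // (10 / 3)).astype(int) * 3 + (coords[:, 1] // (10 / 3)).astype(int)
outer_fold = block % 3

def knn_predict(Xtr, ytr, Xte, k):
    """Plain k-nearest-neighbour majority vote (numpy only)."""
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    nearest = np.argsort(d, axis=1)[:, :k]
    return (ytr[nearest].mean(axis=1) > 0.5).astype(int)

accs = []
for f in range(3):                      # outer loop: unbiased performance estimate
    tr, te = outer_fold != f, outer_fold == f
    inner = block[tr] % 2               # inner loop: tune k on training blocks only
    best_k, best_acc = None, -1.0
    for k in (1, 3, 5):
        acc = np.mean([
            (knn_predict(X[tr][inner != g], y[tr][inner != g],
                         X[tr][inner == g], k) == y[tr][inner == g]).mean()
            for g in range(2)
        ])
        if acc > best_acc:
            best_k, best_acc = k, acc
    # evaluate the tuned model on the held-out spatial blocks
    accs.append((knn_predict(X[tr], y[tr], X[te], best_k) == y[te]).mean())

print(round(float(np.mean(accs)), 3))
```

Replacing the spatial block assignment with random folds in either loop reproduces the kind of overoptimistic estimates the abstract quantifies.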