Variational Downscaling, Fusion and Assimilation of Hydrometeorological States via Regularized Estimation
Improved estimation of hydrometeorological states from down-sampled
observations and background model forecasts in a noisy environment has been a
subject of growing research in recent decades. Here, we introduce a unified
framework that ties together the problems of downscaling, data fusion and data
assimilation as ill-posed inverse problems. This framework seeks solutions
beyond the classic least squares estimation paradigms by imposing proper
regularization, that is, constraints consistent with the degree of smoothness
and probabilistic structure of the underlying state. We review relevant
regularization methods in derivative space and extend classic formulations of
the aforementioned problems with particular emphasis on hydrologic and
atmospheric applications. Informed by the statistical characteristics of the
state variable of interest, the central results of the paper suggest that
proper regularization can lead to a more accurate and stable recovery of the
true state and hence more skillful forecasts. In particular, using the Tikhonov
and Huber regularization in the derivative space, the promise of the proposed
framework is demonstrated in static downscaling and fusion of synthetic
multi-sensor precipitation data, while a data assimilation numerical experiment
is presented using the heat equation in a variational setting.
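To make the regularization step concrete, the following minimal sketch (not the paper's implementation) solves a 1-D Tikhonov-regularized downscaling problem in derivative space; the block-averaging operator H, the first-difference operator D, and all parameter values are illustrative assumptions.

# Minimal sketch of Tikhonov-regularized downscaling in derivative space,
# assuming a 1-D state, a block-averaging downsampling operator H, and a
# first-difference operator D (all names here are illustrative).
import numpy as np

def tikhonov_downscale(y, factor, lam):
    """Recover a fine-scale state x from coarse observations y = H x + noise
    by minimizing ||y - H x||^2 + lam * ||D x||^2."""
    m = y.size
    n = m * factor
    # H: block-averaging downsampler (each coarse cell averages `factor` fine cells)
    H = np.kron(np.eye(m), np.full((1, factor), 1.0 / factor))
    # D: first-difference operator, penalizing roughness in derivative space
    D = np.diff(np.eye(n), axis=0)
    A = H.T @ H + lam * D.T @ D
    return np.linalg.solve(A, H.T @ y)

# Usage: downscale a noisy coarse observation of a smooth field by 4x
rng = np.random.default_rng(0)
x_true = np.sin(np.linspace(0, np.pi, 64))
y = x_true.reshape(16, 4).mean(axis=1) + 0.05 * rng.standard_normal(16)
x_hat = tikhonov_downscale(y, factor=4, lam=0.1)

The choice of D controls the smoothness prior; swapping the quadratic penalty for a Huber penalty on D x, as the paper discusses, preserves sharp gradients in fields such as precipitation.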
Regularization and Bayesian Learning in Dynamical Systems: Past, Present and Future
Regularization and Bayesian methods for system identification have been
repopularized in recent years and have proved competitive with
classical parametric approaches. In this paper we attempt to
illustrate how the use of regularization in system identification has evolved
over the years, starting from the early contributions both in the Automatic
Control as well as Econometrics and Statistics literature. In particular we
shall discuss some fundamental issues such as compound estimation problems and
exchangeability, which play an important role in regularization and Bayesian
approaches, as also illustrated in early publications in Statistics. The
historical and foundational issues will be given more emphasis (and space), at
the expense of the more recent developments which are only briefly discussed.
The main reason for such a choice is that, while the recent literature is
readily available, and surveys have already been published on the subject, in
the author's opinion a clear link with past work has not yet been fully
clarified.
Comment: Plenary presentation at IFAC SYSID 2015. Submitted to Annual Reviews in Control.
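As a minimal illustration of the kernel-based regularization methods whose history this survey traces, the sketch below estimates a finite impulse response with a TC ("tuned/correlated") kernel regularizer; the kernel choice, hyperparameters, and data are illustrative assumptions, not the paper's own example.

# Hedged sketch of kernel-based regularization for FIR system identification,
# in the spirit of the approaches surveyed here. The TC kernel
# K[i, j] = alpha^max(i, j) and all parameter values are illustrative.
import numpy as np

def regularized_fir(u, y, n_taps, alpha=0.9, sigma2=0.1):
    """Estimate an impulse response g by regularized least squares:
    g_hat = argmin ||y - Phi g||^2 + sigma2 * g' K^{-1} g."""
    N = y.size
    # Phi: Toeplitz regressor matrix of past inputs (zero initial conditions)
    Phi = np.zeros((N, n_taps))
    for k in range(n_taps):
        Phi[k:, k] = u[: N - k]
    # TC kernel encoding smoothness and exponential decay of g
    idx = np.arange(1, n_taps + 1)
    K = alpha ** np.maximum.outer(idx, idx)
    # Closed-form Bayesian/regularized estimate (posterior mean)
    return K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + sigma2 * np.eye(N), y)

# Usage: identify a decaying impulse response from noisy input-output data
rng = np.random.default_rng(1)
g_true = 0.8 ** np.arange(30)
u = rng.standard_normal(200)
y = np.convolve(u, g_true)[:200] + 0.1 * rng.standard_normal(200)
g_hat = regularized_fir(u, y, n_taps=30)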
Convergence rates of Kernel Conjugate Gradient for random design regression
We prove statistical rates of convergence for kernel-based least squares
regression from i.i.d. data using a conjugate gradient algorithm, where
regularization against overfitting is obtained by early stopping. This method
is related to Kernel Partial Least Squares, a regression method that combines
supervised dimensionality reduction with least squares projection. Following
the setting introduced in earlier related literature, we study so-called "fast
convergence rates" depending on the regularity of the target regression
function (measured by a source condition in terms of the kernel integral
operator) and on the effective dimensionality of the data mapped into the
kernel space. We obtain upper bounds, essentially matching known minimax lower
bounds, for the L^2 (prediction) norm as well as for the stronger
Hilbert norm, if the true regression function belongs to the reproducing kernel
Hilbert space. If the latter assumption is not fulfilled, we obtain similar
convergence rates for appropriate norms, provided additional unlabeled data are
available.
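A minimal sketch of the algorithmic idea follows, assuming a Gaussian kernel and a plain conjugate gradient iteration on K alpha = y as a stand-in for the paper's CG variant (which is related to kernel partial least squares); the stopping iteration acts as the regularization parameter and is chosen on a validation split here.

# Minimal sketch of kernel CG regression with early stopping as the regularizer.
import numpy as np

def gaussian_kernel(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_cg(K, y, n_iters):
    """Return the CG iterates alpha_1, ..., alpha_t for K alpha = y."""
    alpha = np.zeros_like(y)
    r = y.copy()          # residual
    p = r.copy()          # search direction
    iterates = []
    for _ in range(n_iters):
        Kp = K @ p
        step = (r @ r) / (p @ Kp)
        alpha = alpha + step * p
        r_new = r - step * Kp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
        iterates.append(alpha.copy())
    return iterates

# Usage: pick the stopping iteration by validation error (early stopping)
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)
Xtr, ytr, Xva, yva = X[:150], y[:150], X[150:], y[150:]
K = gaussian_kernel(Xtr, Xtr)
Kva = gaussian_kernel(Xva, Xtr)
errs = [np.mean((Kva @ a - yva) ** 2) for a in kernel_cg(K, ytr, 20)]
t_star = int(np.argmin(errs))  # early-stopping iteration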
Optimal Rates for Spectral Algorithms with Least-Squares Regression over Hilbert Spaces
In this paper, we study regression problems over a separable Hilbert space
with the square loss, covering non-parametric regression over a reproducing
kernel Hilbert space. We investigate a class of spectral-regularized
algorithms, including ridge regression, principal component analysis, and
gradient methods. We prove optimal, high-probability convergence results in
terms of variants of norms for the studied algorithms, considering a capacity
assumption on the hypothesis space and a general source condition on the target
function. Consequently, we obtain almost sure convergence results with optimal
rates. Our results improve and generalize previous results, filling a
theoretical gap for the non-attainable cases.
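The unifying idea, sketched below under illustrative assumptions, is that each spectral algorithm applies a filter function g_lambda to the eigenvalues of the kernel matrix; ridge regression and principal component (spectral cut-off) regression differ only in the filter.

# Minimal sketch of the spectral-algorithm viewpoint: each method applies a
# filter g_lambda to the eigenvalues of the kernel matrix. The filters below
# are the standard ones; data and parameters are illustrative.
import numpy as np

def spectral_estimator(K, y, g_lambda):
    """Coefficients alpha = V g_lambda(S) V^T y, where K = V diag(S) V^T."""
    s, V = np.linalg.eigh(K)
    return V @ (g_lambda(np.clip(s, 0.0, None)) * (V.T @ y))

# Ridge regression (Tikhonov): g(s) = 1 / (s + lam)
ridge = lambda lam: (lambda s: 1.0 / (s + lam))
# Principal component regression (spectral cut-off): g(s) = 1/s if s > lam else 0
pcr = lambda lam: (lambda s: np.where(s > lam, 1.0 / np.maximum(s, lam), 0.0))

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 2))
y = X[:, 0] + 0.1 * rng.standard_normal(100)
K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))
alpha_ridge = spectral_estimator(K, y, ridge(1e-2))
alpha_pcr = spectral_estimator(K, y, pcr(1e-2))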
Multi-Target Prediction: A Unifying View on Problems and Methods
Multi-target prediction (MTP) is concerned with the simultaneous prediction
of multiple target variables of diverse type. Due to its enormous application
potential, it has developed into an active and rapidly expanding research field
that combines several subfields of machine learning, including multivariate
regression, multi-label classification, multi-task learning, dyadic prediction,
zero-shot learning, network inference, and matrix completion. In this paper, we
present a unifying view on MTP problems and methods. First, we formally discuss
commonalities and differences between existing MTP problems. To this end, we
introduce a general framework that covers the above subfields as special cases.
As a second contribution, we provide a structured overview of MTP methods. This
is accomplished by identifying a number of key properties, which distinguish
such methods and determine their suitability for different types of problems.
Finally, we also discuss a few challenges for future research.
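As a minimal concrete instance of MTP, the sketch below fits the simplest special case, multivariate ridge regression, in which all targets share the inputs and are predicted simultaneously by one closed-form solve; the data and regularization weight are illustrative.

# Minimal sketch of the simplest MTP instance: multi-output ridge regression.
import numpy as np

def multi_output_ridge(X, Y, lam=1.0):
    """W = argmin ||Y - X W||_F^2 + lam * ||W||_F^2 (one column per target)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 5))
W_true = rng.standard_normal((5, 3))          # three targets of the same type
Y = X @ W_true + 0.1 * rng.standard_normal((200, 3))
W_hat = multi_output_ridge(X, Y, lam=0.5)
Y_pred = X @ W_hat                            # simultaneous multi-target prediction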
Sharp analysis of low-rank kernel matrix approximations
We consider supervised learning problems within the positive-definite kernel
framework, such as kernel ridge regression, kernel logistic regression or the
support vector machine. With kernels leading to infinite-dimensional feature
spaces, a common practical limiting difficulty is the necessity of computing
the kernel matrix, which most frequently leads to algorithms with running time
at least quadratic in the number of observations n, i.e., O(n^2). Low-rank
approximations of the kernel matrix are often considered as they allow the
reduction of running time complexities to O(p^2 n), where p is the rank of the
approximation. The practicality of such methods thus depends on the required
rank p. In this paper, we show that in the context of kernel ridge regression,
for approximations based on a random subset of columns of the original kernel
matrix, the rank p may be chosen to be linear in the degrees of freedom
associated with the problem, a quantity which is classically used in the
statistical analysis of such methods, and is often seen as the implicit number
of parameters of non-parametric estimators. This result enables simple
algorithms that have sub-quadratic running time complexity, but provably
exhibit the same predictive performance as existing algorithms, for any given
problem instance, and not only for worst-case situations.
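A hedged sketch of the construction analyzed here: kernel ridge regression with a low-rank approximation built from a random subset of p columns (landmarks) of the kernel matrix; the kernel, the rank p, and the regularization value are illustrative assumptions.

# Hedged sketch of kernel ridge regression with a low-rank (Nystrom-type)
# approximation from p random kernel-matrix columns: O(p^2 n) instead of O(n^3).
import numpy as np

def nystrom_krr(X, y, kernel, p, lam, rng):
    """Fit KRR on p random landmark columns of the kernel matrix."""
    n = X.shape[0]
    idx = rng.choice(n, size=p, replace=False)
    Knp = kernel(X, X[idx])               # n x p block of columns
    Kpp = Knp[idx]                        # p x p landmark block
    # Reduced problem: (Knp' Knp + lam * Kpp) beta = Knp' y
    beta = np.linalg.solve(Knp.T @ Knp + lam * Kpp + 1e-10 * np.eye(p),
                           Knp.T @ y)
    return idx, beta                      # predict with kernel(X_new, X[idx]) @ beta

def rbf(X, Z, gamma=0.5):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Usage: n = 1000 samples, rank p = 50 landmark columns
rng = np.random.default_rng(5)
X = rng.uniform(-2, 2, (1000, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(1000)
idx, beta = nystrom_krr(X, y, rbf, p=50, lam=1e-3, rng=rng)
y_pred = rbf(X, X[idx]) @ beta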
Kernel-based stochastic collocation for the random two-phase Navier-Stokes equations
In this work, we apply stochastic collocation methods with radial kernel
basis functions for an uncertainty quantification of the random incompressible
two-phase Navier-Stokes equations. Our approach is non-intrusive and we use the
existing fluid dynamics solver NaSt3DGPF to solve the incompressible two-phase
Navier-Stokes equation for each given realization. We are able to empirically
show that the resulting kernel-based stochastic collocation is highly
competitive in this setting and even outperforms some other standard methods.
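A minimal sketch of non-intrusive stochastic collocation with a radial kernel basis follows; here a cheap toy model stands in for the deterministic solver runs (NaSt3DGPF in the paper), and the collocation nodes and Gaussian shape parameter are illustrative assumptions.

# Minimal sketch of non-intrusive stochastic collocation with a radial kernel
# basis. The expensive solver is replaced by a cheap illustrative model q(xi).
import numpy as np

def solver_qoi(xi):
    """Stand-in for one deterministic solver run at random parameter xi."""
    return np.sin(np.pi * xi) + 0.5 * xi ** 2

def rbf_surrogate(nodes, values, eps=2.0):
    """Fit a Gaussian-RBF interpolant through (node, value) pairs."""
    A = np.exp(-(eps * (nodes[:, None] - nodes[None, :])) ** 2)
    w = np.linalg.solve(A, values)
    return lambda x: np.exp(-(eps * (x[:, None] - nodes[None, :])) ** 2) @ w

# Collocation: run the solver only at a few nodes of the random parameter
nodes = np.linspace(-1.0, 1.0, 9)
values = np.array([solver_qoi(x) for x in nodes])
surrogate = rbf_surrogate(nodes, values)

# UQ step: sample the cheap surrogate to estimate output statistics
rng = np.random.default_rng(6)
samples = surrogate(rng.uniform(-1.0, 1.0, 100_000))
mean, var = samples.mean(), samples.var()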