Stable Gaussian Process based Tracking Control of Lagrangian Systems
High-performance tracking control can only be achieved if a good model of the
dynamics is available. However, such a model is often difficult to obtain from
first-order physics alone. In this paper, we develop a data-driven control law
that ensures closed-loop stability of Lagrangian systems. For this purpose, we
use Gaussian process regression for the feed-forward compensation of the
unknown dynamics of the system. The gains of the feedback part are adapted
based on the uncertainty of the learned model. Thus, the feedback gains are
kept low as long as the learned model describes the true system with sufficient
precision. We show how to select a suitable gain adaptation law that incorporates
the uncertainty of the model to guarantee a globally bounded tracking error. A
simulation with a robot manipulator demonstrates the efficacy of the proposed
control law.
Comment: Please cite the conference paper. arXiv admin note: text overlap with arXiv:1806.0719
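As a rough illustration of the scheme described above (not the authors' implementation; the gain law, kernel, and all numbers are assumptions for a 1-DoF system), the following Python sketch forms the feed-forward torque from a GP's posterior mean and scales PD feedback gains with the posterior standard deviation:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Train a GP on (state, torque-residual) pairs: the residual is the part of
# the dynamics that the nominal model misses (synthetic stand-in here).
X_train = rng.uniform(-1.0, 1.0, size=(50, 2))     # columns: [q, q_dot]
y_train = np.sin(X_train[:, 0]) * X_train[:, 1]    # made-up residual dynamics
gp = GaussianProcessRegressor(kernel=RBF(), alpha=1e-4).fit(X_train, y_train)

def control(q, q_dot, q_des, q_dot_des, kp0=5.0, kd0=1.0, beta=10.0):
    """Tracking law: GP feed-forward plus uncertainty-adapted PD feedback."""
    mu, sigma = gp.predict(np.array([[q, q_dot]]), return_std=True)
    # Hypothetical adaptation law: gains grow with the GP's predictive
    # uncertainty, so they stay low wherever the learned model is trusted.
    kp = kp0 + beta * sigma[0]
    kd = kd0 + beta * sigma[0]
    return mu[0] + kp * (q_des - q) + kd * (q_dot_des - q_dot)

print(control(0.1, 0.0, 0.0, 0.0))                 # example torque command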
Robustness and Generalization
We derive generalization bounds for learning algorithms based on their
robustness: the property that if a testing sample is "similar" to a training
sample, then the testing error is close to the training error. This provides a
novel approach, different from the complexity or stability arguments, to study
generalization of learning algorithms. We further show that a weak notion of
robustness is both sufficient and necessary for generalizability, which implies
that robustness is a fundamental property for learning algorithms to work.
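To make the robustness notion concrete, the standard statement from this line of work (paraphrased; the exact constants are as I recall them and should be checked against the paper) is: an algorithm $\mathcal{A}$ trained on a set $s$ of $n$ samples is $(K, \epsilon(s))$-robust if the sample space admits a partition into $K$ sets $C_1, \dots, C_K$ such that
\[
  z \in s,\; z, z' \in C_k \;\Longrightarrow\; |\ell(\mathcal{A}_s, z) - \ell(\mathcal{A}_s, z')| \le \epsilon(s).
\]
For a loss bounded by $M$, this yields, with probability at least $1 - \delta$,
\[
  |L(\mathcal{A}_s) - L_{\mathrm{emp}}(\mathcal{A}_s)| \le \epsilon(s) + M \sqrt{\frac{2K \ln 2 + 2 \ln(1/\delta)}{n}}.
\]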
A study of the classification of low-dimensional data with supervised manifold learning
Supervised manifold learning methods learn data representations by preserving
the geometric structure of data while enhancing the separation between data
samples from different classes. In this work, we propose a theoretical study of
supervised manifold learning for classification. We consider nonlinear
dimensionality reduction algorithms that yield linearly separable embeddings of
training data and present generalization bounds for this type of algorithm. A
necessary condition for satisfactory generalization performance is that the
embedding allow the construction of a sufficiently regular interpolation
function in relation with the separation margin of the embedding. We show that
for supervised embeddings satisfying this condition, the classification error
decays at an exponential rate with the number of training samples. Finally, we
examine the separability of supervised nonlinear embeddings that aim to
preserve the low-dimensional geometric structure of data based on graph
representations. The proposed analysis is supported by experiments on several
real data sets.
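As a toy illustration of this setting (not the paper's algorithm; the embedding, data, and kernel width are made up), the sketch below builds a linearly separable supervised embedding and classifies new points via an RBF interpolant over the embedded training points:

import numpy as np

rng = np.random.default_rng(0)
X0 = rng.normal([-2.0, 0.0], 0.5, size=(40, 2))   # class -1 samples
X1 = rng.normal([+2.0, 0.0], 0.5, size=(40, 2))   # class +1 samples
X = np.vstack([X0, X1])
y = np.r_[-np.ones(40), np.ones(40)]

# Toy "supervised embedding": project onto the direction joining the class
# means, which is linearly separable here since the classes are clustered.
w = X1.mean(axis=0) - X0.mean(axis=0)
embed = lambda Z: Z @ w[:, None]                  # 1-D embedding

def rbf(a, b, gamma=1.0):
    """RBF kernel matrix between two column vectors of embedded points."""
    return np.exp(-gamma * (a - b.T) ** 2)

# Interpolate the labels over the embedded training points; the smoothness of
# this interpolant relative to the margin is the quantity studied above.
E = embed(X)
coef = np.linalg.solve(rbf(E, E) + 1e-8 * np.eye(len(E)), y)

def classify(Z):
    return np.sign(rbf(embed(Z), E) @ coef)

print(np.mean(classify(X) == y))                  # training accuracy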
A robust machine learning method for cell-load approximation in wireless networks
We propose a learning algorithm for cell-load approximation in wireless
networks. The proposed algorithm is robust in the sense that it is designed to
cope with the uncertainty arising from a small number of training samples. This
scenario is highly relevant in wireless networks where training has to be
performed on short time scales because of a fast time-varying communication
environment. The first part of this work studies the set of feasible rates and
shows that this set is compact. We then prove that the mapping relating a
feasible rate vector to the unique fixed point of the non-linear cell-load
mapping is monotone and uniformly continuous. Utilizing these properties, we
apply an approximation framework that achieves the best worst-case performance.
Furthermore, the approximation preserves the monotonicity and continuity
properties. Simulations show that the proposed method exhibits better
robustness and accuracy for small training sets in comparison with standard
approximation techniques for multivariate data.
Comment: Shorter version accepted at ICASSP 201
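For context, a minimal sketch of the standard non-linear load-coupling model this abstract refers to (the gains, rate demands, and parameters below are hypothetical) shows how monotonicity lets a fixed-point iteration recover the cell loads:

import numpy as np

G = np.array([[1.0, 0.1],
              [0.2, 1.0]])              # gains: cell j -> users of cell i (made up)
r = np.array([2.0, 3.0])                # per-cell rate demand (hypothetical)
W, p, noise = 10.0, 1.0, 0.05           # bandwidth, tx power, noise power

def load_map(rho):
    """Load of each cell given the others' loads (interference scales with load)."""
    interference = G @ (rho * p) - np.diag(G) * rho * p
    sinr = np.diag(G) * p / (noise + interference)
    return r / (W * np.log2(1.0 + sinr))

# The mapping is monotone, so iterating from zero converges to the unique
# fixed point whenever the rate vector is feasible.
rho = np.zeros(2)
for _ in range(100):
    rho = load_map(rho)
print(rho)                              # approximate cell loads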
Risk Bounds for Learning Multiple Components with Permutation-Invariant Losses
This paper proposes a simple approach to derive efficient error bounds for
learning multiple components with sparsity-inducing regularization. We show
that for such regularization schemes, known decompositions of the Rademacher
complexity over the components can be used more efficiently to yield tighter
bounds with little additional effort. We give examples of application to
switching regression and center-based clustering/vector quantization. Then, the
complete workflow is illustrated on the problem of subspace clustering, for
which decomposition results were not previously available. For all these
problems, the proposed approach yields risk bounds with mild dependencies on
the number of components and completely removes this dependence for nonconvex
regularization schemes that could not be handled by previous methods.
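For intuition, a known decomposition of the kind alluded to above (a standard structural bound, not the paper's refined result) controls the Rademacher complexity of a minimum over $K$ component classes by the sum over components:
\[
  \mathfrak{R}_n\Big(\big\{ x \mapsto \min_{1 \le k \le K} \ell(f_k(x)) : f_k \in \mathcal{F}_k \big\}\Big) \;\le\; \sum_{k=1}^{K} \mathfrak{R}_n(\ell \circ \mathcal{F}_k),
\]
so a naive application inflates the risk bound with the number of components $K$; the approach above exploits permutation invariance of the loss to mitigate or remove this dependence.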
Asymptotic Generalization Bound of Fisher's Linear Discriminant Analysis
Fisher's linear discriminant analysis (FLDA) is an important dimension
reduction method in statistical pattern recognition. It has been shown that
FLDA is asymptotically Bayes optimal under the homoscedastic Gaussian
assumption. However, this classical result has the following two major
limitations: 1) it holds only for a fixed dimensionality $D$, and thus does not
apply when $D$ and the training sample size $N$ are proportionally large; 2) it
does not provide a quantitative description of how the generalization ability
of FLDA is affected by $D$ and $N$. In this paper, we present an asymptotic
generalization analysis of FLDA based on random matrix theory, in a setting
where both $D$ and $N$ increase with $D/N \to \gamma \in [0, 1)$. The
obtained lower bound of the generalization discrimination power overcomes both
limitations of the classical result, i.e., it is applicable when $D$ and $N$
are proportionally large and provides a quantitative description of the
generalization ability of FLDA in terms of the ratio $\gamma = D/N$ and the
population discrimination power. Besides, the discrimination power bound also
leads to an upper bound on the generalization error of binary classification
with FLDA.
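The proportional regime described above is easy to probe numerically. The sketch below (a sanity-check simulation under assumed homoscedastic Gaussian classes with identity covariance and equal priors, not the paper's analysis) fits FLDA at several ratios $\gamma = D/N$ and estimates the test error, which degrades as $\gamma$ grows:

import numpy as np

rng = np.random.default_rng(1)

def flda_error(D, N, trials=20):
    """Average test error of FLDA for class means +/- mu, identity covariance."""
    mu = np.zeros(D)
    mu[0] = 2.0
    errs = []
    for _ in range(trials):
        X0 = rng.normal(-mu, 1.0, size=(N // 2, D))
        X1 = rng.normal(+mu, 1.0, size=(N // 2, D))
        # Pooled within-class covariance and Fisher discriminant direction.
        S = np.cov(np.vstack([X0 - X0.mean(0), X1 - X1.mean(0)]).T)
        w = np.linalg.solve(S + 1e-6 * np.eye(D), X1.mean(0) - X0.mean(0))
        b = -w @ (X0.mean(0) + X1.mean(0)) / 2
        # Fresh test samples, equal class priors.
        T0 = rng.normal(-mu, 1.0, size=(500, D))
        T1 = rng.normal(+mu, 1.0, size=(500, D))
        errs.append(np.mean(T0 @ w + b > 0) / 2 + np.mean(T1 @ w + b < 0) / 2)
    return np.mean(errs)

N = 200
for gamma in (0.1, 0.5, 0.9):           # ratio gamma = D/N
    D = int(gamma * N)
    print(gamma, flda_error(D, N))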