Learning to Transform Time Series with a Few Examples
We describe a semi-supervised regression algorithm that learns to transform one time series into another time series given examples of the transformation. This algorithm is applied to tracking, where a time series of observations from sensors is transformed to a time series describing the pose of a target. Instead of defining and implementing such transformations for each tracking task separately, our algorithm learns a memoryless transformation of time series from a few example input-output mappings. The algorithm searches for a smooth function that fits the training examples and, when applied to the input time series, produces a time series that evolves according to assumed dynamics. The learning procedure is fast and lends itself to a closed-form solution. It is closely related to nonlinear system identification and manifold learning techniques. We demonstrate our algorithm on the tasks of tracking RFID tags from signal strength measurements and recovering the pose of rigid objects, deformable bodies, and articulated bodies from video sequences. For these tasks, this algorithm requires significantly fewer examples than fully supervised regression algorithms or semi-supervised learning algorithms that do not take the dynamics of the output time series into account.
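As a concrete sketch of how such a closed-form semi-supervised fit can be set
up, the following Python/NumPy fragment expands the predictor over all frames
(labeled and unlabeled), penalizes squared error on the few labeled frames,
and adds a first-difference penalty on the predicted output sequence as a
stand-in for the assumed output dynamics. The RBF kernel, the first-difference
dynamics model, and all names (rbf_gram, fit_semisupervised, lam, mu) are
illustrative assumptions, not the authors' implementation.

    import numpy as np

    def rbf_gram(X, Z, gamma=1.0):
        # Gaussian (RBF) Gram matrix between row-sample sets X and Z
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)

    def fit_semisupervised(X, labeled_idx, y_labeled, lam=1e-3, mu=1.0,
                           gamma=1.0):
        # Squared loss on labeled frames + RKHS norm penalty + first-
        # difference (dynamics) penalty; all terms are quadratic in the
        # coefficients, so the fit reduces to a single linear solve.
        n = X.shape[0]
        K = rbf_gram(X, X, gamma)
        m = len(labeled_idx)
        S = np.zeros((m, n))
        S[np.arange(m), labeled_idx] = 1.0        # selects labeled frames
        D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]  # first differences
        A = S.T @ S @ K + lam * np.eye(n) + mu * D.T @ D @ K
        alpha = np.linalg.solve(A, S.T @ y_labeled)
        return K @ alpha                # predicted outputs at all frames

The single linear solve mirrors the closed-form character the abstract
mentions: every term of the objective is quadratic in the coefficient vector,
so the stationarity condition is one linear system.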
Kernel Bayes' rule
A nonparametric kernel-based method for realizing Bayes' rule is proposed,
based on representations of probabilities in reproducing kernel Hilbert spaces.
Probabilities are uniquely characterized by the mean of the canonical map to
the RKHS. The prior and conditional probabilities are expressed in terms of
RKHS functions of an empirical sample: no explicit parametric model is needed
for these quantities. The posterior is likewise an RKHS mean of a weighted
sample. The estimator for the expectation of a function of the posterior is
derived, and rates of consistency are shown. Some representative applications
of the kernel Bayes' rule are presented, including Bayesian computation without
likelihood and filtering with a nonparametric state-space model.
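As background for how such estimators look in practice, here is a Python
sketch of the empirical conditional mean embedding, the building block that
kernel Bayes' rule composes; this is the plain conditional embedding, not the
full two-stage kernel Bayes' rule estimator, and the Gaussian kernel, the
regularizer eps, and the function names are assumptions for illustration.

    import numpy as np

    def rbf_gram(A, B, gamma=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)

    def conditional_expectation(X, Y, f, x_query, eps=1e-3, gamma=1.0):
        # Empirical conditional mean embedding of P(Y | X = x_query):
        # the estimate is a weighted sample with weights
        #   w = (G_X + n*eps*I)^{-1} k_X(x_query),
        # and E[f(Y) | X = x_query] is approximated by sum_i w_i f(Y_i).
        n = X.shape[0]
        G_X = rbf_gram(X, X, gamma)
        k_x = rbf_gram(X, x_query[None, :], gamma).ravel()
        w = np.linalg.solve(G_X + n * eps * np.eye(n), k_x)
        return w @ np.array([f(y) for y in Y])

No parametric model of the conditional distribution appears anywhere: the
posterior-style estimate is carried entirely by the weighted sample, in the
spirit of the abstract.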
Model selection of polynomial kernel regression
Polynomial kernel regression is one of the standard and state-of-the-art
learning strategies. However, as is well known, the choice of the degree of
the polynomial kernel and of the regularization parameter remains an open
model-selection problem. The first aim of this paper is to develop a strategy to
select these parameters. On the one hand, based on a worst-case learning rate
analysis, we show that the regularization term in polynomial kernel regression
is not necessary. In other words, the regularization parameter can decrease
arbitrarily fast when the degree of the polynomial kernel is suitably tuned. On
the other hand, taking into account the implementation of the algorithm, the
regularization term is required. In summary, the effect of the regularization
term in polynomial kernel regression is only to circumvent the ill-conditioning
of the kernel matrix. Based on this, the second purpose of this paper is to
propose a new model selection strategy, and then design an efficient learning
algorithm. Both theoretical and experimental analyses show that the new
strategy outperforms the previous one. Theoretically, we prove that the new
learning strategy is almost optimal if the regression function is smooth.
Experimentally, it is shown that the new strategy can significantly reduce the
computational burden without loss of generalization capability.
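A hedged sketch of the regime the abstract describes: polynomial kernel
regression in which the explicit regularizer is shrunk to a tiny numerical
jitter, kept only so the kernel matrix stays invertible, with the degree
chosen by hold-out validation. The kernel form (1 + <x, z>)^degree is
standard; the validation loop and all names here are illustrative, not the
selection strategy proposed in the paper.

    import numpy as np

    def poly_gram(X, Z, degree):
        # polynomial kernel k(x, z) = (1 + <x, z>)^degree
        return (1.0 + X @ Z.T) ** degree

    def fit_predict(X_tr, y_tr, X_te, degree, jitter=1e-10):
        # The jitter only circumvents ill-conditioning of the kernel
        # matrix; statistically, no regularization term is needed here.
        K = poly_gram(X_tr, X_tr, degree)
        alpha = np.linalg.solve(K + jitter * np.eye(len(X_tr)), y_tr)
        return poly_gram(X_te, X_tr, degree) @ alpha

    def select_degree(X_tr, y_tr, X_val, y_val, degrees=range(1, 11)):
        # choose the degree by hold-out mean squared error
        errs = [np.mean((fit_predict(X_tr, y_tr, X_val, d) - y_val) ** 2)
                for d in degrees]
        return list(degrees)[int(np.argmin(errs))]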
Eigendecompositions of Transfer Operators in Reproducing Kernel Hilbert Spaces
Transfer operators such as the Perron--Frobenius or Koopman operator play an
important role in the global analysis of complex dynamical systems. The
eigenfunctions of these operators can be used to detect metastable sets, to
project the dynamics onto the dominant slow processes, or to separate
superimposed signals. We extend transfer operator theory to reproducing kernel
Hilbert spaces and show that these operators are related to Hilbert space
representations of conditional distributions, known as conditional mean
embeddings in the machine learning community. Moreover, numerical methods to
compute empirical estimates of these embeddings are akin to data-driven methods
for the approximation of transfer operators such as extended dynamic mode
decomposition and its variants. One main benefit of the presented kernel-based
approaches is that these methods can be applied to any domain where a
similarity measure given by a kernel is available. We illustrate the results
with the aid of guiding examples and highlight potential applications in
molecular dynamics as well as video and text data analysis.
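To make the link to data-driven methods concrete, the following Python sketch
computes an empirical eigendecomposition from time-shifted snapshot pairs in
the spirit of kernel EDMD, under one common convention; the Gaussian kernel,
the regularizer eps, and the ordering of the Gram matrices are assumptions,
and the Perron--Frobenius and Koopman cases use transposed conventions.

    import numpy as np

    def rbf_gram(A, B, gamma=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)

    def kernel_koopman_eig(X, Y, eps=1e-3, gamma=1.0, n_modes=5):
        # X holds states x_t, Y the time-shifted states x_{t+1}.
        n = X.shape[0]
        G_00 = rbf_gram(X, X, gamma)   # Gram matrix of the snapshots
        G_01 = rbf_gram(X, Y, gamma)   # cross-Gram with shifted snapshots
        A = np.linalg.solve(G_00 + n * eps * np.eye(n), G_01)
        vals, vecs = np.linalg.eig(A)
        order = np.argsort(-np.abs(vals))[:n_modes]  # dominant slow modes
        phi = G_00 @ vecs[:, order]    # eigenfunctions at training points
        return vals[order], phi

The dominant eigenfunctions returned here are the objects the abstract uses to
detect metastable sets and to project the dynamics onto the slow processes.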