Learning curves for Gaussian process regression: Approximations and bounds
We consider the problem of calculating learning curves (i.e., average
generalization performance) of Gaussian processes used for regression. On the
basis of a simple expression for the generalization error, in terms of the
eigenvalue decomposition of the covariance function, we derive a number of
approximation schemes. We identify where these become exact, and compare with
existing bounds on learning curves; the new approximations, which can be used
for any input space dimension, generally get substantially closer to the truth.
We also study possible improvements to our approximations. Finally, we use a
simple exactly solvable learning scenario to show that there are limits of
principle on the quality of approximations and bounds expressible solely in
terms of the eigenvalue spectrum of the covariance function.
Comment: 25 pages, 10 figures
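One approximation of the kind discussed here expresses the generalization error through the eigenvalues of the covariance function via a self-consistent equation, eps = sum_i lam_i / (1 + n lam_i / (eps + sigma^2)). The sketch below (a hedged illustration, not the paper's exact scheme; the function name and toy spectrum are assumptions) solves this equation by fixed-point iteration:

```python
import numpy as np

def gp_learning_curve(eigvals, n, noise_var, n_iter=200):
    # Self-consistent approximation to the GP generalization error:
    # solve eps = sum_i lam_i / (1 + n * lam_i / (eps + noise_var))
    # by fixed-point iteration, starting from the n = 0 value
    # (the prior variance, i.e. the sum of the eigenvalues).
    eps = float(np.sum(eigvals))
    for _ in range(n_iter):
        eps = float(np.sum(eigvals / (1.0 + n * eigvals / (eps + noise_var))))
    return eps

# Toy spectrum: geometrically decaying eigenvalues of the covariance function.
lam = 0.5 ** np.arange(1, 11)
curve = [gp_learning_curve(lam, n, noise_var=0.1) for n in (0, 10, 100)]
```

At n = 0 the approximation returns the prior variance (the sum of the eigenvalues), and the predicted error decreases monotonically as the number of training examples n grows, as a learning curve should.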
Robust Linear Spectral Unmixing using Anomaly Detection
This paper presents a Bayesian algorithm for linear spectral unmixing of
hyperspectral images that accounts for anomalies present in the data. The model
proposed assumes that the pixel reflectances are linear mixtures of unknown
endmembers, corrupted by an additional nonlinear term modelling anomalies and
additive Gaussian noise. A Markov random field is used for anomaly detection
based on the spatial and spectral structures of the anomalies. This allows
outliers to be identified in particular regions and wavelengths of the data
cube. A Bayesian algorithm is proposed to estimate the parameters involved in
the model yielding a joint linear unmixing and anomaly detection algorithm.
Simulations conducted with synthetic and real hyperspectral images demonstrate
the accuracy of the proposed unmixing and outlier detection strategy for the
analysis of hyperspectral images.
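The observation model described above can be sketched for a single pixel: the reflectance vector is a linear mixture of endmember signatures plus a sparse anomaly term and Gaussian noise. The snippet below simulates one such pixel (a minimal illustration; the symbols M, a, and r and all dimensions are assumed for the example, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

L, R = 50, 3             # number of spectral bands, number of endmembers
M = rng.random((L, R))   # hypothetical endmember signature matrix

# Abundance vector: nonnegative and sum-to-one, the standard
# constraints of the linear mixing model.
a = rng.dirichlet(np.ones(R))

# Sparse anomaly term: nonzero only in a few bands, mimicking an
# outlier localized in particular wavelengths of the data cube.
r = np.zeros(L)
r[20:25] = 0.5

sigma = 0.01
y = M @ a + r + sigma * rng.standard_normal(L)  # observed pixel reflectances
```

The estimation task the paper addresses is the inverse problem: given y (for every pixel) recover M, the abundances a, and the anomaly support jointly, with a Markov random field coupling the anomaly indicators across neighboring pixels and bands.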