23,823 research outputs found
Automated Spectral Kernel Learning
The generalization performance of kernel methods is largely determined by the
kernel, but common kernels are stationary, and thus input-independent and
output-independent, which limits their applicability to complicated tasks. In
this paper, we propose a powerful and efficient spectral kernel learning
framework in which the learned kernels depend on both inputs and outputs, by
using non-stationary spectral kernels and flexibly learning the spectral
measure from the data. Further, we derive a data-dependent generalization error
bound based on Rademacher complexity, which estimates the generalization
ability of the learning framework and suggests two regularization terms to
improve performance. Extensive experimental results validate the effectiveness
of the proposed algorithm and confirm our theoretical results.
Comment: Published in AAAI 2020
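The feature-map view makes the construction easy to see: any kernel written as an inner product of explicit features is positive semi-definite, so the spectral frequencies can be learned freely from data. Below is a minimal numpy sketch of one common non-stationary parameterization with two learned frequency matrices; the names (nonstationary_features, Omega1, Omega2) and the Gaussian initialization are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def nonstationary_features(X, Omega1, Omega2):
    """Explicit feature map of a non-stationary spectral kernel.

    Because k(x, x') = phi(x) @ phi(x') is an inner product, the induced
    kernel is positive semi-definite for any frequency matrices, so
    Omega1 and Omega2 can be learned freely (e.g. by gradient descent
    on a task loss plus regularization).
    """
    Z1, Z2 = X @ Omega1, X @ Omega2          # (n, D) spectral projections
    D = Omega1.shape[1]
    return np.hstack([np.cos(Z1) + np.cos(Z2),
                      np.sin(Z1) + np.sin(Z2)]) / np.sqrt(2 * D)

# toy usage: initialize frequencies from a Gaussian spectral measure
X = rng.normal(size=(5, 3))                  # 5 inputs, 3 dimensions
Omega1 = rng.normal(size=(3, 100))           # D = 100 spectral points
Omega2 = rng.normal(size=(3, 100))
Phi = nonstationary_features(X, Omega1, Omega2)
K = Phi @ Phi.T                              # non-stationary kernel matrix
print(K.shape)                               # (5, 5)
```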
Performance Comparison of Knowledge-Based Dose Prediction Techniques Based on Limited Patient Data.
Purpose: The accuracy of dose prediction is essential for knowledge-based planning and automated planning techniques. We compare the dose prediction accuracy of three prediction methods, statistical voxel dose learning, spectral regression, and support vector regression, based on limited patient training data.
Methods: Statistical voxel dose learning, spectral regression, and support vector regression were used to predict the dose of noncoplanar intensity-modulated radiation therapy (4π) and volumetric-modulated arc therapy head and neck plans, 4π lung plans, and volumetric-modulated arc therapy prostate plans. Twenty cases of each site were used for k-fold cross-validation, with k = 4. Statistical voxel dose learning bins voxels according to their Euclidean distance to the planning target volume and uses the median dose of each bin to predict the dose of new voxels. Distance to the planning target volume, polynomial combinations of the distance components, planning target volume, and organ-at-risk volume were used as features for spectral regression and support vector regression, for a total of 28 features. Principal component analysis was performed on the input features to test the effect of dimension reduction. For the coplanar volumetric-modulated arc therapy plans, separate models were trained for voxels within the same axial slice as planning target volume voxels and for voxels outside the primary beam. The effect of training separate models for each organ at risk, rather than one model for all voxels collectively, was also tested. The mean squared error was calculated to evaluate the voxel dose prediction accuracy.
Results: Statistical voxel dose learning using separate models for each organ at risk had the lowest root mean squared error for all sites and modalities: 3.91 Gy (head and neck 4π), 3.21 Gy (head and neck volumetric-modulated arc therapy), 2.49 Gy (lung 4π), and 2.35 Gy (prostate volumetric-modulated arc therapy). Compared to using the original features, principal component analysis reduced the 4π prediction error for head and neck spectral regression (-43.9%) and support vector regression (-42.8%) predictions and for lung support vector regression (-24.4%) predictions. Principal component analysis was most effective when all or most of the possible principal components were used. Separate organ-at-risk models were more accurate than training on all organ-at-risk voxels in all cases.
Conclusion: Compared with more sophisticated parametric machine learning methods with dimension reduction, statistical voxel dose learning is more robust to patient variability and provides the most accurate dose prediction method.
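Statistical voxel dose learning as described is simple enough to sketch: bin voxels by their distance to the planning target volume and memorize each bin's median dose. The sketch below is a hypothetical minimal version with assumed names (svdl_fit, svdl_predict), an assumed 1 mm bin width, and synthetic dose data; per-organ models would be built by fitting it separately on each organ's voxels.

```python
import numpy as np

def svdl_fit(dist_to_ptv, dose, bin_width=1.0):
    """Statistical voxel dose learning: bin training voxels by their
    Euclidean distance to the PTV and store the median dose per bin.
    bin_width (mm) and the linear binning scheme are assumptions,
    not settings taken from the paper."""
    bins = np.floor(dist_to_ptv / bin_width).astype(int)
    return {b: np.median(dose[bins == b]) for b in np.unique(bins)}

def svdl_predict(model, dist_to_ptv, bin_width=1.0):
    """Predict each new voxel's dose from its distance bin's median."""
    bins = np.floor(dist_to_ptv / bin_width).astype(int)
    fallback = np.median(list(model.values()))   # unseen-bin fallback
    return np.array([model.get(b, fallback) for b in bins])

# toy usage with synthetic voxels
rng = np.random.default_rng(0)
d_train = rng.uniform(0, 50, 10_000)             # distances to PTV (mm)
dose_train = 70 * np.exp(-d_train / 15) + rng.normal(0, 1, d_train.size)
model = svdl_fit(d_train, dose_train)
print(svdl_predict(model, np.array([0.5, 10.0, 40.0])))
```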
Zero Shot Learning with the Isoperimetric Loss
We introduce the isoperimetric loss as a regularization criterion for
learning the map from a visual representation to a semantic embedding, to be
used to transfer knowledge to unknown classes in a zero-shot learning setting.
We use a pre-trained deep neural network model as a visual representation of
image data, a Word2Vec embedding of class labels, and linear maps between the
visual and semantic embedding spaces. However, the spaces themselves are not
linear, and we postulate the sample embedding to be populated by noisy samples
near otherwise smooth manifolds. We exploit the graph structure defined by the
sample points to regularize the estimates of the manifolds by inferring the
graph connectivity using a generalization of the isoperimetric inequalities
from Riemannian geometry to graphs. Surprisingly, this regularization alone,
paired with the simplest baseline model, outperforms the state-of-the-art among
fully automated methods in zero-shot learning benchmarks such as AwA and CUB.
This improvement is achieved solely by learning the structure of the underlying
spaces by imposing regularity.
Comment: Accepted to AAAI-20
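As a concrete picture of the "simplest baseline model" that the regularizer is paired with, the sketch below fits a ridge-regularized linear map from visual features to class embeddings and labels a sample by its nearest unseen-class embedding. It is a hedged illustration with synthetic data; the isoperimetric graph regularization itself, the paper's actual contribution, is omitted here.

```python
import numpy as np

def fit_linear_map(V, S, lam=1e-2):
    """Ridge-regularized least-squares map W from visual to semantic
    space, minimizing ||V W - S||^2 + lam ||W||^2 (the plain baseline;
    the isoperimetric graph regularizer is not included)."""
    d = V.shape[1]
    return np.linalg.solve(V.T @ V + lam * np.eye(d), V.T @ S)

def predict_class(W, v, class_embeddings):
    """Assign the unseen class whose embedding is nearest in cosine."""
    z = v @ W
    E = class_embeddings / np.linalg.norm(class_embeddings,
                                          axis=1, keepdims=True)
    return int(np.argmax(E @ z / np.linalg.norm(z)))

# toy usage: 200 images with 512-d CNN features, 300-d label embeddings
rng = np.random.default_rng(0)
V = rng.normal(size=(200, 512))              # visual representations
S = rng.normal(size=(200, 300))              # matching class embeddings
W = fit_linear_map(V, S)
unseen = rng.normal(size=(5, 300))           # unseen-class embeddings
print(predict_class(W, V[0], unseen))
```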
Period Estimation in Astronomical Time Series Using Slotted Correntropy
In this letter, we propose a method for period estimation in light curves
from periodic variable stars using correntropy. Light curves are astronomical
time series of stellar brightness over time, and are characterized as being
noisy and unevenly sampled. We propose to use slotted time lags in order to
estimate correntropy directly from irregularly sampled time series. A new
information theoretic metric is proposed for discriminating among the peaks of
the correntropy spectral density. The slotted correntropy method outperformed
slotted correlation, string length, VarTools (Lomb-Scargle periodogram and
Analysis of Variance), and SigSpec applications on a set of light curves drawn
from the MACHO survey.
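A minimal sketch of the slotted estimator helps fix ideas: for each candidate lag, average a Gaussian kernel of the sample differences over all pairs whose time separation falls within that lag's slot. The function name, slot width, and kernel bandwidth below are assumptions, and the peak-reading shortcut at the end stands in for the paper's information-theoretic criterion on the correntropy spectral density.

```python
import numpy as np

def slotted_correntropy(t, y, lags, slot=0.25, sigma=1.0):
    """Slotted autocorrentropy of an unevenly sampled series.

    For each lag tau, average exp(-(y_i - y_j)^2 / (2 sigma^2)) over
    all sample pairs with |t_i - t_j - tau| < slot / 2."""
    dt = np.abs(t[:, None] - t[None, :])          # pairwise time gaps
    k = np.exp(-((y[:, None] - y[None, :]) ** 2) / (2 * sigma ** 2))
    V = np.empty(len(lags))
    for i, tau in enumerate(lags):
        mask = np.abs(dt - tau) < slot / 2        # pairs landing in slot
        V[i] = k[mask].mean() if mask.any() else np.nan
    return V

# toy usage: noisy, irregularly sampled sinusoid with period 2.5 days
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 50, 400))
y = np.sin(2 * np.pi * t / 2.5) + 0.2 * rng.normal(size=t.size)
lags = np.arange(0.25, 10, 0.25)
V = slotted_correntropy(t, y, lags)
print(lags[np.nanargmax(V[4:]) + 4])              # peaks near multiples of 2.5
```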
Nonlinear unmixing of hyperspectral images: Models and algorithms
When considering the problem of unmixing hyperspectral images, most of the literature in the geoscience and image processing areas relies on the widely used linear mixing model (LMM). However, the LMM may not be valid, and other nonlinear models need to be considered, for instance when there are multiscattering effects or intimate interactions. Consequently, over the last few years, several significant contributions have been proposed to overcome the limitations inherent in the LMM. In this article, we present an overview of recent advances in nonlinear unmixing modeling.
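To make the distinction concrete, here is a small numpy sketch contrasting the LMM with one representative nonlinear model, the generalized bilinear model, which augments the linear term with pairwise endmember interactions. The endmember matrix, abundances, and the single interaction coefficient gamma are synthetic values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
L, R = 100, 3                       # spectral bands, endmembers
M = rng.uniform(0, 1, (L, R))       # endmember signatures (columns)
a = np.array([0.5, 0.3, 0.2])       # abundances: a >= 0, sum(a) == 1

# Linear mixing model: y = M a + noise
y_lmm = M @ a + 0.01 * rng.normal(size=L)

# Generalized bilinear model: add pairwise endmember interaction terms;
# gamma in [0, 1] controls the strength of each second-order term.
gamma = 0.5
y_gbm = y_lmm.copy()
for i in range(R):
    for j in range(i + 1, R):
        y_gbm += gamma * a[i] * a[j] * (M[:, i] * M[:, j])

print(np.linalg.norm(y_gbm - y_lmm))  # size of the nonlinear contribution
```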
Automated reliability assessment for spectroscopic redshift measurements
We present a new approach to automate the spectroscopic redshift reliability
assessment based on machine learning (ML) and characteristics of the redshift
probability density function (PDF).
We propose to rephrase the spectroscopic redshift estimation into a Bayesian
framework, in order to incorporate all sources of information and uncertainties
related to the redshift estimation process, and produce a redshift posterior
PDF that will be the starting point for ML algorithms to provide an automated
assessment of a redshift reliability.
As a use case, public data from the VIMOS VLT Deep Survey is exploited to
present and test this new methodology. We first tried to reproduce the existing
reliability flags using supervised classification to describe different types
of redshift PDFs but, owing to the subjective definition of these flags, we
soon opted for a new homogeneous partitioning of the data into distinct
clusters via unsupervised classification. After assessing the accuracy of the
new clusters via resubstitution and test predictions, unlabelled data from
preliminary mock simulations for the Euclid space mission are projected into
this mapping to predict their redshift reliability labels.
Comment: Submitted on 02 June 2017 (v1). Revised on 08 September 2017 (v2). Latest version 28 September 2017 (this version, v3).
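The pipeline's key move, turning each redshift posterior PDF into a vector of descriptors and partitioning those vectors with unsupervised clustering, can be sketched in a few lines. The descriptor set (peak height, dispersion, entropy) and the use of k-means with two clusters below are illustrative assumptions; the paper derives its own feature set and cluster partition from the VVDS data.

```python
import numpy as np
from sklearn.cluster import KMeans

def pdf_descriptors(z, pdf):
    """Summary features of a redshift posterior PDF. This descriptor
    set is an assumption for illustration, not the paper's features."""
    dz = z[1] - z[0]                              # uniform grid spacing
    p = pdf / (pdf.sum() * dz)                    # normalize to unit area
    mean = (z * p).sum() * dz
    std = np.sqrt(((z - mean) ** 2 * p).sum() * dz)
    entropy = -(p * np.log(p + 1e-12)).sum() * dz
    return [p.max(), std, entropy]

# toy data: unimodal (confident) vs. bimodal (ambiguous) posteriors
rng = np.random.default_rng(0)
z = np.linspace(0, 3, 500)
gauss = lambda mu, s: np.exp(-0.5 * ((z - mu) / s) ** 2)
pdfs = [gauss(rng.uniform(0.5, 2.5), 0.02) for _ in range(50)] \
     + [gauss(m, 0.05) + gauss(m + 0.8, 0.05) for m in rng.uniform(0.5, 1.5, 50)]
X = np.array([pdf_descriptors(z, p) for p in pdfs])

# partition PDF types without labels, then project a new PDF into the map
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.predict([pdf_descriptors(z, gauss(1.0, 0.02))]))
```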