Graph Kernels
We present a unified framework to study graph kernels, special cases of which include the random
walk (Gärtner et al., 2003; Borgwardt et al., 2005) and marginalized (Kashima et al., 2003, 2004;
Mahé et al., 2004) graph kernels. Through reduction to a Sylvester equation we improve the time
complexity of kernel computation between unlabeled graphs with n vertices from O(n^6) to O(n^3).
We find a spectral decomposition approach even more efficient when computing entire kernel matrices.
For labeled graphs we develop conjugate gradient and fixed-point methods that take O(dn^3)
time per iteration, where d is the size of the label set. By extending the necessary linear algebra to
Reproducing Kernel Hilbert Spaces (RKHS) we obtain the same result for d-dimensional edge kernels,
and O(n^4) in the infinite-dimensional case; on sparse graphs these algorithms only take O(n^2)
time per iteration in all cases. Experiments on graphs from bioinformatics and other application
domains show that these techniques can speed up computation of the kernel by an order of magnitude
or more. We also show that certain rational kernels (Cortes et al., 2002, 2003, 2004) when
specialized to graphs reduce to our random walk graph kernel. Finally, we relate our framework to
R-convolution kernels (Haussler, 1999) and provide a kernel that is close to the optimal assignment
kernel of Fröhlich et al. (2006) yet provably positive semi-definite.
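As a concrete illustration of the fixed-point scheme for unlabeled graphs, here is a minimal Python sketch (not the paper's implementation). It exploits the identity (A2 ⊗ A1) vec(M) = vec(A1 M A2ᵀ) so the n²×n² Kronecker matrix is never formed; the function name, uniform start/stop distributions, and decay parameter lam are illustrative assumptions.

```python
import numpy as np

def rw_graph_kernel(A1, A2, lam=0.01, tol=1e-10, max_iter=100):
    """Random-walk graph kernel between two unlabeled graphs via
    fixed-point iteration on x = p + lam * W x, W = A2 (kron) A1.
    Each iteration is two n x n matrix products, i.e. O(n^3) time
    (O(n^2) with sparse adjacency matrices).  Convergence assumes
    lam < 1 / (spectral_radius(A1) * spectral_radius(A2))."""
    n1, n2 = A1.shape[0], A2.shape[0]
    p1, p2 = np.full(n1, 1.0 / n1), np.full(n2, 1.0 / n2)
    M0 = np.outer(p1, p2)                  # vec(M0) = p2 kron p1
    M = M0.copy()
    for _ in range(max_iter):
        M_next = M0 + lam * (A1 @ M @ A2.T)   # implicit Kronecker product
        if np.abs(M_next - M).max() < tol:
            M = M_next
            break
        M = M_next
    return p1 @ M @ p2                     # q^T x, uniform stopping probs
```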
Common-Resolution Convolution Kernels for Space- and Ground-Based Telescopes
Multi-wavelength study of extended astronomical objects requires combining
images from instruments with differing point spread functions (PSFs). We
describe the construction of convolution kernels that allow one to generate
(multi-wavelength) images with a common PSF, thus preserving the colors of the
astronomical sources. We generate convolution kernels for the cameras of the
Spitzer Space Telescope, Herschel Space Observatory, Galaxy Evolution Explorer
(GALEX), Wide-field Infrared Survey Explorer (WISE), ground-based optical
telescopes (Moffat functions and sum of Gaussians), and Gaussian PSFs. These
kernels allow the study of the Spectral Energy Distribution (SED) of extended
objects, preserving the characteristic SED in each pixel. The convolution
kernels and the IDL packages used to construct and use them are made publicly
available.
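The core construction can be sketched in a few lines: by the convolution theorem, the matching kernel in Fourier space is the ratio of the target and source PSF transforms, with ill-conditioned frequencies suppressed. The numpy sketch below illustrates only this basic idea; it is not the authors' IDL implementation, and the simple eps threshold stands in for the careful low-pass filtering that production kernels require.

```python
import numpy as np

def matching_kernel(psf_source, psf_target, eps=1e-4):
    """Sketch of a PSF-matching kernel built in Fourier space:
    K = FT^{-1}[ FT(psf_target) / FT(psf_source) ], so that
    psf_source convolved with K approximates psf_target.  Both PSFs
    are assumed centered on the same pixel grid, normalized to unit
    sum; eps crudely suppresses low-power source frequencies."""
    ft_src = np.fft.fft2(np.fft.ifftshift(psf_source))
    ft_tgt = np.fft.fft2(np.fft.ifftshift(psf_target))
    ratio = np.zeros_like(ft_tgt)
    keep = np.abs(ft_src) > eps * np.abs(ft_src).max()
    ratio[keep] = ft_tgt[keep] / ft_src[keep]
    kernel = np.fft.fftshift(np.fft.ifft2(ratio).real)
    return kernel / kernel.sum()    # preserve flux under convolution
```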
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
Comment: 232 pages
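As a minimal illustration of the TT format emphasized above, the sketch below decomposes a dense tensor into TT cores by sequential truncated SVDs (the standard TT-SVD algorithm); the function name and fixed maximum rank are illustrative choices, not code from the monograph.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Sketch of TT-SVD: factor a dense d-way tensor into tensor-train
    cores G_k of shape (r_{k-1}, n_k, r_k) by sweeping truncated SVDs
    over successive unfoldings.  Storage drops from prod(n_k) to about
    d * n * r^2 numbers when the TT ranks r_k stay small."""
    dims = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(dims[0], -1)     # first unfolding
    for k in range(len(dims) - 1):
        mat = mat.reshape(rank * dims[k], -1)
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(S))         # truncate to the target TT rank
        cores.append(U[:, :r].reshape(rank, dims[k], r))
        mat = S[:r, None] * Vt[:r]        # carry remainder to the next mode
        rank = r
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores
```

Contracting the cores back in order recovers the tensor up to the SVD truncation error, which is how the super-compressed representation trades accuracy for storage.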
Efficient SDP Inference for Fully-connected CRFs Based on Low-rank Decomposition
Conditional Random Fields (CRF) have been widely used in a variety of
computer vision tasks. Conventional CRFs typically define edges on neighboring
image pixels, resulting in a sparse graph such that efficient inference can be
performed. However, these CRFs fail to model long-range contextual
relationships. Fully-connected CRFs have thus been proposed. While efficient
approximate inference methods exist for such CRFs, they are typically
sensitive to initialization and rely on strong assumptions. In this work, we
develop an efficient yet general algorithm for inference on fully-connected
CRFs. The algorithm is based on a scalable SDP algorithm and a low-rank
approximation of the similarity/kernel matrix. The core of the proposed
algorithm is a tailored quasi-Newton method that takes advantage of the
low-rank matrix approximation when solving the specialized SDP dual problem.
Experiments demonstrate that our method can be applied to fully-connected CRFs
that previous methods could not handle, such as pixel-level image co-segmentation.
Comment: 15 pages. A conference version of this work appears in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 201
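The efficiency here hinges on a low-rank factorization K ≈ L Lᵀ of the dense similarity/kernel matrix. The sketch below shows one standard way to obtain such a factor, a Nyström approximation with an RBF kernel and random landmarks; the function name, kernel choice, and parameters are illustrative, not the authors' exact scheme.

```python
import numpy as np

def nystrom_factor(X, m, gamma=1.0, seed=0):
    """Sketch of a Nystrom low-rank factor L (n x m) with K ~ L @ L.T
    for an RBF kernel K_ij = exp(-gamma * ||x_i - x_j||^2).  Nystrom
    with m random landmark points is one standard way to build the
    low-rank kernel approximation such SDP solvers consume."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=m, replace=False)   # landmark points
    d2 = ((X[:, None, :] - X[idx][None, :, :]) ** 2).sum(-1)
    C = np.exp(-gamma * d2)                      # n x m cross-kernel block
    W = C[idx]                                   # m x m landmark block
    evals, evecs = np.linalg.eigh(W)             # W is symmetric PSD
    inv_sqrt = (evecs / np.sqrt(np.maximum(evals, 1e-10))) @ evecs.T
    return C @ inv_sqrt                          # L, so K ~ C W^{-1} C^T
```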