7,814 research outputs found
Infinitely divisible nonnegative matrices, M-matrices, and the embedding problem for finite state stationary Markov Chains
This paper explicitly details the relation between M-matrices, nonnegative
roots of nonnegative matrices, and the embedding problem for finite-state
stationary Markov chains. The set of nonsingular nonnegative matrices with
arbitrary nonnegative roots is shown to be the closure of the set of matrices
with matrix roots in . The methods presented here employ nothing
beyond basic matrix analysis; nevertheless, they answer a question regarding
M-matrices posed over 30 years ago and, as an application, yield a new
characterization of the set of all embeddable stochastic matrices as a
corollary.
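As a hedged aside on the embedding problem the abstract refers to: a stochastic matrix $P$ is called embeddable when $P = e^{Q}$ for some Markov generator $Q$ (zero row sums, nonnegative off-diagonal entries). The sketch below, assuming NumPy/SciPy, tests only the principal matrix logarithm, which is a sufficient check, not the paper's new characterization; the function name is illustrative.

```python
import numpy as np
from scipy.linalg import logm

def is_embeddable_via_principal_log(P, tol=1e-10):
    """Return True if the principal matrix logarithm of the stochastic
    matrix P is a valid Markov generator (rows sum to zero, nonnegative
    off-diagonal entries). Sufficient check only: embeddability in
    general may involve non-principal logarithms."""
    Q = np.real(logm(P))            # principal log; drop rounding noise
    rows_ok = np.allclose(Q.sum(axis=1), 0.0, atol=tol)
    off = Q - np.diag(np.diag(Q))   # zero out the diagonal
    return rows_ok and bool((off >= -tol).all())

# A 2x2 stochastic matrix with trace > 1, hence embeddable:
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(is_embeddable_via_principal_log(P))  # True
```

For 2x2 stochastic matrices the principal logarithm already decides the question, which makes them a convenient sanity check.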
A note on the singular value decomposition of (skew-)involutory and (skew-)coninvolutory matrices
The singular values $\sigma \neq 1$ of an involutory matrix
appear in pairs $(\sigma, 1/\sigma)$, while singular values equal to $1$ may appear in pairs or by themselves. The left and right singular
vectors of pairs of singular values are closely connected. This link is used to
reformulate the singular value decomposition (SVD) of an involutory matrix as
an eigendecomposition. This displays an interesting relation between the
singular values of an involutory matrix and its eigenvalues. Similar
observations hold for the SVD, the singular values and the coneigenvalues of
(skew-)coninvolutory matrices.
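The pairing of singular values follows from $A^{-1} = A$ for involutory $A$: the singular values of $A$ and $A^{-1}$ coincide, forcing them into reciprocal pairs. A minimal numerical check (not the paper's construction; the example matrix is mine):

```python
import numpy as np

# An involutory matrix (A @ A == I) that is not orthogonal,
# so its singular values are not all equal to 1.
A = np.array([[1.0, 2.0],
              [0.0, -1.0]])
assert np.allclose(A @ A, np.eye(2))

sigma = np.linalg.svd(A, compute_uv=False)
# Since A^{-1} = A, the singular values come in reciprocal pairs:
print(sigma)                                 # [1+sqrt(2), sqrt(2)-1]
print(np.isclose(sigma[0] * sigma[1], 1.0))  # True
```

Here the product of the two singular values equals $|\det A| = 1$, consistent with the pair $(\sigma, 1/\sigma)$.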
Theoretic Analysis and Extremely Easy Algorithms for Domain Adaptive Feature Learning
Domain adaptation problems arise in a variety of applications, where a
training dataset from the \textit{source} domain and a test dataset from the
\textit{target} domain typically follow different distributions. The primary
difficulty in designing effective learning models to solve such problems lies
in how to bridge the gap between the source and target distributions. In this
paper, we provide a comprehensive analysis of feature learning algorithms used in
conjunction with linear classifiers for domain adaptation. Our analysis shows
that in order to achieve good adaptation performance, the second moments of the
source domain distribution and target domain distribution should be similar.
Based on our new analysis, a novel extremely easy feature learning algorithm
for domain adaptation is proposed. Furthermore, our algorithm is extended by
leveraging multiple layers, leading to a deep linear model. We evaluate the
effectiveness of the proposed algorithms in terms of domain adaptation tasks on
the Amazon review dataset and the spam dataset from the ECML/PKDD 2006
discovery challenge.
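The abstract's criterion, that the second moments of the source and target distributions should be similar, can be illustrated with a generic second-moment alignment (a CORAL-style whiten-then-recolor transform; this is an assumption-laden sketch, not the paper's algorithm, and the function name is hypothetical):

```python
import numpy as np

def align_second_moments(Xs, Xt, eps=1e-6):
    """Whiten source features and re-color them with the target
    covariance so the two domains' second moments match."""
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(d)  # regularized covariances
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(d)

    def sqrtm_spd(C, inv=False):
        # Matrix (inverse) square root via eigendecomposition; C is SPD.
        w, V = np.linalg.eigh(C)
        return (V * w ** (-0.5 if inv else 0.5)) @ V.T

    # Xs Cs^{-1/2} Ct^{1/2} has (approximately) the target covariance.
    return Xs @ sqrtm_spd(Cs, inv=True) @ sqrtm_spd(Ct)

rng = np.random.default_rng(0)
Xs = rng.normal(size=(500, 3)) @ np.diag([1.0, 3.0, 0.5])  # source domain
Xt = rng.normal(size=(500, 3))                             # target domain
Xa = align_second_moments(Xs, Xt)
print(np.allclose(np.cov(Xa, rowvar=False),
                  np.cov(Xt, rowvar=False), atol=0.05))    # True
```

After the transform, a linear classifier trained on the aligned source features sees second moments matching the target domain, which is exactly the condition the analysis identifies.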