    Infinitely divisible nonnegative matrices, $M$-matrices, and the embedding problem for finite state stationary Markov Chains

    This paper explicitly details the relation between $M$-matrices, nonnegative roots of nonnegative matrices, and the embedding problem for finite-state stationary Markov chains. The set of nonsingular nonnegative matrices with arbitrary nonnegative roots is shown to be the closure of the set of matrices with matrix roots in $\mathcal{IM}$. The methods presented here employ nothing beyond basic matrix analysis; nevertheless, they answer a question regarding $M$-matrices posed over 30 years ago, and as an application a new characterization of the set of all embeddable stochastic matrices is obtained as a corollary.
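    To make the embedding problem itself concrete (this is an illustrative sketch, not the paper's method): a stochastic matrix $P$ is embeddable when $P = e^{Q}$ for a generator $Q$ with nonnegative off-diagonal entries and zero row sums, a condition one can probe numerically via the principal matrix logarithm. The matrix below is a made-up example.

    ```python
    import numpy as np
    from scipy.linalg import logm

    # Hypothetical 2x2 stochastic matrix (illustrative example only).
    P = np.array([[0.9, 0.1],
                  [0.2, 0.8]])

    # Principal matrix logarithm; the real part suffices here since P
    # has positive real eigenvalues (1 and 0.7).
    Q = np.real(logm(P))

    # A generator (intensity matrix) must have nonnegative off-diagonal
    # entries and zero row sums.
    off_diag_ok = all(Q[i, j] >= -1e-12
                      for i in range(2) for j in range(2) if i != j)
    rows_ok = bool(np.allclose(Q.sum(axis=1), 0.0))
    print(off_diag_ok and rows_ok)  # True: this P is embeddable
    ```

    When the principal logarithm fails these checks, $P$ may still be embeddable through a non-principal branch of the logarithm, which is part of what makes the general problem hard.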

    A note on the singular value decomposition of (skew-)involutory and (skew-)coninvolutory matrices

    The singular values $\sigma > 1$ of an $n \times n$ involutory matrix $A$ appear in pairs $(\sigma, \frac{1}{\sigma})$, while the singular values $\sigma = 1$ may appear in pairs $(1, 1)$ or by themselves. The left and right singular vectors of pairs of singular values are closely connected. This link is used to reformulate the singular value decomposition (SVD) of an involutory matrix as an eigendecomposition. This displays an interesting relation between the singular values of an involutory matrix and its eigenvalues. Similar observations hold for the SVD, the singular values, and the coneigenvalues of (skew-)coninvolutory matrices.
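    The reciprocal pairing is easy to verify numerically: since an involutory matrix satisfies $A^{-1} = A$, the multiset of its singular values must equal the multiset of their reciprocals. The following sketch builds an arbitrary (generally non-symmetric) involutory matrix and checks the pairing; the construction $A = S D S^{-1}$ with $D$ a $\pm 1$ diagonal is a standard recipe, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Build an involutory matrix A = S D S^{-1} with D = diag(+1,+1,-1,-1);
    # then A @ A = I by construction.
    n = 4
    S = rng.standard_normal((n, n))
    D = np.diag([1.0, 1.0, -1.0, -1.0])
    A = S @ D @ np.linalg.inv(S)
    assert np.allclose(A @ A, np.eye(n))

    # Sorted singular values read (1/sigma_2, 1/sigma_1, sigma_1, sigma_2),
    # so multiplying the sorted list by its reverse gives all ones.
    sv = np.sort(np.linalg.svd(A, compute_uv=False))
    print(np.allclose(sv * sv[::-1], 1.0))  # True
    ```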

    Theoretic Analysis and Extremely Easy Algorithms for Domain Adaptive Feature Learning

    Domain adaptation problems arise in a variety of applications, where a training dataset from the \textit{source} domain and a test dataset from the \textit{target} domain typically follow different distributions. The primary difficulty in designing effective learning models to solve such problems lies in how to bridge the gap between the source and target distributions. In this paper, we provide a comprehensive analysis of feature learning algorithms used in conjunction with linear classifiers for domain adaptation. Our analysis shows that in order to achieve good adaptation performance, the second moments of the source domain distribution and target domain distribution should be similar. Based on our new analysis, a novel and extremely easy feature learning algorithm for domain adaptation is proposed. Furthermore, our algorithm is extended by leveraging multiple layers, leading to a deep linear model. We evaluate the effectiveness of the proposed algorithms in terms of domain adaptation tasks on the Amazon review dataset and the spam dataset from the ECML/PKDD 2006 discovery challenge.
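    To illustrate the second-moment condition, here is a minimal sketch that aligns the source features' covariance with the target's via a whitening/re-coloring transform (a CORAL-style alignment; this is an assumption for illustration, not the algorithm proposed in the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy source/target features with deliberately different second moments.
    Xs = rng.standard_normal((500, 3)) @ np.diag([1.0, 2.0, 0.5])
    Xt = rng.standard_normal((500, 3)) @ np.diag([3.0, 0.7, 1.5])

    def align_second_moments(Xs, Xt, eps=1e-6):
        """Whiten the source covariance, then re-color with the target
        covariance, so that cov(Xs @ M) matches cov(Xt)."""
        Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
        Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])

        def sqrtm_spd(C, inv=False):
            # Matrix square root via eigendecomposition (C is SPD).
            w, V = np.linalg.eigh(C)
            w = 1.0 / np.sqrt(w) if inv else np.sqrt(w)
            return V @ np.diag(w) @ V.T

        return Xs @ sqrtm_spd(Cs, inv=True) @ sqrtm_spd(Ct)

    Xs_aligned = align_second_moments(Xs, Xt)
    # After alignment, the source covariance matches the target covariance.
    print(np.allclose(np.cov(Xs_aligned, rowvar=False),
                      np.cov(Xt, rowvar=False), atol=1e-2))  # True
    ```

    Because the transform is linear, a linear classifier trained on the aligned source features sees the same second-order geometry as the target data, which is exactly the similarity condition the abstract identifies.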