
    Robust Tensor Analysis with Non-Greedy L1-Norm Maximization

    L1-norm based tensor analysis (TPCA-L1) was recently proposed for dimensionality reduction and feature extraction. However, it uses a greedy strategy to solve the L1-norm maximization problem, which makes it prone to getting stuck in local solutions. In this paper, we propose a robust TPCA with non-greedy L1-norm maximization (TPCA-L1 non-greedy), in which all projection directions are optimized simultaneously. Experiments on several face databases demonstrate the effectiveness of the proposed method.
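The key idea in the abstract, optimizing all projection directions at once rather than one at a time, can be illustrated with the well-known non-greedy L1-norm PCA iteration (sign step followed by an orthogonal Procrustes step). This is a minimal sketch for the matrix case, not the tensor algorithm of the paper; the function name and parameters are illustrative.

```python
import numpy as np

def pca_l1_nongreedy(X, k, n_iter=50, seed=0):
    """Non-greedy L1-norm PCA sketch: maximize sum_i ||W^T x_i||_1
    over orthonormal W, updating all k directions simultaneously.
    X: (n_samples, d) data matrix, assumed centered."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    # random orthonormal initialization
    W, _ = np.linalg.qr(rng.standard_normal((d, k)))
    for _ in range(n_iter):
        S = np.sign(X @ W)      # polarity of each sample's projection
        S[S == 0] = 1.0
        M = X.T @ S             # (d, k) weighted data summary
        # closest orthonormal matrix to M (Procrustes solution)
        U, _, Vt = np.linalg.svd(M, full_matrices=False)
        W = U @ Vt
    return W
```

Because every column of W is refreshed in one SVD per iteration, no single direction is fixed before the others are chosen, which is what distinguishes the non-greedy scheme from the original greedy deflation approach.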

    Bayesian Robust Tensor Factorization for Incomplete Multiway Data

    We propose a generative model for robust tensor factorization in the presence of both missing data and outliers. The objective is to explicitly infer the underlying low-CP-rank tensor capturing the global information and a sparse tensor capturing the local information (also considered as outliers), thus providing a robust predictive distribution over missing entries. The low-CP-rank tensor is modeled by multilinear interactions between multiple latent factors, on which column sparsity is enforced by a hierarchical prior, while the sparse tensor is modeled by a hierarchical view of the Student-t distribution that associates an individual hyperparameter with each element independently. For model learning, we develop an efficient closed-form variational inference under a fully Bayesian treatment, which can effectively prevent overfitting and scales linearly with data size. In contrast to existing related works, our method performs model selection automatically and implicitly, without the need to tune parameters. More specifically, it can discover the ground-truth CP rank and automatically adapt the sparsity-inducing priors to various types of outliers. In addition, the tradeoff between the low-rank approximation and the sparse representation can be optimized in the sense of maximum model evidence. Extensive experiments and comparisons with many state-of-the-art algorithms on both synthetic and real-world datasets demonstrate the superiority of our method from several perspectives.
    Comment: in IEEE Transactions on Neural Networks and Learning Systems, 201
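The generative structure described in the abstract (a low-CP-rank tensor from latent factors with ARD-style column precisions, plus a heavy-tailed sparse outlier tensor and dense noise) can be sketched as a forward sampling routine. This is an illustration of the model family only, not the authors' inference algorithm; all hyperparameter values and the thresholding of the outlier term are arbitrary assumptions for the sketch.

```python
import numpy as np

def sample_low_rank_plus_sparse(shape=(8, 9, 10), R=3, seed=0):
    """Sample an order-3 tensor Y = low-CP-rank + sparse outliers + noise.
    ARD-like per-component precisions shrink unused CP components;
    Student-t draws give heavy-tailed, mostly-small outlier candidates."""
    rng = np.random.default_rng(seed)
    lam = rng.gamma(2.0, 1.0, size=R)              # precision per CP component
    factors = [rng.standard_normal((d, R)) / np.sqrt(lam) for d in shape]
    # low-CP-rank part: sum over R rank-one terms a_r (outer) b_r (outer) c_r
    low = np.einsum('ir,jr,kr->ijk', *factors)
    # sparse part: keep only the large heavy-tailed draws as outliers
    sparse = rng.standard_t(df=2.0, size=shape)
    sparse[np.abs(sparse) < 2.5] = 0.0
    noise = 0.01 * rng.standard_normal(shape)
    return low + sparse + noise, low, sparse
```

Separating the observation into these three terms is what lets the predictive distribution over missing entries rely on the global low-rank part while the element-wise Student-t hyperparameters absorb the outliers.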