11 research outputs found

    Low-Rank Tensor Completion Based on Self-Adaptive Learnable Transforms

    Get PDF
    The tensor nuclear norm (TNN), defined as the sum of nuclear norms of the frontal slices of a tensor in a frequency domain, has proven useful in solving low-rank tensor recovery problems. Existing TNN-based methods use either fixed or data-independent transformations, which may not be optimal for the given tensors. As a consequence, these methods cannot adaptively exploit the potential low-rank structure of tensor data. In this article, we propose a framework called self-adaptive learnable transform (SALT) to learn a transformation matrix from the given tensor. Specifically, SALT aims to learn a lossless transformation that induces a lower average-rank tensor, where the Schatten-p quasi-norm is used as the rank proxy. Then, to make SALT less sensitive to orientation, we generalize it to the other dimensions of the tensor (SALTS), namely, learning three self-adaptive transformation matrices simultaneously from the given tensor. SALTS is able to adaptively exploit the potential low-rank structures in all directions. We provide a unified optimization framework based on the alternating direction method of multipliers (ADMM) for the SALTS model and theoretically prove the weak convergence property of the proposed algorithm. Experimental results on hyperspectral image (HSI), color video, magnetic resonance imaging (MRI), and COIL-20 datasets show that SALTS is much more accurate in tensor completion than existing methods. The demo code can be found at https://faculty.uestc.edu.cn/gaobin/zh_CN/lwcg/153392/list/index.htm
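
    A minimal numerical sketch of the TNN the abstract starts from, assuming the classical definition with a fixed DFT along the third mode; this fixed, data-independent transform is exactly what SALT replaces with a learned one. The function name and the 1/n3 normalization are illustrative choices, not taken from the paper's demo code.

        import numpy as np

        def tnn_fft(X):
            """Tensor nuclear norm of a 3-way tensor via a fixed DFT along mode 3."""
            Xf = np.fft.fft(X, axis=2)            # move to the frequency domain
            total = 0.0
            for k in range(X.shape[2]):           # one frontal slice per frequency
                s = np.linalg.svd(Xf[:, :, k], compute_uv=False)
                total += s.sum()                  # matrix nuclear norm of the slice
            return total / X.shape[2]             # common 1/n3 normalization

        X = np.random.randn(30, 30, 10)
        print(tnn_fft(X))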

    Transformed Low-Rank Parameterization Can Help Robust Generalization for Tensor Neural Networks

    Full text link
    Achieving efficient and robust multi-channel data learning is a challenging task in data science. By exploiting low-rankness in the transformed domain, i.e., transformed low-rankness, the tensor singular value decomposition (t-SVD) has achieved extensive success in multi-channel data representation and has recently been extended to function representation, such as neural networks with t-product layers (t-NNs). However, it remains unclear how the t-SVD theoretically affects the learning behavior of t-NNs. This paper is the first to answer this question by deriving upper bounds on the generalization error of both standard and adversarially trained t-NNs. It reveals that t-NNs compressed by exact transformed low-rank parameterization can achieve a sharper adversarial generalization bound. In practice, although t-NNs rarely have exactly transformed low-rank weights, our analysis further shows that, under certain conditions, adversarial training with gradient flow (GF) imposes an implicit regularization on over-parameterized t-NNs with ReLU activations towards transformed low-rank parameterization. We also establish adversarial generalization bounds for t-NNs with approximately transformed low-rank weights. Our analysis indicates that transformed low-rank parameterization can promisingly enhance robust generalization for t-NNs. Comment: 46 pages, accepted to NeurIPS 2023. We have corrected several typos in the first version (arXiv:2303.00196).
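
    For readers unfamiliar with t-product layers, the sketch below shows the standard DFT-based t-product that underlies them: transform both tensors along the third mode, multiply the frontal slices, and transform back. The function name and shapes are illustrative, not the paper's API.

        import numpy as np

        def t_product(A, B):
            """t-product of A (n1 x n2 x n3) with B (n2 x n4 x n3)."""
            assert A.shape[1] == B.shape[0] and A.shape[2] == B.shape[2]
            Af = np.fft.fft(A, axis=2)
            Bf = np.fft.fft(B, axis=2)
            Cf = np.einsum('ijk,jlk->ilk', Af, Bf)   # slice-wise matrix products
            return np.fft.ifft(Cf, axis=2).real      # real output for real inputs

        A = np.random.randn(4, 3, 5)                 # e.g. a weight tensor
        B = np.random.randn(3, 2, 5)                 # e.g. a multi-channel input
        print(t_product(A, B).shape)                 # (4, 2, 5)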

    Low-Rank Tensor Recovery with Euclidean-Norm-Induced Schatten-p Quasi-Norm Regularization

    Full text link
    The nuclear norm and Schatten-p quasi-norm of a matrix are popular rank proxies in low-rank matrix recovery. Unfortunately, computing the nuclear norm or Schatten-p quasi-norm of a tensor is NP-hard, which hinders their use in low-rank tensor completion (LRTC) and tensor robust principal component analysis (TRPCA). In this paper, we propose a new class of rank regularizers based on the Euclidean norms of the CP component vectors of a tensor and show that these regularizers are monotonic transformations of the tensor Schatten-p quasi-norm. This connection enables us to minimize the Schatten-p quasi-norm in LRTC and TRPCA implicitly. The methods do not use the singular value decomposition and hence scale to big tensors. Moreover, the methods are not sensitive to the choice of initial rank and provide an arbitrarily sharper rank proxy for low-rank tensor recovery compared to the nuclear norm. We provide theoretical guarantees in terms of recovery error for LRTC and TRPCA, which show that a smaller p in the Schatten-p quasi-norm leads to tighter error bounds. Experiments using LRTC and TRPCA on synthetic data and natural images verify the effectiveness and superiority of our methods compared to baseline methods.
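
    For context, here is the standard matrix analogue of the mechanism (the paper's actual regularizers on CP component vectors are not reproduced here): the Schatten-p quasi-norm penalizes singular values directly, while a factorized identity lets a penalty on factor norms stand in for a spectral penalty without any SVD,

        \|X\|_{S_p}^p = \sum_i \sigma_i(X)^p, \quad 0 < p \le 1,
        \qquad
        \|X\|_* = \min_{X = UV^\top} \tfrac{1}{2}\left( \|U\|_F^2 + \|V\|_F^2 \right),

    where the minimum is over all factorizations with inner dimension at least rank(X). The second identity is the p = 1 case; the paper's Euclidean-norm regularizers extend this SVD-free idea to CP factors and general p.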

    Structured Sparsity Driven Learning: Theory and Algorithms

    Get PDF
    Ph.D. (Doctor of Philosophy)

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    Get PDF
    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS), Berlin. It included a summer school and a conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. In particular, it contains the scientific program both in survey style and in full detail, as well as information on the social program, the venue, special meetings, and more.

    Robust Tensor Decomposition via Orientation Invariant Tubal Nuclear Norms

    No full text
    Low-rank tensor recovery has been widely applied in computer vision and machine learning. Recently, tubal nuclear norm (TNN) based optimization was proposed, with superior performance compared to other tensor nuclear norms. However, one major limitation is its orientation sensitivity: low-rankness is defined strictly along the tubal orientation, so it cannot simultaneously model spectral low-rankness in multiple orientations. To this end, we introduce two new tensor norms, OITNN-O and OITNN-L, to exploit multi-orientational spectral low-rankness for arbitrary K-way (K ≥ 3) tensors. We further formulate two robust tensor decomposition models via the proposed norms and develop two algorithms as the solutions. Theoretically, we establish non-asymptotic error bounds which can predict the scaling behavior of the estimation error. Experiments on real-world datasets demonstrate the superiority and effectiveness of the proposed norms.
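
    The orientation sensitivity the abstract refers to is easy to see numerically: permuting which mode plays the tubal role changes the TNN value. The sketch below assumes the standard FFT-based TNN (illustrative names, not the paper's code); an orientation-invariant surrogate in the spirit of OITNN-O would combine the single-orientation norms, e.g. as a weighted sum.

        import numpy as np

        def tnn_fft(X):
            Xf = np.fft.fft(X, axis=2)            # fixed DFT along the tubal mode
            return sum(np.linalg.svd(Xf[:, :, k], compute_uv=False).sum()
                       for k in range(X.shape[2])) / X.shape[2]

        X = np.random.randn(20, 20, 20)
        for axes in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
            print(axes, tnn_fft(np.transpose(X, axes)))   # three different values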

    Twin Research for Everyone: From Biology to Health, Epigenetics, and Psychology

    Get PDF