
    The low-rank decomposition of correlation-enhanced superpixels for video segmentation

    Low-rank decomposition (LRD) is an effective scheme for exploring the affinity among superpixels in image and video segmentation. However, superpixel features collected from colour, shape, and texture may be rough, incompatible, and even conflicting if the multiple features, extracted in various manners, are vectorised and stacked directly together. This yields poor correlation, inconsistency among intra-category superpixels, and spurious similarity among inter-category superpixels. This paper proposes a correlation-enhanced superpixel for video segmentation in the framework of LRD. Our algorithm consists mainly of two steps: feature analysis to establish the initial affinity among superpixels, followed by construction of a correlation-enhanced superpixel. This construction makes it possible to perform LRD effectively and to find the affinity accurately and quickly. Experiments on video segmentation datasets validate the proposed method, and comparisons with state-of-the-art algorithms show higher speed and greater precision in video segmentation.
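As a hedged illustration of the kind of affinity a low-rank decomposition exposes (this is a generic truncated-SVD baseline, not the paper's two-step correlation-enhanced algorithm; all names and the toy data are assumptions):

```python
import numpy as np

def lowrank_affinity(features, rank):
    """Illustrative sketch: denoise row-stacked superpixel feature vectors
    with a rank-r SVD approximation, then build a cosine affinity matrix."""
    U, s, Vt = np.linalg.svd(features, full_matrices=False)
    low = (U[:, :rank] * s[:rank]) @ Vt[:rank]         # rank-r approximation
    norms = np.linalg.norm(low, axis=1, keepdims=True)
    low = low / np.maximum(norms, 1e-12)               # unit-normalize rows
    return low @ low.T                                 # cosine affinity

# toy example: two clusters of superpixel features around +1 and -1
rng = np.random.default_rng(0)
A = rng.normal(0, 0.05, (5, 8)) + 1.0
B = rng.normal(0, 0.05, (5, 8)) - 1.0
W = lowrank_affinity(np.vstack([A, B]), rank=2)
print(W.shape)  # (10, 10)
```

Entries of W are near +1 within a cluster and near -1 across clusters, which is the affinity structure segmentation methods then partition.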

    A Counterexample for the Validity of Using Nuclear Norm as a Convex Surrogate of Rank

    Rank minimization has attracted a lot of attention due to its robustness in data recovery. To overcome the computational difficulty, the rank function is often replaced with the nuclear norm. For several rank minimization problems, such a replacement has been theoretically proven to be valid, i.e., the solution to the nuclear norm minimization problem is also a solution to the rank minimization problem. Although it is easy to believe that the replacement may not always be valid, no concrete counterexample had ever been found. We argue that such validity checking cannot be done by numerical computation, and we show, by analyzing the noiseless latent low-rank representation (LatLRR) model, that the validity may break down even for very simple rank minimization problems. As a by-product, we find that the solution to the nuclear norm minimization formulation of LatLRR is non-unique, so the LatLRR results reported in the literature may be questionable.
    Comment: accepted by ECML PKDD201
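Part of why the nuclear norm is such a popular surrogate is computational: it has a cheap proximal operator, singular value thresholding (SVT), which sits inside most nuclear-norm solvers. The sketch below is that generic operator, not code from the paper, whose point is precisely that minimizing this surrogate need not minimize rank:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*.
    Shrinks every singular value by tau and clips at zero."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

M = np.diag([3.0, 1.0])
R = svt(M, 1.0)                       # singular values 3, 1 shrink to 2, 0
print(np.linalg.matrix_rank(R))       # prints 1: thresholding drops the rank
```

Shrinking small singular values to zero is why nuclear-norm solutions are often low rank in practice; the paper's counterexample shows this heuristic is not a guarantee.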

    Fearless Luminance Adaptation: A Macro-Micro-Hierarchical Transformer for Exposure Correction

    Photographs taken with less-than-ideal exposure settings often display poor visual quality. Since the required correction procedures vary significantly, it is difficult for a single neural network to handle all exposure problems. Moreover, the inherent limitations of convolutions hinder a model's ability to restore faithful color or detail in extremely over-/under-exposed regions. To overcome these limitations, we propose a Macro-Micro-Hierarchical transformer, which consists of a macro attention to capture long-range dependencies, a micro attention to extract local features, and a hierarchical structure for coarse-to-fine correction. Specifically, the complementary macro-micro attention designs enhance locality while allowing global interactions, and the hierarchical structure enables the network to correct exposure errors of different scales layer by layer. Furthermore, we propose a contrast constraint and couple it seamlessly into the loss function, where the corrected image is pulled towards the positive sample and pushed away from dynamically generated negative samples, so that the remaining color distortion and loss of detail can be removed. We also extend our method as an image enhancer for low-light face recognition and low-light semantic segmentation. Experiments demonstrate that our approach obtains more attractive results than state-of-the-art methods, both quantitatively and qualitatively.
    Comment: Accepted by ACM MM 202
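The "pull toward the positive, push from the negatives" constraint can be sketched as a simple distance ratio. This is an illustrative stand-in under assumed definitions (mean absolute distance, synthetic over-/under-exposed negatives); the paper's actual loss may differ:

```python
import numpy as np

def contrast_loss(pred, positive, negatives, eps=1e-8):
    """Sketch of a contrast constraint: distance to the positive sample
    divided by the mean distance to the negative samples, so minimizing it
    pulls pred toward the positive and away from the negatives."""
    d_pos = np.mean(np.abs(pred - positive))
    d_neg = np.mean([np.mean(np.abs(pred - n)) for n in negatives])
    return d_pos / (d_neg + eps)

target = np.full((4, 4), 0.5)   # well-exposed reference (positive sample)
over   = np.full((4, 4), 0.9)   # over-exposed negative
under  = np.full((4, 4), 0.1)   # under-exposed negative
good   = np.full((4, 4), 0.52)  # nearly corrected output
bad    = np.full((4, 4), 0.85)  # still over-exposed output
print(contrast_loss(good, target, [over, under]) <
      contrast_loss(bad, target, [over, under]))  # True
```

A well-corrected image scores a much lower loss than one that still resembles a negative, which is the gradient signal such a constraint contributes during training.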

    Completing Low-Rank Matrices with Corrupted Samples from Few Coefficients in General Basis

    Subspace recovery from corrupted and missing data is crucial for various applications in signal processing and information theory. To complete missing values and detect column corruptions, existing robust Matrix Completion (MC) methods mostly concentrate on recovering a low-rank matrix from a few corrupted coefficients w.r.t. the standard basis, which, however, does not apply to more general bases, e.g., the Fourier basis. In this paper, we prove that the range space of an m×n matrix of rank r can be exactly recovered from a few coefficients w.r.t. a general basis, even when r and the number of corrupted samples are both as high as O(min{m,n}/log³(m+n)). Our model covers previous ones as special cases, and robust MC can recover the intrinsic matrix at a higher rank. Moreover, we suggest a universal choice of the regularization parameter, λ = 1/√(log n). Our ℓ2,1 filtering algorithm, which has theoretical guarantees, further reduces the computational cost of our model. As an application, we also find that the solutions to extended robust Low-Rank Representation and to our extended robust MC are mutually expressible, so both our theory and algorithm can be applied to the subspace clustering problem with missing values under certain conditions. Experiments verify our theories.
    Comment: To appear in IEEE Transactions on Information Theor
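To make the matrix completion setting concrete, here is a generic soft-impute sketch for the standard basis without corruptions. It is not the paper's ℓ2,1 filtering algorithm, and its fixed threshold is an illustrative assumption rather than the suggested λ = 1/√(log n):

```python
import numpy as np

def soft_impute(X, mask, tau, iters=100):
    """Generic soft-impute sketch: alternate between a singular-value-
    thresholded low-rank estimate and re-imposing the observed entries."""
    Z = np.where(mask, X, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        low = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
        Z = np.where(mask, X, low)    # keep observed entries, impute the rest
    return Z

rng = np.random.default_rng(1)
L = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 20))  # rank-3 ground truth
mask = rng.random((20, 20)) < 0.7                        # ~70% entries observed
rec = soft_impute(L, mask, tau=0.5)
err = np.linalg.norm(rec - L) / np.linalg.norm(L)
print(err)  # relative recovery error, small when rank << min(m, n)
```

Recovery succeeds here because the rank-3 degrees of freedom are far fewer than the observed entries; the paper's contribution is extending such guarantees to general bases with corrupted samples.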

    Symmetric low-rank representation for subspace clustering

    We propose a symmetric low-rank representation (SLRR) method for subspace clustering, which assumes that a data set is approximately drawn from a union of multiple subspaces. The proposed technique reveals the membership of the multiple subspaces through the self-expressiveness property of the data. In particular, the SLRR method combines a collaborative representation with low-rank matrix recovery techniques to learn a symmetric low-rank representation that preserves the subspace structures of high-dimensional data. In contrast to the iterative singular value decompositions performed by some existing low-rank representation based algorithms, the symmetric low-rank representation in SLRR can be obtained in closed form by solving the symmetric low-rank optimization problem. By making use of the angular information of the principal directions of the symmetric low-rank representation, an affinity graph matrix is constructed for spectral clustering. Extensive experimental results show that SLRR outperforms state-of-the-art subspace clustering algorithms.
    Comment: 13 page
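A sketch of the final step, turning the angular information of a symmetric low-rank representation into an affinity graph for spectral clustering. The specific choices below (eigenvalue weighting, squared cosine) are illustrative assumptions in the spirit of LRR-style post-processing, not the SLRR paper's exact construction:

```python
import numpy as np

def angular_affinity(Z, rank):
    """Sketch: affinity from the angles between the principal directions
    of a (symmetric) low-rank representation Z."""
    vals, vecs = np.linalg.eigh((Z + Z.T) / 2)          # symmetrize, decompose
    idx = np.argsort(np.abs(vals))[::-1][:rank]          # top principal directions
    M = vecs[:, idx] * np.sqrt(np.abs(vals[idx]))        # weighted directions
    M = M / np.maximum(np.linalg.norm(M, axis=1, keepdims=True), 1e-12)
    return np.abs(M @ M.T) ** 2                          # squared-cosine affinity

# toy block-diagonal representation: two subspaces of three samples each
Z = np.zeros((6, 6))
Z[:3, :3] = 0.9
Z[3:, 3:] = 0.9
W = angular_affinity(Z, rank=2)
print(W.round(2))
```

For this block-diagonal toy input the affinity is 1 within each subspace and 0 across subspaces, exactly the graph a spectral clustering step would then partition.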