45 research outputs found

    2-manifold recognition is in logspace

    We prove that the homeomorphism problem for 2-manifolds can be decided in logspace. The proof relies on Reingold's logspace solution to the undirected s,t-connectivity problem in graphs.
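
    The logspace algorithm itself does not fit in a snippet, but the invariants behind it are classical: a closed connected surface is determined up to homeomorphism by its orientability and its Euler characteristic chi = V - E + F. The Python sketch below (linear time rather than logspace, and assuming a hypothetical triangle-list input format for a connected closed surface) computes both invariants from a triangulation:

        from collections import defaultdict, deque

        # Sketch only: computes Euler characteristic and orientability of a
        # triangulated, connected, closed surface given as vertex-id triples.
        # This is NOT the paper's logspace algorithm, which additionally relies
        # on Reingold's s,t-connectivity result to stay within logspace.
        def surface_invariants(triangles):
            vertices = {v for t in triangles for v in t}
            edges = defaultdict(list)          # undirected edge -> incident triangles
            for i, (a, b, c) in enumerate(triangles):
                for u, v in ((a, b), (b, c), (c, a)):
                    edges[frozenset((u, v))].append(i)
            chi = len(vertices) - len(edges) + len(triangles)

            # Orientability: propagate a consistent orientation across shared
            # edges; any conflict means the surface is non-orientable.
            orient = {0: tuple(triangles[0])}
            queue = deque([0])
            orientable = True
            while queue:
                i = queue.popleft()
                a, b, c = orient[i]
                for u, v in ((a, b), (b, c), (c, a)):
                    for j in edges[frozenset((u, v))]:
                        if j == i:
                            continue
                        x, y, z = triangles[j]
                        # Orderings of triangle j that induce (v, u) on the shared edge.
                        cands = [(x, y, z), (y, z, x), (z, x, y),
                                 (x, z, y), (z, y, x), (y, x, z)]
                        good = [t for t in cands
                                if (v, u) in {(t[0], t[1]), (t[1], t[2]), (t[2], t[0])}]
                        if j in orient:
                            if orient[j] not in good:
                                orientable = False
                        else:
                            orient[j] = good[0]
                            queue.append(j)
            return chi, orientable

        # The boundary of a tetrahedron is a sphere: chi = 2, orientable.
        sphere = [(0, 1, 2), (0, 2, 3), (0, 3, 1), (1, 3, 2)]
        print(surface_invariants(sphere))      # (2, True)

    Deciding homeomorphism then reduces to comparing these invariants component by component; the paper's contribution is carrying the computation out within a logspace bound, with Reingold's algorithm handling the connectivity questions.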

    R3MC: A Riemannian three-factor algorithm for low-rank matrix completion

    We exploit the versatile framework of Riemannian optimization on quotient manifolds to develop R3MC, a nonlinear conjugate-gradient method for low-rank matrix completion. The underlying search space of fixed-rank matrices is endowed with a novel Riemannian metric that is tailored to the least-squares cost. Numerical comparisons suggest that R3MC robustly outperforms state-of-the-art algorithms across different problem instances, especially those that combine scarcely sampled and ill-conditioned data.
    Comment: Accepted for publication in the proceedings of the 53rd IEEE Conference on Decision and Control, 201
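
    As a rough illustration of the problem R3MC targets (not the paper's algorithm: the Riemannian metric, quotient geometry, and conjugate-gradient machinery are all omitted, and the dimensions and step size below are made up), a minimal NumPy sketch of masked least-squares completion over a three-factor fixed-rank parameterization X = U B V^T:

        import numpy as np

        # Sketch: plain Euclidean gradient descent on the three factors of
        # X = U @ B @ V.T, minimizing 0.5 * ||mask * (X - M)||_F^2 over the
        # observed entries.  R3MC instead runs nonlinear conjugate gradient
        # on a quotient manifold with a metric tailored to this cost.
        rng = np.random.default_rng(0)
        m, n, r = 60, 50, 3

        M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r truth
        mask = rng.random((m, n)) < 0.4                                # observed entries

        U = rng.standard_normal((m, r))
        B = np.eye(r)
        V = rng.standard_normal((n, r))

        step = 1e-3                            # hypothetical fixed step size
        for _ in range(2000):
            R = mask * (U @ B @ V.T - M)       # residual on observed entries only
            gU = R @ V @ B.T                   # Euclidean gradients of the cost
            gB = U.T @ R @ V
            gV = R.T @ U @ B
            U -= step * gU
            B -= step * gB
            V -= step * gV

        err = np.linalg.norm(U @ B @ V.T - M) / np.linalg.norm(M)
        print(f"relative reconstruction error: {err:.3f}")

    The tailored metric and conjugate-gradient updates are what the paper credits for its robustness on scarcely sampled, ill-conditioned instances; the plain gradient steps above exist only to make the cost function concrete.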

    \mathcal{G}-softmax: Improving Intra-class Compactness and Inter-class Separability of Features

    Intra-class compactness and inter-class separability are crucial indicators of a model's ability to produce discriminative features: intra-class compactness measures how close features with the same label are to each other, and inter-class separability measures how far apart features with different labels are. In this work, we investigate the intra-class compactness and inter-class separability of features learned by convolutional networks and propose a Gaussian-based softmax (\mathcal{G}-softmax) function that can effectively improve both. The proposed function is simple to implement and can easily replace the softmax function. We evaluate the proposed \mathcal{G}-softmax function on classification datasets (i.e., CIFAR-10, CIFAR-100, and Tiny ImageNet) and on multi-label classification datasets (i.e., MS COCO and NUS-WIDE). The experimental results show that the proposed \mathcal{G}-softmax function improves state-of-the-art models across all evaluated datasets. In addition, analysis of intra-class compactness and inter-class separability demonstrates the advantages of the proposed function over the softmax function, consistent with the performance improvement. More importantly, we observe that high intra-class compactness and inter-class separability are linearly correlated with average precision on MS COCO and NUS-WIDE, which implies that improving intra-class compactness and inter-class separability would improve average precision.
    Comment: 15 pages, published in TNNL
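
    The abstract does not spell out the functional form, so the following is only a hedged guess at what a Gaussian-based drop-in replacement for softmax could look like: each logit is passed through a per-class Gaussian CDF (the learnable mean/std parameterization here is an assumption) before the usual exponentiation and normalization. Consult the paper for the actual \mathcal{G}-softmax definition and its training procedure.

        import numpy as np
        from scipy.special import erf

        # Hedged sketch, not the paper's definition: squash each class logit
        # through a Gaussian CDF with per-class mean mu and std sigma (assumed
        # learnable), then normalize as in ordinary softmax.
        def gaussian_cdf(x, mu, sigma):
            return 0.5 * (1.0 + erf((x - mu) / (sigma * np.sqrt(2.0))))

        def g_softmax(logits, mu, sigma):
            z = gaussian_cdf(logits, mu, sigma)    # values in (0, 1)
            e = np.exp(z)
            return e / e.sum(axis=-1, keepdims=True)

        # Usage: a batch of 2 examples over 3 classes, hypothetical mu/sigma.
        logits = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 0.3]])
        print(g_softmax(logits, mu=np.zeros(3), sigma=np.ones(3)))  # rows sum to 1

    Because the CDF is bounded, a variant like this compresses extreme logits; whether that is the mechanism behind the paper's compactness gains should be checked against the paper itself.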