
    $\mathcal{G}$-softmax: Improving Intra-class Compactness and Inter-class Separability of Features

    Intra-class compactness and inter-class separability are crucial indicators of a model's ability to produce discriminative features: intra-class compactness indicates how close features with the same label are to each other, and inter-class separability indicates how far apart features with different labels are. In this work, we investigate the intra-class compactness and inter-class separability of features learned by convolutional networks and propose a Gaussian-based softmax ($\mathcal{G}$-softmax) function that can effectively improve both. The proposed function is simple to implement and can easily replace the softmax function. We evaluate the proposed $\mathcal{G}$-softmax function on classification datasets (i.e., CIFAR-10, CIFAR-100, and Tiny ImageNet) and on multi-label classification datasets (i.e., MS COCO and NUS-WIDE). The experimental results show that the proposed $\mathcal{G}$-softmax function improves the state-of-the-art models across all evaluated datasets. In addition, analysis of intra-class compactness and inter-class separability demonstrates the advantages of the proposed function over the softmax function, consistent with the performance improvement. More importantly, we observe that high intra-class compactness and inter-class separability are linearly correlated with average precision on MS COCO and NUS-WIDE, which implies that improving intra-class compactness and inter-class separability should lead to improved average precision. Comment: 15 pages, published in TNNLS
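    The abstract does not spell out the $\mathcal{G}$-softmax formulation, so the following is only a minimal sketch of one way a Gaussian-based, drop-in softmax replacement could look, assuming learnable per-class Gaussian statistics are passed through a Gaussian CDF before the usual normalization; the class name GaussianSoftmax and the parameters mu and log_sigma are illustrative, not taken from the paper.

    import torch
    import torch.nn as nn

    class GaussianSoftmax(nn.Module):
        """Illustrative Gaussian-based drop-in replacement for softmax
        (a sketch, not the paper's exact formulation)."""
        def __init__(self, num_classes):
            super().__init__()
            self.mu = nn.Parameter(torch.zeros(num_classes))        # assumed per-class mean
            self.log_sigma = nn.Parameter(torch.zeros(num_classes)) # assumed per-class log std

        def forward(self, logits):
            # logits: (batch, num_classes)
            sigma = self.log_sigma.exp()
            # Gaussian CDF of each logit under its class distribution
            z = (logits - self.mu) / (sigma * 2.0 ** 0.5)
            cdf = 0.5 * (1.0 + torch.erf(z))
            # Normalize like softmax so the outputs remain a probability distribution
            scores = torch.exp(cdf)
            return scores / scores.sum(dim=1, keepdim=True)

    In a classifier, such a module would simply replace the final softmax layer, which is the drop-in property the abstract emphasizes.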

    Heavy surface state in a possible topological Kondo insulator: Magneto-thermoelectric transport on the (011)-plane of SmB$_6$

    Motivated by the high sensitivity of magneto-thermoelectric transport to Fermi surface topology and scattering mechanisms, we have measured the thermopower and Nernst effect on the (011)-plane of the proposed topological Kondo insulator SmB$_6$. These experiments, together with electrical resistivity and Hall effect measurements, demonstrate that the (011)-plane also harbors a metallic surface state with an effective mass on the order of 10-10$^2\,m_0$. The surface and bulk conductances are well distinguished in these measurements and fall into metallic and non-degenerate semiconducting regimes, respectively. Electronic correlations play an important role in enhancing scattering and also contribute to the heavy surface state. Comment: 4 figures, 1 table
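    The separation of surface and bulk conductances described above is commonly analyzed as two parallel conduction channels; a minimal sketch of such a model (an illustration under that assumption, not the paper's exact fit) is

    $$ G(T) = G_{\mathrm{s}} + G_{\mathrm{b}}(T), \qquad G_{\mathrm{b}}(T) \propto \exp\!\left(-\frac{\Delta}{k_{\mathrm{B}}T}\right), $$

    where the metallic surface term $G_{\mathrm{s}}$ is nearly temperature independent while the non-degenerate semiconducting bulk term is thermally activated with gap $\Delta$, so the surface channel dominates at low temperature.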

    Myofibrillar protein gel properties are influenced by oxygen concentration in modified atmosphere packaged minced beef

    Minced beef was stored for 8 days and myofibrillar protein (MP) was extracted to investigate the effect of oxygen concentration (0, 20, 40, 60, and 80%) in modified atmosphere packaging (MAP) on heat-induced gel properties. The compression force of the gels was lowest when prepared from beef packaged in 0% oxygen, intermediate at 20 to 60% oxygen, and greatest at 80% oxygen. Gels prepared from beef packaged with oxygen (20-80%) showed higher total water loss, and rheological measurements gave higher G' and G'' values. Additionally, gels from beef packaged without oxygen exhibited higher J(t) values during creep and recovery tests, demonstrating that oxygen exposure of meat during storage in MAP affects MP in such a way that the characteristics of the resulting heat-induced protein gels change. Generally, storage with oxygen in MAP resulted in stronger and more elastic MP gels, an effect already observed at a relatively low oxygen concentration of 20%. (C) 2017 Elsevier Ltd. All rights reserved. Peer reviewed
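    For reference, the creep compliance $J(t)$ reported in the creep-and-recovery tests is the time-dependent shear strain per unit of the constant applied stress,

    $$ J(t) = \frac{\gamma(t)}{\sigma_0}, $$

    so the higher $J(t)$ of the oxygen-free gels corresponds to softer, more deformable networks than the stiffer, more elastic gels formed after oxygen exposure.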

    High-Field Shubnikov-de Haas Oscillations in the Topological Insulator Bi$_2$Te$_2$Se

    We report measurements of surface Shubnikov-de Haas (SdH) oscillations on crystals of the topological insulator Bi$_2$Te$_2$Se. In crystals with large bulk resistivity ($\sim 4\ \Omega\,$cm at 4 K), we observe $\sim$15 surface SdH oscillations (down to the $n = 1$ Landau level) in magnetic fields $B$ up to 45 T. Extrapolating to the limit $1/B \to 0$, we confirm the $\frac{1}{2}$-shift expected from a Dirac spectrum. The results are consistent with a very small surface Landé $g$-factor. Comment: Text expanded, slight changes in text, final version; total 6 pages, 6 figures
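    The $\frac{1}{2}$-shift mentioned above follows from the Lifshitz-Onsager quantization condition with a Berry-phase term, stated here in one common convention as a sketch rather than the paper's exact analysis:

    $$ n + \gamma = \frac{F}{B}, \qquad \gamma = \frac{1}{2} - \frac{\Phi_B}{2\pi}, $$

    where $F$ is the oscillation frequency and $n$ the Landau index. A Berry phase $\Phi_B = \pi$, as expected for a Dirac spectrum, offsets the intercept of the $n$ versus $1/B$ index plot by $\frac{1}{2}$ relative to the conventional $\Phi_B = 0$ case, which is what the extrapolation to $1/B \to 0$ tests.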

    Learning to Predict Gradients for Semi-Supervised Continual Learning

    A key challenge for machine intelligence is to learn new visual concepts without forgetting previously acquired knowledge. Continual learning is aimed at addressing this challenge. However, there is a gap between existing supervised continual learning and human-like intelligence, since humans are able to learn from both labeled and unlabeled data. How unlabeled data affects learning and catastrophic forgetting in the continual learning process remains unknown. To explore these issues, we formulate a new semi-supervised continual learning method, which can be generically applied to existing continual learning models. Specifically, a novel gradient learner learns from labeled data to predict gradients on unlabeled data, so the unlabeled data can fit into the supervised continual learning method. Unlike conventional semi-supervised settings, we do not assume that the underlying classes associated with the unlabeled data are known to the learning process; in other words, the unlabeled data can be very distinct from the labeled data. We evaluate the proposed method on mainstream continual learning, adversarial continual learning, and semi-supervised learning tasks. The proposed method achieves state-of-the-art classification accuracy and backward transfer in the continual learning setting while achieving the desired classification accuracy in the semi-supervised learning setting. This implies that unlabeled images can enhance the generalizability of continual learning models to unseen data and significantly alleviate catastrophic forgetting. The code is available at \url{https://github.com/luoyan407/grad_prediction.git}. Comment: Accepted by IEEE Transactions on Neural Networks and Learning Systems (TNNLS)
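    A minimal sketch of the idea described in the abstract: a small network is trained on labeled batches to reproduce the closed-form gradient of the cross-entropy loss with respect to the logits, and its predictions are then injected on unlabeled batches through a surrogate term. The GradientLearner module, the backbone/head split, and the unweighted loss sum are illustrative assumptions, not the released implementation at the linked repository.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GradientLearner(nn.Module):
        """Illustrative gradient predictor: maps penultimate features to a
        predicted gradient of the loss w.r.t. the logits (assumed design)."""
        def __init__(self, feat_dim, num_classes):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(feat_dim, 256), nn.ReLU(),
                nn.Linear(256, num_classes),
            )

        def forward(self, feats):
            return self.net(feats)

    def grad_learner_step(backbone, head, grad_learner, x_lab, y_lab, x_unlab, opt):
        """One hypothetical training step mixing labeled and unlabeled data.
        backbone/head are assumed nn.Modules producing features and logits."""
        # Labeled pass: the per-sample gradient of cross-entropy w.r.t. the
        # logits is softmax(logits) - one_hot(y), known in closed form.
        feats_l = backbone(x_lab)
        logits_l = head(feats_l)
        true_grad = F.softmax(logits_l, dim=1) - F.one_hot(y_lab, logits_l.size(1)).float()

        # Teach the gradient learner to reproduce that gradient from features.
        pred_grad_l = grad_learner(feats_l.detach())
        grad_loss = F.mse_loss(pred_grad_l, true_grad.detach())

        # Unlabeled pass: inject the predicted gradient via a surrogate term
        # whose gradient w.r.t. logits_u equals the predicted gradient.
        feats_u = backbone(x_unlab)
        logits_u = head(feats_u)
        pred_grad_u = grad_learner(feats_u.detach()).detach()
        surrogate = (logits_u * pred_grad_u).sum()

        loss = F.cross_entropy(logits_l, y_lab) + grad_loss + surrogate
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    Because the gradient learner conditions only on features, this kind of scheme does not need the unlabeled classes to be known, matching the setting described in the abstract.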