484 research outputs found

    Intelligent optical performance monitor using multi-task learning based artificial neural network

    An intelligent optical performance monitor using a multi-task learning based artificial neural network (MTL-ANN) is designed for simultaneous OSNR monitoring and modulation format identification (MFI). Amplitude histograms (AHs) of the signals after the constant modulus algorithm are selected as the input features for the MTL-ANN. Experimental results for 20-Gbaud NRZ-OOK, PAM4 and PAM8 signals demonstrate that the MTL-ANN can achieve OSNR monitoring and MFI simultaneously with higher accuracy and stability than single-task learning based ANNs (STL-ANNs). The results show an MFI accuracy of 100% and an OSNR monitoring root-mean-square error of 0.63 dB for the three modulation formats under consideration. Furthermore, the number of neurons needed for the single MTL-ANN is almost half that of the STL-ANNs, which enables reduced-complexity optical performance monitoring devices for real-time monitoring.
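
    The shared-trunk, two-head architecture described above translates naturally into a small network with one regression output (OSNR) and one classification output (modulation format). Below is a minimal PyTorch sketch; the layer sizes, 100-bin histogram input, and unweighted loss sum are illustrative assumptions, not the paper's actual hyperparameters.

        # Hypothetical multi-task ANN: shared trunk on amplitude-histogram
        # features, one head for OSNR regression and one for MFI.
        import torch
        import torch.nn as nn

        class MTLANN(nn.Module):
            def __init__(self, n_bins=100, n_formats=3, hidden=64):
                super().__init__()
                self.shared = nn.Sequential(
                    nn.Linear(n_bins, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                )
                self.osnr_head = nn.Linear(hidden, 1)         # OSNR in dB
                self.mfi_head = nn.Linear(hidden, n_formats)  # format logits

            def forward(self, x):
                h = self.shared(x)
                return self.osnr_head(h).squeeze(-1), self.mfi_head(h)

        model = MTLANN()
        x = torch.rand(8, 100)  # batch of normalized amplitude histograms
        osnr_pred, fmt_logits = model(x)
        loss = nn.functional.mse_loss(osnr_pred, torch.rand(8)) \
             + nn.functional.cross_entropy(fmt_logits, torch.randint(0, 3, (8,)))

    Sharing the trunk across both tasks is what allows the roughly halved neuron count relative to two separate single-task networks.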

    SEAL: Simultaneous Label Hierarchy Exploration And Learning

    Label hierarchy is an important source of external knowledge that can enhance classification performance. However, most existing methods rely on predefined label hierarchies that may not match the data distribution. To address this issue, we propose Simultaneous label hierarchy Exploration And Learning (SEAL), a new framework that explores the label hierarchy by augmenting the observed labels with latent labels that follow a prior hierarchical structure. Our approach uses a 1-Wasserstein metric over the tree metric space as an objective function, which enables us to simultaneously learn a data-driven label hierarchy and perform (semi-)supervised learning. We evaluate our method on several datasets and show that it achieves superior results in both supervised and semi-supervised scenarios and reveals insightful label structures. Our implementation is available at https://github.com/tzq1999/SEAL.
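
    The 1-Wasserstein distance under a tree metric has a simple closed form: each edge contributes its length times the absolute net probability mass that must cross it. The sketch below illustrates that formula on a toy tree; the tree, weights, and node ordering are my own illustration, not SEAL's implementation.

        # Tree-Wasserstein distance: sum over edges of edge weight times
        # |net mass in the subtree below that edge|.
        def tree_wasserstein(parent, weight, p, q):
            """parent[i]: parent of node i (root has parent -1);
            weight[i]: length of the edge from i to its parent;
            p, q: probability masses placed on the nodes."""
            n = len(parent)
            flow = [p[i] - q[i] for i in range(n)]
            # Accumulate net mass bottom-up; assumes every child index
            # is larger than its parent's (topological order).
            for i in range(n - 1, 0, -1):
                flow[parent[i]] += flow[i]
            return sum(weight[i] * abs(flow[i]) for i in range(1, n))

        # Toy tree: node 0 is the root, 1 and 2 its children, 3 a child of 2.
        parent = [-1, 0, 0, 2]
        weight = [0.0, 1.0, 1.0, 0.5]
        print(tree_wasserstein(parent, weight, [0, 1, 0, 0], [0, 0, 0, 1]))  # 2.5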

    Changes in Homogalacturonans, Polygalacturonase Activities, and Cell Wall Linked Proteins During Cotton Cotyledon Expansion

    Biochemistry and Molecular Biology

    Kernel-SSL: Kernel KL Divergence for Self-Supervised Learning

    Contrastive learning usually compares one positive anchor sample with many negative samples to perform Self-Supervised Learning (SSL). Alternatively, non-contrastive learning, as exemplified by methods like BYOL, SimSiam, and Barlow Twins, accomplishes SSL without the explicit use of negative samples. Inspired by the existing analysis for contrastive learning, we provide a reproducing kernel Hilbert space (RKHS) understanding of many existing non-contrastive learning methods. Subsequently, we propose a novel loss function, Kernel-SSL, which directly optimizes the mean embedding and the covariance operator within the RKHS. In experiments, our method Kernel-SSL outperforms state-of-the-art methods by a large margin on ImageNet under the linear evaluation setting. Specifically, with 100 epochs of pre-training, our method outperforms SimCLR by 4.6%.
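
    The mean-embedding half of such an RKHS objective can be pictured as the empirical squared maximum mean discrepancy (MMD) between the embeddings of two augmented views under a Gaussian kernel. The sketch below shows only that part; the kernel choice, bandwidth, and omission of the covariance-operator term are my assumptions, so this is not the paper's exact Kernel-SSL loss.

        # Squared MMD between two views' embedding distributions: driving
        # it to zero aligns their RKHS mean embeddings.
        import torch

        def gaussian_kernel(a, b, sigma=1.0):
            d2 = torch.cdist(a, b) ** 2
            return torch.exp(-d2 / (2 * sigma ** 2))

        def mmd2(z1, z2, sigma=1.0):
            k11 = gaussian_kernel(z1, z1, sigma).mean()
            k22 = gaussian_kernel(z2, z2, sigma).mean()
            k12 = gaussian_kernel(z1, z2, sigma).mean()
            return k11 + k22 - 2 * k12

        z1, z2 = torch.randn(32, 128), torch.randn(32, 128)  # two augmented views
        loss = mmd2(z1, z2)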

    Contrastive Learning Is Spectral Clustering On Similarity Graph

    Contrastive learning is a powerful self-supervised learning method, but we have a limited theoretical understanding of how and why it works. In this paper, we prove that contrastive learning with the standard InfoNCE loss is equivalent to spectral clustering on the similarity graph. Using this equivalence as the building block, we extend our analysis to the CLIP model and rigorously characterize how similar multi-modal objects are embedded together. Motivated by our theoretical insights, we introduce the kernel mixture loss, incorporating novel kernel functions that outperform the standard Gaussian kernel on several vision datasets.
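
    For reference, the standard InfoNCE loss analyzed in the paper is a cross-entropy over in-batch cosine similarities, where the i-th sample of one view is the positive for the i-th sample of the other. A minimal one-directional sketch (the temperature value is illustrative):

        import torch
        import torch.nn.functional as F

        def info_nce(z1, z2, tau=0.1):
            z1 = F.normalize(z1, dim=1)
            z2 = F.normalize(z2, dim=1)
            logits = z1 @ z2.t() / tau          # pairwise cosine similarities
            targets = torch.arange(z1.size(0))  # i-th row matches i-th column
            return F.cross_entropy(logits, targets)

        loss = info_nce(torch.randn(16, 64), torch.randn(16, 64))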

    RelationMatch: Matching In-batch Relationships for Semi-supervised Learning

    Semi-supervised learning has achieved notable success by leveraging a small amount of labeled data and exploiting the wealth of information in unlabeled data. However, existing algorithms usually focus on aligning predictions on paired data points augmented from the same source, and overlook the inter-point relationships within each batch. This paper introduces a novel method, RelationMatch, which exploits in-batch relationships with a matrix cross-entropy (MCE) loss function. Through the application of MCE, our proposed method consistently surpasses the performance of established state-of-the-art methods, such as FixMatch and FlexMatch, across a variety of vision datasets. Notably, we observe a substantial accuracy gain of 15.21% over FlexMatch on the STL-10 dataset using only 40 labels. Moreover, we apply MCE to supervised learning scenarios and observe consistent improvements as well.
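
    One way to picture an in-batch relation-matching objective is to compare the row-normalized Gram matrices of predictions from the weakly and strongly augmented views. The sketch below is my own simplified rendering of that idea and may differ from the paper's exact matrix cross-entropy definition.

        # Relation matrices encode pairwise similarities between in-batch
        # predictions; the weak view's relations serve as the target.
        import torch
        import torch.nn.functional as F

        def relation_matrix(p):
            return F.softmax(p @ p.t(), dim=1)

        def relation_loss(p_weak, p_strong):
            r_w = relation_matrix(p_weak).detach()  # weak view is the target
            r_s = relation_matrix(p_strong)
            return -(r_w * torch.log(r_s + 1e-8)).sum(dim=1).mean()

        p_weak = F.softmax(torch.randn(16, 10), dim=1)    # class probabilities
        p_strong = F.softmax(torch.randn(16, 10), dim=1)
        loss = relation_loss(p_weak, p_strong)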
    • …