
    How does Disagreement Help Generalization against Label Corruption?

    Learning with noisy labels is one of the hottest problems in weakly-supervised learning. Based on the memorization effects of deep neural networks, training on small-loss instances is a promising way to handle noisy labels. This idea underlies the state-of-the-art approach "Co-teaching", which cross-trains two deep neural networks using the small-loss trick. However, as the number of epochs grows, the two networks converge to a consensus and Co-teaching reduces to the self-training MentorNet. To tackle this issue, we propose a robust learning paradigm called Co-teaching+, which bridges the "Update by Disagreement" strategy with the original Co-teaching. First, both networks feed forward and predict all the data, but keep only the data on which their predictions disagree. Then, among such disagreement data, each network selects its own small-loss data, but back-propagates the small-loss data selected by its peer network to update its own parameters. Empirical results on benchmark datasets demonstrate that Co-teaching+ is much superior to many state-of-the-art methods in the robustness of the trained models.
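The disagreement-based selection step described in the abstract can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the function name, the cross-entropy helper, and the fixed `forget_rate` parameter are all illustrative.

```python
import numpy as np

def cross_entropy(logits, labels):
    # Per-sample cross-entropy computed from raw logits (numerically stable).
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels]

def coteaching_plus_select(logits1, logits2, labels, forget_rate):
    """Sketch of Co-teaching+ sample selection.

    Returns (idx_for_net1, idx_for_net2): the indices each network should
    be updated on, chosen as its *peer's* small-loss samples among the
    data the two networks disagree on.
    """
    pred1 = logits1.argmax(axis=1)
    pred2 = logits2.argmax(axis=1)
    # Step 1: keep prediction-disagreement data only.
    disagree = np.flatnonzero(pred1 != pred2)
    if disagree.size == 0:
        return disagree, disagree
    # Step 2: each network ranks the disagreement data by its own loss
    # and keeps the small-loss fraction (1 - forget_rate).
    loss1 = cross_entropy(logits1[disagree], labels[disagree])
    loss2 = cross_entropy(logits2[disagree], labels[disagree])
    n_keep = max(1, int((1.0 - forget_rate) * disagree.size))
    small1 = disagree[np.argsort(loss1)[:n_keep]]
    small2 = disagree[np.argsort(loss2)[:n_keep]]
    # Step 3: cross-update — net 1 trains on net 2's selection and vice versa.
    return small2, small1
```

Each training step would then back-propagate the usual classification loss for network 1 only over `idx_for_net1`, and likewise for network 2, so that the exchanged small-loss sets keep the two networks diverged.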

    Mode-Conversion-Based Chirped Bragg Gratings on Thin-Film Lithium Niobate

    In this work, we propose a mode-conversion-based chirped Bragg grating on thin-film lithium niobate (TFLN). The device consists mainly of a 4.7-mm-long chirped asymmetric Bragg grating and an adiabatic directional coupler (ADC). The mode conversion introduced by the ADC allows the chirped Bragg grating to operate in reflection without an off-chip circulator. The proposed device experimentally achieves a total time delay of 73.4 ps over an operating bandwidth of 15 nm. This mode-conversion-based chirped Bragg grating shows excellent compatibility with other devices on TFLN, making it suitable for monolithically integrated microwave photonics, sensing, and optical communication systems.

    A Silicon-Based On-Chip 64-Channel Hybrid Wavelength- and Mode-Division (de)Multiplexer

    An on-chip 64-channel hybrid (de)multiplexer for wavelength-division multiplexing (WDM) and mode-division multiplexing (MDM) is designed and demonstrated on a 220 nm SOI platform to meet the demands of large-capacity optical interconnections. The designed hybrid (de)multiplexer includes a 4-channel mode (de)multiplexer and 16-channel wavelength-division (de)multiplexers. The mode (de)multiplexer comprises cascaded asymmetric directional couplers that couple the fundamental TE mode to higher-order modes with low crosstalk over a wide wavelength range. The wavelength-division (de)multiplexers consist of two bi-directional micro-ring resonator arrays handling four 16-channel WDM signals. Micro-heaters are placed on the micro-resonators for thermal tuning. According to the experimental results, the excess loss is <3.9 dB in one free spectral range from 1522 nm to 1552 nm and <5.6 dB over three free spectral ranges from 1493 nm to 1583 nm. The intermode crosstalk ranges from −23.2 dB to −33.2 dB, and the isolations between adjacent and nonadjacent wavelength channels are about −17.1 dB and −22.3 dB, respectively. The thermal tuning efficiency is ∼2.22 mW/nm over one free spectral range.

    Two-Dimensional Elliptical Microresonator Arrays for Wide Flat Bandwidth and Boxlike Filter Response

    Based on two-dimensional elliptical microresonator arrays, we designed and fabricated a compact filter on the silicon-on-insulator platform with potential applications in on-chip optical interconnects. For the 3×20 arrays, the fabricated optical filter exhibits a wide flat bandwidth of 951 GHz with a shape factor of 0.57 at the through port, and the spectrum shows a boxlike response. The out-of-band rejection is as high as 50 dB, and the crosstalk is very low (−46 dB). Although sixty rings are used in the array, the insertion loss remains small (≤1.36 dB).

    Oil Spill Segmentation via Adversarial f-Divergence Learning
