
    Fabrication and characterizations of proton-exchanged LiNbO3 waveguides fabricated by inductively coupled plasma technique

    This Letter reports the use of an inductively coupled plasma technique for the fabrication of proton-exchanged (PE) LiNbO3 (LN) waveguides. Planar and stripe waveguides have been formed in Y-cut LN, which are difficult to obtain with the conventional molten-acid method due to the occurrence of surface damage. Secondary ion mass spectrometry, scanning electron microscopy, and infrared absorption spectrum characterization results revealed that a uniform vertical PE profile with a single low-order crystal phase has been directly obtained as a result of this unique process. X-ray photoelectron spectroscopy characterization of the treated surface revealed the existence of NbO as the cause of a sometimes darkened surface and confirmed that the surface can be completely restored to LN by oxygen plasma treatment. Atomic force microscopy measurements confirm that good surface quality is maintained after regeneration of the surface to LN.

    Robust Optimal Control of Wave Energy Converters Based on Adaptive Dynamic Programming


    Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup

    Mixup is a popular data-dependent augmentation technique for deep neural networks that comprises two sub-tasks: mixup generation and classification. The community typically confines mixup to supervised learning (SL), and the objective of the generation sub-task is fixed to the selected sample pair instead of considering the whole data manifold. To overcome such limitations, we systematically study the mixup generation objective and propose Scenario-Agnostic Mixup (SAMix) for both SL and self-supervised learning (SSL) scenarios. Specifically, we hypothesize and verify the objective function of mixup generation as optimizing local smoothness between two mixed classes subject to global discrimination from other classes. Therefore, we propose an η-balanced mixup loss for complementary learning of the two sub-objectives. Meanwhile, we parameterize the generation sub-task as a learnable sub-network, Mixer, with mixing attention, which avoids trivial solutions and improves transferability. To eliminate the computational cost of online training, we introduce a pre-trained version, SAMix^P, that achieves efficient performance in various tasks. Extensive experiments on SL and SSL benchmarks demonstrate that SAMix consistently outperforms leading methods.
    Comment: Preprint under review. 9 pages main body, 8 pages appendix, 4 pages references.
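    For reference, the vanilla mixup generation that SAMix's learnable Mixer generalizes is plain linear interpolation of a sample pair, with the mixing ratio drawn from a Beta distribution. A minimal sketch (baseline mixup only, not the SAMix method itself):

    ```python
    import numpy as np

    def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
        """Vanilla mixup: linearly interpolate a pair of inputs and their
        one-hot labels with lambda ~ Beta(alpha, alpha)."""
        rng = rng or np.random.default_rng()
        lam = rng.beta(alpha, alpha)
        x = lam * x1 + (1 - lam) * x2   # mixed input
        y = lam * y1 + (1 - lam) * y2   # mixed (soft) label
        return x, y, lam
    ```

    SAMix replaces this fixed pairwise objective with a learned generation sub-network; the sketch above only shows the static policy it improves on.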

    Decoupled Mixup for Data-efficient Learning

    Mixup is an efficient data augmentation approach that improves the generalization of neural networks by smoothing the decision boundary with mixed data. Recently, dynamic mixup methods have effectively improved on previous static policies (e.g., linear interpolation) by maximizing salient regions or maintaining the target in mixed samples. The discrepancy is that the mixed samples generated by dynamic policies are more instance-discriminative than the static ones, e.g., the foreground objects are decoupled from the background. However, optimizing mixup policies with dynamic methods in input space is computationally expensive compared to static ones. Hence, we transfer the decoupling mechanism of dynamic methods from the data level to the objective-function level and propose the general decoupled mixup (DM) loss. The primary effect is that DM can adaptively focus on discriminative features without losing the original smoothness of mixup, while avoiding heavy computational overhead. As a result, DM enables static mixup methods to achieve performance comparable to, or even exceeding, that of dynamic methods. This also leads to an interesting objective design problem for mixup training: we need to focus on both smoothing the decision boundaries and identifying discriminative features. Extensive experiments on supervised and semi-supervised learning benchmarks across seven classification datasets validate the effectiveness of DM by equipping various mixup methods with it.
    Comment: The preprint revision, 15 pages, 6 figures. The source code is available at https://github.com/Westlake-AI/openmixu
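    For context, the static mixup objective that DM reworks at the loss level is a λ-weighted cross-entropy against both source labels. A minimal sketch of that baseline (the standard mixup loss, not the DM loss itself):

    ```python
    import numpy as np

    def mixup_cross_entropy(logits, y_a, y_b, lam):
        """Standard mixup classification loss: a lam-weighted sum of the
        cross-entropies against the two source labels (integer class ids).
        DM replaces this static objective with a decoupled term."""
        # log-softmax over the class axis
        logp = logits - np.log(np.sum(np.exp(logits), axis=-1, keepdims=True))
        idx = np.arange(len(y_a))
        ce_a = -logp[idx, y_a]
        ce_b = -logp[idx, y_b]
        return np.mean(lam * ce_a + (1 - lam) * ce_b)
    ```

    With uniform logits this reduces to log(num_classes) for any λ, which is a quick sanity check on the weighting.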

    Leveraging Graph-based Cross-modal Information Fusion for Neural Sign Language Translation

    Sign Language (SL), as the mother tongue of the deaf community, is a special visual language that most hearing people cannot understand. In recent years, neural Sign Language Translation (SLT), as a possible way of bridging the communication gap between deaf and hearing people, has attracted widespread academic attention. We found that the current mainstream end-to-end neural SLT models, which try to learn language knowledge in a weakly supervised manner, cannot mine enough semantic information under low-data-resource conditions. Therefore, we propose to introduce additional word-level semantic knowledge of sign language linguistics to assist in improving current end-to-end neural SLT models. Concretely, we propose a novel neural SLT model with multi-modal feature fusion based on a dynamic graph, in which the cross-modal information, i.e., text and video, is first assembled into a dynamic graph according to its correlation, and the graph is then processed by a multi-modal graph encoder to generate multi-modal embeddings for use in the subsequent neural translation models. To the best of our knowledge, we are the first to introduce graph neural networks for fusing multi-modal information into neural sign language translation models. Moreover, we conducted experiments on a publicly available popular SLT dataset, RWTH-PHOENIX-Weather-2014T, and the quantitative experiments show that our method can improve the model.

    Deep Learning Based Instance Segmentation in 3D Biomedical Images Using Weak Annotation

    Instance segmentation in 3D images is a fundamental task in biomedical image analysis. While deep learning models often work well for 2D instance segmentation, 3D instance segmentation still faces critical challenges, such as insufficient training data due to various annotation difficulties in 3D biomedical images. Common 3D annotation methods (e.g., full voxel annotation) incur high workloads and costs for labeling enough instances to train deep learning 3D instance segmentation models. In this paper, we propose a new weak annotation approach for training a fast deep learning 3D instance segmentation model without using full voxel mask annotation. Our approach needs only 3D bounding boxes for all instances and full voxel annotation for a small fraction of the instances, and uses a novel two-stage 3D instance segmentation model utilizing these two kinds of annotation, respectively. We evaluate our approach on several biomedical image datasets, and the experimental results show that (1) with fully annotated boxes and a small number of masks, our approach can achieve performance similar to the best known methods using full annotation, and (2) with similar annotation time, our approach outperforms the best known methods that use full annotation.
    Comment: Accepted by MICCAI 201
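    The mechanism shared by both stages of such a box-plus-partial-mask setup is cropping the volume to an instance bounding box, so voxel-level prediction only happens inside detected boxes. A minimal sketch of that cropping step (an illustrative helper, not the paper's model; the box convention is an assumption):

    ```python
    import numpy as np

    def crop_by_box(volume, box):
        """Crop a 3D volume to an instance bounding box.
        box = (z0, y0, x0, z1, y1, x1) with exclusive upper bounds.
        Stage one predicts such boxes for every instance; stage two then
        predicts voxel masks only inside the cropped sub-volumes, which is
        why full masks are needed for just a fraction of the instances."""
        z0, y0, x0, z1, y1, x1 = box
        return volume[z0:z1, y0:y1, x0:x1]
    ```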

    Unveiling the Power of Mixup for Stronger Classifiers

    Mixup-based data augmentations have achieved great success as regularizers for deep neural networks. However, existing methods rely on deliberately handcrafted mixup policies, which either ignore or oversell the semantic matching between mixed samples and labels. Driven by their prior assumptions, early methods attempt to smooth decision boundaries by random linear interpolation, while others focus on maximizing class-related information via offline saliency optimization. As a result, the issue of label mismatch has not been well addressed, and the optimization stability of mixup training is constantly troubled by it. To address these challenges, we first reformulate mixup for supervised classification as two sub-tasks, mixup sample generation and classification, and then propose Automatic Mixup (AutoMix), a revolutionary mixup framework. Specifically, a learnable lightweight Mix Block (MB) with a cross-attention mechanism is proposed to generate a mixed sample by modeling a fair relationship between the pair of samples under direct supervision of the corresponding mixed label. Moreover, the proposed Momentum Pipeline (MP) enhances training stability and accelerates convergence on top of making the Mix Block fully trained end-to-end. Extensive experiments on five popular classification benchmarks show that the proposed approach consistently outperforms leading methods by a large margin.
    Comment: The second version of AutoMix. 12 pages, 7 figures.
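    Momentum pipelines of this kind are typically built on an exponential moving average (EMA) of model parameters, where a slowly updated copy supervises or stabilizes the online network. A minimal sketch of that generic EMA update (the standard mechanism; AutoMix's exact schedule may differ):

    ```python
    def ema_update(teacher, student, m=0.999):
        """Momentum (EMA) parameter update: the teacher drifts slowly
        toward the student, which stabilizes training of modules such as
        a learnable mix block. Parameters are given as flat lists of
        floats for illustration."""
        return [m * t + (1 - m) * s for t, s in zip(teacher, student)]
    ```

    Larger m gives a smoother, more stable teacher at the cost of slower adaptation.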

    Quantum Phase Diffusion in a Small Underdamped Josephson Junction

    Quantum phase diffusion in a small underdamped Nb/AlO_x/Nb junction (~0.4 μm²) is demonstrated over a wide temperature range of 25-140 mK, where macroscopic quantum tunneling (MQT) is the dominant escape mechanism. We propose a two-step transition model to describe the switching process, in which the escape rate out of the potential well and the transition rate from phase diffusion to the running state are both considered. The transition rate extracted from the experimental switching current distribution follows the predicted Arrhenius law in the thermal regime but is greatly enhanced when MQT becomes dominant.
    Comment: 4 pages, 4 figures, 1 table.
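    For context, the Arrhenius law invoked in the thermal regime is the standard thermal-activation escape rate over the tilted-washboard potential barrier of a current-biased Josephson junction (a textbook result, not taken from this Letter):

    ```latex
    \Gamma_{\mathrm{th}} \simeq \frac{\omega_p}{2\pi}
      \exp\!\left(-\frac{\Delta U}{k_B T}\right),
    \qquad
    \Delta U = 2E_J\!\left[\sqrt{1-\left(\tfrac{I}{I_c}\right)^2}
      - \tfrac{I}{I_c}\arccos\!\left(\tfrac{I}{I_c}\right)\right],
    ```

    where ω_p is the junction plasma frequency, E_J the Josephson coupling energy, and I_c the critical current. The MQT-enhanced escape reported here deviates from this exponential temperature dependence once tunneling dominates.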