
    Odd-parity superconductivity by competing spin-orbit coupling and orbital effect in artificial heterostructures

    We show that odd-parity superconductivity occurs in multilayer Rashba systems without requiring spin-triplet Cooper pairs. A pairing interaction in the spin-singlet channel stabilizes the odd-parity pair-density-wave (PDW) state in a magnetic field parallel to the two-dimensional conducting plane. The layer-dependent Rashba spin-orbit coupling and the orbital effect are shown to play essential roles for the PDW state in binary and tricolor heterostructures. We demonstrate that the odd-parity PDW state is a symmetry-protected topological superconducting state characterized by the one-dimensional winding number in symmetry class BDI. Superconductivity in the artificial heavy-fermion superlattice CeCoIn_5/YbCoIn_5 and at the bilayer interface SrTiO_3/LaAlO_3 is discussed.
    Comment: To be published in Phys. Rev.
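
    The one-dimensional winding number invoked here is not written out in the abstract. As a point of reference, a standard textbook definition for symmetry class BDI is sketched below, with Γ the chiral operator and q(k) the off-diagonal block of the BdG Hamiltonian in the chiral basis; this is a generic definition, not a formula quoted from the paper.

    ```latex
    % Generic 1D winding number for class BDI (textbook form, not taken
    % from the paper). \Gamma is the chiral operator; q(k) is the
    % off-diagonal block of the BdG Hamiltonian H(k) in the chiral basis.
    \begin{align}
      \Gamma H(k)\Gamma^{-1} &= -H(k), &
      H(k) &= \begin{pmatrix} 0 & q(k) \\ q^{\dagger}(k) & 0 \end{pmatrix}, \\
      w &= \frac{1}{2\pi i}\int_{-\pi}^{\pi} \mathrm{d}k\, \partial_k \ln \det q(k) \;\in\; \mathbb{Z}.
    \end{align}
    ```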

    Maximum Classifier Discrepancy for Unsupervised Domain Adaptation

    In this work, we present a method for unsupervised domain adaptation. Many adversarial learning methods train a domain classifier network to distinguish features as source or target, and train a feature generator network to fool the discriminator. Two problems exist with these methods. First, the domain classifier only tries to distinguish the features as source or target and thus does not consider task-specific decision boundaries between classes; a trained generator can therefore produce ambiguous features near class boundaries. Second, these methods aim to completely match the feature distributions of the two domains, which is difficult because of each domain's characteristics. To solve these problems, we introduce a new approach that aligns the source and target distributions by exploiting task-specific decision boundaries. We propose to maximize the discrepancy between two classifiers' outputs to detect target samples that lie far from the support of the source. A feature generator then learns to generate target features near that support so as to minimize the discrepancy. Our method outperforms other methods on several image classification and semantic segmentation datasets. The code is available at https://github.com/mil-tokyo/MCD_DA
    Comment: Accepted to CVPR2018 Oral. Code is available at https://github.com/mil-tokyo/MCD_DA
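
    The adversarial step hinges on a concrete discrepancy measure between the two classifiers. Below is a minimal PyTorch-style sketch, assuming the discrepancy is the mean absolute difference between the classifiers' softmax outputs; the module names G, C1, C2 are illustrative, and the linked mil-tokyo/MCD_DA repository holds the authors' actual implementation.

    ```python
    import torch
    import torch.nn.functional as F

    def classifier_discrepancy(logits1: torch.Tensor, logits2: torch.Tensor) -> torch.Tensor:
        """Mean absolute difference between two classifiers' class probabilities."""
        p1 = F.softmax(logits1, dim=1)
        p2 = F.softmax(logits2, dim=1)
        return (p1 - p2).abs().mean()

    # Sketch of the adversarial loop on an unlabeled target batch x_t
    # (G = feature generator, C1/C2 = classifier heads; names are hypothetical):
    #   1. Train G, C1, C2 on labeled source data with cross-entropy.
    #   2. Fix G; update C1, C2 to MAXIMIZE
    #      classifier_discrepancy(C1(G(x_t)), C2(G(x_t)))
    #      while keeping the source cross-entropy low.
    #   3. Fix C1, C2; update G to MINIMIZE the same discrepancy.
    ```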

    Crystalline Electronic Field and Magnetic Anisotropy in Dy-based Icosahedral Quasicrystal and Approximant

    The lack of a crystalline electric field (CEF) theory for rare-earth-based quasicrystals (QCs) and approximant crystals (ACs) has prevented an understanding of their electronic states. The recent formulation of a CEF theory based on the point-charge model has made it possible to analyze the CEF microscopically. Here, by applying this formulation to the QC Au-SM-Dy (SM = Si, Ge, Al, and Ga) and the AC, we analyze the CEF theoretically. For the Dy^3+ ion with the 4f^9 configuration, the CEF Hamiltonian is diagonalized in the basis set for the total angular momentum J = 15/2. The ratio of the valences of the screened ligand ions, α = Z_SM/Z_Au, plays an important role in characterizing the CEF ground state. For 0 ≤ α < 0.30, the magnetic easy axis of the CEF ground state is shown to be perpendicular to the mirror plane. For α > 0.30, the magnetic easy axis lies in the mirror plane; as α increases, it rotates clockwise in the mirror plane at the Dy site and tends to approach the pseudo-fivefold axis. Possible relevance of these results to experiments is discussed.
    Comment: 6 pages, 6 figures
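
    For orientation, a CEF Hamiltonian acting within the J = 15/2 multiplet can be written in the standard Stevens-operator form sketched below; in a point-charge model the coefficients B_l^m are fixed by the positions and screened valences of the ligand ions. This is a generic textbook expression, not the paper's specific parameterization.

    ```latex
    % Generic Stevens-operator form of the CEF Hamiltonian (textbook sketch,
    % not the paper's parameterization). Within the J = 15/2 multiplet it is
    % a 16x16 matrix to diagonalize. In a point-charge model, B_l^m is set by
    % the ligand positions R_i and screened charges Z_i (schematically):
    H_{\mathrm{CEF}} = \sum_{l=2,4,6} \sum_{m=-l}^{l} B_l^m \, O_l^m(\mathbf{J}),
    \qquad
    B_l^m \;\propto\; \sum_{i\,\in\,\mathrm{ligands}} \frac{Z_i e^2}{R_i^{\,l+1}}
    \times (\text{geometric factor}).
    ```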

    Transfer learning of language-independent end-to-end ASR with language model fusion

    This work explores better methods for adapting to low-resource languages using an external language model (LM) within a transfer-learning framework. We first build a language-independent ASR system in a unified sequence-to-sequence (S2S) architecture with a vocabulary shared among all languages. During adaptation, we perform LM fusion transfer, in which an external LM is integrated into the decoder network of the attention-based S2S model throughout the adaptation stage, to effectively incorporate linguistic context of the target language. We also investigate various seed models for transfer learning. Experimental evaluations on the IARPA BABEL data set show that, when external text data are available, LM fusion transfer improves performance on all five target languages compared with simple transfer learning. Our final system drastically reduces the performance gap from the hybrid systems.
    Comment: Accepted at ICASSP201
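
    For context, the simplest form of LM integration in decoding is shallow fusion, sketched below; the abstract instead describes integrating the LM into the decoder network itself during adaptation, whose exact equations depend on the fusion variant and are not given here. β is an assumed LM interpolation weight, x the input speech, y the output token sequence.

    ```latex
    % Shallow fusion (generic decoding rule, shown for orientation only;
    % the paper couples the LM to the decoder network rather than combining
    % scores at decoding time). \beta is an LM interpolation weight.
    \hat{y} = \operatorname*{arg\,max}_{y}
      \Big[ \log p_{\mathrm{S2S}}(y \mid x) + \beta \log p_{\mathrm{LM}}(y) \Big]
    ```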