
    Non-Abelian Quantum Hall Effect in Topological Flat Bands

    Inspired by the recent theoretical discovery of robust fractional topological phases without a magnetic field, we search for the non-Abelian quantum Hall effect (NA-QHE) in lattice models with topological flat bands (TFBs). Through extensive numerical studies of the Haldane model with three-body hard-core bosons loaded into a TFB, we find convincing numerical evidence of a stable ν = 1 bosonic NA-QHE, with the characteristic three-fold quasi-degeneracy of ground states on a torus, a quantized Chern number, and a robust spectral gap. Moreover, the spectrum for two-quasihole states also shows a finite energy gap, with the number of states in the lower-energy sector satisfying the same counting rule as the Moore-Read Pfaffian state.
    Comment: 5 pages, 7 figures
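
    For reference, a minimal LaTeX sketch of the tight-binding Hamiltonian usually meant by "the Haldane model" in such TFB studies is given below; the specific hopping amplitudes, flux pattern, and any further-neighbor terms the authors use to flatten the lowest band are assumptions here, not taken from the paper.

        H = -t_1 \sum_{\langle i,j \rangle} \bigl( b_i^{\dagger} b_j + \text{h.c.} \bigr)
            - t_2 \sum_{\langle\langle i,j \rangle\rangle} \bigl( e^{i\phi_{ij}} b_i^{\dagger} b_j + \text{h.c.} \bigr)

    Here t_2/t_1 and the second-neighbor phases \phi_{ij} are assumed to be tuned so the lowest Chern band becomes nearly flat, and the three-body hard-core condition (b_i^{\dagger})^3 = 0 allows at most two bosons per site.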

    Towards Trustworthy Dataset Distillation

    Efficiency and trustworthiness are two eternal pursuits when applying deep learning in real-world applications. With regard to efficiency, dataset distillation (DD) endeavors to reduce training costs by distilling a large dataset into a tiny synthetic dataset. However, existing methods merely concentrate on in-distribution (InD) classification in a closed-world setting, disregarding out-of-distribution (OOD) samples. On the other hand, OOD detection aims to enhance models' trustworthiness, but it is usually achieved inefficiently in full-data settings. For the first time, we consider both issues simultaneously and propose a novel paradigm called Trustworthy Dataset Distillation (TrustDD). By distilling both InD samples and outliers, the condensed datasets can train models competent in both InD classification and OOD detection. To alleviate the requirement for real outlier data and make OOD detection more practical, we further propose to corrupt InD samples to generate pseudo-outliers and introduce Pseudo-Outlier Exposure (POE). Comprehensive experiments on various settings demonstrate the effectiveness of TrustDD, and the proposed POE surpasses the state-of-the-art method Outlier Exposure (OE). Compared with preceding DD methods, TrustDD is more trustworthy and applicable to real open-world scenarios. Our code will be publicly available.
    Comment: 20 pages, 20 figures
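
    For concreteness, the sketch below illustrates one way an Outlier-Exposure-style objective with pseudo-outliers built from corrupted InD samples could look in PyTorch; the corruption (Gaussian noise), the loss weight lam, and the function names are illustrative assumptions, not TrustDD's actual recipe.

        # Hedged sketch of Outlier-Exposure-style training with pseudo-outliers.
        # The corruption used to build pseudo-outliers and the weight `lam` are
        # illustrative assumptions, not TrustDD's actual recipe.
        import torch
        import torch.nn.functional as F

        def make_pseudo_outliers(x_ind):
            """Corrupt InD images (here: heavy Gaussian noise, assuming inputs in [0, 1])."""
            return (x_ind + 0.5 * torch.randn_like(x_ind)).clamp(0.0, 1.0)

        def oe_style_loss(model, x_ind, y_ind, lam=0.5):
            logits_ind = model(x_ind)
            ce = F.cross_entropy(logits_ind, y_ind)      # InD classification term

            x_out = make_pseudo_outliers(x_ind)
            logits_out = model(x_out)
            # OE term: push outlier predictions toward the uniform distribution,
            # i.e. the average cross-entropy against uniform targets.
            log_probs_out = F.log_softmax(logits_out, dim=1)
            oe = -log_probs_out.mean()
            return ce + lam * oe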

    Robust Classification with Convolutional Prototype Learning

    Convolutional neural networks (CNNs) have been widely used for image classification. Despite their high accuracy, CNNs have been shown to be easily fooled by adversarial examples, indicating that they are not robust enough for pattern classification. In this paper, we argue that this lack of robustness is caused by the softmax layer, which is a purely discriminative model based on the closed-world assumption (i.e., a fixed number of categories). To improve robustness, we propose a novel learning framework called convolutional prototype learning (CPL). The advantage of using prototypes is that they can handle the open-world recognition problem well and therefore improve robustness. Under the CPL framework, we design multiple classification criteria to train the network. Moreover, a prototype loss (PL) is proposed as a regularization term to improve the intra-class compactness of the feature representation, which can be viewed as a generative model based on a Gaussian assumption for the different classes. Experiments on several datasets demonstrate that CPL can achieve comparable or even better results than a traditional CNN and, from the robustness perspective, shows great advantages on both rejection and incremental category learning tasks.
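
    The sketch below shows how a generic prototype head with distance-based logits and a prototype-loss regularizer is commonly set up; one prototype per class, squared-Euclidean logits, and the weight pl_weight are assumptions, not necessarily the paper's exact classification criteria.

        # Hedged sketch of a prototype classifier with a prototype-loss regularizer.
        # One prototype per class and squared-Euclidean logits are illustrative
        # assumptions; the paper defines several classification criteria.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class PrototypeHead(nn.Module):
            def __init__(self, feat_dim, num_classes):
                super().__init__()
                self.prototypes = nn.Parameter(torch.randn(num_classes, feat_dim))

            def forward(self, features):
                # Negative squared distance to each prototype acts as the logit.
                d2 = torch.cdist(features, self.prototypes) ** 2
                return -d2

        def cpl_style_loss(head, features, labels, pl_weight=0.01):
            logits = head(features)
            ce = F.cross_entropy(logits, labels)          # discriminative criterion
            # Prototype loss: pull each feature toward its own class prototype,
            # encouraging intra-class compactness.
            pl = ((features - head.prototypes[labels]) ** 2).sum(dim=1).mean()
            return ce + pl_weight * pl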

    2-Chloromethyl-2,3-dihydrothieno[3,4-b][1,4]dioxine

    In the molecule of the title compound, C7H7ClO2S, the six-membered ring adopts a twisted conformation. In the crystal structure, weak intermolecular C—H⋯O hydrogen bonds link the molecules. There is also a weak C—H⋯π interaction.

    CasNet: Investigating Channel Robustness for Speech Separation

    Recording-channel mismatch between training and testing conditions has been shown to be a serious problem for speech separation: it greatly reduces separation performance and cannot meet the requirements of daily use. In this study, continuing to use our previously constructed TAT-2mix corpus, we address the channel mismatch problem by proposing a channel-aware audio separation network (CasNet), a deep learning framework for end-to-end time-domain speech separation. CasNet is implemented on top of TasNet. A channel embedding (characterizing the channel information in a mixture of multiple utterances), generated by a Channel Encoder, is introduced into the separation module via the FiLM technique. Through two training strategies, we explore two roles the channel embedding may play: 1) a real-life noise disturbance that makes the model more robust, or 2) a guide that instructs the separation model to retain the desired channel information. Experimental results on TAT-2mix show that CasNet trained with either of the two training strategies outperforms the TasNet baseline, which does not use channel embeddings.
    Comment: Submitted to ICASSP 202
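
    As an illustration of FiLM-style conditioning on a channel embedding, a minimal PyTorch sketch is given below; the layer names, tensor shapes, and where the modulation is applied inside the separator are assumptions, not CasNet's actual architecture.

        # Hedged sketch of FiLM conditioning with a channel embedding.
        # Layer names and sizes are assumptions, not CasNet's actual design.
        import torch
        import torch.nn as nn

        class FiLM(nn.Module):
            """Feature-wise Linear Modulation: h -> gamma(z) * h + beta(z)."""
            def __init__(self, emb_dim, num_features):
                super().__init__()
                self.to_gamma = nn.Linear(emb_dim, num_features)
                self.to_beta = nn.Linear(emb_dim, num_features)

            def forward(self, h, channel_emb):
                # h: (batch, num_features, time), channel_emb: (batch, emb_dim)
                gamma = self.to_gamma(channel_emb).unsqueeze(-1)
                beta = self.to_beta(channel_emb).unsqueeze(-1)
                return gamma * h + beta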

    catena-Poly[(dichloridozinc)-μ-1-{4-[(1H-imidazol-1-yl)methyl]benzyl}-1H-imidazole-κ²N³:N³′]

    The asymmetric unit of the title compound, [ZnCl2(C14H14N4)]n, contains a Zn(II) ion situated on a twofold rotation axis and one-half of a 1-{4-[(1H-imidazol-1-yl)methyl]benzyl}-1H-imidazole (L) ligand with the benzene ring situated on an inversion center. The Zn(II) ion is coordinated by two chloride anions and two N atoms from two L ligands in a distorted tetrahedral geometry. The L ligands bridge ZnCl2 fragments into polymeric chains parallel to [20-1].