6 research outputs found

    Improving Performance of Semi-Supervised Learning by Adversarial Attacks

    Semi-supervised learning (SSL) is built on the realistic assumption that access to a large amount of labeled data is difficult. In this study, we present a generalized framework, named SCAR (Selecting Clean samples with Adversarial Robustness), for improving the performance of recent SSL algorithms. By adversarially attacking pre-trained semi-supervised models, our framework achieves substantial gains in image classification. We show how adversarial attacks select high-confidence unlabeled samples, which are then labeled with the model's current predictions. On CIFAR-10, three recent SSL algorithms combined with SCAR achieve significantly improved image classification.
    Comment: 4 pages
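
The abstract describes selecting unlabeled samples whose predictions survive an adversarial attack and pseudo-labeling them. Below is a minimal sketch of that idea in PyTorch, assuming a one-step FGSM attack as the adversary; SCAR's actual attack and selection rule may differ, and all names here are illustrative.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, eps=8 / 255):
    """One-step FGSM attack against the model's own predicted labels."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    pseudo = logits.argmax(dim=1)              # current predictions as targets
    loss = F.cross_entropy(logits, pseudo)
    (grad,) = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def select_robust_samples(model, x_unlabeled, eps=8 / 255):
    """Keep samples whose prediction is unchanged under attack ("clean"
    samples in SCAR's sense) and pseudo-label them with that prediction."""
    x_adv = fgsm_perturb(model, x_unlabeled, eps)
    with torch.no_grad():
        clean_pred = model(x_unlabeled).argmax(dim=1)
        adv_pred = model(x_adv).argmax(dim=1)
    keep = clean_pred.eq(adv_pred)             # robustness as a confidence proxy
    return x_unlabeled[keep], clean_pred[keep]
```

The selected pairs would then be fed back into the SSL algorithm's labeled pool; the interaction with a specific SSL method is omitted here.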

    Transmembrane topology and oligomeric nature of an astrocytic membrane protein, MLC1

    MLC1 is a membrane protein expressed mainly in astrocytes, and mutations in its gene lead to the development of a leukodystrophy, megalencephalic leukoencephalopathy with subcortical cysts. The biochemical properties of the MLC1 protein are currently largely unknown. In this study, we aimed to characterize the transmembrane (TM) topology and oligomeric state of the MLC1 protein. Systematic immunofluorescence staining revealed that the MLC1 protein has eight TM domains and that both the N- and C-termini face the cytoplasm. We found that MLC1 can be purified as an oligomer and forms a trimeric complex in both detergent micelles and reconstituted proteoliposomes. In addition, a single-molecule photobleaching experiment showed that MLC1 complexes consist of three MLC1 monomers in reconstituted proteoliposomes. These results provide a basis for both the high-resolution structural determination and the functional characterization of the MLC1 protein.

    SLIDE: A surrogate fairness constraint to ensure fairness consistency

    © 2022 Elsevier Ltd. As they play a crucial role in social decision making, AI algorithms based on ML models should be not only accurate but also fair. Among the many algorithms for fair AI, learning a prediction ML model by minimizing the empirical risk (e.g., cross-entropy) subject to a given fairness constraint has received much attention. To avoid computational difficulty, however, the given fairness constraint is replaced by a surrogate fairness constraint, just as the 0–1 loss is replaced by a convex surrogate loss in classification problems. In this paper, we investigate the validity of existing surrogate fairness constraints and propose a new surrogate fairness constraint, called SLIDE, which is computationally feasible and asymptotically valid in the sense that the learned model satisfies the fairness constraint asymptotically and achieves a fast convergence rate. Numerical experiments confirm that SLIDE works well on various benchmark datasets.
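
Schematically, the constrained learning problem both SLIDE entries describe can be written as follows; this is a generic formulation for illustration, not the paper's exact notation.

```latex
% Surrogate-constrained ERM (schematic): the 0-1-based disparity measure
% is replaced by a differentiable surrogate \widehat{\Delta}_\phi, just as
% the 0-1 loss is replaced by a convex surrogate loss \ell.
\min_{f \in \mathcal{F}} \;\; \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(f(x_i), y_i\bigr)
\quad \text{subject to} \quad \widehat{\Delta}_\phi(f) \le \delta
```

For demographic parity, for instance, \widehat{\Delta}_\phi(f) would be the absolute gap between group-wise averages of a surrogate score \phi(f(x_i)) across the sensitive groups, with \delta the tolerated unfairness.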

    SLIDE: a surrogate fairness constraint to ensure fairness consistency

    As they have a vital effect on social decision making, AI algorithms should be not only accurate but also fair. Among the various algorithms for fair AI, learning a prediction model by minimizing the empirical risk (e.g., cross-entropy) subject to a given fairness constraint has received much attention. To avoid computational difficulty, however, the given fairness constraint is replaced by a surrogate fairness constraint, just as the 0-1 loss is replaced by a convex surrogate loss in classification problems. In this paper, we investigate the validity of existing surrogate fairness constraints and propose a new surrogate fairness constraint, called SLIDE, which is computationally feasible and asymptotically valid in the sense that the learned model satisfies the fairness constraint asymptotically and achieves a fast convergence rate. Numerical experiments confirm that SLIDE works well on various benchmark datasets.
    Comment: 17 pages, including appendix and references
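
In practice, such a constraint is often handled via a Lagrangian-style penalty. The sketch below shows that generic recipe in PyTorch with a smooth softmax-score surrogate of the demographic-parity gap; it is not SLIDE's specific surrogate, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def fair_penalized_loss(logits, y, s, lam=1.0):
    """Cross-entropy plus a smooth surrogate of the demographic-parity gap.

    s is a binary sensitive attribute (0/1); the batch is assumed to
    contain both groups. Softmax scores stand in for 0-1 decisions so
    the penalty is differentiable.
    """
    ce = F.cross_entropy(logits, y)
    score = torch.softmax(logits, dim=1)[:, 1]          # P(y_hat = 1)
    gap = score[s == 1].mean() - score[s == 0].mean()   # surrogate disparity
    return ce + lam * gap.abs()
```

Minimizing this penalized loss over minibatches relaxes the hard constraint above; the paper's analysis concerns when such surrogate constraints remain asymptotically faithful to the original 0-1-based constraint.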

    Learning fair representation with a parametric integral probability metric

    As they have a vital effect on social decision-making, AI algorithms should be not only accurate but also fair. Among the various algorithms for fair AI, learning fair representation (LFR), whose goal is to find a representation that is fair with respect to sensitive variables such as gender and race, has received much attention. For LFR, an adversarial training scheme is commonly employed, as in generative adversarial network (GAN)-type algorithms; the choice of discriminator, however, is made heuristically and without justification. In this paper, we propose a new adversarial training scheme for LFR in which the integral probability metric (IPM) with a specific parametric family of discriminators is used. The most notable result of the proposed LFR algorithm is its theoretical guarantee about the fairness of the final prediction model, which had not been considered before. That is, we derive theoretical relations between the fairness of the representation and the fairness of the prediction model built on top of the representation (i.e., using the representation as the input). Moreover, numerical experiments show that our proposed LFR algorithm is computationally lighter and more stable, and that the final prediction model is competitive with or superior to other LFR algorithms using more complex discriminators.
    Comment: 28 pages, including references and appendix
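
The abstract does not specify the parametric discriminator family, so the sketch below uses the unit ball of linear functions, for which the IPM between the group-conditional representation distributions has a closed form (the norm of the mean gap) and no inner adversarial loop is needed; this choice and all names are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps inputs to a representation z that should carry little
    information about the sensitive attribute s."""
    def __init__(self, d_in, d_rep):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                                 nn.Linear(64, d_rep))

    def forward(self, x):
        return self.net(x)

def linear_ipm(z0, z1):
    """IPM over {v^T z : ||v||_2 <= 1}: the supremum of the mean
    difference over this family equals ||E[z|s=0] - E[z|s=1]||_2."""
    return (z0.mean(dim=0) - z1.mean(dim=0)).norm(p=2)

def lfr_loss(encoder, head, x, y, s, lam=1.0):
    """Task loss plus the IPM between group-conditional representations
    (the batch is assumed to contain both groups)."""
    z = encoder(x)
    task = F.cross_entropy(head(z), y)
    fair = linear_ipm(z[s == 0], z[s == 1])
    return task + lam * fair
```

Richer parametric families would require an inner maximization over the discriminator, as in GAN-type training; a closed-form family like the one above is one way a parametric IPM can be computationally lighter than a general discriminator network.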