
    Learning Segmentation Masks with the Independence Prior

    An instance with a bad mask might make a composite image that uses it look fake. This observation encourages us to learn segmentation by generating realistic composite images. To achieve this, we propose a novel framework, based on Generative Adversarial Networks (GANs), that exploits a newly proposed prior we call the independence prior. The generator produces an image using multiple category-specific instance providers, a layout module, and a composition module. First, each provider independently outputs a category-specific instance image with a soft mask. Then the layout module corrects the poses of the provided instances. Finally, the composition module combines these instances into a final image. Trained with an adversarial loss and a penalty on mask area, each provider learns a mask that is as small as possible yet large enough to cover a complete category-specific instance. Weakly supervised semantic segmentation methods widely use grouping cues that model the association between image parts; these cues are either designed by hand, learned with costly segmentation labels, or modeled only on local pairs. Unlike them, our method automatically models the dependence between arbitrary parts and learns instance segmentation. We apply our framework in two cases: (1) foreground segmentation on category-specific images with box-level annotation, and (2) unsupervised learning of instance appearances and masks from only one image of a homogeneous object cluster (HOC). We obtain appealing results in both tasks, which shows that the independence prior is useful for instance segmentation and that it is possible to learn instance masks from only one image without supervision.
    Comment: 7+5 pages, 13 figures, Accepted to AAAI 201
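
    As a concrete illustration of the training objective, the mask-area penalty can be sketched in a few lines of PyTorch. The sketch below assumes a provider that emits an instance image with a soft mask and a discriminator that scores the composite; all names (composite, provider_loss, area_weight) are hypothetical illustrations, not the paper's implementation.

        import torch

        def composite(background, instance_rgb, soft_mask):
            """Alpha-blend a provided instance onto the background via its soft mask."""
            return soft_mask * instance_rgb + (1.0 - soft_mask) * background

        def provider_loss(fake_score, soft_mask, area_weight=0.1):
            """Generator-side loss for one instance provider (hypothetical sketch).

            fake_score: discriminator score of the composite image (higher = more real)
            soft_mask:  the provider's soft mask in [0, 1], shape (B, 1, H, W)
            """
            # Adversarial term: make the composite look real to the discriminator.
            adv_loss = -fake_score.mean()
            # Independence-prior regularizer: penalize mask area so each provider
            # keeps its mask as small as possible while still covering a complete
            # instance (a mask that cuts the instance makes the composite look
            # fake, which the adversarial term punishes).
            area_penalty = soft_mask.mean()
            return adv_loss + area_weight * area_penalty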

    The Ameliorative Effect of Endomorphin 1 on Cadmium-Induced Nephrotoxicity in Mice

    Aims: To determine whether endomorphin 1 (EM1), an antioxidative peptide, can protect against the renal toxicity of cadmium (Cd), which is probably related to oxidative injury.
    Methods: In vivo assays were designed and performed, including measurement of oxidative damage parameters and indices of the antioxidative system.
    Results: Data from our study demonstrated that treatment with EM1 ameliorated the increased concentrations of lipid peroxidation products and protein carbonylation and increased the capacity of the antioxidative system; the antioxidant capacity of EM1 is probably related to its structure.
    Conclusion: Our study is the first to demonstrate that the nephrotoxicity induced by Cd can be suppressed by treatment with EM1.

    Structure-Activity Study of Endomorphin Analogs with C-Terminal Substitution

    Aims: To further investigate the influence of C-terminal residues on pharmacological activities.
    Methods: The in vitro and in vivo opioid activities of the C-terminal substitution analogs [L-Tic]EM1 and [L-Tic]EM2 were investigated using the radioligand binding assay, the guinea pig ileum (GPI) assay, the mouse vas deferens (MVD) assay, the systemic arterial pressure (SAP) assay, and the tail-flick test.
    Results: Our data showed that the analogs produced higher δ-opioid affinity but lower μ-opioid affinity, and dose-dependent but reduced analgesic activity and cardiovascular effects compared with those of the EMs. Moreover, the effects induced by the analogs could be inhibited by naloxone, indicating an opioid mechanism.
    Conclusion: These results provide suggestive evidence that substitution of the C-terminal residue may play an important role in the regulation of opioid affinities and pharmacological activities.

    Exploring the Benefits of Differentially Private Pre-training and Parameter-Efficient Fine-tuning for Table Transformers

    For machine learning with tabular data, the Table Transformer (TabTransformer) is a state-of-the-art neural network model, while Differential Privacy (DP) is an essential component for ensuring data privacy. In this paper, we explore the benefits of combining these two aspects in the transfer-learning scenario: differentially private pre-training and fine-tuning of TabTransformers with a variety of parameter-efficient fine-tuning (PEFT) methods, including Adapter, LoRA, and Prompt Tuning. Our extensive experiments on the ACSIncome dataset show that these PEFT methods outperform traditional approaches in terms of downstream-task accuracy and the number of trainable parameters, thus achieving an improved trade-off among parameter efficiency, privacy, and accuracy. Our code is available at github.com/IBM/DP-TabTransformer.
    Comment: submitted to ICASSP 202
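
    As a rough sketch of one such combination, the snippet below hand-rolls a LoRA adapter around a frozen pre-trained linear layer and wraps training with Opacus DP-SGD. The tiny two-layer backbone, the rank, and the noise settings are illustrative assumptions, not the paper's TabTransformer configuration.

        import torch
        import torch.nn as nn
        from torch.utils.data import DataLoader, TensorDataset
        from opacus import PrivacyEngine

        class LoRALinear(nn.Module):
            """A frozen pre-trained linear layer plus a trainable low-rank update."""
            def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
                super().__init__()
                self.base = base
                for p in self.base.parameters():
                    p.requires_grad = False         # freeze the pre-trained weights
                self.lora_a = nn.Linear(base.in_features, rank, bias=False)
                self.lora_b = nn.Linear(rank, base.out_features, bias=False)
                nn.init.zeros_(self.lora_b.weight)  # the update starts at zero
                self.scale = alpha / rank

            def forward(self, x):
                return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

        # Stand-in for a pre-trained backbone; only the LoRA factors and the
        # classification head are trainable.
        model = nn.Sequential(LoRALinear(nn.Linear(32, 64)), nn.ReLU(), nn.Linear(64, 2))
        optimizer = torch.optim.SGD(
            [p for p in model.parameters() if p.requires_grad], lr=0.1)
        loader = DataLoader(
            TensorDataset(torch.randn(256, 32), torch.randint(0, 2, (256,))),
            batch_size=32)

        # DP-SGD via Opacus: per-sample gradient clipping plus Gaussian noise,
        # applied to exactly the small set of trainable (PEFT) parameters.
        model, optimizer, loader = PrivacyEngine().make_private(
            module=model, optimizer=optimizer, data_loader=loader,
            noise_multiplier=1.0, max_grad_norm=1.0)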

    A Theoretical Understanding of Shallow Vision Transformers: Learning, Generalization, and Sample Complexity

    Vision Transformers (ViTs) with self-attention modules have recently achieved great empirical success in many vision tasks. Due to the non-convex interactions across layers, however, theoretical learning and generalization analysis has remained mostly elusive. Based on a data model that characterizes both label-relevant and label-irrelevant tokens, this paper provides the first theoretical analysis of training a shallow ViT, i.e., one self-attention layer followed by a two-layer perceptron, for a classification task. We characterize the sample complexity required to achieve zero generalization error. Our sample complexity bound is positively correlated with the inverse of the fraction of label-relevant tokens, the token noise level, and the initial model error. We also prove that training with stochastic gradient descent (SGD) leads to a sparse attention map, a formal verification of the general intuition about why attention succeeds. Moreover, this paper shows that proper token sparsification can improve test performance by removing label-irrelevant and/or noisy tokens, including spurious correlations. Empirical experiments on synthetic data and the CIFAR-10 dataset support our theoretical results and generalize to deeper ViTs.
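
    The shallow architecture under analysis is easy to write down. The PyTorch sketch below follows the abstract's description (one self-attention layer followed by a two-layer perceptron); the token count, layer widths, and mean-pooling classification head are illustrative assumptions.

        import torch
        import torch.nn as nn

        class ShallowViT(nn.Module):
            """One self-attention layer followed by a two-layer perceptron."""
            def __init__(self, dim=64, hidden=128, num_classes=2):
                super().__init__()
                self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
                self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                         nn.Linear(hidden, num_classes))

            def forward(self, tokens):            # tokens: (batch, num_tokens, dim)
                out, attn_map = self.attn(tokens, tokens, tokens)
                # attn_map is the attention map that, per the analysis, SGD training
                # drives toward sparsity, concentrating on label-relevant tokens.
                return self.mlp(out.mean(dim=1)), attn_map

        logits, attn_map = ShallowViT()(torch.randn(8, 16, 64))  # 16 tokens, width 64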