4,117 research outputs found

    Learning Segmentation Masks with the Independence Prior

    An instance with a bad mask might make a composite image that uses it look fake. This observation encourages us to learn segmentation by generating realistic composite images. To achieve this, we propose a novel framework based on Generative Adversarial Networks (GANs) that exploits a newly proposed prior, the independence prior. The generator produces an image using multiple category-specific instance providers, a layout module, and a composition module. First, each provider independently outputs a category-specific instance image with a soft mask. The layout module then corrects the poses of the provided instances. Finally, the composition module combines these instances into a final image. Trained with an adversarial loss and a penalty on mask area, each provider learns a mask that is as small as possible while still covering a complete category-specific instance. Weakly supervised semantic segmentation methods widely use grouping cues that model the association between image parts; these cues are either hand-designed, learned with costly segmentation labels, or modeled only on local pairs. Unlike them, our method automatically models the dependence between arbitrary parts and learns instance segmentation. We apply our framework in two cases: (1) foreground segmentation on category-specific images with box-level annotation, and (2) unsupervised learning of instance appearances and masks from only one image of a homogeneous object cluster (HOC). We obtain appealing results in both tasks, which shows that the independence prior is useful for instance segmentation and that it is possible to learn instance masks without supervision from only one image. Comment: 7+5 pages, 13 figures, Accepted to AAAI 201
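    To make the idea concrete, here is a minimal sketch, in PyTorch, of how provider outputs with soft masks could be composited and trained against an adversarial loss plus a mask-area penalty. It is not the authors' implementation; the `Provider`, `compose`, and `generator_loss` names and the network sizes are illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): composing provider outputs with
# soft masks and penalizing mask area alongside an adversarial loss.
import torch
import torch.nn as nn

class Provider(nn.Module):
    """Hypothetical category-specific instance provider: maps noise to an
    RGB instance image plus a soft mask in [0, 1]."""
    def __init__(self, z_dim=64, img_size=32):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, 4 * img_size * img_size),  # 3 RGB channels + 1 mask channel
        )

    def forward(self, z):
        out = self.net(z).view(-1, 4, self.img_size, self.img_size)
        rgb = torch.tanh(out[:, :3])      # instance appearance
        mask = torch.sigmoid(out[:, 3:])  # soft mask
        return rgb, mask

def compose(background, instances):
    """Alpha-composite (rgb, mask) pairs onto a background, back to front."""
    canvas = background
    for rgb, mask in instances:
        canvas = mask * rgb + (1.0 - mask) * canvas
    return canvas

def generator_loss(d_fake_logits, masks, area_weight=0.1):
    """Non-saturating adversarial loss plus a penalty on total mask area:
    each mask is pushed to be as small as possible while the composite
    must still look real to the discriminator."""
    adv = nn.functional.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    area = torch.stack([m.mean() for m in masks]).sum()
    return adv + area_weight * area
```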

    The Ameliorative Effect of Endomorphin 1 on Nephrotoxicity Induced by Cadmium in Mice

    Abstract: To investigate whether endomorphin 1 (EM1), an antioxidative peptide, can protect against the renal toxicity of cadmium (Cd), which is probably related to oxidative injury. Methods: In vivo assays were designed and performed, including the measurement of oxidative damage parameters and indices of the antioxidative system. Results: Data from our study demonstrated that EM1 could ameliorate the increased concentrations of lipid peroxidation products and protein carboxylation and increase the content of the antioxidative system; the antioxidant capacity of EM1 is probably related to its structure. Conclusion: Our study demonstrated for the first time that the nephrotoxicity induced by Cd can be suppressed by treatment with EM1

    Structure-activity Study of Endomorphin Analogs with C-terminal Substitution

    Abstract: Aims: To further investigate the influence of C-terminal residues on pharmacological activities. Methods: The in vitro and in vivo opioid activities of the C-terminal substitution analogs [L-Tic] EM1 and [L-Tic] EM2 were investigated using a radioligand binding assay, the guinea pig ileum (GPI) assay, the mouse vas deferens (MVD) assay, the systemic arterial pressure (SAP) assay, and the tail-flick test. Results: Our data showed that the analogs produced higher δ-opioid affinity but lower μ-opioid affinity, and dose-dependent but reduced analgesic activity and cardiovascular effects compared with those of the EMs. Moreover, these effects induced by the analogs could be inhibited by naloxone, indicating an opioid mechanism. Conclusion: These results provide suggestive evidence that the substitution of the C-terminal residue may play an important role in the regulation of opioid affinities and pharmacological activities

    Theory Modeling and Empirical Evidence for Value-at-Risk-Based Asset Allocation Insurance Strategies

    Constant Proportion Portfolio Insurance (CPPI) is the most popular portfolio insurance strategy; it uses a hedging strategy to protect principal when a pronounced upward or downward trend in the market is observed. Nevertheless, since the original CPPI was proposed, its performance has been limited by the choice of strategy parameters. Because no clear, definite, and systematic decision rule has yet been proposed, the strategy also suffers from unstable performance and poor upside capture; the multiplier (Mv) in particular has a strong influence on end-of-period return. If the initial value of Mv can be set and dynamically tuned via an appropriate approach, under a sound market-timing mechanism, the strategy can capture excess return through sharply improved upside capture while still providing a hedging function within the insured amount when the market declines. This paper presents a systematic method that uses value-at-risk control to dynamically adjust the CPPI strategy parameter Mv, called the value-at-risk-based asset allocation insurance strategy model (VALIS). We prove that the proposed model is a dynamic asset allocation insurance strategy that is both conservative and aggressive, show that it complies with the characteristics of an ideal portfolio insurance strategy, and demonstrate that it is feasible and effective. From an empirical study of the Pan-Pacific market, we found that in any type of market or trend it clearly outperforms the major benchmark indices as well as other traditional portfolio insurance strategies
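    As a rough illustration of the idea (not the paper's exact VALIS rule), the sketch below sets the CPPI multiplier from an empirical value-at-risk estimate of the risky asset, so the cushion is sized to survive a loss of VaR magnitude in one period. The `var_multiplier` and `cppi_step` names, the 1/VaR heuristic, and the leverage cap are illustrative assumptions.

```python
# Illustrative sketch: a CPPI rebalancing step whose multiplier Mv is tied to
# a value-at-risk estimate of the risky asset's recent returns.
import numpy as np

def var_multiplier(returns, alpha=0.95, cap=8.0):
    """Heuristic: Mv = 1 / |VaR_alpha|, so the cushion survives a loss of
    VaR magnitude in one period. `cap` bounds the leverage."""
    var = -np.quantile(returns, 1.0 - alpha)  # positive loss quantile
    if var <= 0:
        return cap
    return min(1.0 / var, cap)

def cppi_step(portfolio_value, floor_value, returns_window, alpha=0.95):
    """One rebalancing step of a CPPI strategy with a VaR-driven multiplier."""
    cushion = max(portfolio_value - floor_value, 0.0)
    mv = var_multiplier(returns_window, alpha)
    risky = min(mv * cushion, portfolio_value)   # exposure to the risky asset
    safe = portfolio_value - risky               # remainder goes to the reserve asset
    return risky, safe, mv

# Example: daily returns of the risky asset over a trailing 250-day window.
rng = np.random.default_rng(0)
window = rng.normal(0.0005, 0.01, size=250)
risky, safe, mv = cppi_step(portfolio_value=100.0, floor_value=90.0,
                            returns_window=window)
print(f"Mv={mv:.2f}, risky={risky:.2f}, safe={safe:.2f}")
```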

    Exploring the Benefits of Differentially Private Pre-training and Parameter-Efficient Fine-tuning for Table Transformers

    For machine learning with tabular data, the Table Transformer (TabTransformer) is a state-of-the-art neural network model, while Differential Privacy (DP) is an essential component for ensuring data privacy. In this paper, we explore the benefits of combining these two aspects in the scenario of transfer learning -- differentially private pre-training and fine-tuning of TabTransformers with a variety of parameter-efficient fine-tuning (PEFT) methods, including Adapter, LoRA, and Prompt Tuning. Our extensive experiments on the ACSIncome dataset show that these PEFT methods outperform traditional approaches in terms of downstream-task accuracy and the number of trainable parameters, thus achieving an improved trade-off among parameter efficiency, privacy, and accuracy. Our code is available at github.com/IBM/DP-TabTransformer. Comment: submitted to ICASSP 202
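    As a rough sketch of parameter-efficient fine-tuning of a pre-trained layer (not the paper's code or the peft library API), the example below wraps a frozen linear layer with a hand-rolled LoRA adapter so that only the low-rank matrices remain trainable; in the DP setting, the optimizer would additionally be wrapped by a DP-SGD implementation (per-sample gradient clipping plus Gaussian noise), for example via a library such as Opacus. The `LoRALinear` class and its hyperparameters are illustrative assumptions.

```python
# Illustrative sketch: a LoRA adapter around a frozen pre-trained linear layer,
# so that only the low-rank matrices A and B are updated during fine-tuning.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pre-trained weights
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # y = base(x) + scaling * x A^T B^T  (low-rank update of the weight)
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: replace selected linear layers of a pre-trained TabTransformer-style
# model with LoRALinear, then train only the parameters that require gradients.
layer = LoRALinear(nn.Linear(64, 64))
trainable = [p for p in layer.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)
```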

    Block Switching: A Stochastic Approach for Deep Learning Security

    Recent studies of adversarial attacks have revealed the vulnerability of modern deep learning models: subtly crafted perturbations of the input can make a trained network with high accuracy produce arbitrary incorrect predictions while remaining imperceptible to the human visual system. In this paper, we introduce Block Switching (BS), a defense strategy against adversarial attacks based on stochasticity. BS replaces a block of model layers with multiple parallel channels, and the active channel is randomly assigned at run time, hence unpredictable to the adversary. We show empirically that BS leads to a more dispersed input gradient distribution and superior defense effectiveness compared with other stochastic defenses such as stochastic activation pruning (SAP). Compared to other defenses, BS is also characterized by the following features: (i) BS causes less test accuracy drop; (ii) BS is attack-independent; and (iii) BS is compatible with other defenses and can be used jointly with them. Comment: Accepted by AdvML19: Workshop on Adversarial Learning Methods for Machine Learning and Data Mining at KDD, Anchorage, Alaska, USA, August 5th, 2019, 5 page
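    A minimal sketch of the mechanism (not the authors' code): a module that holds several parallel sub-blocks with identical input/output shapes and routes each forward pass through one randomly chosen channel. The `BlockSwitch` name, channel architecture, and tensor sizes are illustrative assumptions.

```python
# Illustrative sketch: a Block Switching layer that picks one of several
# parallel channels at random on every forward pass, making the effective
# input gradient unpredictable to an adversary.
import random
import torch
import torch.nn as nn

class BlockSwitch(nn.Module):
    def __init__(self, channels):
        """`channels` is a list of interchangeable sub-blocks with identical
        input/output shapes (e.g., independently trained copies of a block)."""
        super().__init__()
        self.channels = nn.ModuleList(channels)

    def forward(self, x):
        idx = random.randrange(len(self.channels))  # re-drawn on every call
        return self.channels[idx](x)

# Example: switch among three small convolutional channels inside a classifier.
def make_channel():
    return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())

model = nn.Sequential(BlockSwitch([make_channel() for _ in range(3)]),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
logits = model(torch.randn(8, 3, 32, 32))  # shape: (8, 10)
```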