
    On Saliency Maps and Adversarial Robustness

    A very recent trend has emerged to couple the notions of interpretability and adversarial robustness, unlike earlier efforts which focused solely on good interpretations or on robustness against adversaries. Works have shown that adversarially trained models exhibit more interpretable saliency maps than their non-robust counterparts, and that this behavior can be quantified by considering the alignment between the input image and the saliency map. In this work, we provide a different perspective on this coupling and propose a method, Saliency-based Adversarial Training (SAT), that uses saliency maps to improve the adversarial robustness of a model. In particular, we show that using annotations such as bounding boxes and segmentation masks, already provided with a dataset, as weak saliency maps suffices to improve adversarial robustness with no additional effort to generate the perturbations themselves. Our empirical results on the CIFAR-10, CIFAR-100, Tiny ImageNet and Flower-17 datasets consistently corroborate our claim by showing improved adversarial robustness using our method. We also show how using finer and stronger saliency maps leads to more robust models, and how integrating SAT with existing adversarial training methods further boosts their performance. Comment: Accepted at ECML-PKDD 2020, Acknowledgements added
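    The abstract's idea of reusing dataset annotations as weak saliency maps could be sketched as below. This is a toy illustration of one plausible preprocessing step, not the paper's actual SAT pipeline; the function name and box format are assumptions.

    ```python
    import numpy as np

    def box_to_saliency(h, w, box):
        """Turn a bounding-box annotation into a weak binary saliency map:
        1 inside the box, 0 elsewhere. `box` is (y0, x0, y1, x1), half-open."""
        y0, x0, y1, x1 = box
        mask = np.zeros((h, w), dtype=np.float32)
        mask[y0:y1, x0:x1] = 1.0
        return mask

    # A 4x5 "image" whose object occupies rows 1-2, columns 1-3
    m = box_to_saliency(4, 5, (1, 1, 3, 4))
    ```

    A segmentation mask would serve the same role directly, giving the "finer and stronger" saliency maps the abstract mentions.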

    Cushing Syndrome Caused by Pituitary and Adrenal Hybrid Tumor: A Rare Case Report

    Introduction: Cushing syndrome is a rare endocrine disorder caused by a variety of underlying etiologies. It can be due to exogenous or endogenous high cortisol levels (ACTH-dependent or ACTH-independent). We herein report a case of ACTH-dependent Cushing syndrome caused by a pituitary and adrenal hybrid adenoma. Case report: A 42-year-old female presented with a complaint of hematemesis. She had hirsutism, central obesity and violaceous striae on her abdomen and thigh. On detailed clinical examination and relevant investigation, we found that the cause of hematemesis was esophagitis with a necrotic gastric ulcer due to Cushing syndrome caused by the pituitary and adrenal hybrid adenoma. Discussion: Cushing syndrome is a rare endocrine disorder characterized by increased exogenous or endogenous serum cortisol levels, which lead to various clinical presentations. Early identification of the disease and its cause is critical. The entire clinical presentation must be considered for a correct diagnosis, which is generally delayed because the symptoms of the disease overlap with those seen across various specialities. Conclusion: Diagnosis and management of Cushing's syndrome continues to present considerable challenges and necessitates referral to higher centers. Its diverse presentation warrants a complete clinical, physical, radiological and endocrine examination.

    On the benefits of defining vicinal distributions in latent space

    The vicinal risk minimization (VRM) principle is an empirical risk minimization (ERM) variant that replaces Dirac masses with vicinal functions. There is strong numerical and theoretical evidence showing that VRM outperforms ERM in terms of generalization if appropriate vicinal functions are chosen. Mixup Training (MT), a popular choice of vicinal distribution, improves the generalization performance of models by introducing globally linear behavior in between training examples. Apart from generalization, recent works have shown that mixup-trained models are relatively robust to input perturbations/corruptions and at the same time are better calibrated than their non-mixup counterparts. In this work, we investigate the benefits of defining these vicinal distributions, like mixup, in the latent space of generative models rather than in the input space itself. We propose a new approach - \textit{VarMixup (Variational Mixup)} - to better sample mixup images by using the latent manifold underlying the data. Our empirical studies on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that models trained by performing mixup in the latent manifold learned by VAEs are inherently more robust to various input corruptions/perturbations, are significantly better calibrated, and exhibit more local-linear loss landscapes. Comment: Accepted at Elsevier Pattern Recognition Letters (2021), Best Paper Award at CVPR 2021 Workshop on Adversarial Machine Learning in Real-World Computer Vision (AML-CV), Also accepted at ICLR 2021 Workshops on Robust-Reliable Machine Learning (Oral) and Generalization beyond the training distribution (Abstract)
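    The input-space mixup that the abstract builds on can be sketched in a few lines: each synthetic example is a convex combination of two training examples and their one-hot labels, with the coefficient drawn from a Beta distribution. This is a minimal toy sketch of standard mixup, not the paper's VarMixup (which would interpolate VAE latents instead of raw inputs); the alpha value and array shapes are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def mixup(x1, y1, x2, y2, lam):
        # Convex combination of inputs and of their one-hot labels
        return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

    # Toy example: two 4-pixel "images" with one-hot labels
    x1, y1 = np.array([1.0, 0.0, 0.0, 0.0]), np.array([1.0, 0.0])
    x2, y2 = np.array([0.0, 0.0, 0.0, 1.0]), np.array([0.0, 1.0])
    lam = rng.beta(0.2, 0.2)  # mixing coefficient ~ Beta(alpha, alpha)
    x_mix, y_mix = mixup(x1, y1, x2, y2, lam)
    ```

    In the VarMixup setting described above, the interpolation would instead happen between encoded latents, with the mixed latent decoded back to image space.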

    Applying Dijkstra's Algorithm in the Routing Process

    A network is defined as a combination of two or more nodes which are connected with each other. It allows nodes to exchange data along the data connections. Routing is the process of finding a path between source and destination upon a request for data transmission. There are various routing algorithms which help in determining the path and distance over the network traffic. For the routing of nodes, we can use many routing protocols. Dijkstra's algorithm is one of the best shortest-path search algorithms. Our focus and aim is to find the shortest path from the source node to the destination node. For finding the minimum path, this algorithm uses the connection matrix and the weight matrix. Thus, a matrix consisting of paths from the source node to each node is formed. We then choose the destination's column from the path matrix formed, and we get the shortest path. In a similar way, we choose a column from a mindis matrix to find the minimum distance from the source node to the destination node. The algorithm has been applied in computer networking for the routing of systems and in Google Maps to find the shortest possible path from one location to another.
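    The matrix-based procedure described above can be sketched as follows. This is a standard heap-based Dijkstra over a weight matrix (with `inf` marking absent edges) plus a predecessor array for path reconstruction, a minimal sketch rather than the paper's exact matrix formulation.

    ```python
    import heapq

    def dijkstra(weights, source):
        """Shortest distances from `source` over a weight matrix.
        weights[i][j] is the edge cost, or float('inf') if no edge."""
        n = len(weights)
        dist = [float('inf')] * n
        prev = [None] * n  # predecessor array, plays the role of the path matrix
        dist[source] = 0
        pq = [(0, source)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue  # stale queue entry
            for v in range(n):
                nd = d + weights[u][v]
                if nd < dist[v]:
                    dist[v] = nd
                    prev[v] = u
                    heapq.heappush(pq, (nd, v))
        return dist, prev

    def path_to(prev, target):
        # Walk predecessors back to the source, then reverse
        path = []
        while target is not None:
            path.append(target)
            target = prev[target]
        return path[::-1]

    INF = float('inf')
    W = [
        [0,   4,   1,   INF],
        [4,   0,   2,   5],
        [1,   2,   0,   8],
        [INF, 5,   8,   0],
    ]
    dist, prev = dijkstra(W, 0)
    print(dist)              # [0, 3, 1, 8]
    print(path_to(prev, 3))  # [0, 2, 1, 3]
    ```

    Here `dist` corresponds to a column of the mindis matrix (minimum distance from the source to every node), and following `prev` recovers the shortest path itself.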

    Charting the Right Manifold: Manifold Mixup for Few-shot Learning

    Few-shot learning algorithms aim to learn model parameters capable of adapting to unseen classes with the help of only a few labeled examples. A recent regularization technique - Manifold Mixup - focuses on learning a general-purpose representation, robust to small changes in the data distribution. Since the goal of few-shot learning is closely linked to robust representation learning, we study Manifold Mixup in this problem setting. Self-supervised learning is another technique that learns semantically meaningful features, using only the inherent structure of the data. This work investigates the role of learning a relevant feature manifold for few-shot tasks using self-supervision and regularization techniques. We observe that regularizing the feature manifold, enriched via self-supervised techniques, with Manifold Mixup significantly improves few-shot learning performance. We show that our proposed method S2M2 beats the current state-of-the-art accuracy on standard few-shot learning datasets like CIFAR-FS, CUB and miniImageNet by 3-8%. Through extensive experimentation, we show that the features learned using our approach generalize to complex few-shot evaluation tasks and cross-domain scenarios, and are robust against slight changes to the data distribution.
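    Manifold Mixup, the regularizer the abstract builds on, differs from input-space mixup in that interpolation happens between hidden activations at a randomly chosen layer. A minimal sketch on a toy ReLU network is shown below; the layer sizes and weight initialization are illustrative assumptions, not the paper's S2M2 architecture.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def layer(h, W):
        # One ReLU layer of a toy fully-connected network
        return np.maximum(h @ W, 0.0)

    def manifold_mixup_forward(x1, x2, Ws, k, lam):
        """Forward two inputs to layer k, mix their hidden representations,
        then continue through the remaining layers with the mixed features."""
        h1, h2 = x1, x2
        for W in Ws[:k]:
            h1, h2 = layer(h1, W), layer(h2, W)
        h = lam * h1 + (1 - lam) * h2  # mix in feature space, not pixel space
        for W in Ws[k:]:
            h = layer(h, W)
        return h

    # Toy 3-layer network: 4 -> 8 -> 8 -> 3
    Ws = [rng.normal(size=(4, 8)), rng.normal(size=(8, 8)), rng.normal(size=(8, 3))]
    x1, x2 = rng.normal(size=4), rng.normal(size=4)
    out = manifold_mixup_forward(x1, x2, Ws, k=1, lam=0.7)
    ```

    The corresponding training loss would mix the two labels with the same coefficient, exactly as in input-space mixup.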

    AdvGAN++ : Harnessing latent layers for adversary generation

    Adversarial examples are fabricated examples, indistinguishable from the original images, that mislead neural networks and drastically lower their performance. The recently proposed AdvGAN, a GAN-based approach, takes the input image as a prior for generating adversaries targeting a model. In this work, we show how latent features can serve as better priors than input images for adversary generation by proposing AdvGAN++, a version of AdvGAN that achieves higher attack rates than AdvGAN and at the same time generates perceptually realistic images on the MNIST and CIFAR-10 datasets.

    Development and evaluation of modified locust bean microparticles for controlled drug delivery

    The objective of the present study was to minimize the unwanted toxic effects of the antihypertensive drug diltiazem hydrochloride (DTZ) by kinetic control of drug release from microparticles using chemically modified locust bean gum (MLBG) as the carrier. DTZ was entrapped into gastro-resistant, biodegradable locust bean gum microparticles using the emulsification method. Solid, discrete, reproducible, free-flowing microparticles were obtained. The yield of the microparticles was up to 95%. More than 97% of the isolated microparticles were in the particle size range of 325 to 455 µm. The obtained angle of repose, % Carr index and tapped density values were well within the limits, indicating that the prepared microparticles had smooth surfaces, free flow and good packing properties. Scanning electron microscopy photographs and the calculated sphericity factor confirmed that the prepared formulations are spherical in nature. The prepared microparticles were stable and compatible, as confirmed by DSC and FT-IR studies. It was observed that there is no significant release of the drug at gastric pH. The drug release was controlled for more than 12 h. Intestinal drug release from the microparticles was studied and compared with the release behavior of the commercially available oral formulation Cardizem® CD. The release kinetics followed different transport mechanisms.