
    Multi-Agent Diverse Generative Adversarial Networks (MAD-GANs)

    We propose MAD-GAN, an intuitive generalization of Generative Adversarial Networks (GANs) and their conditional variants that addresses the well-known problem of mode collapse. First, MAD-GAN is a multi-agent GAN architecture incorporating multiple generators and one discriminator. Second, to enforce that different generators capture diverse high-probability modes, the discriminator of MAD-GAN is designed such that, along with distinguishing real samples from fake ones, it must also identify which generator produced a given fake sample. Intuitively, to succeed in this task, the discriminator must learn to push different generators towards different identifiable modes. We perform extensive experiments on synthetic and real datasets and compare MAD-GAN with different variants of GAN. We show high-quality, diverse sample generations for challenging tasks such as image-to-image translation and face generation. In addition, we show that MAD-GAN is able to disentangle different modalities when trained on a highly challenging diverse-class dataset (e.g. a dataset with images of forests, icebergs, and bedrooms). Finally, we show its efficacy on the unsupervised feature representation task.
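    A minimal sketch of the multi-generator objective described above, assuming PyTorch, K generators, and a discriminator with K+1 softmax outputs (an illustration of the idea, not the authors' released code):

```python
# Hedged sketch of the MAD-GAN-style objective: classes 0..K-1 mean
# "fake, produced by generator k"; class K means "real". z_dim is an assumed
# latent dimension, and disc/generators are assumed nn.Module callables.
import torch
import torch.nn.functional as F

def discriminator_loss(disc, real_batch, generators, z_dim=100):
    K = len(generators)
    device = real_batch.device
    # Real samples should be classified into the extra "real" class K.
    real_targets = torch.full((real_batch.size(0),), K, dtype=torch.long, device=device)
    loss = F.cross_entropy(disc(real_batch), real_targets)
    # Each fake sample should be attributed to the generator that produced it.
    for k, gen in enumerate(generators):
        z = torch.randn(real_batch.size(0), z_dim, device=device)
        fake = gen(z).detach()
        fake_targets = torch.full((fake.size(0),), k, dtype=torch.long, device=device)
        loss = loss + F.cross_entropy(disc(fake), fake_targets)
    return loss

def generator_loss(disc, gen, batch_size, z_dim=100, device="cpu"):
    # Each generator tries to get its samples classified as "real" (class K).
    z = torch.randn(batch_size, z_dim, device=device)
    logits = disc(gen(z))
    K = logits.size(1) - 1
    targets = torch.full((batch_size,), K, dtype=torch.long, device=device)
    return F.cross_entropy(logits, targets)
```

    Forcing the discriminator to attribute fake samples to their source is what pushes the generators towards distinct, identifiable modes.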

    Calibrating Deep Neural Networks using Focal Loss

    Miscalibration -- a mismatch between a model's confidence and its correctness -- of Deep Neural Networks (DNNs) makes their predictions hard to rely on. Ideally, we want networks to be accurate, calibrated, and confident. We show that, as opposed to the standard cross-entropy loss, focal loss (Lin et al., 2017) allows us to learn models that are already very well calibrated. When combined with temperature scaling, whilst preserving accuracy, it yields state-of-the-art calibrated models. We provide a thorough analysis of the factors causing miscalibration, and use the insights we glean from this to justify the empirically excellent performance of focal loss. To facilitate the use of focal loss in practice, we also provide a principled approach to automatically select the hyperparameter involved in the loss function. We perform extensive experiments on a variety of computer vision and NLP datasets, and with a wide variety of network architectures, and show that our approach achieves state-of-the-art accuracy and calibration in almost all cases.
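    For reference, a hedged sketch of the two ingredients named above (focal loss and post-hoc temperature scaling), assuming PyTorch and a fixed focal parameter gamma; the paper's contribution includes a principled way of selecting gamma rather than fixing it:

```python
# Illustrative sketch, not the paper's code. logits: (N, C), targets: (N,) long.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=3.0):
    # FL(p_t) = -(1 - p_t)^gamma * log(p_t); reduces to cross-entropy at gamma = 0.
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    return (-(1.0 - pt) ** gamma * log_pt).mean()

def temperature_scale(logits, T):
    # Post-hoc temperature scaling: dividing logits by T > 1 softens
    # over-confident predictions without changing the arg-max,
    # so accuracy is preserved while calibration improves.
    return logits / T
```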

    Phosphate binding sites identification in protein structures

    Nearly half of known protein structures interact with phosphate-containing ligands, such as nucleotides and other cofactors. Many methods have been developed for the identification of metal ion-binding sites, and some for larger ligands such as carbohydrates, but none is yet available for the prediction of phosphate-binding sites. Here we describe Pfinder, a method that predicts binding sites for phosphate groups, either as free ions or as parts of other non-peptide ligands, in proteins of known structure. Pfinder uses the Query3D local structural comparison algorithm to scan a protein structure for the presence of a number of structural motifs identified for their ability to bind the phosphate chemical group. Pfinder has been tested on a dataset of 52 proteins for which both the apo and holo forms were available. We obtained at least one correct prediction in 63% of the holo structures and in 62% of the apo structures. The ability of Pfinder to recognize a phosphate-binding site in unbound protein structures makes it an ideal tool for functional annotation and for complementing docking and drug-design methods. The Pfinder program is available at http://pdbfun.uniroma2.it/pfinder.

    Identification of Mannose Interacting Residues Using Local Composition

    BACKGROUND: Mannose-binding proteins (MBPs) play a vital role in several biological functions, such as defense mechanisms. These proteins bind to mannose on the surface of a wide range of pathogens and help eliminate these pathogens from the body. It is therefore important to identify mannose-interacting residues (MIRs) in order to understand how MBPs recognize pathogens. RESULTS: This paper describes modules developed for predicting MIRs in a protein. Support vector machine (SVM)-based models have been developed on 120 mannose-binding protein chains, where no two chains have more than 25% sequence similarity. SVM models were developed on two types of datasets: 1) a main dataset consisting of 1029 mannose-interacting and 1029 non-interacting residues, and 2) a realistic dataset consisting of 1029 mannose-interacting and 10320 non-interacting residues. First, we developed standard modules using binary and PSSM profiles of patterns and obtained a maximum MCC of around 0.32. Second, we developed SVM modules using the composition profile of patterns and achieved a maximum MCC of around 0.74 with 86.64% accuracy on the main dataset. Third, we developed a model on the realistic dataset and achieved a maximum MCC of 0.62 with 93.08% accuracy. Based on this study, a standalone program and a web server have been developed for predicting mannose-interacting residues in proteins (http://www.imtech.res.in/raghava/premier/). CONCLUSIONS: Compositional analysis of mannose-interacting and non-interacting residues shows that certain types of residues are preferred in mannose interaction. It was also observed that the residues surrounding mannose-interacting residues show a preference for certain residue types. The composition of patterns/peptides/segments has been used for predicting MIRs and achieves reasonably high accuracy. It is possible that this novel strategy may be effective for predicting other types of interacting residues. This study will be useful for annotating protein function as well as for understanding the role of mannose in the immune system.
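    A minimal sketch of the composition-profile idea described above, assuming scikit-learn, an assumed window length of 17, and toy data (not the actual PREMIER implementation or its training set):

```python
# Each residue is represented by the amino-acid composition of a window
# (pattern) centred on it, and an SVM separates mannose-interacting (1)
# from non-interacting (0) residues.
import numpy as np
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition_feature(sequence, centre, window=17):
    """Percent amino-acid composition of the window centred on `centre`."""
    half = window // 2
    start, end = max(0, centre - half), min(len(sequence), centre + half + 1)
    segment = sequence[start:end]
    counts = np.array([segment.count(a) for a in AMINO_ACIDS], dtype=float)
    return 100.0 * counts / max(len(segment), 1)

# Hypothetical toy examples: (sequence, residue index, label) triples.
examples = [("MKTLLVLALAC", 5, 1), ("GAVLIPFMWSTCYNQDEKRH", 10, 0)]
X = np.array([composition_feature(seq, i) for seq, i, _ in examples])
y = np.array([label for _, _, label in examples])

clf = SVC(kernel="rbf", C=1.0, class_weight="balanced").fit(X, y)
print(clf.predict(X))
```

    The class_weight="balanced" setting is one common way to handle the skewed interacting/non-interacting ratio of the realistic dataset.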

    Targeting REP:GGTase-II Interaction and Finding New Means to Predict the Protein:Ligand Interactions


    Towards diverse generation and reliable classification using neural networks

    Humans can easily understand their surroundings, analyze the situation, and even figure out the intentions of others, all in a few seconds. Computers, however, see the surroundings as an array of numbers, and we need to write algorithms for them to make sense of it. This is a huge challenge, but significant strides have been made toward the scene understanding problem over the last decade, thanks to increases in computing infrastructure and datasets. Nevertheless, several challenges remain in making computers more 'intelligent', some of which are tackled in this thesis. This thesis develops different techniques to create more reliable systems and proposes innovations in loss functions by analyzing the system's prediction distributions. First, we address the well-known problem of mode collapse in Generative Adversarial Networks (GANs) and propose MAD-GAN, an intuitive generalization of GANs. MAD-GAN is a multi-agent GAN architecture incorporating multiple generators and one discriminator. To enforce that different generators capture diverse high-probability modes, the objective function of the discriminator is designed such that, along with identifying real and fake samples, it is also required to identify which generator produced a sample if it is fake. Intuitively, to succeed in this task, the discriminator must learn to push different generators toward different identifiable modes. Extensive experiments on synthetic and real datasets demonstrate the effectiveness of the MAD-GAN approach.

    We then consider the task of semantic segmentation using weak supervision in the form of bounding box annotations. Bounding boxes are noisy labels for the foreground objects. To focus on foreground objects, our approach predicts a per-class attention map that saliently guides the per-pixel cross-entropy loss and refines the segmentation boundaries; this avoids propagating erroneous gradients due to incorrect foreground labels. Additionally, our approach learns pixel embeddings that simultaneously optimize for high intra-class feature affinity while increasing discrimination between features of different classes, which helps capture global context via long-range pairwise interactions. Qualitative and quantitative results, along with ablation studies, show the benefit of the different loss terms on the overall performance.

    The widespread usage of neural networks depends in particular on their reliability and trustworthiness. We therefore also consider the important task of calibrating deep neural networks, where a model's confidence should match its correctness. Ideally, we want networks to be accurate, calibrated, and confident. We provide a thorough analysis of the factors causing miscalibration and use the obtained insights to justify the empirically excellent performance of focal loss. We show that, as opposed to the standard cross-entropy loss, focal loss allows us to learn models that are already very well calibrated. When combined with temperature scaling, whilst preserving accuracy, focal loss yields state-of-the-art calibrated models. To facilitate its use in practice, we also provide a principled approach to automatically select the hyperparameter involved in the loss function. Extensive experiments on a variety of computer vision and NLP datasets, and with a wide variety of network architectures, show that this approach achieves state-of-the-art calibration without compromising accuracy in almost all cases.
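    As an illustration of the attention-guided loss mentioned in the weakly supervised segmentation part of this abstract, a minimal PyTorch-style sketch (the tensor layout and the single per-pixel attention weight are assumptions, not the thesis code):

```python
# A per-pixel cross-entropy where a predicted attention map in [0, 1]
# down-weights pixels inside the bounding box that are unlikely to be
# foreground, so noisy box-derived labels contribute smaller gradients.
import torch
import torch.nn.functional as F

def attention_weighted_ce(logits, box_labels, attention):
    """
    logits:     (B, C, H, W) segmentation scores
    box_labels: (B, H, W)    noisy per-pixel labels derived from bounding boxes
    attention:  (B, H, W)    per-pixel attention weights in [0, 1]
    """
    per_pixel = F.cross_entropy(logits, box_labels, reduction="none")  # (B, H, W)
    return (attention * per_pixel).mean()
```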
    One-shot video object segmentation is an important problem in computer vision with many interesting real-world applications. To tackle this, we finally introduce a similarity learning approach that can learn to perform dense label transfer from one image to another. More specifically, the objective is to learn a similarity metric for dense pixel-wise correspondence between two images. This learned model can then be used in a label transfer framework to propagate an object annotation from the reference frame to all subsequent frames of a video. Unlike previous methods, our similarity learning approach works fairly well across various domains, even when no domain adaptation is involved. We demonstrate the effectiveness of our method on two standard datasets with favourable results. The approach not only provides good object segmentations but is also time-efficient; using it, we achieved second place in the first DAVIS challenge on interactive video object segmentation, in both the quality and speed tracks.
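    A hedged sketch of the dense label-transfer step described above, assuming PyTorch, learned pixel embeddings, and frames of equal spatial size; the softmax-weighted voting and all names are illustrative rather than the exact thesis implementation:

```python
# Transfer labels from an annotated reference frame to a target frame by
# comparing normalized pixel embeddings and voting over the most similar
# reference pixels.
import torch
import torch.nn.functional as F

def transfer_labels(ref_emb, ref_labels, tgt_emb, temperature=0.07):
    """
    ref_emb:    (C, H, W) embeddings of the annotated reference frame
    ref_labels: (H, W)    integer (long) object labels of the reference frame
    tgt_emb:    (C, H, W) embeddings of the target frame (same H, W assumed)
    Returns:    (H, W)    predicted labels for the target frame
    """
    C, H, W = ref_emb.shape
    ref = F.normalize(ref_emb.reshape(C, -1), dim=0)   # (C, HW), unit columns
    tgt = F.normalize(tgt_emb.reshape(C, -1), dim=0)   # (C, HW)
    sim = tgt.t() @ ref                                # cosine similarities (HW, HW)
    weights = F.softmax(sim / temperature, dim=1)      # soft correspondence
    one_hot = F.one_hot(ref_labels.reshape(-1)).float()  # (HW, K)
    votes = weights @ one_hot                          # (HW, K) label votes
    return votes.argmax(dim=1).reshape(H, W)
```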