16 research outputs found

    Efficient Optimization for Rank-based Loss Functions

    The accuracy of information retrieval systems is often measured using complex loss functions such as the average precision (AP) or the normalized discounted cumulative gain (NDCG). Given a set of positive and negative samples, the parameters of a retrieval system can be estimated by minimizing these loss functions. However, the non-differentiability and non-decomposability of these loss functions do not allow for simple gradient-based optimization algorithms. This issue is generally circumvented by either optimizing a structured hinge-loss upper bound to the loss function or by using asymptotic methods like the direct-loss minimization framework. Yet the high computational complexity of loss-augmented inference, which is necessary for both frameworks, prohibits their use on large training data sets. To alleviate this deficiency, we present a novel quicksort-flavored algorithm for a large class of non-decomposable loss functions. We provide a complete characterization of the loss functions that are amenable to our algorithm, and show that it includes both AP- and NDCG-based loss functions. Furthermore, we prove that no comparison-based algorithm can asymptotically improve upon the computational complexity of our approach. We demonstrate the effectiveness of our approach in the context of optimizing the structured hinge-loss upper bound of the AP and NDCG losses when learning models for a variety of vision tasks. We show that our approach provides significantly better results than simpler decomposable loss functions, while requiring a comparable training time. Comment: 15 pages, 2 figures.
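    As an illustration of the quantities involved, the sketch below computes the AP and NDCG of the ranking induced by a vector of model scores; the corresponding losses are one minus these values. It is a minimal NumPy sketch, not the paper's quicksort-flavored loss-augmented inference, and it assumes binary relevance labels supplied as NumPy arrays.

        # Illustrative sketch only: AP and NDCG of the ranking induced by `scores`,
        # for binary relevance `labels` (NumPy arrays). Not the paper's algorithm.
        import numpy as np

        def average_precision(scores, labels):
            order = np.argsort(-scores)            # rank samples by descending score
            ranked = labels[order]
            hits = np.cumsum(ranked)               # number of positives up to each rank
            ranks = np.arange(1, len(ranked) + 1)
            if ranked.sum() == 0:
                return 0.0
            return float(np.mean(hits[ranked == 1] / ranks[ranked == 1]))

        def ndcg(scores, labels):
            order = np.argsort(-scores)
            gains = (2.0 ** labels[order]) - 1.0   # gain 1 for positives, 0 for negatives
            discounts = 1.0 / np.log2(np.arange(2, len(gains) + 2))
            dcg = float(np.sum(gains * discounts))
            ideal = np.sort(gains)[::-1]           # gains under the best possible ordering
            idcg = float(np.sum(ideal * discounts))
            return dcg / idcg if idcg > 0 else 0.0

        # The rank-based losses discussed above are 1 - average_precision(...)
        # and 1 - ndcg(...), respectively.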

    Efficient Optimization for Average Precision SVM

    The accuracy of information retrieval systems is often measured using average precision (AP). Given a set of positive (relevant) and negative (non-relevant) samples, the parameters of a retrieval system can be estimated using the AP-SVM framework, which minimizes a regularized convex upper bound on the empirical AP loss. However, the high computational complexity of loss-augmented inference, which is required for learning an AP-SVM, prohibits its use on large training datasets. To alleviate this deficiency, we propose three complementary approaches. The first approach guarantees an asymptotic decrease in the computational complexity of loss-augmented inference by exploiting the problem structure. The second approach takes advantage of the fact that we do not require a full ranking during loss-augmented inference; this helps us to avoid the expensive step of sorting the negative samples according to their individual scores. The third approach approximates the AP loss over all samples by the AP loss over difficult samples (for example, those that are incorrectly classified by a binary SVM), while ensuring the correct classification of the remaining samples. Using the PASCAL VOC action classification dataset, we show that our approaches provide significant speed-ups during training without degrading the test accuracy of AP-SVM.
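    For illustration only, the sketch below captures the idea behind the third approach: restrict the AP loss to "difficult" samples, i.e. those that a binary SVM misclassifies (or scores within a margin), while the remaining samples are assumed to stay correctly classified. The helper name and margin convention are assumptions, not the authors' code.

        # Hypothetical helper illustrating the "difficult samples" approximation.
        import numpy as np

        def select_difficult(scores, labels, margin=0.0):
            # Positives scored at or below the margin, and negatives scored at or
            # above minus the margin, are the samples a binary SVM gets wrong
            # (or nearly wrong); the AP loss is then evaluated on this subset only.
            hard_pos = (labels == 1) & (scores <= margin)
            hard_neg = (labels == 0) & (scores >= -margin)
            keep = hard_pos | hard_neg
            return scores[keep], labels[keep]

    The AP loss could then be evaluated on the reduced set, for instance with the AP computation sketched under the previous entry.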

    Transverse Susceptibility as a Biosensor for Detection of Au-Fe3O4 Nanoparticle-Embedded Human Embryonic Kidney Cells

    We demonstrate the possibility of using a radio-frequency transverse susceptibility (TS) technique, based on a sensitive self-resonant tunnel-diode oscillator, as a biosensor for the detection of cancer cells that have taken up magnetic nanoparticles. This technique can detect changes in frequency on the order of 10 Hz in 10 MHz. Therefore, a small sample of cells that have taken up nanoparticles, when placed inside the sample space of the TS probe, can yield a signal characteristic of the magnetic nanoparticles. As a proof of concept, Fe3O4 nanoparticles coated with Au (mean size ~60 nm) were synthesized using a micellar method and introduced into the medium at concentrations of 0.05, 0.1, 0.5, and 1 mg/mL buffer, where they were taken up by human embryonic kidney (HEK) cells via phagocytosis. While the highest concentration of Au-Fe3O4 nanoparticles (1 mg/mL) was found to give the strongest TS signal, the TS signal of the nanoparticles could still be detected at concentrations as low as 0.1 mg/mL.
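    For context, the quoted figures of 10 Hz in 10 MHz correspond to the relative frequency resolution worked out below (simple arithmetic, stated only to make the sensitivity explicit).

        # Relative frequency resolution implied by the quoted figures.
        delta_f = 10.0        # smallest detectable frequency shift, in Hz
        f0 = 10.0e6           # oscillator frequency, in Hz
        print(delta_f / f0)   # 1e-06, i.e. roughly one part per million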

    Optimizing Average Precision using Weakly Supervised Data

    Many tasks in computer vision, such as action classification and object detection, require us to rank a set of samples according to their relevance to a particular visual category. The performance of such tasks is often measured in terms of the average precision (AP). Yet it is common practice to employ the support vector machine (SVM) classifier, which optimizes a surrogate 0-1 loss. The popularity of the SVM can be attributed to its empirical performance. Specifically, in fully supervised settings, the SVM tends to provide accuracy similar to that of AP-SVM, which directly optimizes an AP-based loss. However, we hypothesize that in the significantly more challenging and practically useful setting of weakly supervised learning, it becomes crucial to optimize the right accuracy measure. In order to test this hypothesis, we propose a novel latent AP-SVM that minimizes a carefully designed upper bound on the AP-based loss function over weakly supervised samples. Using publicly available datasets, we demonstrate the advantage of our approach over standard loss-based learning frameworks on three challenging problems: action classification, character recognition and object detection.
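    The latent-completion step common to latent structured models can be pictured with the minimal sketch below; the helper name and data layout are assumptions, not the authors' latent AP-SVM implementation. Each weakly supervised sample contributes a set of latent candidates (for example, bounding boxes or temporal windows), the highest-scoring candidate determines the sample's score, and those scores would then feed an AP-based loss rather than a per-sample 0-1 loss.

        # Hypothetical sketch of latent-variable completion for weakly supervised samples.
        import numpy as np

        def impute_latent(w, candidate_features):
            # candidate_features: list with one (num_candidates, dim) array per sample.
            # For each sample, keep the latent candidate whose feature vector scores
            # highest under the current model w; its score stands in for the sample.
            scores, choices = [], []
            for candidates in candidate_features:
                s = candidates @ w
                best = int(np.argmax(s))
                choices.append(best)
                scores.append(float(s[best]))
            return np.array(scores), choices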

    Extrapolation of Inter Domain Communications and Substrate Binding Cavity of Camel HSP70 1A: A Molecular Modeling and Dynamics Simulation Study

    Heat shock protein 70 (HSP70) is an important chaperone involved in protein folding, refolding, translocation and complex-remodeling reactions under both normal and stress conditions. Under heat and cold stress, the product of the HSPA1A gene associates with other chaperones to perform its function. No experimental structure for the camel HSP70 protein (cHSP70) has been reported so far. Hence, we constructed 3D models of cHSP70 through multi-template comparative modeling, using the HSP110 protein of S. cerevisiae (open state) and the E. coli 70 kDa DnaK protein (closed state) as templates, and relaxed them for 100 nanoseconds (ns) using all-atom Molecular Dynamics (MD) simulation. Two stable conformations of cHSP70, with the Substrate Binding Domain (SBD) in the open and closed states, were obtained. The collective-mode analysis of the transitions between the open and closed states was examined via Principal Component Analysis (PCA) and Minimum Distance Matrices (MDM). The results provide a mechanistic representation of the communication between the Nucleotide Binding Domain (NBD) and the SBD, identifying the role of the subdomains in the conformational-change mechanism that drives the chaperone cycle of cHSP70. Further, residues at the chaperone functional site were identified through protein-peptide docking. This study provides an overall insight into the inter-domain communication mechanism and identifies the chaperone-binding cavity, explaining the underlying mechanism at work during heat and cold stress in camel.
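    The PCA step of the collective-mode analysis can be pictured with the plain-NumPy sketch below. It is illustrative only, not the authors' pipeline, and it assumes the trajectory frames have already been aligned and flattened into an array of shape (n_frames, 3 * n_atoms).

        # Illustrative PCA over an aligned, flattened MD trajectory (n_frames, 3 * n_atoms).
        import numpy as np

        def trajectory_pca(coords, n_components=2):
            centered = coords - coords.mean(axis=0)        # remove the average structure
            cov = np.cov(centered, rowvar=False)           # (3N, 3N) covariance of coordinates
            eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
            order = np.argsort(eigvals)[::-1][:n_components]
            components = eigvecs[:, order]                 # dominant collective modes
            projections = centered @ components            # per-frame motion along each mode
            return eigvals[order], components, projections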

    Typical average 3D structure, represented as a cartoon diagram, of the open state of cHSP70, rotated by 90°, after relaxation through MD simulation.

    Cartoon representation of docked complexes.

    (a) SBD-β with TRP2_F5L/F6L; (c) SBD-β with NR. Molecular interaction plots: (b) SBD-β with TRP2_F5L/F6L; (d) SBD-β with NR. The complexes were generated via LIGPLOT.