
    A Knowledge Transfer Framework for Differentially Private Sparse Learning

    We study the problem of estimating high-dimensional models with underlying sparse structure while preserving the privacy of each training example. We develop a differentially private high-dimensional sparse learning framework based on the idea of knowledge transfer. More specifically, we propose to distill the knowledge from a "teacher" estimator trained on a private dataset by creating a new dataset from auxiliary features, and then to train a differentially private "student" estimator on this new dataset. In addition, we establish the linear convergence rate as well as the utility guarantee of our proposed method. For sparse linear regression and sparse logistic regression, our method achieves improved utility guarantees compared with the best known results (Kifer et al., 2012; Wang and Gu, 2019). We further demonstrate the superiority of our framework through both synthetic and real-world data experiments. Comment: 24 pages, 2 figures, 3 tables
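
    The abstract describes the teacher-student pipeline only at a high level. The following is a minimal, hypothetical Python sketch of that idea for sparse linear regression, assuming a Lasso teacher fit on the private data, a student fit on teacher-labelled auxiliary features, and a simple Gaussian output perturbation as a stand-in for the paper's actual privacy mechanism; all names, hyperparameters, and the noise calibration are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the teacher-student ("knowledge transfer") idea for
# differentially private sparse regression. The Gaussian output perturbation
# below is a simplified stand-in, NOT the mechanism analyzed in the paper.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Private training data (sensitive) and auxiliary features (non-sensitive).
n, n_aux, d, s = 200, 400, 50, 5
beta_true = np.zeros(d)
beta_true[:s] = 1.0
X_priv = rng.standard_normal((n, d))
y_priv = X_priv @ beta_true + 0.1 * rng.standard_normal(n)
X_aux = rng.standard_normal((n_aux, d))

# 1) "Teacher": a sparse estimator trained on the private dataset.
teacher = Lasso(alpha=0.05).fit(X_priv, y_priv)

# 2) Create a new dataset by letting the teacher label the auxiliary features.
y_aux = teacher.predict(X_aux)

# 3) "Student": fit on the teacher-labelled data, then privatize its output.
#    (Illustrative noise scale; a real implementation would calibrate sigma to
#    the sensitivity and the target (epsilon, delta) privacy budget.)
student = Lasso(alpha=0.05).fit(X_aux, y_aux)
sigma = 0.05
beta_private = student.coef_ + sigma * rng.standard_normal(d)

print("recovered support:", np.flatnonzero(np.abs(beta_private) > 0.5))
```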

    ProDis-ContSHC: learning protein dissimilarity measures and hierarchical context coherently for protein-protein comparison in protein database retrieval

    Background: The need to retrieve or classify protein molecules using structure- or sequence-based similarity measures underlies a wide range of biomedical applications. Traditional protein search methods rely on a pairwise dissimilarity/similarity measure to compare two proteins. Such pairwise measures ignore the distribution of the other proteins in the database and therefore cannot deliver the accuracy that retrieval systems require. Recent work in the machine learning community has shown that exploiting the global structure of the database and learning contextual dissimilarity/similarity measures can improve retrieval performance significantly. However, most existing contextual dissimilarity/similarity learning algorithms work in an unsupervised manner and do not use the known class labels of the proteins in the database.

    Results: In this paper, we propose a novel protein-protein dissimilarity learning algorithm, ProDis-ContSHC. ProDis-ContSHC regularizes an existing dissimilarity measure d_ij by taking the contextual information of the proteins into account. The context of a protein is defined by its neighboring proteins. The basic idea is that, for a pair of proteins (i, j), if their contexts N(i) and N(j) are similar to each other, the two proteins should also have a high similarity. We implement this idea by regularizing d_ij with a factor learned from the contexts N(i) and N(j). Moreover, we divide the context into hierarchical sub-contexts and obtain a contextual dissimilarity vector for each protein pair. Using the class labels of the proteins, we select relevant (same class label) and irrelevant (different class labels) protein pairs, and train an SVM model to distinguish between their contextual dissimilarity vectors. The SVM model is then used to learn a supervised regularizing factor. Finally, with the new Supervised learned Dissimilarity measure, we update the Protein Hierarchical Context Coherently in an iterative algorithm, ProDis-ContSHC.

    We test the performance of ProDis-ContSHC on two benchmark sets, the ASTRAL 1.73 database and the FSSP/DALI database. Experimental results demonstrate that plugging our supervised contextual dissimilarity measures into the retrieval systems significantly outperforms context-free dissimilarity/similarity measures as well as unsupervised contextual dissimilarity measures that do not use the class label information.

    Conclusions: Using the contextual proteins and their class labels in the database, we can dramatically improve the accuracy of pairwise dissimilarity/similarity measures for protein retrieval tasks. In this work, we propose, for the first time, the idea of supervised contextual dissimilarity learning, resulting in the ProDis-ContSHC algorithm. Among the contextual dissimilarity learning approaches that can be used to compare a pair of proteins, ProDis-ContSHC provides the highest accuracy, and it compares favorably with other methods reported in the recent literature.
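
    The core regularization step, rescaling d_ij by a factor computed from the contexts N(i) and N(j), can be sketched as below. This is a hypothetical, unsupervised simplification: the neighborhood size k, the exact form of the regularizing factor, and the omission of the hierarchical sub-contexts, the supervised SVM step, and the iterative update are all assumptions for illustration, not the authors' algorithm.

```python
# Hypothetical sketch of contextual regularization of a pairwise dissimilarity
# matrix: each d_ij is rescaled by how dissimilar the neighborhoods (contexts)
# N(i) and N(j) are on average. The supervised and hierarchical parts of
# ProDis-ContSHC are omitted; k and the factor form are assumptions.
import numpy as np

def contextual_regularize(D, k=5):
    n = D.shape[0]
    # Context N(i): indices of the k nearest neighbors of protein i
    # (index 0 of the argsort is i itself, since the diagonal is zero).
    contexts = [np.argsort(D[i])[1:k + 1] for i in range(n)]
    D_new = np.empty_like(D)
    for i in range(n):
        for j in range(n):
            # Average dissimilarity between the two contexts.
            ctx_dist = D[np.ix_(contexts[i], contexts[j])].mean()
            # Pairs whose contexts are close get their dissimilarity shrunk;
            # pairs whose contexts are far apart get it inflated.
            D_new[i, j] = D[i, j] * ctx_dist / (D.mean() + 1e-12)
    return D_new

# Toy usage: a random symmetric dissimilarity matrix with zero diagonal.
rng = np.random.default_rng(0)
A = rng.random((30, 30))
D = (A + A.T) / 2
np.fill_diagonal(D, 0.0)
print(contextual_regularize(D, k=5).shape)
```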