
    Decentralized Differentially Private Without-Replacement Stochastic Gradient Descent

    While machine learning has achieved remarkable results in a wide variety of domains, training models often requires large datasets that may need to be collected from different individuals. As sensitive information may be contained in an individual's dataset, sharing training data can lead to severe privacy concerns. There is therefore a compelling need for privacy-aware machine learning methods, and one effective approach is to leverage the generic framework of differential privacy. Since stochastic gradient descent (SGD) is one of the most widely adopted methods for large-scale machine learning problems, two decentralized differentially private SGD algorithms are proposed in this work. In particular, we focus on SGD without replacement due to its favorable structure for practical implementation. In addition, both privacy and convergence analyses are provided for the proposed algorithms. Finally, extensive experiments verify the theoretical results and demonstrate the effectiveness of the proposed algorithms.
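    The without-replacement scheme the abstract refers to can be sketched as follows: shuffle the data once per epoch, visit each sample exactly once, and privatize each per-sample gradient by clipping and Gaussian noise. This is a minimal illustration, not the paper's algorithm; the function name and the values of the clipping bound `C` and noise scale `sigma` are assumptions.

    ```python
    import numpy as np

    def dp_sgd_without_replacement(X, y, epochs=5, lr=0.05, C=1.0, sigma=0.5, seed=0):
        """Illustrative DP-SGD for least squares; hyperparameters are not calibrated."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(epochs):
            # Without replacement: one random permutation per epoch,
            # each sample visited exactly once.
            for i in rng.permutation(n):
                g = (X[i] @ w - y[i]) * X[i]                      # per-sample gradient
                g *= min(1.0, C / (np.linalg.norm(g) + 1e-12))    # clip to L2 norm C
                g += rng.normal(0.0, sigma * C, size=d)           # Gaussian noise for DP
                w -= lr * g
        return w
    ```

    With a small noise scale the iterates approach the least-squares solution; larger `sigma` trades accuracy for a stronger privacy guarantee.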

    Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees

    Federated learning (FL) has emerged as a prominent distributed learning paradigm. FL creates a pressing need for novel parameter estimation approaches that carry theoretical guarantees of convergence while also being communication efficient, differentially private, and Byzantine resilient under heterogeneous data distributions. Quantization-based SGD solvers have been widely adopted in FL, and the recently proposed SIGNSGD with majority vote is a promising direction. However, no existing method enjoys all of the aforementioned properties. In this paper, we propose an intuitively simple yet theoretically sound method based on SIGNSGD to bridge the gap. We present Stochastic-Sign SGD, which utilizes novel stochastic-sign based gradient compressors that enable all of the aforementioned properties in a unified framework. We also present an error-feedback variant of Stochastic-Sign SGD that further improves learning performance in FL. We evaluate the proposed method with extensive experiments using deep neural networks on the MNIST and CIFAR-10 datasets. The experimental results corroborate the effectiveness of the proposed method.
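    The two building blocks the abstract names, a stochastic-sign compressor and majority-vote aggregation, can be sketched as below. This is a hedged illustration: the function names and the norm bound `B` are assumptions, and the paper's actual compressor and vote rule may differ in detail. The key property of the compressor is unbiasedness after rescaling, i.e. E[out] = g/B.

    ```python
    import numpy as np

    def stochastic_sign(g, B, rng):
        # Each coordinate becomes +1 with probability (1 + g_i/B)/2, else -1,
        # so the expected output is g/B (g is clipped into [-B, B] first).
        p = 0.5 * (1.0 + np.clip(g, -B, B) / B)
        return np.where(rng.random(g.shape) < p, 1.0, -1.0)

    def majority_vote(signs):
        # Element-wise majority over workers (rows); ties break toward +1.
        return np.sign(np.sum(signs, axis=0) + 1e-12)
    ```

    In an FL round, each worker would send `stochastic_sign(grad, B, rng)` (one bit per coordinate), and the server would apply `majority_vote` before taking a sign-descent step.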

    TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data

    Distributed training of deep neural networks faces three critical challenges: privacy preservation, communication efficiency, and robustness to faults and adversarial behaviors. Although significant research effort has been devoted to addressing these challenges independently, their synthesis remains less explored. In this paper, we propose TernaryVote, which combines a ternary compressor with a majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously. We theoretically quantify the privacy guarantee through the lens of the emerging framework of f-differential privacy (DP), as well as the Byzantine resilience of the proposed algorithm. In particular, regarding privacy guarantees, compared to the existing sign-based approach StoSign, the proposed method improves the dependence on the gradient dimension and enjoys privacy amplification by mini-batch sampling while ensuring a comparable convergence rate. We also prove that TernaryVote remains robust when fewer than 50% of the workers are blind attackers, matching the guarantee of SIGNSGD with majority vote. Extensive experimental results validate the effectiveness of the proposed algorithm.
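    A ternary compressor of the kind the abstract describes maps each gradient coordinate stochastically into {-1, 0, +1}. The sketch below is an assumption about the general shape of such a compressor, not TernaryVote's exact mechanism; the function name and norm bound `B` are illustrative. As with the stochastic sign, the output is unbiased after rescaling: E[out] = g/B.

    ```python
    import numpy as np

    def ternary_compress(g, B, rng):
        # Coordinate i becomes sign(g_i) with probability |g_i|/B and 0 otherwise
        # (g is clipped into [-B, B] first), so E[out_i] = g_i / B.
        p = np.abs(np.clip(g, -B, B)) / B
        keep = rng.random(g.shape) < p
        return np.sign(g) * keep
    ```

    Each coordinate then needs at most two bits on the wire, and the zero outcomes add randomness that the paper leverages for its f-DP analysis; the server would aggregate workers' ternary vectors by majority vote before stepping.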

    Toxoplasma gondii cathepsin proteases are undeveloped prominent vaccine antigens against toxoplasmosis

    BACKGROUND: Toxoplasma gondii, an obligate intracellular apicomplexan parasite, infects a wide range of warm-blooded animals, including humans. T. gondii expresses five members of the C1 family of cysteine proteases, including the cathepsin B-like (TgCPB) and cathepsin L-like (TgCPL) proteins. TgCPB is involved in ROP protein maturation and parasite invasion, whereas TgCPL contributes to the proteolytic maturation of proTgM2AP and proTgMIC3. TgCPL is also associated with the residual body in the parasitophorous vacuole after cell division. Both proteases are potential therapeutic targets in T. gondii. The aim of this study was to investigate TgCPB and TgCPL for their potential as DNA vaccines against T. gondii. METHODS: Using bioinformatics approaches, we analyzed the TgCPB and TgCPL proteins and identified several linear B-cell epitopes and potential Th-cell epitopes in them. Based on these results, we assembled two single-gene constructs (TgCPB and TgCPL) and a multi-gene construct (pTgCPB/TgCPL) with which to immunize BALB/c mice and test their effectiveness as DNA vaccines. RESULTS: The TgCPB and TgCPL vaccines elicited strong humoral and cellular immune responses in mice, both of which were Th1 cell mediated. In addition, all of the vaccines protected mice against infection with virulent T. gondii RH tachyzoites, with the multi-gene vaccine (pTgCPB/TgCPL) providing the highest level of protection. CONCLUSIONS: The T. gondii CPB and CPL proteases are strong candidates for development as novel DNA vaccines.