    Soft error in FPGA-implemented asynchronous circuits

    In this paper, we investigate the mechanisms of soft-error generation and propagation in asynchronous circuits implemented on FPGAs. The effects of soft errors on quasi-delay-insensitive (QDI) asynchronous circuits are analyzed. The results show that soft errors are much easier to detect in FPGA-implemented asynchronous circuits than in traditional synchronous circuits, so that the affected FPGAs can be reprogrammed.
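    To make the detection mechanism concrete, here is a minimal behavioral sketch in Python (the names and the fault model are illustrative assumptions, not code from the paper; the encoding is the standard dual-rail convention used by many QDI circuits): a single-event upset on a valid dual-rail token can produce the illegal codeword (1, 1), which completion-detection logic can flag so that the FPGA can be reprogrammed.

```python
# Behavioral model of dual-rail codeword checking in a QDI pipeline stage.
# (0, 0) is the spacer between tokens; (0, 1)/(1, 0) carry data; (1, 1) is
# never produced by a correct QDI circuit, so observing it signals a fault.
DUAL_RAIL = {
    (0, 0): "NULL",     # spacer between data tokens
    (0, 1): "DATA0",    # logical 0
    (1, 0): "DATA1",    # logical 1
    (1, 1): "ILLEGAL",  # unreachable without a fault
}

def check_codeword(rail1: int, rail0: int) -> str:
    """Classify a dual-rail pair; ILLEGAL indicates a soft error."""
    return DUAL_RAIL[(rail1, rail0)]

def inject_upset(pair: tuple, bit: int) -> tuple:
    """Flip one rail to model a single-event upset (soft error)."""
    flipped = list(pair)
    flipped[bit] ^= 1
    return tuple(flipped)

token = (1, 0)                      # a valid DATA1 token
upset = inject_upset(token, 1)      # upset hits the idle rail -> (1, 1)
print(check_codeword(*token))       # DATA1
print(check_codeword(*upset))       # ILLEGAL -> trigger FPGA reconfiguration
```

    Note that not every upset yields an illegal codeword (flipping the active rail of DATA1 produces a spacer instead); such faults disturb the handshake protocol, which is the other detection path available in QDI circuits.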

    Robust federated learning with noisy communication

    Federated learning is a communication-efficient training process that alternates between local training at the edge devices and averaging of the updated local models at the central server. Nevertheless, perfect acquisition of the local models over a wireless channel is impractical due to noise, which in turn seriously affects federated learning. To tackle this challenge, in this paper we propose a robust design for federated learning that reduces the effect of noise. Accounting for the noise in the two aforementioned steps, we first formulate the training problem as a parallel optimization for each node under an expectation-based model and a worst-case model. Because the problem is non-convex, a regularizer approximation method is proposed to make it tractable. For the worst-case model, we use a sampling-based successive convex approximation algorithm to develop a feasible training scheme that handles both the unavailability of the noise maxima or minima and the non-convexity of the objective function. Furthermore, the convergence rates of both new designs are analyzed from a theoretical point of view. Finally, simulations demonstrate that the proposed designs improve prediction accuracy and reduce the loss function value.
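    As a rough illustration of the expectation-based design, the following NumPy sketch (the least-squares objective, the quadratic proximal regularizer, and all names are illustrative assumptions, not the paper's exact formulation) runs federated averaging over a noisy channel, with each node regularizing its local update toward the received global model to damp the effect of the noise:

```python
# Federated averaging with noisy downlink/uplink and a proximal regularizer.
import numpy as np

def noisy(w, sigma, rng):
    """Additive-noise channel model for broadcasting/uploading a model."""
    return w + sigma * rng.normal(size=w.shape)

def local_update(w_global, X, y, lam=0.1, lr=0.01, steps=20):
    """Local training on min_w ||Xw - y||^2 / n + lam * ||w - w_global||^2;
    the proximal term anchors the update to the (noisy) global model."""
    w = w_global.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y) + 2 * lam * (w - w_global)
        w -= lr * grad
    return w

def federated_round(w_global, clients, sigma, rng):
    """One round: noisy broadcast, local training, noisy upload, averaging."""
    uploads = []
    for X, y in clients:
        w_recv = noisy(w_global, sigma, rng)                  # noisy downlink
        uploads.append(noisy(local_update(w_recv, X, y), sigma, rng))  # uplink
    return np.mean(uploads, axis=0)                           # server average

rng = np.random.default_rng(0)
d = 5
w_true = rng.normal(size=d)
clients = []
for _ in range(4):
    X = rng.normal(size=(50, d))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=50)))

w = np.zeros(d)
for _ in range(30):
    w = federated_round(w, clients, sigma=0.05, rng=rng)
print(np.linalg.norm(w - w_true))  # error shrinks despite the channel noise
```

    The worst-case design would instead treat the noise as an adversarial variable within an uncertainty set, which is where the sampling-based successive convex approximation described in the abstract comes in.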

    Private Model Compression via Knowledge Distillation

    The soaring demand for intelligent mobile applications calls for deploying powerful deep neural networks (DNNs) on mobile devices. However, the outstanding performance of DNNs notoriously relies on increasingly complex models, which in turn are associated with computational expense far surpassing mobile devices' capacity. What is worse, app service providers need to collect and utilize a large volume of users' data, which contains sensitive information, to build the sophisticated DNN models. Directly deploying these models on public mobile devices presents a prohibitive privacy risk. To benefit from on-device deep learning without the capacity and privacy concerns, we design a private model compression framework, RONA. Following the knowledge distillation paradigm, we jointly use hint learning, distillation learning, and self learning to train a compact and fast neural network. The knowledge distilled from the cumbersome model is adaptively bounded and carefully perturbed to enforce differential privacy. We further propose an elegant query sample selection method to reduce the number of queries and control the privacy loss. A series of empirical evaluations, as well as an implementation on an Android mobile device, show that RONA can not only compress cumbersome models efficiently but also provide a strong privacy guarantee. For example, on SVHN, when a meaningful (9.83, 10^{-6})-differential privacy is guaranteed, the compact model trained by RONA obtains a 20× compression ratio and a 19× speed-up with merely 0.97% accuracy loss.
    Comment: Conference version accepted by AAAI'19
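    The privacy step described in the abstract, bounding and then perturbing the distilled knowledge, can be sketched as follows (the clipping bound, noise scale, temperature, and all names are illustrative assumptions in the style of the Gaussian mechanism, not RONA's exact procedure):

```python
# Clip the teacher's per-query knowledge to bound its sensitivity, then add
# Gaussian noise calibrated to that bound before the student ever sees it.
import numpy as np

def privatize_teacher_logits(logits, bound, noise_multiplier, rng):
    """Scale each logit vector to L2 norm <= `bound`, then perturb it with
    Gaussian noise of std = noise_multiplier * bound (Gaussian mechanism)."""
    norms = np.linalg.norm(logits, axis=1, keepdims=True)
    clipped = logits * np.minimum(1.0, bound / np.maximum(norms, 1e-12))
    return clipped + rng.normal(scale=noise_multiplier * bound,
                                size=clipped.shape)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Soft-target cross-entropy between temperature-softened distributions."""
    def soft(z):
        z = (z - z.max(axis=1, keepdims=True)) / T
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)
    p_t, p_s = soft(teacher_logits), soft(student_logits)
    return -np.mean(np.sum(p_t * np.log(p_s + 1e-12), axis=1))

rng = np.random.default_rng(0)
teacher_logits = 5 * rng.normal(size=(8, 10))   # stand-in teacher outputs
student_logits = rng.normal(size=(8, 10))       # stand-in student outputs
private = privatize_teacher_logits(teacher_logits, bound=4.0,
                                   noise_multiplier=1.0, rng=rng)
print(distillation_loss(student_logits, private))  # loss the student minimizes
```

    Because each teacher query consumes privacy budget, the query sample selection step matters: fewer, more informative queries tighten the overall (epsilon, delta) accounting.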