281 research outputs found

    Thermal strength and transient dynamics analysis of a diesel engine piston

    Taking a four-stroke direct-injection diesel engine piston as the research object, a three-dimensional finite element model is established using a thermo-mechanical coupling method. The transient heat transfer coefficient and the transient gas temperature are calculated, and piston stress is evaluated under thermal load, mechanical load, and coupled thermo-mechanical load. The results show that the piston is safe and that temperature is the main cause of piston deformation and high stress, so further reducing the piston temperature through structural optimization is feasible.
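
    The abstract does not say which correlation supplies the transient gas-side heat transfer coefficient; a common choice for this step in diesel-engine FEA boundary-condition work is the Woschni correlation, sketched below. The function name and the example inputs are illustrative, not taken from the paper.

    ```python
    def woschni_htc(bore_m: float, p_kpa: float, t_k: float, w_ms: float) -> float:
        """Instantaneous gas-side heat transfer coefficient in W/(m^2*K),
        per the Woschni correlation. Inputs: cylinder bore in m, gas
        pressure in kPa, gas temperature in K, and characteristic gas
        velocity in m/s (itself built from mean piston speed and the
        combustion pressure rise, omitted here for brevity)."""
        return 3.26 * bore_m ** -0.2 * p_kpa ** 0.8 * t_k ** -0.55 * w_ms ** 0.8

    # Illustrative operating point: 0.1 m bore, 6 MPa, 1800 K, w = 15 m/s.
    h = woschni_htc(0.1, 6000.0, 1800.0, 15.0)  # on the order of 10^2-10^3 W/(m^2*K)
    ```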

    Neural Polarizer: A Lightweight and Effective Backdoor Defense via Purifying Poisoned Features

    Recent studies have demonstrated the susceptibility of deep neural networks to backdoor attacks. Given a backdoored model, its prediction on a poisoned sample carrying the trigger is dominated by the trigger information, even though trigger information and benign information coexist. Inspired by the mechanism of an optical polarizer, which passes light waves with particular polarizations while filtering out waves with other polarizations, we propose a novel backdoor defense that inserts a learnable neural polarizer into the backdoored model as an intermediate layer, purifying poisoned samples by filtering trigger information while preserving benign information. The neural polarizer is instantiated as a single lightweight linear transformation layer, learned by solving a well-designed bi-level optimization problem on a limited clean dataset. Compared with other fine-tuning-based defenses, which often adjust all parameters of the backdoored model, the proposed method only needs to learn one additional layer, making it more efficient and less demanding of clean data. Extensive experiments demonstrate the effectiveness and efficiency of our method in removing backdoors across various neural network architectures and datasets, especially when clean data is very limited.
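
    As a rough sketch of the architecture described above (not the authors' released code), the polarizer can be instantiated as a channel-wise linear layer initialized to the identity and inserted into a frozen backdoored model; the paper's bi-level training objective on the small clean set is omitted here.

    ```python
    import torch
    import torch.nn as nn

    class NeuralPolarizer(nn.Module):
        """Lightweight learnable linear transformation inserted as an
        intermediate layer of a (frozen) backdoored model."""
        def __init__(self, num_channels: int):
            super().__init__()
            # A 1x1 convolution is a per-location linear map on the features.
            self.linear = nn.Conv2d(num_channels, num_channels, kernel_size=1)
            # Start at the identity so benign behavior is unchanged before
            # the layer is tuned on the limited clean dataset.
            nn.init.dirac_(self.linear.weight)
            nn.init.zeros_(self.linear.bias)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.linear(x)

    # Usage sketch: freeze the backdoored model, train only the polarizer.
    # `front` and `rest` are hypothetical halves of the backdoored network:
    #   polarizer = NeuralPolarizer(num_channels=512)
    #   logits = rest(polarizer(front(x)))
    ```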

    Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples

    Backdoor attacks are serious security threats to machine learning models: an adversary injects poisoned samples into the training set, producing a backdoored model that predicts poisoned samples carrying particular triggers as particular target classes while behaving normally on benign samples. In this paper, we explore the task of purifying a backdoored model using a small clean dataset. By establishing a connection between backdoor risk and adversarial risk, we derive a novel upper bound on backdoor risk, which mainly captures the risk on the shared adversarial examples (SAEs) between the backdoored model and the purified model. This upper bound further suggests a novel bi-level optimization problem for mitigating the backdoor using adversarial training techniques. To solve it, we propose Shared Adversarial Unlearning (SAU): SAU first generates SAEs and then unlearns them so that they are either correctly classified by the purified model or classified differently by the two models, thereby removing from the purified model the backdoor effect present in the backdoored model. Experiments on various benchmark datasets and network architectures show that the proposed method achieves state-of-the-art performance for backdoor defense.
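
    A heavily simplified sketch of the two levels follows, assuming PGD-style generation and image inputs in [0, 1]; the paper's exact losses (including the "classified differently" term) and hyperparameters differ, and the values below are illustrative.

    ```python
    import torch
    import torch.nn.functional as F

    def sau_step(purified, backdoored, x, y, eps=8/255, alpha=2/255, steps=10):
        """Inner level: craft perturbations adversarial to BOTH models
        (candidate shared adversarial examples). Outer level: return a
        loss that unlearns them by pushing the purified model to classify
        them correctly."""
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        for _ in range(steps):
            adv = (x + delta).clamp(0.0, 1.0)
            # Ascend the joint cross-entropy so the example fools both models.
            atk_loss = (F.cross_entropy(purified(adv), y)
                        + F.cross_entropy(backdoored(adv), y))
            (grad,) = torch.autograd.grad(atk_loss, delta)
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
            delta = delta.detach().requires_grad_(True)
        adv = (x + delta).clamp(0.0, 1.0).detach()
        # Unlearning term: correct prediction on the shared adversarial example.
        return F.cross_entropy(purified(adv), y)
    ```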

    Tectonic History and Coalbed Gas Genesis

    Get PDF
    • …