
    Mitigating the Accuracy-Robustness Trade-off via Multi-Teacher Adversarial Distillation

    Adversarial training is a practical approach for improving the robustness of deep neural networks against adversarial attacks. Although it brings reliable robustness, adversarial training degrades performance on clean examples, meaning a trade-off exists between accuracy and robustness. Recently, some studies have applied knowledge distillation to adversarial training, achieving competitive robustness, but accuracy on clean samples remains limited. In this paper, to mitigate the accuracy-robustness trade-off, we introduce Multi-Teacher Adversarial Robustness Distillation (MTARD), which guides the model's adversarial training with a strong clean teacher and a strong robust teacher that handle the clean and adversarial examples, respectively. During optimization, to ensure that the different teachers exhibit similar knowledge scales, we design the Entropy-Based Balance algorithm, which adjusts each teacher's temperature to keep the teachers' information entropy consistent. In addition, to ensure that the student learns from the multiple teachers at a relatively consistent pace, we propose the Normalization Loss Balance algorithm, which adjusts the learning weights of the different types of knowledge. A series of experiments on public datasets demonstrates that MTARD outperforms state-of-the-art adversarial training and distillation methods against various adversarial attacks.
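
    The two balancing ideas lend themselves to a compact sketch. The PyTorch fragment below is illustrative only, not MTARD's actual implementation: `entropy_balance` nudges one teacher's temperature toward entropy parity, and `mtard_style_loss` distills the student from the clean teacher on clean inputs and the robust teacher on adversarial inputs, with inverse-magnitude weights standing in for the Normalization Loss Balance. All function names and constants are assumptions.

```python
import torch
import torch.nn.functional as F

def softmax_entropy(logits, temperature):
    """Shannon entropy of the temperature-scaled softmax, averaged over the batch."""
    p = F.softmax(logits / temperature, dim=1)
    return -(p * p.clamp_min(1e-12).log()).sum(dim=1).mean()

def entropy_balance(clean_logits, robust_logits, t_clean, t_robust, lr=0.1):
    # One heuristic step toward entropy parity between the two teachers
    # (a stand-in for the paper's Entropy-Based Balance algorithm).
    gap = softmax_entropy(clean_logits, t_clean) - softmax_entropy(robust_logits, t_robust)
    return t_clean, max(0.1, t_robust - lr * gap.item())

def mtard_style_loss(student, clean_x, adv_x, clean_teacher, robust_teacher,
                     t_clean=1.0, t_robust=1.0):
    with torch.no_grad():
        tc, tr = clean_teacher(clean_x), robust_teacher(adv_x)
    sc, sr = student(clean_x), student(adv_x)
    kd = lambda s, t, T: F.kl_div(F.log_softmax(s / T, dim=1),
                                  F.softmax(t / T, dim=1),
                                  reduction="batchmean") * T * T
    l_clean, l_robust = kd(sc, tc, t_clean), kd(sr, tr, t_robust)
    # Schematic loss balance: weight each term by the inverse of its detached
    # magnitude so neither teacher dominates the student's update.
    w_c, w_r = 1.0 / (l_clean.detach() + 1e-8), 1.0 / (l_robust.detach() + 1e-8)
    return (w_c * l_clean + w_r * l_robust) / (w_c + w_r)
```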

    Learning Specialized Activation Functions for Physics-informed Neural Networks

    Physics-informed neural networks (PINNs) are known to suffer from optimization difficulty. In this work, we reveal the connection between the optimization difficulty of PINNs and activation functions. Specifically, we show that PINNs exhibit high sensitivity to activation functions when solving PDEs with distinct properties. Existing works usually choose activation functions by inefficient trial and error. To avoid this manual selection and to alleviate the optimization difficulty of PINNs, we introduce adaptive activation functions that search for the optimal function when solving different problems. We compare different adaptive activation functions and discuss their limitations in the context of PINNs. Furthermore, we tailor the idea of learning combinations of candidate activation functions to PINN optimization, which imposes stronger requirements on the smoothness and diversity of the learned functions. This is achieved by removing candidate activation functions that cannot provide higher-order derivatives and by incorporating elementary functions with different properties according to prior knowledge about the PDE at hand. We further enhance the search space with adaptive slopes. The proposed adaptive activation function can be used to solve different PDE systems in an interpretable way. Its effectiveness is demonstrated on a series of benchmarks. Code is available at https://github.com/LeapLabTHU/AdaAFforPINNs.
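
    As a concrete illustration of the combination idea, here is a minimal PyTorch sketch: a learnable convex mixture of smooth candidates (tanh, sin, GELU; non-smooth functions such as ReLU are excluded because PDE residuals require higher-order derivatives) with an adaptive slope. The candidate set and module name are assumptions, not the repository's exact code.

```python
import torch
import torch.nn as nn

class AdaptiveActivation(nn.Module):
    """phi(x) = sum_i softmax(alpha)_i * f_i(slope * x), using only infinitely
    differentiable candidates so higher-order PDE derivatives stay well defined."""
    def __init__(self):
        super().__init__()
        self.candidates = [torch.tanh, torch.sin, nn.functional.gelu]
        self.alpha = nn.Parameter(torch.zeros(len(self.candidates)))  # mixing logits
        self.slope = nn.Parameter(torch.ones(1))                      # adaptive slope

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)
        z = self.slope * x
        return sum(wi * f(z) for wi, f in zip(w, self.candidates))

# Usage: drop the activation into an MLP serving as the PINN trunk.
pinn = nn.Sequential(nn.Linear(2, 64), AdaptiveActivation(),
                     nn.Linear(64, 64), AdaptiveActivation(),
                     nn.Linear(64, 1))
```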

    Fix-and-Optimize and Variable Neighborhood Search Approaches for Stochastic Multi-Item Capacitated Lot-Sizing Problems

    We discuss stochastic multi-item capacitated lot-sizing problems with and without setup carryovers (also known as linked lot sizes), S-MICLSP and S-MICLSP-L. The two models are motivated by a real-world steel enterprise. To overcome the nonlinearity of the models, a piecewise linear approximation method is proposed. We develop a new fix-and-optimize (FO) approach to solve the approximated models. In contrast to existing FO approaches, ours decomposes the problems using the concept of “k-degree-connection”. Furthermore, we propose an integrative approach combining our FO with variable neighborhood search (FO-VNS), which improves the solution quality of the FO approach by diversifying the search. Numerical experiments are performed on instances that reflect realistic steel products. Our approximation method is shown to be efficient. The results also show that the proposed FO and FO-VNS approaches significantly outperform recent FO approaches, and that FO-VNS achieves better solution quality with moderate additional computational effort.
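
    The FO scheme can be summarized in a short, solver-agnostic sketch. The Python below is illustrative only: `solve_subproblem` stands for a call into a MIP solver, and the `neighborhoods` argument merely hints at how a k-degree-connection decomposition would group the binary setup variables.

```python
def fix_and_optimize(binary_vars, neighborhoods, solve_subproblem, max_passes=3):
    """binary_vars: dict var -> 0/1 incumbent values (e.g., setup decisions).
    neighborhoods: list of sets of variables freed in each subproblem
    (in the paper, groups induced by the k-degree-connection decomposition).
    solve_subproblem(free, fixed) -> (new values for the freed vars, objective)."""
    best_obj = float("inf")
    for _ in range(max_passes):
        improved = False
        for free in neighborhoods:
            # Fix every binary variable outside the current neighborhood,
            # re-optimize the small remaining MIP, and keep any improvement.
            fixed = {v: x for v, x in binary_vars.items() if v not in free}
            values, obj = solve_subproblem(free, fixed)
            if obj < best_obj - 1e-9:
                binary_vars.update(values)
                best_obj, improved = obj, True
        if not improved:
            break  # FO is stuck; VNS would now perturb the incumbent and retry
    return binary_vars, best_obj
```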

    Hundreds Guide Millions: Adaptive Offline Reinforcement Learning with Expert Guidance

    Offline reinforcement learning (RL) optimizes a policy on a previously collected dataset without any interaction with the environment, yet usually suffers from distributional shift. To mitigate this issue, a typical solution is to impose a policy constraint on the policy improvement objective. However, existing methods generally adopt a "one-size-fits-all" practice, i.e., keeping a single improvement-constraint balance for all the samples in a mini-batch or even the entire offline dataset. In this work, we argue that different samples should be treated with different policy constraint intensities. Based on this idea, we propose a novel plug-in approach named Guided Offline RL (GORL). GORL employs a guiding network, along with only a few expert demonstrations, to adaptively determine the relative importance of policy improvement and policy constraint for every sample. We theoretically prove that the guidance provided by our method is rational and near-optimal. Extensive experiments on various environments suggest that GORL can be easily combined with most offline RL algorithms, yielding statistically significant performance improvements.
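
    The per-sample weighting idea admits a small sketch. Below is an illustrative PyTorch fragment built around the standard actor loss of constraint-based offline RL; the guiding network's architecture and the name `guided_actor_loss` are assumptions, not GORL's actual code.

```python
import torch
import torch.nn as nn

class GuidingNet(nn.Module):
    """Maps (state, action) to a weight in (0, 1): how strongly this sample
    should favor policy improvement over the policy constraint."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def guided_actor_loss(q_value, bc_loss, s, a, guide):
    # Per-sample balance instead of one global coefficient: w weights the
    # improvement term (maximize Q) and (1 - w) weights the constraint term
    # (e.g., behavior-cloning distance to the dataset action).
    w = guide(s, a).squeeze(-1)
    return (-(w * q_value) + (1.0 - w) * bc_loss).mean()
```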

    Collaborative Edge Caching: a Meta Reinforcement Learning Approach with Edge Sampling

    Current learning-based edge caching schemes usually suffer from dynamic content popularity: in emerging short-video platforms, for example, users' request patterns shift significantly over time and across different edges. An intuitive solution for a specific local edge cache is to collect more request histories from other edge caches. However, uniformly merging these request histories may not perform satisfactorily due to heterogeneous content distributions across edges. To solve this problem, we propose a collaborative edge caching framework. First, we design a meta-learning-based collaborative strategy to guarantee that the local model can promptly adapt to continually changing content popularity. Then, we design an edge sampling method to select more "valuable" neighbor edges to participate in the local training. To evaluate the proposed framework, we conduct trace-driven experiments that demonstrate the effectiveness of our design: it improves the average cache hit rate by up to 10.12% (normalized) compared with other baselines.
    Comment: Published in the IEEE International Conference on Multimedia and Expo 2023 (ICME 2023).
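
    The two design pieces, neighbor selection and meta-adaptation, can be sketched briefly. The NumPy fragment below is an assumption-laden illustration: cosine similarity of request histograms stands in for the paper's notion of a "valuable" neighbor, and a Reptile-style update stands in for the meta-learning-based collaborative strategy.

```python
import numpy as np

def sample_neighbors(local_hist, neighbor_hists, k=3):
    """Pick the k neighbor edges whose content-request histograms are most
    similar to the local edge's (a proxy for being "valuable" to train on)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    scores = [(cos(local_hist, h), i) for i, h in enumerate(neighbor_hists)]
    return [i for _, i in sorted(scores, reverse=True)[:k]]

def reptile_meta_step(theta, inner_train, tasks, inner_steps=5, meta_lr=0.1):
    """theta: flat np.ndarray of caching-model parameters.
    inner_train(theta, task, steps) -> parameters adapted on one neighbor's
    request trace; theta then moves toward each adapted solution."""
    for task in tasks:
        adapted = inner_train(theta.copy(), task, inner_steps)
        theta = theta + meta_lr * (adapted - theta)
    return theta
```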