    Enhancing Certifiable Robustness via a Deep Model Ensemble

    We propose an algorithm to enhance the certified robustness of a deep model ensemble by optimally weighting each base model. Unlike previous works that use ensembles to improve robustness empirically, our algorithm optimizes a guaranteed robustness certificate of neural networks. Our ensemble framework with certified robustness, RobBoost, formulates the optimal model selection and weighting task as an optimization problem on a lower bound of the classification margin, which can be solved efficiently using coordinate descent. Experiments on robustly trained MNIST and CIFAR base models show that our algorithm forms a more robust ensemble than naively averaging all available models, and the resulting ensemble typically has better accuracy on clean (unperturbed) data as well. RobBoost allows us to further improve certified robustness and clean accuracy by creating an ensemble of already certified models.

    Comment: This is an extended version of the ICLR 2019 Safe Machine Learning Workshop (SafeML) paper, "RobBoost: A provable approach to boost the robustness of deep model ensemble". May 6, 2019, New Orleans, LA, US.
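    The core idea described above, choosing ensemble weights by coordinate descent against a certified lower bound on the classification margin, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the matrix `M` of per-model certified margin lower bounds, the objective (the count of certifiably correct examples), and the grid-based coordinate search are all assumptions of this sketch.

    ```python
    import numpy as np

    def certified_count(w, M, eps=0.0):
        # M: (n_examples, n_models) array of per-model certified margin
        # lower bounds. This sketch assumes a weighted average of per-model
        # lower bounds is itself a valid lower bound for the ensemble that
        # averages logits with the same weights (linearity of the bound).
        return int(np.sum(M @ w > eps))

    def coordinate_ascent_weights(M, n_rounds=20, grid=None, seed=0):
        # Coordinate-wise search over ensemble weights on the simplex:
        # sweep one weight at a time over a grid, renormalize, and keep
        # any candidate that certifies more examples.
        if grid is None:
            grid = np.linspace(0.0, 1.0, 21)
        n_models = M.shape[1]
        rng = np.random.default_rng(seed)
        w = np.full(n_models, 1.0 / n_models)  # start from the naive average
        for _ in range(n_rounds):
            for j in rng.permutation(n_models):
                best_w, best_obj = w, certified_count(w, M)
                for v in grid:
                    cand = w.copy()
                    cand[j] = v
                    s = cand.sum()
                    if s == 0.0:
                        continue
                    cand /= s  # project back onto the simplex
                    obj = certified_count(cand, M)
                    if obj > best_obj:
                        best_w, best_obj = cand, obj
                w = best_w
        return w
    ```

    Because the objective here is a certified count rather than a smooth margin, each coordinate update is a simple one-dimensional grid search; the paper's actual formulation optimizes a margin lower bound directly.
    
    
    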