
    Robust Reasoning for Autonomous Cyber-Physical Systems in Dynamic Environments

    Autonomous cyber-physical systems (CPS) operating in dynamic environments must work impeccably: they must handle their tasks consistently and trustworthily, i.e., exhibit robust behavior. Robust systems generally require valid, well-founded decisions, made using one or a combination of robust reasoning strategies, algorithms, and robustness analysis. In dynamic environments, however, data can be incomplete, skewed, contradictory, or redundant, which impacts the reasoning. Basing decisions on such data can lead to inconsistent, irrational, and unreasonable system movements, adversely affecting the system's reliability and integrity. This paper presents an assessment of robust reasoning for autonomous cyber-physical systems in dynamic environments. In this work, robust reasoning is understood as 1) the capability to draw conclusions from the available data by applying classical and non-classical reasoning strategies and algorithms, and 2) the ability to act and react robustly and safely in dynamic environments by employing robustness analysis to generate options for possible actions and to evaluate alternative decisions. The research shows that common existing strategies, algorithms, and analyses can be presented together with a comparison of their applicability, benefits, and drawbacks in the context of cyber-physical systems operating in dynamically changing environments. The conclusion is that robust reasoning enables cyber-physical systems to handle dynamic environments, and that combining these strategies and algorithms with robustness analysis can support achieving robust behavior in autonomous cyber-physical systems operating in dynamically changing environments.

    RAB: Provable Robustness Against Backdoor Attacks

    Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial attacks, including evasion and backdoor (poisoning) attacks. On the defense side, there have been intensive efforts to improve both empirical and provable robustness against evasion attacks; provable robustness against backdoor attacks, however, remains largely unexplored. In this paper, we focus on certifying machine learning model robustness against general threat models, especially backdoor attacks. We first provide a unified framework via randomized smoothing techniques and show how it can be instantiated to certify robustness against both evasion and backdoor attacks. We then propose the first robust training process, RAB, to smooth the trained model and certify its robustness against backdoor attacks. We derive the robustness bound for machine learning models trained with RAB and prove that this bound is tight. In addition, we show that it is possible to train robust smoothed models efficiently for simple models such as K-nearest-neighbor (K-NN) classifiers, and we propose an exact smooth-training algorithm that eliminates the need to sample from a noise distribution for such models. Empirically, we conduct comprehensive experiments with different machine learning (ML) models, including DNNs, differentially private DNNs, and K-NN models, on the MNIST, CIFAR-10, and ImageNet datasets, and provide the first benchmark for certified robustness against backdoor attacks. We also evaluate K-NN models on the spambase tabular dataset to demonstrate the advantages of the proposed exact algorithm. Both the theoretical analysis and the comprehensive evaluation on diverse ML models and datasets shed light on further robust learning strategies against general training-time attacks.
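    The randomized-smoothing idea underlying this line of work can be sketched at prediction time: classify many noise-perturbed copies of an input and take a majority vote over the resulting labels. The sketch below is a minimal illustration of smoothing in general, not RAB's actual training-time procedure; the `classify` callback, the noise level `sigma`, and the toy base model are hypothetical assumptions for demonstration.

    ```python
    import numpy as np

    def smoothed_predict(classify, x, sigma=0.5, n_samples=100, seed=0):
        """Majority-vote prediction over Gaussian perturbations of x.

        `classify` maps an input vector to an integer class label; here it
        stands in for a trained base model (a hypothetical assumption).
        """
        rng = np.random.default_rng(seed)
        votes = {}
        for _ in range(n_samples):
            noisy = x + rng.normal(0.0, sigma, size=x.shape)
            label = classify(noisy)
            votes[label] = votes.get(label, 0) + 1
        return max(votes, key=votes.get)

    # Toy base classifier: class 1 iff the mean feature value is positive.
    toy = lambda v: int(v.mean() > 0)
    print(smoothed_predict(toy, np.ones(10)))  # prints 1
    ```

    The vote margin between the top two classes is what a certification argument turns into a robustness radius; RAB applies the analogous smoothing over the training set rather than a single test input.
    
    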