The security of Internet of Things (IoT) ecosystems is crucial for maintaining user trust and facilitating widespread adoption. Machine Learning (ML) based Intrusion Detection and Prevention Systems (IDS/IPS) are frequently used to protect IoT networks, yet they are susceptible to adversarial attacks (AAs) and lack formal verifiability of their robustness. It has been demonstrated that meticulously designed AAs can alter the classification of ML-based IDSs, rendering them ineffective and posing risks to lives and physical infrastructure in safety-critical systems. This paper addresses these issues by introducing PROTECTION: a Provably RObust Intrusion DeTECTion system for IoT through recursive delegatION, which combines formal methods with ensemble machine learning. To enhance the robustness of ensemble ML models, we utilise Satisfiability Modulo Theories (SMT) solving to formally verify each classifier's robustness, ensuring that output probabilities remain outside a thick decision boundary even when small perturbations are applied to the inputs. If a classifier fails to meet this criterion on any training sample, we delegate that sample to further classifiers, which are trained iteratively until every sample satisfies the required robustness property. The efficacy of the final ensemble model is thoroughly evaluated against various input perturbations and AAs using SMT-based formal verification.