
    Cyber threat ontology and adversarial machine learning attacks: analysis and prediction perturbance

    Machine learning has been used in the cybersecurity domain to predict cyberattack trends. However, adversaries can inject malicious data into the dataset during training and testing to cause perturbations and produce false predictions. Analysing and predicting cyberattack correlations has become challenging due to their fuzzy nature and a limited understanding of the threat landscape. Thus, it is imperative to use cyber threat ontology (CTO) concepts to extract relevant attack instances in cyber supply chain (CSC) security for knowledge representation. This paper explores the challenges of CTO and adversarial machine learning (AML) attacks for threat prediction to improve cybersecurity. The novel contributions are threefold. First, CTO concepts are considered for semantic mapping and the definition of relationships, giving explicit knowledge of threat indicators. Second, AML techniques are deployed maliciously to manipulate algorithms during training and testing so that they yield false classification models. Finally, we discuss the performance analysis of the classification models and how CTO provides an automated means of representing threat knowledge. The results show that the analysis of AML attacks and CTO concepts could be used to validate a mediated schema for specific vulnerabilities.
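
    For illustration, here is a minimal Python sketch of the kind of training-time poisoning the abstract describes: an adversary flips the labels of a fraction of the training data, perturbing the resulting classifier's predictions. The dataset, model, and poisoning rate are assumptions for the example, not the paper's actual experimental setup.

```python
# Minimal sketch of a label-flipping poisoning attack (illustrative only;
# the dataset and model choices are assumptions, not the paper's setup).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for a cyberattack dataset (benign vs. malicious).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

def flip_labels(y, fraction, rng):
    """Adversary flips the labels of a fraction of the training data."""
    y_poisoned = y.copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_train, flip_labels(y_train, fraction=0.3, rng=rng))

print("clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

    The drop in test accuracy for the poisoned model is the "perturbance" effect: the manipulated training set yields a false classification model even though the test data itself is untouched.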

    Defacement Detection with Passive Adversaries

    This paper proposes a novel approach to defacement detection that explicitly addresses the possible presence of a passive adversary. Defacement detection is an important security measure for web sites and applications, aimed at avoiding unwanted modifications that would result in significant reputational damage. As in many other anomaly detection contexts, the algorithm used to identify possible defacements is obtained via an adversarial machine learning process. We consider an exploratory setting, where the adversary can observe the detector's alarm-generating behaviour with the purpose of devising and injecting defacements that will pass undetected. It is then necessary to make the learning process unpredictable, so that the adversary will be unable to replicate it and predict the classifier's behaviour. We achieve this goal by introducing a secret key, a key that our adversary does not know. The key influences the learning process in a number of precisely defined ways, including the subset of examples and features that are actually used, the time of learning and testing, and the learning algorithm's hyper-parameters. This learning methodology is successfully applied in this context by testing the system on both real and artificially modified web sites. A year-long experiment is also described, concerning the monitoring of the new web site of a major manufacturing company.
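
    As a rough illustration of the key-dependent learning idea, the sketch below derives a seed from a secret key and uses it to choose the training examples, the feature subset, and a hyper-parameter. The function `keyed_detector` and all specific fractions and values are hypothetical; the paper defines the key's exact roles precisely.

```python
# Sketch of a key-dependent learning process, assuming the secret key seeds
# (i) which training examples are used, (ii) which features are kept, and
# (iii) a hyper-parameter choice. Names and values are hypothetical.
import hashlib
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def keyed_detector(X, y, secret_key: bytes):
    # Derive a reproducible but adversary-unpredictable seed from the key.
    seed = int.from_bytes(hashlib.sha256(secret_key).digest()[:4], "big")
    rng = np.random.default_rng(seed)

    # Key-dependent subset of training examples (e.g., 80%).
    n = len(y)
    example_idx = rng.choice(n, size=int(0.8 * n), replace=False)

    # Key-dependent subset of features (e.g., 70%).
    d = X.shape[1]
    feature_idx = rng.choice(d, size=max(1, int(0.7 * d)), replace=False)

    # Key-dependent hyper-parameter choice.
    n_estimators = int(rng.choice([50, 100, 200]))

    model = RandomForestClassifier(n_estimators=n_estimators,
                                   random_state=seed)
    model.fit(X[np.ix_(example_idx, feature_idx)], y[example_idx])
    return model, feature_idx

# Demo with synthetic data (stand-in for page-feature vectors).
X = np.random.rand(500, 12)
y = (X[:, 0] > 0.5).astype(int)
model, feats = keyed_detector(X, y, secret_key=b"example-key")
print(model.predict(X[:5][:, feats]))
```

    Without the key, an adversary who knows the data and the code cannot reproduce the trained detector, so probing its alarm behaviour gives only limited information about the actual decision boundary.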