468 research outputs found

    Denial-of-service attack modelling and detection for HTTP/2 services

    Businesses and society alike have become heavily dependent on Internet-based services, while suffering constant disruptions caused by malicious actors. A malicious attack that prevents legitimate client machines from establishing Internet connections to web servers is termed a Denial of Service (DoS) attack; the volume and intensity of such attacks are growing rapidly, owing to readily available attack tools and ever-increasing network bandwidths. Most contemporary web servers are built on the HTTP/1.1 communication protocol; consequently, the existing literature on DoS attack modelling and the corresponding detection techniques addresses only HTTP/1.x network traffic. This thesis presents a model of DoS attack traffic against servers employing the new communication protocol, HTTP/2. The HTTP/2 protocol differs significantly from its predecessor, introducing new messaging formats and data exchange mechanisms, which creates an urgent need to understand how malicious attacks, including Denial of Service, can be launched against HTTP/2 services. Moreover, attackers can vary their traffic patterns to affect web services stealthily, which requires extensive research and modelling. This research provides not only a novel model of DoS attacks against HTTP/2 services, but also models of stealthy variants of such attacks that can disrupt routine web services. Specifically, HTTP/2 traffic patterns that consume a server's computing resources, such as CPU and memory, were thoroughly explored and examined. The study presents four HTTP/2 attack models: a flooding-based attack model, a distributed model, and two variant DoS attack models. The attack traffic analysis conducted in this study employed four machine learning techniques, namely Naïve Bayes, Decision Tree, JRip and Support Vector Machines. The HTTP/2 normal traffic model portrays the online activities of human users; it was also employed to generate flash-crowd traffic, i.e. a large volume of normal traffic that incapacitates a web server in a fashion similar to a DoS attack, albeit with non-malicious intent. Flash-crowd traffic generated from the defined model was used to populate the legitimate-traffic dataset, in order to fuzz the machine learning-based attack detection process. The two variants of DoS attack traffic differed in traffic intensity and in the inter-packet arrival delays introduced, so as to better analyse the type and quality of DoS attacks that can be launched against HTTP/2 services. A detailed analysis of HTTP/2 features is also presented, ranking the relevant network traffic features for all four traffic models based on the legitimate and attack traffic observations made in this study. The study shows that machine learning-based analysis yields better classification performance, i.e. a lower percentage of incorrectly classified instances, when the proposed HTTP/2 features are employed than when HTTP/1.1 features alone are used. Finally, the study shows how HTTP/2 DoS attacks can be modelled, and how future work can extend the proposed model to create variant attack traffic that can bypass intrusion detection systems. Likewise, as Internet traffic and the heterogeneity of Internet-connected devices are projected to increase significantly, legitimate traffic will exhibit varying patterns that demand further analysis. The importance of up-to-date legitimate traffic datasets, together with the scope to extend the DoS attack models presented here, suggests that research on DoS attack analysis and detection will benefit from the work presented in this thesis.
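
    As a minimal illustration of the classification step described above, the sketch below trains two of the four mentioned learners (Naïve Bayes and a decision tree) on a labelled table of traffic records. The feature names and file path are hypothetical placeholders, not the thesis's actual HTTP/2 feature set or tooling.

```python
# Hypothetical sketch: classifying HTTP/2 traffic records as legitimate or DoS.
# Feature names and the CSV path are illustrative placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

df = pd.read_csv("http2_traffic_features.csv")   # placeholder dataset
X = df[["frames_per_sec", "header_block_size", "stream_resets", "window_updates"]]
y = df["label"]                                   # e.g. "legitimate" vs "dos"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

for clf in (GaussianNB(), DecisionTreeClassifier(max_depth=8)):
    clf.fit(X_train, y_train)
    print(clf.__class__.__name__)
    print(classification_report(y_test, clf.predict(X_test)))
```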

    Adaptative Perturbation Patterns: Realistic Adversarial Learning for Robust Intrusion Detection

    Adversarial attacks pose a major threat to machine learning and to the systems that rely on it. In the cybersecurity domain, adversarial cyber-attack examples capable of evading detection are especially concerning. Nonetheless, an example generated for a domain with tabular data must be realistic within that domain. This work establishes the fundamental constraint levels required to achieve realism and introduces the Adaptative Perturbation Pattern Method (A2PM) to fulfill these constraints in a gray-box setting. A2PM relies on pattern sequences that are independently adapted to the characteristics of each class to create valid and coherent data perturbations. The proposed method was evaluated in a cybersecurity case study with two scenarios: Enterprise and Internet of Things (IoT) networks. Multilayer Perceptron (MLP) and Random Forest (RF) classifiers were created with regular and adversarial training, using the CIC-IDS2017 and IoT-23 datasets. In each scenario, targeted and untargeted attacks were performed against the classifiers, and the generated examples were compared with the original network traffic flows to assess their realism. The obtained results demonstrate that A2PM provides a scalable generation of realistic adversarial examples, which can be advantageous for both adversarial training and attacks.
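
    To make the idea of valid and coherent perturbations more concrete, the sketch below applies a small random perturbation to a tabular flow sample and clips every feature back into an interval assumed valid for its class. This is a simplified illustration under assumed bounds, not the actual A2PM pattern mechanism.

```python
# Simplified sketch of constraint-aware perturbation of a tabular flow sample.
# Not the A2PM implementation: the per-class feature bounds are illustrative.
import numpy as np

def perturb(sample, feature_bounds, scale=0.05, rng=None):
    """Add small noise to each feature, then clip back into the (min, max)
    interval assumed valid for the sample's class, keeping the example plausible."""
    rng = rng or np.random.default_rng(0)
    spread = feature_bounds[:, 1] - feature_bounds[:, 0]
    adv = sample + rng.normal(0.0, scale, size=sample.shape) * spread
    return np.clip(adv, feature_bounds[:, 0], feature_bounds[:, 1])

# Example: three numeric flow features with assumed (min, max) bounds for one class.
bounds = np.array([[0.0, 1500.0],   # mean packet size (bytes)
                   [1.0, 64.0],     # packets per flow
                   [0.0, 120.0]])   # flow duration (s)
print(perturb(np.array([900.0, 12.0, 3.5]), bounds))
```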

    Realistic adversarial machine learning to improve network intrusion detection

    Modern organizations can significantly benefit from the use of Artificial Intelligence (AI), and more specifically Machine Learning (ML), to tackle the growing number and increasing sophistication of cyber-attacks targeting their business processes. However, there are several technological and ethical challenges that undermine the trustworthiness of AI. One of the main challenges is the lack of robustness, which is an essential property to ensure that ML is used in a secure way. Improving robustness is no easy task because ML is inherently susceptible to adversarial examples: data samples with subtle perturbations that cause unexpected behaviors in ML models. ML engineers and security practitioners still lack the knowledge and tools to prevent such disruptions, so adversarial examples pose a major threat to ML and to the intelligent Network Intrusion Detection (NID) systems that rely on it. This thesis presents a methodology for a trustworthy adversarial robustness analysis of multiple ML models, and an intelligent method for the generation of realistic adversarial examples in complex tabular data domains like the NID domain: the Adaptative Perturbation Pattern Method (A2PM). It is demonstrated that a successful adversarial attack is not guaranteed to be a successful cyber-attack, and that adversarial data perturbations can only be realistic if they are simultaneously valid and coherent, complying with the domain constraints of a real communication network and the class-specific constraints of a certain cyber-attack class. A2PM can be used for adversarial attacks, to iteratively cause misclassifications, and for adversarial training, to perform data augmentation with slightly perturbed data samples. Two case studies were conducted to evaluate its suitability for the NID domain. The first verified that the generated perturbations preserved both validity and coherence in Enterprise and Internet of Things (IoT) network scenarios, achieving realism. The second verified that adversarial training with simple perturbations enables the models to retain a good generalization to regular IoT network traffic flows, in addition to being more robust to adversarial examples. The key takeaway of this thesis is: ML models can be incredibly valuable to improve a cybersecurity system, but their own vulnerabilities must not be disregarded. It is essential to continue the research efforts to improve the security and trustworthiness of ML and of the intelligent systems that rely on it.
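
    The adversarial-training use of A2PM described above amounts to augmenting the training set with slightly perturbed copies of the original samples before fitting a classifier. The sketch below illustrates that general idea under assumed numeric features; the perturbation routine and model choice are placeholders, not the thesis's exact procedure.

```python
# Illustrative adversarial training by data augmentation (not the thesis's exact procedure).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def augment_with_perturbations(X, y, scale=0.02, rng=None):
    """Return the original samples plus lightly perturbed copies with the same labels."""
    rng = rng or np.random.default_rng(1)
    lo, hi = X.min(axis=0), X.max(axis=0)
    X_adv = np.clip(X + rng.normal(0.0, scale, X.shape) * (hi - lo), lo, hi)
    return np.vstack([X, X_adv]), np.concatenate([y, y])

# Placeholder data standing in for numeric flow features and labels (e.g. from IoT-23).
rng = np.random.default_rng(2)
X_train, y_train = rng.random((200, 5)), rng.integers(0, 2, 200)

X_aug, y_aug = augment_with_perturbations(X_train, y_train)
model = RandomForestClassifier(n_estimators=100).fit(X_aug, y_aug)
```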

    Neural Network Models for TCP - SYN Flood Denial of Service Attack Detection With Source and Destination Anonymization

    The Internet has become a necessity in today's digital world, and making it secure is a pressing concern. Hackers are investing ever-increasing effort to compromise Internet nodes with novel techniques. According to Forbes, every minute $2,900,000 is lost to cybercrime. A common cyber-attack is the Denial of Service (DoS) or Distributed Denial of Service (DDoS) attack, which may bring a network to a standstill; unless mitigated, network services could be halted for an extended period. The attack can occur at any layer of the OSI model. This thesis focuses on SYN Flood DoS/DDoS attacks, also known as TCP Flood attacks, and studies the use of artificial neural networks to detect them. The specific neural network models used in this thesis are Gated Recurrent Units (GRU), Long Short-Term Memory (LSTM), and a semi-supervised model based on label propagation. All models detect attacks by analyzing the individual hexadecimal values in the packet header. A novelty of the approach followed in this thesis is that the neural networks do not consider the lexical values of the network packet (MAC addresses, IP addresses, and port numbers) as input features in their traffic analysis. Instead, the models are designed and trained to detect malicious traffic based on the time pattern of TCP flags, basing their analysis on time-sequenced patterns. An important hyperparameter discussed in this thesis is the size of the lookup window, that is, the number of past packets the model can access to predict the next packet. Evaluation results based on the datasets presented in this thesis show that the accuracies of the GRU, CNN/LSTM, and label propagation models are 81%, 93%, and 96%, respectively.
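
    The lookup-window idea described above can be made concrete with a short sketch: per-packet TCP flag features are sliced into fixed-length sequences, and a recurrent model predicts whether the next packet belongs to an attack. The window size, feature encoding and layer sizes below are assumptions for illustration, not the thesis's exact configuration.

```python
# Illustrative sketch: a GRU classifier over sliding windows of per-packet TCP flag features.
# WINDOW, N_FEATURES and the layer sizes are placeholders, not the thesis's configuration.
import numpy as np
import tensorflow as tf

WINDOW = 16        # "lookup window": number of past packets visible to the model
N_FEATURES = 8     # e.g. bit-encoded TCP flags per packet

def make_windows(packets, labels, window=WINDOW):
    """Slice a packet stream into overlapping fixed-length sequences."""
    X, y = [], []
    for i in range(len(packets) - window):
        X.append(packets[i:i + window])
        y.append(labels[i + window])     # label of the packet being predicted
    return np.array(X), np.array(y)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # benign vs SYN-flood
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(*make_windows(packet_features, packet_labels), epochs=10, batch_size=64)
```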

    Deep learning with focal loss approach for attacks classification

    The rapid development of deep learning has improved the detection and classification of attacks in intrusion detection systems. However, imbalanced data increases the complexity of the architecture model. This study proposes a novel deep learning model to overcome the problem of classifying multi-class attacks. The model consists of two stages: a pre-tuning stage that performs automatic feature extraction with a deep autoencoder, and a fine-tuning stage that uses a deep neural network classifier with fully connected layers. To reduce the effect of imbalanced class data, feature extraction was implemented with the deep autoencoder and an improved focal loss function was used in the classifier. The model was evaluated with three loss functions: cross-entropy, weighted cross-entropy, and focal loss. The results show that the approach can correct the class imbalance in deep learning-based classification. Attack classification using automatic feature extraction with the focal loss on the CSE-CIC-IDS2018 dataset yields a high-quality classifier with 98.38% precision, 98.27% sensitivity, and 99.82% specificity.
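
    The focal loss referred to above down-weights well-classified examples so that training focuses on hard, minority-class samples; its common form is FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t), where p_t is the predicted probability of the true class. The sketch below is a generic multi-class implementation with typical default values for alpha and gamma, not necessarily the settings used in the paper.

```python
# Generic multi-class focal loss: FL(p_t) = -alpha * (1 - p_t)**gamma * log(p_t).
# alpha and gamma defaults are common choices, not necessarily the paper's settings.
import numpy as np

def focal_loss(y_true_onehot, y_pred_probs, alpha=0.25, gamma=2.0, eps=1e-7):
    """Mean focal loss over a batch of one-hot labels and predicted class probabilities."""
    p = np.clip(y_pred_probs, eps, 1.0 - eps)
    p_t = np.sum(y_true_onehot * p, axis=1)          # probability of the true class
    return float(np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t)))

# Example: three samples, three classes; the hardest (last) sample dominates the loss.
y_true = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
y_pred = np.array([[0.90, 0.05, 0.05], [0.20, 0.70, 0.10], [0.30, 0.30, 0.40]])
print(focal_loss(y_true, y_pred))
```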

    Security at the Edge for Resource-Limited IoT Devices

    The Internet of Things (IoT) is rapidly growing, with an estimated 14.4 billion active endpoints in 2022 and a forecast of approximately 30 billion connected devices by 2027. This proliferation of IoT devices has come with significant security challenges, including intrinsic security vulnerabilities, limited computing power, and the absence of timely security updates. Attacks leveraging such shortcomings could lead to severe consequences, including data breaches and potential disruptions to critical infrastructures. In response to these challenges, this research paper presents the IoT Proxy, a modular component designed to create a more resilient and secure IoT environment, especially in resource-limited scenarios. The core idea behind the IoT Proxy is to externalize security-related aspects of IoT devices by channeling their traffic through a secure network gateway equipped with different Virtual Network Security Functions (VNSFs). Our solution includes a Virtual Private Network (VPN) terminator and an Intrusion Prevention System (IPS) that uses a machine learning-based technique called oblivious authentication to identify connected devices. The IoT Proxy’s modular, scalable, and externalized security approach creates a more resilient and secure IoT environment, especially for resource-limited IoT devices. Promising experimental results from laboratory testing demonstrate the suitability of the IoT Proxy for securing real-world IoT ecosystems.

    Reviewing the effectiveness of artificial intelligence techniques against cyber security risks

    The rapid increase in malicious cyber-criminal activities has made cybersecurity a crucial research discipline. Over the years, advances in information technology have enabled cybercriminals to launch increasingly sophisticated attacks that can endanger cybersecurity. As a result, traditional cybersecurity solutions have become ineffective against emerging cyberattacks. However, the advent of Artificial Intelligence (AI) – particularly Machine Learning (ML) and Deep Learning (DL) – and cryptographic techniques has shown promising results in countering the evolving cyber threats posed by adversaries. This study therefore discusses AI's potential for enhancing cybersecurity solutions and provides an in-depth analysis of different AI-based techniques that can detect, analyse, and prevent cyber threats. Finally, the study discusses future research opportunities linked to the development of AI systems in the field of cybersecurity.

    Detection of distributed denial-of-service attacks at the source

    From year to year, new records are set for the amount of traffic in an attack, demonstrating not only the constant presence of distributed denial-of-service attacks but also their evolution, setting them apart from other network threats. The growing importance of resource availability keeps the debate on the security of network devices and infrastructures ongoing, given their preponderant role in both home and corporate domains. Faced with this constant threat, the latest network security systems have been applying pattern recognition techniques to infer, detect, and react more quickly and assertively. This dissertation proposes methodologies to infer patterns of network activity from traffic: whether an activity follows a behaviour previously defined as normal, or whether there are deviations that raise suspicion about its normality. Everything indicates that the future of network defence systems will continue in this direction, drawing not only on the increasing amount of traffic but also on the diversity of actions, services and entities that reflect distinct patterns, thereby contributing to the detection of anomalous activities on the network. The methodologies propose the collection of metadata, up to the transport layer of the OSI model, which is then processed by machine learning algorithms in order to classify the underlying action. Intending to contribute beyond denial-of-service attacks and the network domain, the methodologies are described in a generic way, so that they can be applied in other scenarios of greater or lesser complexity. The third chapter presents a proof of concept with attack vectors that marked history, along with evaluation metrics that allow the different classifiers to be compared in terms of their success rate across the various network activities and their inherent dynamics. The various tests demonstrate the flexibility, speed and accuracy of the classification algorithms, with success rates between 90 and 99 percent.
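
    The metadata-collection step described above (gathering per-packet information up to the transport layer and classifying the underlying action) can be illustrated with a short sketch that aggregates packet records into per-flow features and compares several classifiers by cross-validated accuracy. The field names, input file and classifier choices are assumptions for illustration, not the dissertation's actual pipeline.

```python
# Illustrative sketch: aggregating transport-layer packet metadata into per-flow
# features and comparing classifiers. Field names and the CSV path are placeholders.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

packets = pd.read_csv("packet_metadata.csv")   # placeholder: 5-tuple, size, tcp_flags, label
flows = packets.groupby(["src_ip", "dst_ip", "src_port", "dst_port", "proto"]).agg(
    pkt_count=("size", "count"),
    mean_size=("size", "mean"),
    syn_count=("tcp_flags", lambda f: (f == "S").sum()),
    label=("label", "first"),
).reset_index()

X, y = flows[["pkt_count", "mean_size", "syn_count"]], flows["label"]
for clf in (DecisionTreeClassifier(), RandomForestClassifier(), KNeighborsClassifier()):
    print(clf.__class__.__name__, round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```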

    Role of Artificial Intelligence in the Internet of Things (IoT) Cybersecurity

    In recent years, the use of the Internet of Things (IoT) has increased exponentially, and cybersecurity concerns have increased along with it. On the cutting edge of cybersecurity is Artificial Intelligence (AI), which is used to develop complex algorithms that protect networks and systems, including IoT systems. However, cyber-attackers have figured out how to exploit AI and have even begun to use adversarial AI to carry out cybersecurity attacks. This review paper compiles information from several other surveys and research papers regarding IoT, AI, and attacks with and against AI, and explores the relationship between these three topics with the purpose of comprehensively presenting and summarizing the relevant literature in these fields.