Assessing and augmenting SCADA cyber security: a survey of techniques
SCADA systems monitor and control critical infrastructures of national importance such as power generation and distribution, water supply, transportation networks, and manufacturing facilities. The pervasiveness, miniaturisation and declining costs of internet connectivity have transformed these systems from strictly isolated into highly interconnected networks. This connectivity provides immense benefits such as reliability, scalability and remote access, but at the same time exposes an otherwise isolated and secure system to global cyber security threats. This inevitable transformation to highly connected systems thus necessitates effective security safeguards, as any compromise or downtime of SCADA systems can have severe economic, safety and security ramifications. One way to ensure vital asset protection is to adopt a viewpoint similar to an attacker's to determine weaknesses and loopholes in defences. Such a mindset helps to identify and fix potential breaches before their exploitation. This paper surveys tools and techniques to uncover SCADA system vulnerabilities. A comprehensive review of the selected approaches is provided along with their applicability.
GCNIDS: Graph Convolutional Network-Based Intrusion Detection System for CAN Bus
The Controller Area Network (CAN) bus serves as a standard protocol for
facilitating communication among various electronic control units (ECUs) within
contemporary vehicles. However, it has been demonstrated that the CAN bus is
susceptible to remote attacks, which pose risks to the vehicle's safety and
functionality. To tackle this concern, researchers have introduced intrusion
detection systems (IDSs) to identify and thwart such attacks. In this paper, we
present an innovative approach to intrusion detection on the CAN bus,
leveraging Graph Convolutional Network (GCN) techniques as introduced by Zhang,
Tong, Xu, and Maciejewski in 2019. By harnessing the capabilities of deep
learning, we aim to enhance attack detection accuracy while minimizing the
requirement for manual feature engineering. Our experimental findings
substantiate that the proposed GCN-based method surpasses existing IDSs in
terms of accuracy, precision, and recall. Additionally, our approach
demonstrates efficacy in detecting mixed attacks, which are more challenging to
identify than single attacks. Furthermore, it reduces the necessity for
extensive feature engineering and is particularly well-suited for real-time
detection systems. To the best of our knowledge, this represents the pioneering
application of GCN to CAN data for intrusion detection. Our proposed approach
holds significant potential in fortifying the security and safety of modern
vehicles, safeguarding against attacks and preventing them from undermining
vehicle functionality.
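As a rough illustration of the idea, the sketch below builds a message-transition graph from a CAN trace and applies one step of the standard GCN propagation rule. The graph construction, arbitration IDs, and function names are hypothetical simplifications for illustration, not the paper's exact pipeline.

```python
import numpy as np

def can_sequence_graph(ids):
    """Build a directed transition graph from a CAN trace: nodes are
    arbitration IDs, and edge (u, v) records that v followed u on the bus."""
    nodes = sorted(set(ids))
    idx = {n: i for i, n in enumerate(nodes)}
    A = np.zeros((len(nodes), len(nodes)))
    for u, v in zip(ids, ids[1:]):
        A[idx[u], idx[v]] = 1.0
    return nodes, A

def gcn_propagate(A, X):
    """One layer of the standard GCN propagation rule,
    D^{-1/2} (A + I) D^{-1/2} X, which smooths each node's features over
    its neighbourhood before a learned weight matrix would be applied."""
    A_hat = A + np.eye(A.shape[0])
    D = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return D @ A_hat @ D @ X

# toy trace of arbitration IDs as they appeared on the bus
trace = [0x100, 0x200, 0x100, 0x300, 0x200, 0x100]
nodes, A = can_sequence_graph(trace)
H = gcn_propagate(A, np.eye(len(nodes)))  # one-hot features, one GCN hop
```

In a full model, the propagated features would pass through learned weights and a classifier head; the point here is only how graph structure replaces hand-crafted features.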
False Data Injection Attacks in Smart Grids: State of the Art and Way Forward
In recent years, cyberattacks on smart grids have become more frequent.
Among the many malicious activities that can be launched against smart grids,
False Data Injection (FDI) attacks have raised significant concerns from both
academia and industry. FDI attacks can affect the internal state estimation
process, critical for smart grid monitoring and control, thus being able to bypass
conventional Bad Data Detection (BDD) methods. Hence, prompt detection and precise
localization of FDI attacks is becoming of paramount importance to ensure
smart grids' security and safety. Several papers have recently started to study and
analyze this topic from different perspectives and to address existing challenges.
Data-driven techniques and mathematical modeling are the major ingredients of
the proposed approaches. The primary objective of this work is to provide a
systematic review of, and insights into, joint detection and localization
approaches for FDI attacks, considering that other surveys have mainly concentrated
on the detection aspects without detailed coverage of localization. For
this purpose, we select and inspect more than forty major research contributions,
conducting a detailed analysis of their methodology and objectives in
relation to FDI attack detection and localization. We provide our key
findings on the identified papers according to different criteria, such as the
employed FDI attack localization techniques, utilized evaluation scenarios,
investigated FDI attack types, application scenarios, adopted methodologies, and
the use of additional data. Finally, we discuss open issues and future research
directions.
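Why structured FDI attacks bypass residual-based BDD can be shown in a few lines: if the injected vector lies in the column space of the measurement matrix H (a = Hc), the least-squares residual is unchanged. The matrices below are random toy values standing in for a grid model, not data from any surveyed paper.

```python
import numpy as np

# toy DC state-estimation model: z = H x + noise
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 3))        # hypothetical measurement matrix
x_true = np.array([1.0, -0.5, 0.3])
z = H @ x_true + 0.01 * rng.normal(size=6)

def bdd_residual(z, H):
    """Residual norm ||z - H x_hat|| that a threshold-based BDD checks."""
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x_hat)

r_clean = bdd_residual(z, H)
c = np.array([0.2, 0.1, -0.3])     # attacker's target state shift
z_attacked = z + H @ c             # structured FDI vector a = H c
r_attacked = bdd_residual(z_attacked, H)
# r_attacked equals r_clean: a threshold test on the residual cannot
# see the attack, even though the estimated state is now x_hat + c
```

This is exactly the gap the surveyed detection-and-localization approaches target: they bring in extra structure (temporal, topological, or data-driven) beyond the residual.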
A composable approach to design of newer techniques for large-scale denial-of-service attack attribution
Since its early days, the Internet has witnessed not only phenomenal growth, but also a large number of security attacks, and in recent years denial-of-service (DoS) attacks have emerged as one of the top threats. Stateless, destination-oriented Internet routing, combined with the ability to harness a large number of compromised machines and the relative ease and low cost of launching such attacks, has made this a hard problem to address. Additionally, the myriad requirements of scalability, incremental deployment, adequate user privacy protections, and appropriate economic incentives have further complicated the design of DDoS defense mechanisms. While the many research proposals to date have focused differently on prevention, mitigation, or traceback of DDoS attacks, the lack of a comprehensive approach satisfying the different design criteria for successful attack attribution is indeed disturbing.
Our first contribution here has been the design of a composable data model that has helped us represent the various dimensions of the attack attribution problem, particularly the performance attributes of accuracy, effectiveness, speed and overhead, as orthogonal and mutually independent design considerations. We have then designed custom optimizations along each of these dimensions, and have further integrated them into a single composite model, to provide strong performance guarantees. Thus, the proposed model has given us a single framework that can not only address the individual shortcomings of the various known attack attribution techniques, but also provide a more comprehensive countermeasure against DDoS attacks.
Our second contribution here has been a concrete implementation based on the proposed composable data model, having adopted a graph-theoretic approach to identify and subsequently stitch together individual edge fragments in the Internet graph to reveal the true routing path of any network data packet. The proposed approach has been analyzed through theoretical and experimental evaluation across multiple metrics, including scalability, incremental deployment, speed and efficiency of the distributed algorithm, and finally the total overhead associated with its deployment. We have thereby shown that it is realistically feasible to provide strong performance and scalability guarantees for Internet-wide attack attribution.
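Assuming the edge fragments have already been recovered, the stitching step reduces to chaining (upstream, downstream) pairs backwards from the victim. The sketch below uses hypothetical router names and assumes a single loop-free path; real schemes must reconstruct fragments probabilistically from many marked packets.

```python
def stitch_path(fragments, victim):
    """Stitch recovered (upstream, downstream) edge fragments into the
    full routing path, walking back from the victim toward the source.
    Assumes the fragments form one loop-free path (the idealised case)."""
    upstream_of = {down: up for up, down in fragments}
    path = [victim]
    while path[-1] in upstream_of:
        path.append(upstream_of[path[-1]])
    return list(reversed(path))  # source-side router first, victim last

# toy fragments, e.g. as recovered from probabilistic packet markings
edges = [("R3", "victim"), ("R1", "R2"), ("R2", "R3")]
path = stitch_path(edges, "victim")  # -> ["R1", "R2", "R3", "victim"]
```

The hard part in practice is not this chaining but collecting enough trustworthy fragments at scale, which is what the composable model's accuracy, speed, and overhead dimensions capture.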
Our third contribution here has further advanced the state of the art by directly identifying individual path fragments in the Internet graph, having adopted a distributed divide-and-conquer approach employing simple recurrence relations as individual building blocks. A detailed analysis of the proposed approach on real-life Internet topologies, with respect to network storage and traffic overhead, has provided a more realistic characterization. Thus, the proposed approach not only lends itself well to simplified operations at scale but can also provide robust network-wide performance and security guarantees for Internet-wide attack attribution.
Our final contribution here has introduced the notion of anonymity into the overall attack attribution process to significantly broaden its scope. The highly invasive nature of widespread data gathering for network traceback continues to violate one of the key principles of Internet use today: the ability to stay anonymous and operate freely without retribution. In this regard, we have successfully reconciled these mutually divergent requirements to make attack attribution not only economically feasible and politically viable but also socially acceptable.
This work opens up several directions for future research: analysis of existing attack attribution techniques to identify further scope for improvements, incorporation of newer attributes into the design framework of the composable data model abstraction, and finally the design of newer attack attribution techniques that comprehensively integrate the various attack prevention, mitigation and traceback techniques in an efficient manner.
Resilience Strategies for Network Challenge Detection, Identification and Remediation
The enormous growth of the Internet and its use in everyday life make it an attractive target for malicious users. As the network becomes more complex and sophisticated, it becomes more vulnerable to attack. There is a pressing need for the future internet to be resilient, manageable and secure. Our research is on distributed challenge detection and is part of the EU Resumenet Project (Resilience and Survivability for Future Networking: Framework, Mechanisms and Experimental Evaluation). It aims to make networks more resilient to a wide range of challenges, including malicious attacks, misconfiguration, faults, and operational overloads. Resilience means the ability of the network to provide an acceptable level of service in the face of significant challenges; it is a superset of commonly used definitions for survivability, dependability, and fault tolerance. Our proposed resilience strategy detects a challenge situation by identifying its occurrence and impact in real time, then initiating appropriate remedial action. Action is taken autonomously to continue operations as far as possible and to mitigate the damage, allowing an acceptable level of service to be maintained. The contribution of our work is the ability to mitigate a challenge as early as possible and to rapidly detect its root cause. Our proposed multi-stage, policy-based challenge detection system also identifies both existing and unforeseen challenges; this has been studied and demonstrated with an unknown worm attack. Our multi-stage approach reduces computational complexity compared to the traditional single-stage approach, in which one managed object is responsible for all the functions. The approach we propose in this thesis has the flexibility, scalability, adaptability, reproducibility and extensibility needed to assist in the identification and remediation of many future network challenges.
Towards Protection Against Low-Rate Distributed Denial of Service Attacks in Platform-as-a-Service Cloud Services
Nowadays, the variety of technology available to perform daily tasks is abundant, and different businesses
and people benefit from this diversity. The more technology evolves, the more useful it gets; in
contrast, it also becomes a target for malicious users. Cloud Computing is one of the technologies
that has been adopted by different companies worldwide throughout the years. Its popularity
is essentially due to its characteristics and the way it delivers its services. This Cloud expansion
also means that malicious users may try to exploit it, as the research studies presented throughout
this work reveal. According to these studies, the Denial of Service attack is a type of threat
that constantly tries to take advantage of Cloud Computing services.
Several companies have moved or are moving their services to hosted environments provided by Cloud
Service Providers and are using several applications based on those services. The literature on
the subject brings to attention that, because of this expansion in Cloud adoption, the use of applications
has increased. Therefore, DoS threats are increasingly aimed at the Application Layer and, additionally,
advanced variations such as Low-Rate Distributed Denial of Service attacks are being used.
Some research is being conducted specifically on the detection and mitigation of this kind
of threat, and the significant problem found with this DDoS variant is the difficulty of differentiating
malicious traffic from legitimate user traffic. The main goal of this attack is to exploit
the communication behaviour of the HTTP protocol, sending legitimate-looking traffic with small changes
to slowly fill a server's request queue, nearly blocking real users' access to
the server's resources during the attack.
This kind of attack usually has a small time window, but in order to be more efficient
it is launched from infected computers, creating a network of attackers and turning it into
a distributed attack. In this work, the idea for battling Low-Rate Distributed Denial of Service
attacks is to integrate different technologies inside a Hybrid Application whose main goal
is to identify and separate malicious traffic from legitimate traffic. First, a study is done to
observe the behaviour of each type of Low-Rate attack in order to gather specific information
related to its characteristics while the attack is executing in real time. Then, using Tshark
filters, that packet information is collected. The next step is to develop combinations
of specific information obtained from the packet filtering and compare them. Finally,
each packet is analyzed based on these combination patterns. A log file is created to store the
data gathered after the entropy calculation in a friendly format.
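The entropy calculation mentioned above can be sketched as a plain Shannon-entropy computation over a packet field: when a few attacking hosts dominate the traffic, the entropy of the source-address distribution drops sharply. The traces and IP values below are illustrative, not from the thesis experiments.

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (bits) of a packet field's empirical distribution;
    a sharp drop in source-address entropy can flag a small attacker set."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# hypothetical source-IP columns extracted from a capture (e.g. via tshark)
normal = ["10.0.0.%d" % (i % 50) for i in range(200)]  # many distinct clients
attack = ["10.0.0.1"] * 180 + ["10.0.0.2"] * 20        # two hosts dominate
```

Comparing `shannon_entropy(normal)` against `shannon_entropy(attack)` against a threshold is one simple way such a metric could feed the combination patterns described above.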
In order to test the efficiency of the application, a Cloud virtual infrastructure was built using
OpenNebula Sandbox and the Apache Web Server. Two tests were run against the infrastructure:
the first aimed to verify the effectiveness of the tool against the
Cloud environment created. Based on the results of this test, a second test was proposed to
demonstrate how the Hybrid Application performs against the attacks launched. The tests
showed how disruptive the types of Slow-Rate DDoS can be and also exhibited
promising results for the Hybrid Application's performance against Low-Rate Distributed Denial of
Service attacks. The Hybrid Application was successful in identifying each type of Low-Rate DDoS
and separating the traffic, generating few false positives in the process.
The results are displayed in the form of parameters and graphs.
An Interactive Relaxation Approach for Anomaly Detection and Preventive Measures in Computer Networks
It is proposed to develop a framework for detecting and analyzing small and widespread changes in specific dynamic characteristics of several nodes. The characteristics are locally measured at each node in a large network of computers and analyzed using a computational paradigm known as the Relaxation technique. The goal is to be able to detect the onset of a worm or virus as it originates, spreads out, attacks and disables the entire network. Currently, selective disabling of one or more features across an entire subnet, e.g. firewalls, provides limited security and keeps us from designing high-performance net-centric systems. The most desirable response is to surgically disable one or more nodes, or to isolate one or more subnets. The proposed research seeks to model virus/worm propagation as a spatio-temporal process. Such models have been successfully applied in heat flow and in evidence- or gestalt-driven perception of images, among others. In particular, we develop an iterative technique driven by the self-assessed dynamic status of each node in a network. The status of each node will be updated incrementally in concurrence with its connected neighbors to enable timely identification of compromised nodes and subnets. Several key insights used in the image analysis of line diagrams, through an iterative, relaxation-driven node labeling method, are explored to help develop this new framework.
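A minimal version of such a neighbour-driven relaxation update, with a hypothetical four-host chain, might look like the sketch below; the update rule and parameters are illustrative assumptions, not the abstract's exact formulation.

```python
import numpy as np

def relax(adjacency, status, rounds=5, alpha=0.5):
    """Iteratively blend each node's self-assessed status with the mean
    status of its neighbours, diffusing suspicion through the topology."""
    A = np.asarray(adjacency, dtype=float)
    deg = A.sum(axis=1)
    deg[deg == 0] = 1.0  # isolated nodes simply keep their own status
    s = np.asarray(status, dtype=float)
    for _ in range(rounds):
        s = (1 - alpha) * s + alpha * (A @ s) / deg
    return s

# four hosts in a chain 0-1-2-3; host 0 self-reports anomalous behaviour
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
s = relax(A, [1.0, 0.0, 0.0, 0.0])
```

After a few rounds, suspicion is highest at the reporting host and decays with distance, which is the property that lets an operator isolate a subnet rather than disable the whole network.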
The Dynamics of Internet Traffic: Self-Similarity, Self-Organization, and Complex Phenomena
The Internet is the most complex system ever created in human history.
Therefore, its dynamics and traffic unsurprisingly take on a rich variety of
complex dynamics, self-organization, and other phenomena that have been
researched for years. This paper is a review of the complex dynamics of
Internet traffic. Departing from normal treatises, we will take a view from
both the network engineering and physics perspectives showing the strengths and
weaknesses as well as insights of both. In addition, many less covered
phenomena such as traffic oscillations, large-scale effects of worm traffic,
and comparisons of the Internet and biological models will be covered.
Comment: 63 pages, 7 figures, 7 tables, submitted to Advances in Complex Systems
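Self-similarity in traffic is commonly quantified by the Hurst parameter H; one of the simplest estimators, the aggregated-variance method, can be sketched as follows. The white-noise input is only a sanity check standing in for a real trace, which would typically estimate H well above 0.5.

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(1, 2, 4, 8, 16)):
    """Aggregated-variance estimate of the Hurst parameter H: for a
    self-similar series, Var(X^(m)) ~ m^(2H - 2), so the log-log slope
    of block-mean variance against block size m gives H = slope/2 + 1."""
    x = np.asarray(x, dtype=float)
    log_m, log_v = [], []
    for m in block_sizes:
        n = len(x) // m
        block_means = x[: n * m].reshape(n, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(block_means.var()))
    slope = np.polyfit(log_m, log_v, 1)[0]
    return slope / 2 + 1

# short-range-dependent white noise should come out near H = 0.5
rng = np.random.default_rng(1)
h = hurst_aggvar(rng.normal(size=8192))
```

The famous finding for Ethernet traffic is that measured H stays well above 0.5 across time scales, i.e. aggregation does not smooth the traffic the way Poisson models predict.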
Leveraging siamese networks for one-shot intrusion detection model
The use of supervised Machine Learning (ML) to enhance Intrusion Detection Systems (IDS) has been the subject of significant research. Supervised ML is based upon learning by example, demanding significant volumes of representative instances for effective training and requiring the model to be retrained for every unseen cyber-attack class. However, retraining the models in-situ renders the network susceptible to attacks owing to the time window required to acquire a sufficient volume of data. Although anomaly detection systems provide a coarse-grained defence against unseen attacks, these approaches are significantly less accurate and suffer from high false-positive rates. Here, a complementary approach referred to as “One-Shot Learning”, whereby a limited number of examples of a new attack-class is used to identify that class (out of many), is detailed. The model grants a classification opportunity for cyber-attack classes that were not seen during training, without retraining. A Siamese Network is trained to differentiate between classes based on pair similarities, rather than features, allowing it to identify new and previously unseen attacks. The performance of a pre-trained model in classifying new attack-classes based on only one example is evaluated using three mainstream IDS datasets: CICIDS2017, NSL-KDD, and KDD Cup'99. The results confirm the adaptability of the model in classifying unseen attacks and the trade-off between performance and the need for distinctive class representations.
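A trained Siamese network reduces one-shot classification to a nearest-embedding decision: compare the query against a single labelled example per class and pick the closest. The sketch below shows that rule with made-up two-dimensional embeddings standing in for the learned representation.

```python
import numpy as np

def pair_distance(emb_a, emb_b):
    """Euclidean distance between two embeddings -- the quantity a
    Siamese network is trained to make small for same-class pairs."""
    return np.linalg.norm(np.asarray(emb_a) - np.asarray(emb_b))

def one_shot_classify(query_emb, support_embs):
    """One-shot rule: pick the class whose single support example is
    nearest in embedding space; new classes need no retraining."""
    return min(support_embs,
               key=lambda label: pair_distance(query_emb, support_embs[label]))

# hypothetical embeddings produced by a trained twin network
support = {"dos": np.array([1.0, 0.0]), "probe": np.array([0.0, 1.0])}
label = one_shot_classify(np.array([0.9, 0.1]), support)  # -> "dos"
```

Adding a new attack class then amounts to adding one labelled embedding to `support`, which is the retraining-free property the abstract emphasises.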