394 research outputs found
State of the Art Intrusion Detection System for Cloud Computing
Cloud computing is no longer a new term in computing technology. It was once dismissed as a mere marketing term, but today it not only delivers significant improvements in resource utilisation but also creates new opportunities in data protection, an area where intrusion detection technologies are advancing rapidly. From a security perspective, cloud computing also raises concerns about data protection and intrusion detection mechanisms. This paper surveys and explores the latest Cloud Intrusion Detection Systems, providing a comprehensive taxonomy and investigating possible solutions for detecting intrusions in cloud computing systems. As a result, we provide a comprehensive review of Cloud Intrusion Detection System research while highlighting the specific properties of such systems. We also present a taxonomy of the key issues in the Cloud Intrusion Detection System area and discuss the different approaches taken to solve them. We conclude the paper with a critical analysis of the challenges that have not yet been fully solved.
On the subspace learning for network attack detection
Doctoral thesis — Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2019.
The cost of all types of cyberattacks is increasing for global organizations. The White House of the
U.S. government estimates that malicious cyber activity cost the U.S. economy between US$57 billion and US$109 billion in 2016. Recently, it is possible to observe an increase in the number of
Denial of Service (DoS), botnets, malicious insider and ransomware attacks.
Accenture consulting argues that 89% of survey respondents believe breakthrough technologies,
like artificial intelligence, machine learning and user behavior analytics, are essential for securing
their organizations. To face adversarial models, novel network attacks, and attackers' countermeasures to avoid detection, it is possible to adopt unsupervised or semi-supervised approaches for network anomaly detection by means of behavioral analysis, where known anomalies are not necessary for training models.
Signal processing schemes have been applied to detect malicious traffic in computer networks
through unsupervised approaches, showing advances in network traffic analysis, in network
attack detection, and in network intrusion detection systems.
Anomalies can be hard to identify and separate from normal data because they occur rarely in comparison to normal events. Imbalanced data can compromise the performance of most standard learning algorithms, creating a bias toward the majority class and reducing the capacity to detect anomalies, which are characterized by the minority class. Therefore, anomaly detection algorithms have to be highly discriminating, robust to corruption, and able to deal with the imbalanced data problem.
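The imbalance problem described above can be made concrete with a small sketch (illustrative only, not taken from the thesis): a trivial detector that labels every event as normal reaches 99% accuracy on data with 1% anomalies while detecting none of them.

```python
# Illustrative sketch: why plain accuracy misleads on imbalanced data.
import numpy as np

rng = np.random.default_rng(0)
y_true = np.zeros(1000, dtype=int)
y_true[rng.choice(1000, size=10, replace=False)] = 1   # 1% anomalies

y_pred = np.zeros(1000, dtype=int)   # detector biased to the majority class

accuracy = (y_pred == y_true).mean()
recall = (y_pred[y_true == 1] == 1).mean()   # fraction of anomalies caught

print(f"accuracy = {accuracy:.2%}, anomaly recall = {recall:.2%}")
# → accuracy = 99.00%, anomaly recall = 0.00%
```

This is why anomaly detectors are evaluated on minority-class-sensitive metrics rather than raw accuracy.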
Some widely adopted algorithms for anomaly detection assume Gaussian-distributed data for legitimate observations; however, this assumption may not hold for network traffic, which is usually characterized by skewed and heavy-tailed distributions.
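A minimal sketch of this mis-calibration (an illustration of the general point, not an experiment from the thesis), using a Student-t distribution with 3 degrees of freedom as an assumed stand-in for heavy-tailed traffic features:

```python
# A 3-sigma Gaussian rule is calibrated for a ~0.27% false-alarm rate,
# but on heavy-tailed data it fires far more often.
import numpy as np

rng = np.random.default_rng(1)
gauss = rng.normal(size=100_000)
heavy = rng.standard_t(3, size=100_000)   # heavy-tailed stand-in

def three_sigma_rate(x):
    """Fraction of points beyond three sample standard deviations."""
    z = (x - x.mean()) / x.std()
    return (np.abs(z) > 3).mean()

print(f"Gaussian alarm rate:     {three_sigma_rate(gauss):.3%}")
print(f"heavy-tailed alarm rate: {three_sigma_rate(heavy):.3%}")
```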
As a first important contribution, we propose Eigensimilarity, an approach based on signal processing concepts for detecting malicious traffic in computer networks. We evaluate the accuracy and performance of the proposed framework on a simulated scenario and on the DARPA 1998 data set. The performed experiments show that synflood, fraggle, and port scan attacks can be detected accurately and in great detail by Eigensimilarity, in an automatic and blind fashion, i.e., an unsupervised approach.
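The flavor of such eigen-analysis can be sketched as follows. This is a hedged illustration only: the features, window size, and MAD threshold are assumptions, not the thesis's actual Eigensimilarity algorithm. The leading eigenvalue of a per-window feature covariance jumps when a correlated, flood-like surge appears.

```python
import numpy as np

rng = np.random.default_rng(7)

def leading_eigenvalue(window):
    """Largest eigenvalue of the feature covariance within a time window."""
    return np.linalg.eigvalsh(np.cov(window, rowvar=False))[-1]

# 20 one-minute windows of 3 traffic features (e.g. pkts/s, bytes/s, flows/s)
windows = [rng.normal(100.0, 5.0, size=(60, 3)) for _ in range(20)]
# inject a flood-like burst into window 13: a correlated surge in all features
windows[13] += rng.normal(80.0, 10.0, size=(60, 1))

scores = np.array([leading_eigenvalue(w) for w in windows])
med = np.median(scores)
mad = np.median(np.abs(scores - med))
flagged = np.flatnonzero(scores > med + 6.0 * mad)
print("suspicious windows:", flagged)
```

The robust median/MAD baseline keeps the detector blind, in the sense that no labeled attack windows are needed.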
Considering that accounting for skewness improves anomaly detection in imbalanced and skewed data, such as network traffic, we propose the Moment-based Robust Principal Component Analysis (m-RPCA) for network attack detection. The m-RPCA is a framework based on distances between contaminated observations and moments computed from a robust subspace learned by Robust Principal Component Analysis (RPCA), in order to detect anomalies in skewed data and network traffic. We evaluate the accuracy of the m-RPCA for anomaly detection on simulated data sets with skewed and heavy-tailed distributions, as well as on the CTU-13 data set. The experimental evaluation compares our proposal to widely adopted algorithms for anomaly detection and shows that the distance between robust estimates and contaminated observations can improve anomaly detection on skewed data and network attack detection.
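The underlying idea, scoring observations by their distance to a robustly learned low-rank subspace, can be sketched as follows. This simplified stand-in uses median centering and plain SVD rather than the thesis's moment-based RPCA, and all dimensions and thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "traffic features": 500 points near a 2-D subspace of R^5;
# every 100th point is a gross anomaly pushed off the subspace.
basis, _ = np.linalg.qr(rng.normal(size=(5, 2)))   # orthonormal 2-D basis
X = rng.normal(scale=5.0, size=(500, 2)) @ basis.T
X += 0.05 * rng.normal(size=(500, 5))              # small sensor noise
X[::100] += rng.normal(10.0, 1.0, size=(5, 5))     # planted anomalies

center = np.median(X, axis=0)                      # robust location estimate
_, _, Vt = np.linalg.svd(X - center, full_matrices=False)
V = Vt[:2].T                                       # learned rank-2 subspace

resid = (X - center) - (X - center) @ V @ V.T      # out-of-subspace component
score = np.linalg.norm(resid, axis=1)
med = np.median(score)
thresh = med + 8.0 * np.median(np.abs(score - med))
print("flagged indices:", np.flatnonzero(score > thresh))
```

Replacing the mean with robust location estimates is what keeps the score meaningful under contamination; the actual m-RPCA additionally exploits higher-order moments to handle skewness.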
Moreover, we propose an architecture and approach to evaluate a proof of concept of Eigensimilarity for malicious behavior detection in mobile applications, in order to detect possible threats in offline corporate mobile clients. We propose scenarios, features, and approaches for threat analysis by means of Eigensimilarity, and evaluate the processing time required to execute Eigensimilarity on mobile devices.
Security Challenges from Abuse of Cloud Service Threat
Cloud computing is an ever-growing technology that leverages dynamic and versatile provision of computational resources and services. In spite of the countless benefits that cloud services offer, there is always a security concern about new threats and risks. This paper provides a useful introduction to the rising security issues of the Abuse of Cloud Services threat, which has no standard security measures to mitigate its risks and vulnerabilities. The threat can result in severe system gridlock and can make cloud services unavailable or even shut them down completely. The study identifies the potential challenges arising from the Abuse of Cloud Services threat, namely BotNet, BotCloud, Shared Technology Vulnerabilities, and Malicious Insiders. It further describes the attack methods, impacts, and underlying causes of the identified challenges. The study evaluates the currently available solutions and proposes mitigating security controls for the security risks and challenges posed by the Abuse of Cloud Services threat.
Toward a real-time TCP SYN Flood DDoS mitigation using Adaptive Neuro-Fuzzy classifier and SDN Assistance in Fog Computing
The growth of the Internet of Things (IoT) has recently impacted our daily
lives in many ways. As a result, a massive volume of data is generated and
needs to be processed in a short period of time. Therefore, the combination of
computing models such as cloud computing is necessary. The main disadvantage of
the cloud platform is its high latency due to the centralized mainframe.
Fortunately, a distributed paradigm known as fog computing has emerged to
overcome this problem, offering cloud services with low latency and high-access
bandwidth to support many IoT application scenarios. However, attacks against
fog servers can take many forms, such as Distributed Denial of Service (DDoS)
attacks that severely affect the reliability and availability of fog services.
To address these challenges, we propose the mitigation of SYN flood DDoS
attacks in fog computing using an Adaptive Neuro-Fuzzy Inference System (ANFIS)
and Software Defined Networking (SDN) assistance (FASA). The simulation results
show that the FASA system outperforms other algorithms in terms of accuracy,
precision, recall, and F1-score. This shows how crucial our system is for
detecting and mitigating TCP SYN flood DDoS attacks.
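The kind of per-source accounting such a detector consumes as features can be sketched simply (a hypothetical heuristic, not the paper's ANFIS classifier; the ratio and minimum-count parameters are assumptions): a source that sends many SYNs but almost never completes the handshake is a SYN-flood suspect.

```python
from collections import Counter

def syn_flood_suspects(events, ratio=3.0, min_syns=100):
    """events: iterable of (src_ip, tcp_flag) pairs, flag in {"SYN", "ACK"}.

    Flags sources whose SYN count is large and far exceeds their ACK count,
    i.e. sources accumulating many half-open connections.
    """
    syns, acks = Counter(), Counter()
    for src, flag in events:
        (syns if flag == "SYN" else acks)[src] += 1
    return {src for src, n in syns.items()
            if n >= min_syns and n > ratio * (acks[src] + 1)}

# benign client completes its handshakes; the attacker sends bare SYNs
events = [("10.0.0.5", "SYN"), ("10.0.0.5", "ACK")] * 200
events += [("203.0.113.9", "SYN")] * 500
print(syn_flood_suspects(events))   # → {'203.0.113.9'}
```

In an SDN setting, such per-window counts would be computed at the controller from flow statistics and fed to the classifier, which then installs blocking rules.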
Comparative Analysis Based on Survey of DDOS Attacks’ Detection Techniques at Transport, Network, and Application Layers
Distributed Denial of Service (DDoS) is one of the most prevalent attacks and can be executed in diverse ways using various tools and codes. This makes it very difficult for security researchers and engineers to come up with a rigorous and efficient security methodology. Even with thorough research, analysis, real-time implementation, and application of the best mechanisms in test environments, there are various ways to exploit the smallest vulnerability within the system that gets overlooked while designing the defense mechanism. This paper presents a comprehensive survey, with comparative analysis, of the methodologies implemented by researchers and engineers to detect DDoS attacks at the network, transport, and application layers. DDoS attacks are most prevalent at these three layers of the OSI model, justifying the focus on them.
INTRUSION PREDICTION SYSTEM FOR CLOUD COMPUTING AND NETWORK BASED SYSTEMS
Cloud computing offers cost-effective computational and storage services with on-demand, scalable capacities according to customers' needs. These properties encourage organisations and individuals from different disciplines to migrate from classical computing to cloud computing. Although cloud computing is a trendy technology that opens horizons for many businesses, it is a paradigm that exploits already existing computing technologies in a new framework rather than being a novel technology. This means that cloud computing inherited classical computing problems that are still challenging. Cloud computing security is considered one of the major problems, requiring strong security systems to protect the system and the valuable data stored and processed in it. Intrusion detection systems are one of the important security components and defence layers that detect cyber-attacks and malicious activities in cloud and non-cloud environments. However, they have limitations; for example, attacks are often detected only after the damage has already been done. In recent years, cyber-attacks have increased rapidly in volume and diversity. In 2013, for example, over 552 million customers' identities and crucial pieces of information were revealed through data breaches worldwide [3]. These growing threats are further demonstrated in the 50,000 daily attacks on the London Stock Exchange [4]. It has been predicted that cyber-attacks will cost the global economy $3 trillion on aggregate by 2020 [5]. This thesis focuses on proposing an Intrusion Prediction System that is capable of sensing an attack before it happens in cloud or non-cloud environments. The proposed solution is based on assessing the host system's vulnerabilities and monitoring the network traffic for attack preparations. It has three main modules. The monitoring module observes the network for any intrusion preparations.
This thesis proposes a new dynamic-selective statistical algorithm for detecting scan activities, which are part of reconnaissance, an essential step in network attack preparation. The proposed method performs a selective statistical analysis of network traffic, searching for attack or intrusion indications. This is achieved by exploring and applying different statistical and probabilistic methods that deal with scan detection. The second module of the prediction system is vulnerability assessment, which evaluates the weaknesses and faults of the system and measures the probability of the system falling victim to a cyber-attack. Finally, the third module is the prediction module, which combines the outputs of the other two modules and performs risk assessments of the system's security based on the predicted intrusions. The results of the conducted experiments showed that the suggested system outperforms analogous methods with regard to network scan detection performance, which accordingly means a significant improvement to the security of the targeted system. The scanning detection algorithm achieved high detection accuracy with 0% false negatives and 50% false positives. In terms of performance, the detection algorithm consumed only 23% of the data needed for analysis compared to the best-performing rival detection method.
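The intuition behind scan detection can be sketched with a much simpler stand-in (an illustrative heuristic, not the thesis's dynamic-selective statistical algorithm; the threshold is an assumption): a source that probes an unusually large number of distinct (host, port) targets within a window is a likely scanner.

```python
from collections import defaultdict

def port_scan_suspects(flows, max_targets=20):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples.

    Flags sources probing more distinct (host, port) targets than a benign
    client plausibly would within one observation window.
    """
    targets = defaultdict(set)
    for src, dst, port in flows:
        targets[src].add((dst, port))
    return {src for src, seen in targets.items() if len(seen) > max_targets}

flows = [("10.0.0.5", "10.0.0.1", 443)] * 50                        # normal
flows += [("198.51.100.7", "10.0.0.1", p) for p in range(1, 1025)]  # scanner
print(port_scan_suspects(flows))   # → {'198.51.100.7'}
```

Statistical approaches like the one in the thesis refine this intuition by modeling the probability of each source's fan-out instead of using a fixed cutoff.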
Information fusion architectures for security and resource management in cyber physical systems
Data acquisition through sensors is crucial in determining the operability of the observed physical entity. Cyber Physical Systems (CPSs) are an example of distributed systems in which sensors embedded into the physical system are used for sensing and data acquisition. CPSs are a collaboration between physical and computational cyber components. The control decisions sent back from the computational cyber components to the actuators on the physical components close the feedback loop of the CPS. Since this feedback is based solely on the data collected through the embedded sensors, information acquisition from the data plays a vital role in determining the operational stability of the CPS. The data collection process may be hindered by disturbances such as system faults, noise, and security attacks. Hence, simple data acquisition techniques will not suffice, as an accurate system representation cannot be obtained. Therefore, more powerful methods of inferring information from collected data, such as information fusion, have to be used.
Information fusion is analogous to the cognitive process humans use to continuously integrate data from their senses to make inferences about their environment. Data from the sensors are combined using techniques drawn from several disciplines, such as adaptive filtering, machine learning, and pattern recognition. Decisions made from such combinations of data form the crux of information fusion and differentiate it from flat, structured data aggregation. In this dissertation, multi-layered information fusion models are used to develop automated decision-making architectures that serve security and resource management requirements in Cyber Physical Systems --Abstract, page iv
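As one concrete instance of the adaptive-filtering toolbox mentioned above (a textbook rule, not a method specific to this dissertation; the sensor values are invented for illustration), inverse-variance weighting fuses several noisy readings of the same quantity into a single estimate whose variance is lower than that of any individual sensor.

```python
import numpy as np

def fuse(readings, variances):
    """Inverse-variance weighted fusion of scalar sensor readings."""
    r = np.asarray(readings, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # precision weights
    estimate = np.sum(w * r) / np.sum(w)
    fused_var = 1.0 / np.sum(w)                    # always <= min(variances)
    return estimate, fused_var

# three temperature sensors observing the same plant state
est, var = fuse(readings=[20.4, 19.8, 20.9], variances=[0.25, 1.0, 4.0])
print(f"fused estimate = {est:.2f}, fused variance = {var:.3f}")
# → fused estimate = 20.31, fused variance = 0.190
```

The same update is the measurement step of a Kalman filter, which is why adaptive filtering sits naturally inside fusion architectures.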
- …