Towards a Reliable Comparison and Evaluation of Network Intrusion Detection Systems Based on Machine Learning Approaches
Presently, we live in a hyper-connected world where millions of heterogeneous devices continuously share information in different application contexts: wellness, improved communications, digital business, and more. However, the larger the number of devices and connections, the higher the risk of security threats. To counteract malicious behaviour and preserve essential security services, Network Intrusion Detection Systems (NIDSs) are the most widely used line of defence in communication networks. Nevertheless, there is no standard methodology to evaluate and fairly compare NIDSs. Most proposals omit crucial steps of NIDS validation, which makes their comparison hard or even impossible. This work first presents a comprehensive study of recent NIDSs based on machine learning approaches, concluding that almost none of them carry out what the authors consider mandatory steps for a reliable comparison and evaluation of NIDSs. Second, a structured methodology is proposed and assessed on the UGR'16 dataset to test its suitability for network attack detection problems. The recommended guideline and steps will help the research community assess NIDSs fairly, although building the definitive framework is not a trivial task, and some extra effort is still needed to further improve its understandability and usability.
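One validation step that comparable NIDS evaluations need is a fixed resampling scheme with a consistent metric set. A minimal sketch of that idea, using scikit-learn with synthetic, imbalanced data as a stand-in (this is not the UGR'16 pipeline from the paper):

```python
# Sketch of a reproducible NIDS evaluation step: stratified 5-fold
# cross-validation with a fixed metric set, so results can be compared
# across studies. The synthetic dataset and random forest are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

# Imbalanced binary data stands in for benign-vs-attack traffic.
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0)
scores = cross_validate(clf, X, y, cv=5,
                        scoring=["accuracy", "f1", "roc_auc"])
report = {m: scores[f"test_{m}"].mean() for m in ["accuracy", "f1", "roc_auc"]}
print(report)
```

Reporting all three metrics matters because accuracy alone is misleading on imbalanced traffic data.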
OXIDATION OF SILICON - THE VLSI GATE DIELECTRIC
Silicon dominates the semiconductor industry for good reasons. One factor is its stable, easily formed, insulating oxide, which enables high performance and practical processing. How well can these virtues survive as new demands are made on integrity, on smallness of feature sizes and other dimensions, and on constraints on processing and manufacturing methods? These demands make it critical to identify, quantify and predict the key growth and defect processes on an atomic scale. The combination of theory and novel experiments (isotope methods, electronic noise, spin resonance, pulsed laser atom probes and other desorption methods, and especially scanning tunnelling and atomic force microscopies) provides tools whose impact on models is only now being appreciated. We discuss the current unified model for silicon oxidation, which goes beyond the traditional descriptions of kinetic and ellipsometric data by explicitly addressing the issues raised by isotope experiments. The framework is still the Deal-Grove model, which provides a phenomenology describing the major regimes of behaviour and a base from which the substantial deviations can be characterized. In this model, growth is limited by diffusion and interfacial reactions operating in series. The deviations from Deal-Grove are most significant for just those first tens of atomic layers of oxide that are critical for the ultra-thin oxide layers now demanded. Several features emerge as important. First is the role of stress and stress relaxation. Second is the nature of the oxide closest to the Si, both its defects and its differences from the amorphous stoichiometric oxide further out, whether in composition, in network topology, or otherwise. Third, we must consider the charge states of both fixed and mobile species.
In thin films with very different dielectric constants, image terms can be important; these terms affect the interpretation of spectroscopies, the injection of oxidant species, and relative defect stabilities. This has added importance now that Pb-centre concentrations have been correlated with interfacial stress. This raises further issues about the perfection of the oxide random network and the incorporation of interstitial species such as molecular oxygen. Finally, the roles of contamination (particles, metals, hydrocarbons, etc.) are important, as is interface roughness. These features depend on pre-gate-oxide cleaning and define the Si surface that is to be oxidized, and so may influence the features listed above.
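The Deal-Grove phenomenology mentioned above can be made concrete: oxide thickness x satisfies x² + Ax = B(t + τ), with the linear regime x ≈ (B/A)t at short times (reaction-limited) and the parabolic regime x ≈ √(Bt) at long times (diffusion-limited). A small sketch, with illustrative coefficients (not values from this text):

```python
# Deal-Grove growth law: x**2 + A*x == B*(t + tau), solved for thickness x.
# The A, B values below are rough dry-O2 numbers near 1000 C, assumed here
# purely for illustration.
import math

def deal_grove_thickness(t, A, B, tau=0.0):
    """Oxide thickness x(t) solving x^2 + A*x = B*(t + tau)."""
    return 0.5 * A * (math.sqrt(1.0 + 4.0 * B * (t + tau) / A**2) - 1.0)

A, B = 0.165, 0.0117  # um and um^2/h (assumed illustrative coefficients)
x = deal_grove_thickness(10.0, A, B)  # thickness after 10 h
```

The short-time limit recovers the linear rate constant B/A and the long-time limit the parabolic constant B, which is exactly where the text notes the largest deviations arise: the first tens of atomic layers fall in the regime where Deal-Grove is least reliable.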
Towards Protection Against Low-Rate Distributed Denial of Service Attacks in Platform-as-a-Service Cloud Services
Nowadays, the variety of technology available for performing daily tasks is abundant, and businesses and individuals alike benefit from this diversity. The more technology evolves, the more useful it becomes and, at the same time, the more attractive a target it is for malicious users. Cloud Computing is one of the technologies that companies worldwide have been adopting over the years. Its popularity is essentially due to its characteristics and the way it delivers its services. This Cloud expansion also means that malicious users may try to exploit it, as the research studies presented throughout this work reveal. According to these studies, the Denial of Service (DoS) attack is a type of threat that constantly tries to take advantage of Cloud Computing services.
Several companies have moved, or are moving, their services to hosted environments provided by Cloud Service Providers, and are using several applications based on those services. The literature on the subject points out that, because of this expansion in Cloud adoption, the use of applications has increased. DoS threats are therefore increasingly aimed at the Application Layer, and advanced variations such as Low-Rate Distributed Denial of Service (LDDoS) attacks are being used. Research is being conducted specifically on the detection and mitigation of this kind of threat, and the significant problem with this DDoS variant is the difficulty of differentiating malicious traffic from legitimate user traffic. The main goal of this attack is to exploit the communication behaviour of the HTTP protocol, sending legitimate-looking traffic with small changes that slowly fill a server's request capacity, almost completely blocking real users' access to server resources during the attack.
This kind of attack usually has a small time window, but to be more effective it is launched from infected computers forming a network of attackers, turning it into a distributed attack. In this work, the idea for combating Low-Rate Distributed Denial of Service attacks is to integrate different technologies into a Hybrid Application whose main goal is to identify and separate malicious traffic from legitimate traffic. First, a study is carried out to observe the behaviour of each type of Low-Rate attack, in order to gather specific information about its characteristics while the attack executes in real time. Then, packet information is collected using Tshark filters. The next step is to build combinations of the specific information obtained from packet filtering and compare them. Finally, each packet is analysed against these combination patterns. A log file stores the data gathered after the entropy calculation in a user-friendly format.
To test the efficiency of the application, a virtual Cloud infrastructure was built using OpenNebula Sandbox and the Apache Web Server. Two tests were run against this infrastructure: the first verified the effectiveness of the tool against the Cloud environment created and, based on its results, a second test was proposed to demonstrate how the Hybrid Application behaves against the attacks performed. The tests showed how disruptive the types of Slow-Rate DDoS can be, and also yielded promising results for the Hybrid Application's performance against Low-Rate Distributed Denial of Service attacks. The Hybrid Application successfully identified each type of Low-Rate DDoS, separated the traffic, and generated few false positives in the process. The results are displayed in the form of parameters and graphs.
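The entropy calculation mentioned in the workflow can be sketched as Shannon entropy over the distribution of a packet field collected in a traffic window. The field choice (source IP) and the sample values below are assumptions for illustration, not the thesis's actual Tshark filters:

```python
# Minimal sketch of the entropy step: Shannon entropy of the source-IP
# distribution inside one traffic window. Low entropy (traffic dominated by
# a few sources) is one possible hint of a low-rate DDoS. IPs are made up.
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy in bits of the empirical distribution of `values`."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# One host dominating the window drives the entropy well below log2(3) bits.
window = ["10.0.0.5"] * 80 + ["10.0.0.9"] * 15 + ["10.0.0.7"] * 5
h = shannon_entropy(window)
```

A uniform distribution over n values yields the maximum log2(n) bits, so windows far below that maximum stand out.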
Accurately Identifying New QoS Violation Driven by High-Distributed Low-Rate Denial of Service Attacks Based on Multiple Observed Features
We propose using multiple observed features of network traffic to identify new high-distributed low-rate quality-of-service (QoS) violations, so that detection accuracy can be further improved. As the multiple observed features, we choose the F feature in the TCP packet header as a microscopic feature, and the P and D features of network traffic as macroscopic features. Based on these features, we establish a multistream fused hidden Markov model (MF-HMM) to detect stealthy low-rate denial of service (LDoS) attacks hidden in legitimate network background traffic. In addition, the threshold value is dynamically adjusted using the Kaufman algorithm. Our experiments show that the additive effect of combining multiple features effectively reduces the false-positive rate. The average detection rate of MF-HMM represents a significant 23.39% and 44.64% improvement over the typical power spectral density (PSD) algorithm and the nonparametric cumulative sum (CUSUM) algorithm, respectively.
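The nonparametric CUSUM baseline the abstract compares against accumulates positive deviations of a traffic statistic above its expected mean and raises an alarm once the sum crosses a threshold. A minimal sketch with made-up parameters (not the paper's configuration):

```python
# Sketch of a nonparametric CUSUM change detector: S_n = max(0, S_{n-1} +
# x_n - mean - drift), alarm when S_n exceeds the threshold. The sample
# traffic rates and all parameter values are illustrative.
def cusum_alarms(samples, mean, drift, threshold):
    s, alarms = 0.0, []
    for x in samples:
        s = max(0.0, s + x - mean - drift)  # drift suppresses normal noise
        alarms.append(s > threshold)
    return alarms

traffic = [10, 11, 9, 10, 30, 32, 31, 29]  # made-up rates; shift at index 4
alarms = cusum_alarms(traffic, mean=10.0, drift=1.0, threshold=25.0)
```

The drift term keeps small fluctuations from accumulating, so only a sustained shift in the mean triggers the alarm, which is exactly why slowly varying low-rate attacks are hard for it to catch.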
Detection of LDDoS Attacks Based on TCP Connection Parameters
Low-rate application layer distributed denial of service (LDDoS) attacks are
both powerful and stealthy. They force vulnerable web servers to open all
available connections to the adversary, denying resources to real users.
Mitigation advice focuses on solutions that potentially degrade quality of
service for legitimate connections. Furthermore, without accurate detection
mechanisms, distributed attacks can bypass these defences. This paper proposes
a methodology for detecting LDDoS attacks based on characteristics of
malicious TCP flows. Research was conducted using combinations of two
datasets: one generated from a simulated network, the other from the publicly
available CIC DoS dataset. Both contain the slowread, slowheaders and slowbody
attacks, alongside legitimate web browsing. TCP flow features are extracted
from all connections. Experimentation was carried out using six supervised AI
algorithms to separate attack flows from legitimate ones. Decision trees and
k-NN accurately classified up to 99.99% of flows, with exceptionally low false
positive and false negative rates, demonstrating the potential of AI in LDDoS
detection.
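The flow-classification step described above can be sketched with scikit-learn. The two features (flow duration and mean inter-arrival time) and the synthetic flows are assumptions made here for illustration; they stand in for the paper's TCP flow features and the CIC DoS data:

```python
# Sketch of supervised LDDoS flow classification with a decision tree.
# Feature columns (flow duration in s, mean packet inter-arrival time in s)
# and both flow populations are synthetic, not the paper's datasets.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Legitimate flows: short-lived, rapid packets. Slow attacks: long-lived,
# sparse packets that hold connections open.
legit = np.column_stack([rng.normal(2, 0.5, 200), rng.normal(0.05, 0.01, 200)])
slow = np.column_stack([rng.normal(60, 10, 200), rng.normal(5, 1, 200)])
X = np.vstack([legit, slow])
y = np.array([0] * 200 + [1] * 200)  # 0 = legitimate, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

With such well-separated feature distributions the tree classifies essentially perfectly; real LDDoS traffic overlaps legitimate flows far more, which is why the paper's feature engineering matters.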
New Data Protection Abstractions for Emerging Mobile and Big Data Workloads
Two recent shifts in computing are challenging the effectiveness of traditional approaches to data protection. First, emerging machine learning workloads have complex access patterns and unique leakage characteristics that are not well supported by existing protection approaches. Second, mobile operating systems do not provide sufficient support for fine-grained data protection tools, forcing users to rely on individual applications to correctly manage and protect data. My thesis is that these emerging workloads have unique characteristics that we can leverage to build new, more effective data protection abstractions.
This dissertation presents two new data protection systems for machine learning workloads and a new system for fine-grained data management and protection on mobile devices. The first is Sage, a differentially private machine learning platform addressing the two primary challenges of differential privacy: running out of privacy budget and the privacy-utility tradeoff. The second system, Pyramid, is the first selective data system; it leverages count featurization to reduce, by two orders of magnitude, the amount of data exposed while training classification models. The final system, Pebbles, provides users with logical data objects as a new fine-grained data management and protection primitive, allowing data management at a higher level of abstraction. Pebbles leverages high-level storage abstractions in mobile operating systems to discover user-recognizable, application-level data objects in unmodified mobile applications.
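The "running out of budget" problem that Sage targets can be sketched in a few lines: each differentially private query spends some epsilon, and an accountant refuses further queries once the global budget is gone. The class and method names below are invented for illustration and are not Sage's API:

```python
# Toy differential-privacy budget accountant with a Laplace-noised query.
# Illustrative sketch only; the API is made up and does not reflect Sage.
import random

class BudgetAccountant:
    def __init__(self, total_epsilon):
        self.remaining = total_epsilon

    def laplace_query(self, true_value, sensitivity, epsilon):
        """Answer one query with Laplace noise, spending `epsilon` of budget."""
        if epsilon > self.remaining:
            raise RuntimeError("privacy budget exhausted")
        self.remaining -= epsilon
        scale = sensitivity / epsilon
        # Difference of two exponentials is Laplace(0, scale).
        noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
        return true_value + noise

acct = BudgetAccountant(total_epsilon=1.0)
noisy_count = acct.laplace_query(true_value=100.0, sensitivity=1.0, epsilon=0.5)
```

Once the budget is spent, no further queries can be answered at any accuracy, which is why platforms like Sage must manage the budget across a model's whole training lifetime.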
Bacterial nanocellulose and long-chain fatty acids interaction: an in silico study
Chronic wounds are a major challenge in contemporary society, as they lead to a decrease in quality of life, amputations and even death. Infections and biofilm formation may occur in chronic wounds, due to their higher susceptibility to antibiotic multi-resistant bacteria. In this situation, novel wound dressing biomaterials are needed for treatment. Thus, the aim of this research was to evaluate a possible interaction of bacterial nanocellulose (BNC) with tucumã oil/butter-derived fatty acids, as this system could be a promising biomaterial for wound treatment. The interaction between cellobiose (the basic unit of BNC) and four fatty acids was evaluated by ab initio simulations and density functional theory (DFT), using the SIESTA code. Molecular docking was also used to investigate the effect of a possible release of the studied fatty acids onto the quorum-sensing proteins of Pseudomonas aeruginosa (a gram-negative bacterium) and Staphylococcus aureus (a gram-positive bacterium). According to the ab initio simulations, the interaction between cellobiose and the fatty acids derived from tucumã oil/butter can be attributed to physical adsorption (energies around 0.17-1.33 eV) of the lipidic structures onto cellobiose. A strong binding affinity (ΔG ranging from 4.2 to 8.2 kcal·mol⁻¹) was observed for both protonated and deprotonated fatty acids against the P. aeruginosa (LasI, LasA and RhlR) and S. aureus (ArgA and ArgC) quorum-sensing proteins, indicating that these bioactive compounds might act as potential antimicrobial and/or antibiofilm agents in the proposed system. Hence, from a theoretical viewpoint, the proposed system could be a promising raw biomaterial for the production of chronic wound dressings.
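Docking free energies like those quoted can be put on an intuitive scale by converting ΔG to a dissociation constant via Kd = exp(ΔG / RT). Treating the quoted 4.2-8.2 kcal·mol⁻¹ magnitudes as negative (favourable) ΔG values is an assumption made here for illustration:

```python
# Convert a binding free energy to a dissociation constant, Kd = exp(dG/RT).
# Interpreting the abstract's 4.2-8.2 kcal/mol magnitudes as negative dG
# values (favourable binding) is an assumption for this sketch.
import math

R = 1.9872e-3  # gas constant, kcal / (mol K)
T = 298.15     # room temperature, K

def kd_from_dg(dg_kcal_per_mol):
    """Dissociation constant (mol/L) from a binding free energy."""
    return math.exp(dg_kcal_per_mol / (R * T))

kd_weak = kd_from_dg(-4.2)   # roughly sub-millimolar binding
kd_tight = kd_from_dg(-8.2)  # roughly micromolar binding
```

The exponential dependence means each extra ~1.4 kcal·mol⁻¹ of binding energy tightens Kd by about an order of magnitude at room temperature.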
An intelligent system to detect slow denial of service attacks in software-defined networks
Slow denial of service (DoS) attacks are a tricky issue in software-defined networks (SDN), as they use little bandwidth to attack a server. In this paper, a slow-rate DoS attack called Slowloris is detected and mitigated on Apache2 and Nginx servers using a methodology called ISSDM: an intelligent system for slow DoS detection using machine learning in SDN. The data generation module of ISSDM generates a dataset with response time, number of connections, timeout, and pattern match as features. Data are generated in a real environment using Apache2, an Nginx server, a Zodiac FX OpenFlow switch and a Ryu controller. Monte Carlo simulation is used to estimate threshold values for attack classification. Further, ISSDM performs header inspection using regular expressions to mark flows as legitimate or attack during data generation. The proposed feature selection module of ISSDM, called blended statistical and information gain (BSIG), selects the features that contribute most to classification. These features are used for classification by various machine learning and deep learning models, and the results are compared with feature selection methods such as Chi-square, T-test, and information gain.
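The Monte Carlo threshold estimation step can be sketched as follows: simulate a large number of baseline response times and take a high percentile as the attack threshold. The lognormal baseline model and all parameter values are assumptions for illustration, not the paper's setup:

```python
# Sketch of Monte Carlo threshold estimation for attack classification:
# draw many simulated baseline response times and use a high percentile as
# the alarm threshold. The lognormal model and parameters are assumptions.
import random

def monte_carlo_threshold(n_trials=10000, percentile=0.99, seed=42):
    rng = random.Random(seed)
    # Response times (s) of a healthy server, modelled as lognormal here.
    samples = sorted(rng.lognormvariate(-1.0, 0.5) for _ in range(n_trials))
    return samples[int(percentile * n_trials) - 1]

threshold = monte_carlo_threshold()  # responses slower than this look suspect
```

Responses consistently above the estimated percentile then count as evidence of a slow DoS attack holding connections open.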
A Novel Hybrid Spotted Hyena-Swarm Optimization (HS-FFO) Framework for Effective Feature Selection in IOT Based Cloud Security Data
The Internet of Things (IoT) has gained major traction in terms of deployment and applications. Since IoT transmits real-time application data with highly heterogeneous characteristics, these data are vulnerable to many security threats. To safeguard the data, machine learning and deep learning based security systems have been proposed, but such systems suffer from a computational burden that impedes their threat detection capability. Feature selection therefore plays an important role in designing complexity-aware IoT systems that defend against security attacks. This paper proposes a novel ensemble of the spotted hyena and firefly algorithms to choose the best features and minimise redundant data features, which can boost the detection system's computational effectiveness. First, an effective firefly-optimized feature correlation method is developed. Then, to enhance exploration of the search space, firefly operators are combined with the Spotted Hyena Optimizer to help the swarms escape locally optimal solutions. Experiments were carried out using different IoT cloud security datasets, namely the NSL-KDD-99, UNSW and CIDCC-001 datasets, and contrasted with ten state-of-the-art feature selection techniques: PSO (particle swarm optimization), BAT, Firefly, ACO (ant colony optimization), Improved PSO, CAT, RAT, Spotted Hyena, SHO and BOC (bee colony optimization) algorithms. The results demonstrate that the proposed hybrid model achieves a better feature selection mechanism with less convergence time, and better supports an intelligent threat detection system with high detection performance.