56 research outputs found

    A comparative experimental design and performance analysis of Snort-based Intrusion Detection System in practical computer networks

    As one of the most reliable technologies, network intrusion detection systems (NIDSs) allow the monitoring of incoming and outgoing traffic to identify unauthorised usage and mishandling by attackers in computer network systems. To this end, this paper investigates the experimental performance of a Snort-based NIDS (S-NIDS) in a practical network with the latest technology under various network scenarios, including high data speed, heavy traffic and large packet size. An effective testbed is designed based on Snort using different multi-core processors, e.g., i5 and i7, with different operating systems, e.g., Windows 7, Windows Server and Linux. Furthermore, considering an enterprise network consisting of multiple virtual local area networks (VLANs), a centralised parallel S-NIDS (CPS-NIDS) is proposed with the support of a centralised database server to deal with high data speed and heavy traffic. Experimental evaluation is carried out for each network configuration to assess the performance of the S-NIDS in different network scenarios as well as to validate the effectiveness of the proposed CPS-NIDS. In particular, analysis of packet analysis efficiency shows that Linux achieves up to 10% higher performance than the other operating systems, while i7 processors achieve up to 8% higher performance than i5 processors.
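
    The packet analysis efficiency metric is not defined in the abstract; a common reading, taken here as a labeled assumption, is the fraction of received packets the NIDS actually analyses, as can be derived from Snort's run-time packet statistics. A minimal sketch with made-up counts:

        # Hypothetical comparison of packet analysis efficiency across testbeds.
        # Assumption: efficiency = packets analysed / packets received; all
        # counts below are invented for illustration.

        def packet_analysis_efficiency(analysed: int, received: int) -> float:
            """Fraction of received packets that the NIDS managed to analyse."""
            return analysed / received if received else 0.0

        setups = {
            "i5 / Windows 7": (8_700_000, 10_000_000),
            "i7 / Windows 7": (9_300_000, 10_000_000),
            "i7 / Linux":     (9_900_000, 10_000_000),
        }

        for name, (analysed, received) in setups.items():
            print(f"{name:15s} efficiency = "
                  f"{packet_analysis_efficiency(analysed, received):.1%}")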

    Hardware-Aware Algorithm Designs for Efficient Parallel and Distributed Processing

    The introduction and widespread adoption of the Internet of Things, together with emerging new industrial applications, bring new requirements in data processing. Specifically, the need for timely processing of data that arrives at high rates creates a challenge for the traditional cloud computing paradigm, where data collected at various sources is sent to the cloud for processing. As an approach to this challenge, processing algorithms and infrastructure are distributed from the cloud to multiple tiers of computing, closer to the sources of data. This creates a wide range of devices for algorithms to be deployed on and software designs to adapt to.

    In this thesis, we investigate how hardware-aware algorithm designs on a variety of platforms lead to algorithm implementations that efficiently utilize the underlying resources. We design, implement and evaluate new techniques for representative applications that involve the whole spectrum of devices, from resource-constrained sensors in the field to highly parallel servers. At each tier of processing capability, we identify key architectural features that are relevant for applications and propose designs that make use of these features to achieve high-rate, timely and energy-efficient processing.

    In the first part of the thesis, we focus on high-end servers and utilize two main approaches to achieve high-throughput processing: vectorization and thread parallelism. We employ vectorization for pattern matching algorithms used in security applications. We show that re-thinking the design of algorithms to better utilize the resources available on the platforms they are deployed on, such as vector processing units, can bring significant speedups in processing throughput. We then show how thread-aware data distribution and proper inter-thread synchronization allow scalability, especially for the problem of high-rate network traffic monitoring. We design a parallelization scheme for sketch-based algorithms that summarize traffic information, which allows them to handle incoming data at high rates and to answer queries on that data efficiently, without overheads.

    In the second part of the thesis, we target the intermediate tier of computing devices and focus on typical examples of the hardware found there. We show how single-board computers with embedded accelerators can be used to handle the computationally heavy part of applications, and showcase this specifically for pattern matching in security-related processing. We further identify key hardware features that affect the performance of pattern matching algorithms on such devices, present a co-evaluation framework to compare algorithms, and design a new algorithm that efficiently utilizes the hardware features.

    In the last part of the thesis, we shift the focus to the low-power, resource-constrained tier of processing devices. We target wireless sensor networks and study distributed data processing algorithms where the processing happens on the same devices that generate the data. Specifically, we focus on a continuous monitoring algorithm (geometric monitoring) that aims to minimize communication between nodes. By deploying that algorithm under realistic conditions, we demonstrate that the interplay between the network protocol and the application plays an important role in this tier of devices. Based on that observation, we co-design a continuous monitoring application with a modern network stack and augment it further with an in-network aggregation technique. In this way, we show that awareness of the underlying network stack is important to realize the full potential of the continuous monitoring algorithm.

    The techniques and solutions presented in this thesis contribute to better utilization of hardware characteristics across a wide spectrum of platforms. We employ these techniques on problems that are representative examples of current and upcoming applications and contribute an outlook of emerging possibilities that can build on the results of the thesis.
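
    The abstract names sketch-based traffic summaries but not a specific structure; the count-min sketch is a representative example of the kind of summary such a parallelization scheme targets. A minimal single-threaded sketch, with illustrative parameters:

        import hashlib

        class CountMinSketch:
            """Approximate per-flow packet counts in bounded memory.
            Width/depth trade accuracy for space; values are illustrative."""

            def __init__(self, width: int = 1024, depth: int = 4):
                self.width = width
                self.depth = depth
                self.table = [[0] * width for _ in range(depth)]

            def _index(self, row: int, key: str) -> int:
                h = hashlib.blake2b(f"{row}:{key}".encode(), digest_size=8)
                return int.from_bytes(h.digest(), "big") % self.width

            def add(self, key: str, count: int = 1) -> None:
                for row in range(self.depth):
                    self.table[row][self._index(row, key)] += count

            def estimate(self, key: str) -> int:
                # Minimum over rows bounds the overestimate from collisions.
                return min(self.table[row][self._index(row, key)]
                           for row in range(self.depth))

        cms = CountMinSketch()
        flows = ["10.0.0.1->10.0.0.2"] * 500 + ["10.0.0.3->10.0.0.4"] * 20
        for flow in flows:
            cms.add(flow)
        print(cms.estimate("10.0.0.1->10.0.0.2"))  # ~500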

    ESTIMA: Extrapolating ScalabiliTy of In-Memory Applications

    This paper presents ESTIMA, an easy-to-use tool for extrapolating the scalability of in-memory applications. ESTIMA is designed to perform a simple, yet important task: given the performance of an application on a small machine with a handful of cores, ESTIMA extrapolates its scalability to a larger machine with more cores, while requiring minimum input from the user. The key idea underlying ESTIMA is the use of stalled cycles (i.e., cycles that the processor spends waiting for various events, such as cache misses or waiting on a lock). ESTIMA measures stalled cycles on a few cores and extrapolates them to more cores, estimating the amount of waiting in the system. ESTIMA can be effectively used to predict the scalability of in-memory applications. For instance, using measurements of memcached and SQLite on a desktop machine, we obtain accurate predictions of their scalability on a server. Our extensive evaluation on a large number of in-memory benchmarks shows that ESTIMA has generally low prediction errors.
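
    The abstract does not give ESTIMA's actual model; the following is a minimal sketch of the general idea only, fitting a simple polynomial trend to stalled-cycle fractions measured on a few cores and extrapolating to more cores, with invented numbers:

        import numpy as np

        # Hypothetical measurements: fraction of stalled cycles at 1-8 cores.
        cores = np.array([1, 2, 4, 8])
        stalled_fraction = np.array([0.05, 0.09, 0.16, 0.28])

        # Fit a low-degree polynomial to the stalled-cycle trend.
        coeffs = np.polyfit(cores, stalled_fraction, deg=2)
        predict_stalled = np.poly1d(coeffs)

        for n in [16, 32, 64]:
            stalled = min(predict_stalled(n), 1.0)  # cap at 100% stalled
            # Useful work per core shrinks as stalls grow.
            speedup = n * (1.0 - stalled)
            print(f"{n:3d} cores: predicted stalled {stalled:.0%}, "
                  f"estimated speedup ~{speedup:.1f}x")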

    Exploring Host-based Software Defined Networking and its Applications

    Network operators need a detailed understanding of their networks in order to ensure functionality and to mitigate security risks. Unfortunately, legacy networks are poorly suited to providing this understanding. While the software-defined networking paradigm has the potential to provide it, existing switch-based implementations are unable to scale sufficiently to provide information at a fine granularity. Furthermore, switches are inherently blind to the inner workings of hosts, significantly hindering an operator's ability to understand the true context behind network traffic. In this work, we explore a host-based software-defined networking implementation. We evaluate our implementation, showing that it is able to scale beyond the capabilities of a switch-based implementation. Furthermore, we discuss various detailed network policies that network operators can write and enforce with it which are impossible in a switch-based implementation. We also implement and discuss an anti-reconnaissance system that can be deployed without any additional components.
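
    The paper's actual policy language is not shown in the abstract; the sketch below only illustrates the kind of host-context-aware rule a host-based implementation could enforce and a switch could not, with hypothetical fields and rules:

        from dataclasses import dataclass

        @dataclass
        class FlowRequest:
            # Host-side context a switch never sees: process and user.
            process: str
            user: str
            dst_port: int

        # Hypothetical policy: a flow is allowed only if process, user
        # and destination port all match a rule.
        POLICY = [
            {"process": "sshd",    "user": "*",     "dst_port": 22},
            {"process": "firefox", "user": "alice", "dst_port": 443},
        ]

        def allowed(req: FlowRequest) -> bool:
            """Check a new flow against host-context policy before forwarding."""
            for rule in POLICY:
                if (rule["process"] == req.process
                        and rule["user"] in ("*", req.user)
                        and rule["dst_port"] == req.dst_port):
                    return True
            return False

        print(allowed(FlowRequest("firefox", "alice", 443)))  # True
        print(allowed(FlowRequest("nc", "mallory", 4444)))    # False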

    Behaviour modelling with data obtained from the Internet and contributions to cluster validation

    This PhD thesis makes contributions to modelling behaviours found in different types of data acquired from the Internet and to the field of clustering evaluation. Two different types of Internet data were processed: on the one hand, Internet traffic, with the objective of attack detection, and on the other hand, web surfing activity, with the objective of web personalization, both types of data being of a sequential nature. To this aim, machine learning techniques, mostly unsupervised ones, were applied. Moreover, contributions were made to cluster evaluation, in order to ease the selection of the best partition in clustering problems. With regard to network attack detection, first, the gureKDDCup database was generated, which adds payload data to the KDDCup99 connection attributes, since payloads are essential to detect non-flood attacks. Then, by modelling this data, a network Intrusion Detection System (nIDS) was proposed in which context-independent payload processing was performed, obtaining satisfactory detection rates. In the web mining context, web surfing activity was modelled for web personalization. Generic and non-invasive systems to extract knowledge were proposed, using only the information stored in web server log files. Contributions were made in two respects: problem detection and link suggestion. For the first application, a meaningful list of navigation attributes was proposed for each user session, in order to group sessions and detect different navigation profiles. For the latter, a general and non-invasive link suggestion system was proposed and evaluated, with satisfactory results, in a link prediction context. With regard to the analysis of Cluster Validity Indices (CVIs), the most extensive CVI comparison found to date was carried out, using an evaluation methodology based on a partition similarity measure. Moreover, we analysed the behaviour of CVIs in a real web mining application with a large number of clusters, in which they tend to be unstable, and proposed a procedure that automatically selects the best partition by analysing the slope of the different CVI values.

    Grant of the Basque Government (ref. BFI08.226); Grant of the Ministry of Economy and Competitiveness of the Spanish Government (ref. BES-2011-045989); Research stay grant of the Spanish Ministry of Economy and Competitiveness (ref. EEBB-I-14-08862); University of the Basque Country UPV/EHU (BAILab, grant UFI11/45); Department of Education, Universities and Research of the Basque Government (grant IT-395-10); Ministry of Economy and Competitiveness of the Spanish Government and the European Regional Development Fund - ERDF (eGovernAbility, grant TIN2014-52665-C2-1-R)
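
    The slope-based selection procedure is only named in the abstract; the sketch below is one plausible reading, picking the partition where a CVI's improvement flattens out, with invented index values and an assumed threshold:

        # Hypothetical CVI values (higher is better) for partitions of k clusters.
        cvi_values = {2: 0.41, 3: 0.55, 4: 0.67, 5: 0.70, 6: 0.71, 7: 0.71}

        def best_partition_by_slope(values: dict) -> int:
            """Pick the last k whose CVI slope is still a large share of the
            steepest slope; beyond that point extra clusters add little."""
            ks = sorted(values)
            slopes = {ks[i + 1]: values[ks[i + 1]] - values[ks[i]]
                      for i in range(len(ks) - 1)}
            steepest = max(slopes.values())
            best = ks[0]
            for k, slope in slopes.items():
                if slope >= 0.25 * steepest:  # threshold is an assumption
                    best = k
            return best

        print(best_partition_by_slope(cvi_values))  # 4 with these numbers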

    A CONTENT-ADDRESSABLE-MEMORY ASSISTED INTRUSION PREVENTION EXPERT SYSTEM FOR GIGABIT NETWORKS

    Cyber intrusions have become a serious problem of growing frequency and complexity. Current Intrusion Detection/Prevention Systems (IDS/IPS) are deficient in speed and/or accuracy. Expert systems are one functionally effective IDS/IPS method. However, they are in general computationally intensive and too slow for real-time requirements. This poor performance prohibits the application of expert systems in gigabit networks. This dissertation describes a novel intrusion prevention expert system architecture that utilizes the parallel search capability of Content Addressable Memory (CAM) to perform intrusion detection at gigabit/second wire speed. A CAM is a parallel search memory that compares all of its entries against input data in parallel. This parallel search is much faster than the serial search operation in Random Access Memory (RAM). The major contribution of this thesis is to accelerate the expert system's performance bottleneck, the "match" process, using the parallel search power of a CAM, thereby enabling expert systems for wire-speed network IDS/IPS applications.

    To map an expert system's match process onto a CAM, this research introduces a novel "Contextual Rule" (C-Rule) method that fundamentally changes the expert system's computational structure without changing its functionality for the IDS/IPS problem domain. This method combines expert system rules and current network states into a new type of dynamic rule that exists only under specific network state conditions. It thus converts the conventional two-database match process into a one-database search process, enabling the core functionality of the expert system to be mapped onto a CAM and to take advantage of its search parallelism.

    This thesis also introduces the CAM-Assisted Intrusion Prevention Expert System (CAIPES) architecture and shows how it can support the vast majority of the rules in the 1999 Lincoln Lab's DARPA Intrusion Detection Evaluation data set, as well as rules in the open-source IDS "Snort". Supported rules are able to detect single-packet attacks, abusive traffic and packet flooding attacks, sequences-of-packets attacks, and flooding of sequences attacks. Prototyping and simulation have been performed to demonstrate the detection capability of these four types of attacks. Hardware simulation of an existing CAM shows that the CAIPES architecture enables gigabit/s IDS/IPS.
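
    The exact C-Rule encoding is not given in the abstract; the sketch below only illustrates the general idea, fusing a rule's packet conditions with the network state under which it applies into a single lookup key, so one associative search replaces the rule-vs-state two-database match. A dict stands in for the CAM (a real CAM matches all entries in parallel); all fields are hypothetical:

        # Emulated CAM: key -> action.
        cam = {}

        def install_contextual_rule(state: str, proto: str, dst_port: int,
                                    action: str) -> None:
            """Fuse packet conditions with the required network state,
            forming one searchable entry (a C-Rule)."""
            cam[(state, proto, dst_port)] = action

        # The same packet pattern means different things depending on the
        # tracked connection state.
        install_contextual_rule("no_syn_seen", "tcp", 80,
                                "alert: unsolicited data")
        install_contextual_rule("established", "tcp", 80, "pass")

        def classify(packet: dict, state: str) -> str:
            # One-database search: state is part of the key itself.
            return cam.get((state, packet["proto"], packet["dst_port"]),
                           "default: pass")

        pkt = {"proto": "tcp", "dst_port": 80}
        print(classify(pkt, "no_syn_seen"))  # alert: unsolicited data
        print(classify(pkt, "established"))  # pass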

    The Unbalanced Classification Problem: Detecting Breaches in Security

    This research proposes several methods designed to improve solutions for security classification problems. The security classification problem involves unbalanced, high-dimensional, binary classification problems that are prevalent today. The imbalance within this data involves a significant majority of the negative class and a minority positive class. Any system that needs protection from malicious activity, intruders, theft, or other types of breaches in security must address this problem. These breaches in security are considered instances of the positive class. Given numerical data that represent observations or instances which require classification, state-of-the-art machine learning algorithms can be applied. However, the unbalanced and high-dimensional structure of the data must be considered prior to applying these learning methods. High-dimensional data poses a “curse of dimensionality” which can be overcome through the analysis of subspaces. Exploration of intelligent subspace modeling and the fusion of subspace models is proposed. A detailed analysis of the one-class support vector machine is included, along with its weaknesses and proposals to overcome these shortcomings. A fundamental method for evaluating a binary classification model is the receiver operating characteristic (ROC) curve and the area under the curve (AUC). This work details the underlying statistics involved with ROC curves, contributing a comprehensive review of ROC curve construction and analysis techniques, including a novel graphic for illustrating the connection between ROC curves and classifier decision values. The major innovations of this work include synergistic classifier fusion through the analysis of ROC curves and rankings, insight into the statistical behavior of the Gaussian kernel, and novel methods for applying machine learning techniques to defend against computer intrusion. The primary empirical vehicle for this research is computer intrusion detection data, and both host-based intrusion detection systems (HIDS) and network-based intrusion detection systems (NIDS) are addressed. Empirical studies also include military tactical scenarios.
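
    The standard ROC construction from classifier decision values, which the abstract reviews, can be shown compactly: sweep a threshold over the sorted decision values, trace (FPR, TPR) points, and integrate by the trapezoidal rule. A minimal sketch with a toy unbalanced data set (labels and scores invented):

        def roc_curve(labels, scores):
            """Build ROC points by sweeping a threshold over decision values."""
            pairs = sorted(zip(scores, labels), reverse=True)
            pos = sum(labels)
            neg = len(labels) - pos
            tp = fp = 0
            points = [(0.0, 0.0)]
            for score, label in pairs:
                if label == 1:
                    tp += 1
                else:
                    fp += 1
                points.append((fp / neg, tp / pos))  # (FPR, TPR)
            return points

        def auc(points):
            """Area under the ROC curve via the trapezoidal rule."""
            return sum((x2 - x1) * (y1 + y2) / 2
                       for (x1, y1), (x2, y2) in zip(points, points[1:]))

        # Toy unbalanced data: 1 = breach (minority positive class).
        labels = [0, 0, 0, 0, 0, 0, 1, 0, 1, 0]
        scores = [0.1, 0.2, 0.15, 0.3, 0.25, 0.5, 0.9, 0.35, 0.45, 0.4]
        print(f"AUC = {auc(roc_curve(labels, scores)):.2f}")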

    Evaluating Resilience of Cyber-Physical-Social Systems

    Nowadays, protecting the network is not the only security concern. In cyber security, websites and servers are becoming more popular as targets due to the ease with which they can be accessed compared to communication networks. Another threat in cyber-physical-social systems with human interactions is that they can be attacked and manipulated not only by technical hacking through networks, but also by manipulating people and stealing users' credentials. Therefore, systems should be evaluated beyond cyber security, which means measuring their resilience as evidence that a system works properly under cyber-attacks or incidents. In this way, cyber resilience is increasingly discussed and described as the capacity of a system to maintain state awareness for detecting cyber-attacks. All the tasks for making a system resilient should proactively maintain a safe level of operational normalcy through rapid system reconfiguration to detect attacks that would impact system performance. In this work, we broadly studied this new paradigm of cyber-physical-social systems and proposed a uniform definition of it. To overcome the complexity of evaluating cyber resilience, especially in these inhomogeneous systems, we proposed a framework that applies Attack Tree refinements and Hierarchical Timed Coloured Petri Nets to model intruder and defender behaviors and to evaluate the impact of each action on the behavior and performance of the system.
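
    The framework's Petri-net models are beyond a short example, but the attack-tree side can be illustrated; below is a minimal sketch evaluating the minimum attacker cost over AND/OR nodes of a small hypothetical tree (structure and costs invented):

        from dataclasses import dataclass, field

        @dataclass
        class Node:
            name: str
            kind: str = "leaf"        # "leaf", "and", or "or"
            cost: float = 0.0         # attacker cost for a leaf step
            children: list = field(default_factory=list)

        def min_attack_cost(node: Node) -> float:
            """Cheapest way for an attacker to realize this node:
            AND = all children needed (sum), OR = any child suffices (min)."""
            if node.kind == "leaf":
                return node.cost
            child_costs = [min_attack_cost(c) for c in node.children]
            return sum(child_costs) if node.kind == "and" else min(child_costs)

        # Hypothetical tree: steal credentials via phishing OR a network
        # exploit, then combine with server access to breach the system.
        tree = Node("breach system", "and", children=[
            Node("obtain credentials", "or", children=[
                Node("phish user", cost=10),
                Node("exploit network service", cost=40),
            ]),
            Node("access server", cost=25),
        ])
        print(min_attack_cost(tree))  # 35 = phish (10) + access server (25)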