9 research outputs found

    Enforcing Security with Behavioral Fingerprinting

    Although fingerprinting techniques are helpful for security assessment, they offer limited support for advanced security-related applications. We have developed a new security framework focused on authentication reinforcement and the automatic generation of stateful firewall rules based on behavioral fingerprinting. Such fingerprinting is highly effective at capturing sequential patterns in the behavior of a device. A new machine learning technique is also adapted to monitor high-speed networks, evaluated in terms of both computational complexity and experimental performance.
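The behavioral fingerprints described above capture sequential patterns in a device's traffic. As a rough illustration only — not the authors' actual model — a fingerprint can be sketched as the set of message-type n-grams observed while profiling a device; the function and message names below are hypothetical:

```python
def build_fingerprint(sessions, n=2):
    """Build a behavioral fingerprint as the set of message-type n-grams
    observed for a device (a simplified stand-in for the paper's
    sequential behavioral models)."""
    grams = set()
    for seq in sessions:
        for i in range(len(seq) - n + 1):
            grams.add(tuple(seq[i:i + n]))
    return grams

def matches(fingerprint, seq, n=2):
    """A session is consistent with the fingerprint if every n-gram it
    contains was seen during profiling; an unseen n-gram could trigger
    an authentication check or a firewall rule."""
    return all(tuple(seq[i:i + n]) in fingerprint
               for i in range(len(seq) - n + 1))
```

A stateful firewall rule generator could then whitelist only the observed transitions between message types.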

    Anomaly-based network intrusion detection: Techniques, systems and challenges.

    Keywords: threat, intrusion detection, anomaly detection, IDS systems and platforms, assessment. The Internet and computer networks are exposed to an increasing number of security threats. With new types of attacks appearing continually, developing flexible and adaptive security-oriented approaches is a severe challenge. In this context, anomaly-based network intrusion detection techniques are a valuable technology for protecting target systems and networks against malicious activities. However, despite the variety of such methods described in the literature in recent years, security tools incorporating anomaly detection functionalities are only just starting to appear, and several important problems remain to be solved. This paper begins with a review of the most well-known anomaly-based intrusion detection techniques. Then, available platforms, systems under development and research projects in the area are presented. Finally, we outline the main challenges to be dealt with for the wide-scale deployment of anomaly-based intrusion detectors, with special emphasis on assessment issues. © 2008 Elsevier Ltd. All rights reserved.

    An adaptive and distributed intrusion detection scheme for cloud computing

    Cloud computing has enormous potential but still suffers from numerous security issues. Hence, there is a need to safeguard cloud resources to ensure the security of clients' data in the cloud. Existing cloud Intrusion Detection Systems (IDS) suffer from poor detection accuracy due to the dynamic nature of the cloud as well as frequent Virtual Machine (VM) migration, which causes the network traffic pattern to change. This necessitates an adaptive IDS capable of coping with the dynamic traffic pattern. Therefore, this research developed an adaptive cloud intrusion detection scheme that uses the Binary Segmentation change-point detection algorithm to track changes in the normal profile of cloud network traffic and update the IDS Reference Model when a change is detected. The research also addressed poor detection accuracy due to insignificant features and coordinated attacks such as Distributed Denial of Service (DDoS): insignificant features were handled with feature selection, using Ant Colony Optimization and correlation-based feature selection, while coordinated attacks were handled with a distributed IDS based on distributed Stochastic Gradient Descent and Support Vector Machines (SGD-SVM). The distributed IDS comprised detection units and an aggregation unit. The detection units detected attacks using distributed SGD-SVM to create a Local Reference Model (LRM) on various compute nodes; the LRMs were then sent to the aggregation unit to create a Global Reference Model. The scheme was evaluated using two datasets: a simulated dataset collected using the VMware hypervisor, and the Network Security Laboratory-Knowledge Discovery Database (NSL-KDD) benchmark intrusion detection dataset.
To ensure that the scheme can cope with the dynamic nature of VM migration in the cloud, performance was evaluated before and during a VM migration scenario. On the simulated dataset, the scheme achieved an overall classification accuracy of 99.4% before VM migration, while a related scheme achieved 83.4%; during the VM migration scenario, the scheme achieved 99.1% versus 85% for the related scheme. Applied to the NSL-KDD dataset, the scheme achieved 99.6% accuracy versus 83% for the related scheme. These comparisons show that the developed adaptive and distributed scheme achieved superior performance.
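The change-point tracking used above can be illustrated with a minimal greedy binary segmentation over a one-dimensional traffic statistic. This is a sketch of the generic algorithm under a mean-shift cost, not the thesis's implementation, and the signal stands in for a summarized traffic feature:

```python
import numpy as np

def cost(x):
    # within-segment cost: squared deviation from the segment mean
    return float(np.sum((x - x.mean()) ** 2))

def binseg(signal, n_bkps=1, min_size=5):
    """Greedy binary segmentation: repeatedly split the segment whose
    best split yields the largest reduction in total cost. Returns the
    detected change-point indices, sorted."""
    segments = [(0, len(signal))]
    bkps = []
    for _ in range(n_bkps):
        best = None
        for (a, b) in segments:
            for t in range(a + min_size, b - min_size):
                gain = cost(signal[a:b]) - cost(signal[a:t]) - cost(signal[t:b])
                if best is None or gain > best[0]:
                    best = (gain, t, (a, b))
        if best is None:
            break  # no segment is large enough to split further
        _, t, seg = best
        segments.remove(seg)
        segments += [(seg[0], t), (t, seg[1])]
        bkps.append(t)
    return sorted(bkps)
```

In an adaptive IDS, a detected change point in the traffic profile would trigger retraining of the reference model (e.g. refitting the SGD-SVM classifiers on post-change traffic).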

    Network Intrusion Detection and Prevention Systems in Educational Systems: A case of Yaba College of Technology

    Nwogu, Emeka Joshua. 2012. Network Intrusion Detection and Prevention Systems in Educational Systems - A case of Yaba College of Technology. Bachelor's Thesis. Kemi-Tornio University of Applied Sciences. Business and Culture. Pages 66. Appendix 1. The objective of this thesis is to put forward a solution for improving the security of the network of Yaba College of Technology (YCT). The work focuses on the implementation of a network intrusion detection and prevention system (IDPS), prompted by constant intrusions on YCT's network. Various network attacks and their mitigation techniques are also discussed to give a clear picture of intrusions. The work will help the College's administrators become increasingly cautious about attacks and perform regular risk analyses. The research methodologies used are descriptive and exploratory research; in addition, a questionnaire survey and interviews were used to collect the data necessary for in-depth knowledge of intrusions at the College. The choice of research methods was found relevant for the work. Furthermore, the researcher intended to gain an increased understanding of, and provide a detailed picture of, IDPS and the issues to consider when implementing such a system. Network intrusion has been a security issue since the inception of computer systems and the Internet. Confidentiality, integrity and availability (CIA) are the three most important aspects of security targeted by intruders breaking into a computer or network system, and these aspects, along with other network resources, need to be well protected using robust security devices. Based on the research tests and results, this thesis proposes the implementation of an IDPS on the College's network as an essential step in securing its information and network resources.

    Network Analysis with Stochastic Grammars

    Digital forensics requires significant manual effort to identify items of evidentiary interest from the ever-increasing volume of data in modern computing systems. One of the tasks digital forensic examiners conduct is mentally extracting and constructing insights from unstructured sequences of events. This research assists examiners with the association and individualization analysis processes that make up this task through the development of a Stochastic Context-Free Grammar (SCFG) knowledge representation for digital forensics analysis of computer network traffic. SCFG is leveraged to provide context to the low-level data collected as evidence and to build behavior profiles. Upon discovering patterns, the analyst can begin the association or individualization process to answer criminal investigative questions. Three contributions resulted from this research. First, domain characteristics suitable for SCFG representation were identified and a step-by-step approach to adapting SCFG to novel domains was developed. Second, a novel iterative graph-based method of identifying similarities in context-free grammars was developed to compare behavior patterns represented as grammars. Finally, the SCFG capabilities were demonstrated by performing association and individualization, reducing the suspect pool and the volume of evidence to examine in a computer network traffic analysis use case.
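To give a concrete sense of how an SCFG scores an event sequence, the sketch below computes the inside (total parse) probability of a token sequence under a tiny hand-written grammar using CYK-style dynamic programming. The grammar, nonterminals and event names are invented for illustration and are not from the research:

```python
# Hypothetical SCFG over network-event tokens; every production is
# binary (CNF-like) so a CYK table can score a sequence.
# lhs -> [((rhs1, rhs2), probability), ...]
GRAMMAR = {
    "SESSION":   [(("HANDSHAKE", "TRANSFER"), 1.0)],
    "HANDSHAKE": [(("syn", "synack"), 1.0)],
    "TRANSFER":  [(("data", "fin"), 0.7), (("data", "rst"), 0.3)],
}

def inside_prob(tokens, grammar=GRAMMAR, start="SESSION"):
    """Inside probability: total probability that `start` derives the
    token sequence, summed over all parses."""
    n = len(tokens)
    # table[i][j] maps symbol -> probability of spanning tokens[i..j]
    table = [[{} for _ in range(n)] for _ in range(n)]
    for i, tok in enumerate(tokens):
        table[i][i][tok] = 1.0  # terminals score themselves
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):
                for lhs, prods in grammar.items():
                    for (b, c), p in prods:
                        pb = table[i][k].get(b, 0.0)
                        pc = table[k + 1][j].get(c, 0.0)
                        if pb and pc:
                            table[i][j][lhs] = table[i][j].get(lhs, 0.0) + p * pb * pc
    return table[0][n - 1].get(start, 0.0)
```

A behavior profile can then compare sequences by their probability under a suspect's grammar: a zero or very low score marks behavior inconsistent with that profile.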

    Performance Metrics for Network Intrusion Systems

    Intrusion systems have been the subject of considerable research during the past 33 years, since the original work of Anderson. Much has been published attempting to improve their performance using advanced data processing techniques including neural nets, statistical pattern recognition and genetic algorithms. Whilst some significant improvements have been achieved they are often the result of assumptions that are difficult to justify and comparing performance between different research groups is difficult. The thesis develops a new approach to defining performance focussed on comparing intrusion systems and technologies. A new taxonomy is proposed in which the type of output and the data scale over which an intrusion system operates is used for classification. The inconsistencies and inadequacies of existing definitions of detection are examined and five new intrusion levels are proposed from analogy with other detection-based technologies. These levels are known as detection, recognition, identification, confirmation and prosecution, each representing an increase in the information output from, and functionality of, the intrusion system. These levels are contrasted over four physical data scales, from application/host through to enterprise networks, introducing and developing the concept of a footprint as a pictorial representation of the scope of an intrusion system. An intrusion is now defined as “an activity that leads to the violation of the security policy of a computer system”. Five different intrusion technologies are illustrated using the footprint with current challenges also shown to stimulate further research. Integrity in the presence of mixed trust data streams at the highest intrusion level is identified as particularly challenging. Two metrics new to intrusion systems are defined to quantify performance and further aid comparison. 
Sensitivity is introduced to define the basic detectability of an attack in terms of a single parameter, rather than the usual four currently in use. Selectivity is used to describe the ability of an intrusion system to discriminate between attack types. These metrics are quantified experimentally for network intrusion using the DARPA 1999 dataset and SNORT. Only nine of the 58 attack types present were detected with sensitivities in excess of 12 dB, indicating that detection performance on the attack types present in this dataset remains a challenge. The measured selectivity was also poor, indicating that only three of the attack types could be confidently distinguished. The highest value of selectivity was 3.52, significantly lower than the theoretical limit of 5.83 for the evaluated system. Options for improving selectivity and sensitivity through additional measurements are examined.
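The thesis's exact single-parameter definition of sensitivity is not reproduced in this abstract. As a purely illustrative stand-in — an assumption, not the thesis's metric — the "usual four" confusion-matrix counts can be collapsed into one dB-style detectability score, e.g. ten times the log of the ratio of true-positive rate to false-positive rate:

```python
import math

def sensitivity_db(tp, fn, fp, tn):
    """Collapse the four confusion-matrix counts into a single dB-style
    detectability score: 10 * log10(TPR / FPR). Hypothetical metric for
    illustration only; the thesis defines its own sensitivity."""
    tpr = tp / (tp + fn)  # true-positive rate (detection rate)
    fpr = fp / (fp + tn)  # false-positive rate (false-alarm rate)
    return 10.0 * math.log10(tpr / fpr)
```

Under this toy definition, a detector that catches 90% of attacks with a 1% false-alarm rate scores about 19.5 dB, well above the 12 dB threshold mentioned above.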

    Network Intrusion Detection Using Iterative Heuristics


    Information-mining procedures for the identification of missing, noisy and inconsistent data

    Information is one of the most important assets a company holds, and guaranteeing information-technology governance requires, among other things, high-quality databases. A systems auditor employs many techniques, processes and tools to identify missing, noisy and inconsistent data in a database; data mining is one of the means through which the auditor can analyze that information. Given the enormous volume of information held in software systems, auditors must employ procedures that automate the detection of anomalous data. Several data-mining algorithms have been used to detect tuples considered anomalous; the problem is that there is no prior work on algorithms or procedures that identify which specific field within a tuple contains the anomalous values. This detection is of fundamental importance in large databases, where performing the task manually would require considerable time and specific training on the auditor's part. The objective of this thesis is to establish a taxonomy of the methods, techniques and algorithms for detecting anomalous values in databases, and to design and validate information-mining procedures that, combined, detect the fields containing outlier values, thereby improving data quality. Three data-mining approaches to detecting anomalous data are identified: unsupervised, supervised and semi-supervised.
This thesis develops four information-mining procedures that automatically detect which specific field holds values considered anomalous, using a hybrid methodology that combines algorithms from different approaches. The four procedures cover numeric databases with or without a target attribute, alphanumeric databases without a target attribute, and alphanumeric databases with target attributes. Experimental tests were carried out to validate the results, using both laboratory and real-world databases, and demonstrated the effectiveness of the proposed procedures. Integrating different algorithms not only detects the fields considered missing, noisy or inconsistent, but also minimizes the errors a single algorithm may make across the diverse and uncertain scenarios an auditor's work must confront.
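As a simplified illustration of field-level (rather than tuple-level) anomaly detection — not one of the thesis's four hybrid procedures — a robust z-score computed per column can flag which specific cell in a numeric table looks anomalous:

```python
import numpy as np

def flag_anomalous_fields(data, z_thresh=3.0):
    """For each numeric column, flag cells whose robust z-score (based on
    the median and the median absolute deviation, MAD) exceeds the
    threshold. Returns (row, column) indices of suspect fields.
    Illustrative single-algorithm sketch; the thesis combines several."""
    data = np.asarray(data, dtype=float)
    med = np.median(data, axis=0)
    mad = np.median(np.abs(data - med), axis=0)
    mad = np.where(mad == 0, 1e-9, mad)  # avoid division by zero
    z = 0.6745 * (data - med) / mad      # 0.6745 ~ normal consistency factor
    rows, cols = np.where(np.abs(z) > z_thresh)
    return list(zip(rows.tolist(), cols.tolist()))
```

Pointing the auditor at the offending cell, not just the offending row, is what makes the task tractable on large databases.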