8 research outputs found

    Cyber Blackbox for collecting network evidence

    Get PDF
    In recent years, the hottest topics in the security field have been related to advanced persistent attacks. As an approach to this problem, we propose a cyber blackbox which collects and preserves network traffic on a virtual-volume-based WORM device, called EvidenceLock, to ensure data integrity for security and forensic analysis. As a strategy to retain traffic for long enough periods, we introduce a deduplication method. This paper also includes a study on the network evidence that is collected and preserved for analyzing the cause of a cyber incident. We then propose a method that suggests a starting point for incident analysis to a forensic practitioner who has to investigate the vast amount of network traffic collected by the cyber blackbox. Experimental results show that this approach effectively reduces the amount of data to search by separating doubtful flows from normal traffic. Finally, we discuss the results from a forensically meaningful point of view and present further work.
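    The abstract does not specify how the deduplication works; below is a minimal sketch, assuming fixed-size chunking with SHA-256 fingerprints, of how duplicate traffic segments could be stored only once. The chunk size, the in-memory index, and the dict-backed chunk store are illustrative assumptions, not EvidenceLock's actual design.

        # Hypothetical sketch of the kind of deduplication strategy described
        # for long-term traffic retention; chunk size, hash choice, and store
        # layout are assumptions, not the authors' implementation.
        import hashlib

        CHUNK_SIZE = 4096  # fixed-size chunks; the paper's actual unit may differ

        def dedup_store(stream, chunk_store, index):
            """Split a traffic stream into chunks; store each unique chunk once."""
            recipe = []  # ordered list of fingerprints to rebuild the stream
            while True:
                chunk = stream.read(CHUNK_SIZE)
                if not chunk:
                    break
                fp = hashlib.sha256(chunk).hexdigest()
                if fp not in index:        # unseen content: persist it
                    chunk_store[fp] = chunk
                    index.add(fp)
                recipe.append(fp)          # duplicates cost only a fingerprint
            return recipe

    Replaying the stored recipe against the chunk store reconstructs the original traffic, which is what makes the approach usable for forensic analysis rather than only for compression.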

    Packet storage at multi-gigabit rates using off-the-shelf systems

    Full text link
    Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. V. Moreno et al., "Packet Storage at Multi-gigabit Rates Using Off-the-Shelf Systems," 2014 IEEE Intl Conf on High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Systems (HPCC, CSS, ICESS), Paris, 2014, pp. 486-489. doi: 10.1109/HPCC.2014.81
    The use of closed solutions from the best-known vendors for network-monitoring tasks has turned out to be a questionable option given their lack of flexibility and extensibility, which typically translates into higher costs. Consequently, we study whether high-performance monitoring tasks can be carried out using off-the-shelf systems, the research community's alternative to these pitfalls, consisting of the combination of open-source software and commodity hardware. We focus on sniffing and storing network traffic as one of the major tasks in any monitoring architecture. Specifically, we first review the keys to sniffing traffic at multi-gigabit rates, and then present an experimental evaluation of commodity hard drives. Finally, the lessons learned from these studies and experiments led us to the development of an open solution, namely HPCAP, which sniffs and stores multi-gigabit traffic using commodity hardware without packet losses in very demanding scenarios.
    This research was carried out with the support of the EU FP7 OpenLab project (Grant No. 287581), the Spanish National I+D Packtrack project (TEC2012-33754) and Universidad Autónoma de Madrid's multidisciplinary project Implementación de Modelos Computacionales Masivamente Paralelos (CEMU-2013-14).
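    A recurring lesson in this line of work is that commodity hard drives sustain high data rates only under large sequential writes. The user-space Python sketch below illustrates that batching idea, assuming a Linux raw socket (root required) and a 16 MiB flush size; HPCAP itself is a kernel-level driver, so this is an illustration of the principle, not its implementation.

        # Illustrative sketch: coalesce captured packets into large buffers and
        # write them to disk sequentially. Buffer size and the length-prefixed
        # file layout are assumptions made for the example.
        import socket

        ETH_P_ALL = 0x0003
        WRITE_BLOCK = 16 * 1024 * 1024  # flush in 16 MiB sequential writes

        def capture_to_disk(iface, path):
            sniff = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                                  socket.ntohs(ETH_P_ALL))  # Linux-only raw capture
            sniff.bind((iface, 0))
            buf = bytearray()
            with open(path, "wb") as out:
                while True:
                    pkt = sniff.recv(65535)
                    # length-prefix each packet so the dump can be parsed later
                    buf += len(pkt).to_bytes(4, "big") + pkt
                    if len(buf) >= WRITE_BLOCK:   # large sequential write:
                        out.write(buf)            # what spinning disks do best
                        buf.clear()

    The design choice to trade per-packet write latency for throughput is exactly what distinguishes a storage path that survives multi-gigabit rates from one that drops packets.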

    Cyber Black Box: Network intrusion forensics system for collecting and preserving evidence of attack

    Get PDF
    Once a system is compromised, forensics and investigation are carried out only after the attack, when useful instant evidence has already been lost. Because no log information needed for analyzing the attack cause is available after a cyber incident occurs, it is difficult to determine the cause of an intrusion even once the intrusion event is recognized. Moreover, in advanced cyber incidents such as advanced persistent threats, several months or more may be spent on cause analysis alone, and the cause is difficult to find with conventional security equipment. In this paper, we introduce a network intrusion forensics system for collecting and preserving the evidence of an intrusion, called Cyber Black Box, which is deployed in a Local Area Network environment. It quickly analyzes the cause of an intrusion event when one occurs, and provides a function for collecting evidence data of the intrusion event. The paper also describes experimental results on network throughput performance obtained by deploying our proposed system in an experimental testbed environment.
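    The abstract does not detail how evidence is preserved. As one hedged illustration, a digest chain over captured segments would make later tampering with stored evidence detectable; the function below is a hypothetical sketch of that general idea, not the Cyber Black Box mechanism.

        # Hypothetical sketch: seal each captured evidence segment with a
        # digest chained to the previous one, so modifying any stored segment
        # breaks verification of every later seal. Not the paper's design.
        import hashlib, json, time

        def seal_segment(prev_digest, segment_bytes):
            """Chain an evidence segment to its predecessor."""
            h = hashlib.sha256()
            h.update(prev_digest.encode())   # link to the previous seal
            h.update(segment_bytes)          # cover the segment contents
            digest = h.hexdigest()
            record = {"time": time.time(), "prev": prev_digest, "digest": digest}
            return digest, json.dumps(record)

        # Usage: start from a fixed genesis value, then seal segments in order.
        d, rec = seal_segment("genesis", b"captured traffic segment")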

    A Survey on Big Data for Network Traffic Monitoring and Analysis

    Get PDF
    Network Traffic Monitoring and Analysis (NTMA) represents a key component of network management, especially to guarantee the correct operation of large-scale networks such as the Internet. As the complexity of Internet services and the volume of traffic continue to increase, it becomes difficult to design scalable NTMA applications. Applications such as traffic classification and policing require real-time and scalable approaches. Anomaly detection and security mechanisms must quickly identify and react to unpredictable events while processing millions of heterogeneous events. Finally, the system has to collect, store, and process massive sets of historical data for post-mortem analysis. These are precisely the challenges faced by general big data approaches: Volume, Velocity, Variety, and Veracity. This survey brings together NTMA and big data. We catalog previous work on NTMA that adopts big data approaches to understand to what extent the potential of big data is being explored in NTMA. The survey focuses mainly on approaches and technologies to manage the big NTMA data, and additionally briefly discusses big data analytics (e.g., machine learning) for the sake of NTMA. Finally, we provide guidelines for future work, discussing lessons learned and research directions.
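    As a toy illustration of the kind of aggregation NTMA applications perform over massive packet streams, the sketch below computes per-flow byte counts. At real scale this map/reduce-style pattern is what big data engines parallelize across workers; the record field names are assumptions made for the example.

        # Toy illustration of the volume/velocity problem the survey describes:
        # aggregate per-flow byte counts from a stream of packet records.
        from collections import Counter

        def aggregate_flows(packets):
            """packets: iterable of dicts with src, dst, proto, size keys."""
            flows = Counter()
            for p in packets:
                key = (p["src"], p["dst"], p["proto"])  # coarse 3-tuple flow key
                flows[key] += p["size"]
            return flows.most_common(10)  # top talkers for a monitoring view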

    DASFlow: a distributed storage and processing architecture for network monitoring data

    Get PDF
    Advisor: Prof. Dr. Carmem Satie Hara. Master's dissertation - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 24/09/2015. Includes references: f. 88-92.
    Abstract: Network monitoring is one of the activities of the network management field, in which monitoring data are collected, stored, processed, and analyzed. It relies on tools that implement these functionalities. Many of these tools are based on a centralized architecture, among them NfSen/Nfdump, which is widely used by network administrators due to its good documentation and the fact that it is open source. The centralized model has limitations with respect to scalability that are inherent to the architecture: the lack of redundancy, a single point of failure, and the absence of load balancing. As a result, tools with a centralized architecture are subject to these limitations; that is, storage capacity is limited, as is the ability to collect and process data. Solutions to these problems, based on data compression and distributed data storage, can be found in the literature. In this dissertation, we propose a distributed architecture called DASFlow, applied to the NfSen/Nfdump network monitoring tool, which provides scalability for data collection, storage, and processing. To this end, the architecture defines the StoreDAS-Cliente and StoreDAS-Servidor modules, which work together with a distributed file system (DFS) to provide collection and storage scalability. Processing scalability is provided by the QueryDAS-Cliente and QueryDAS-Servidor modules. The architecture also contains a Metadata module, responsible for keeping information about the storage and distribution of monitoring data. The architecture has been implemented with the NfSen/Nfdump network monitoring tool and the Ceph distributed file system. The experimental results show the potential of the proposed architecture: DASFlow achieved response times for the most frequent queries that are between 13% and 34% shorter than those of the original NfSen/Nfdump tool. Additionally, the use of a distributed file system proved effective in providing scalability for the storage of network monitoring data.
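    The dissertation delegates actual data distribution to Ceph, so the following is only a hedged sketch of the bookkeeping role a Metadata module like DASFlow's could play: recording which storage node holds each capture file so the query modules can locate it later. The hash-based placement and the class interface are assumptions made for illustration.

        # Hedged sketch of a metadata service mapping monitoring-data files to
        # storage nodes; placement policy and interface are assumptions.
        import hashlib

        class MetadataModule:
            def __init__(self, nodes):
                self.nodes = nodes        # e.g. ["store-1", "store-2", ...]
                self.location = {}        # file name -> node

            def place(self, filename):
                """Pick a node for a new capture file and record the mapping."""
                i = int(hashlib.md5(filename.encode()).hexdigest(), 16) % len(self.nodes)
                self.location[filename] = self.nodes[i]
                return self.nodes[i]

            def locate(self, filename):
                """Query-side modules ask here before fetching data."""
                return self.location.get(filename)

    Centralizing this mapping is what lets the client and server modules scale independently: storage nodes can be added without the query side needing to know the placement policy.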

    Harnessing low-level tuning in modern architectures for high-performance network monitoring in physical and virtual platforms

    Full text link
    Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Tecnología Electrónica y de las Comunicaciones. Defense date: 02-07-201