Challenges in the capture and dissemination of measurements from high-speed networks
The production of a large-scale monitoring system for a high-speed network leads to a number of challenges. These challenges are not purely technical but also socio-political and legal. The number of stakeholders in such a monitoring activity is large, including the network operators, the users, the equipment manufacturers and, of course, the monitoring researchers. The MASTS project (Measurement at All Scales in Time and Space) was created to instrument the high-speed JANET Lightpath network and has been extended to incorporate other paths supported by JANET(UK). Challenges the project has faced include: simple access to the network; legal issues involved in the storage and dissemination of the captured information, which may be personal; and the volume of data captured and the rate at which these data arrive at the store. To this end, the MASTS system has established four monitoring points, each capturing packets on a high-speed link. Traffic header data are continuously collected, anonymised, indexed, stored and made available to the research community. A legal framework for the capture and storage of network measurement data has been developed which allows the anonymised IP traces to be used for research purposes.
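A central building block of such a pipeline is deterministic anonymisation of the addresses in stored traffic headers. As a hedged illustration, the keyed-hash scheme below keeps flows linkable while hiding real addresses; it is a minimal sketch, not necessarily the (e.g. prefix-preserving) scheme MASTS itself uses:

```python
import hashlib
import hmac
import ipaddress

def anonymise_ip(ip: str, key: bytes) -> str:
    """Map an IP address to a pseudonymous one with a keyed hash (HMAC).

    The mapping is deterministic, so the same host keeps the same pseudonym
    across a trace (flows stay linkable), but without the secret key the
    original address cannot be recovered.
    """
    packed = ipaddress.ip_address(ip).packed
    digest = hmac.new(key, packed, hashlib.sha256).digest()
    # Reuse the first len(packed) bytes of the digest as the pseudonymous address.
    return str(ipaddress.ip_address(digest[:len(packed)]))

key = b"secret key held by the measurement operator"  # illustrative key
print(anonymise_ip("192.0.2.17", key))
```

Because the mapping is keyed rather than a plain hash, an outsider cannot dictionary-attack the pseudonyms by hashing the whole IPv4 space.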
InSight2: An Interactive Web Based Platform for Modeling and Analysis of Large Scale Argus Network Flow Data
Monitoring systems are paramount to the proactive detection and mitigation of problems in computer networks related to performance and security. Degraded performance and compromised end-nodes can cost computer networks downtime, data loss and reputation. InSight2 is a platform that models, analyzes and visualizes large-scale Argus network flow data using up-to-date geographical data, organizational information and emerging threats. It is engineered to meet the needs of network administrators with flexibility and modularity in mind. Scalability is ensured through multi-core processing and a robust software architecture. Extendibility is achieved by enabling the end user to enrich flow records with additional user-provided databases. Deployment is streamlined by an automated installation script. State-of-the-art visualizations are presented in a secure, user-friendly web interface, giving the end user greater insight into the network.
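The user-driven enrichment idea can be sketched as a longest-prefix join of flow records against a user-provided lookup table; the database contents and field names below are illustrative assumptions, not InSight2's actual schema:

```python
import ipaddress

# Hypothetical user-provided enrichment database, keyed by network prefix.
ORG_DB = {
    ipaddress.ip_network("192.0.2.0/24"): "Example University",
    ipaddress.ip_network("192.0.2.128/25"): "Example Lab",
}

def enrich(flow: dict) -> dict:
    """Attach an 'org' field to a flow record by longest-prefix match
    on the source address, mimicking enrichment from a user database."""
    src = ipaddress.ip_address(flow["saddr"])
    matches = [net for net in ORG_DB if src in net]
    if matches:
        # Prefer the most specific (longest) matching prefix.
        flow["org"] = ORG_DB[max(matches, key=lambda n: n.prefixlen)]
    else:
        flow["org"] = "unknown"
    return flow

print(enrich({"saddr": "192.0.2.200", "dbytes": 4096})["org"])  # Example Lab
```

A production system would index the prefixes (e.g. in a trie) rather than scan them, but the join semantics are the same.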
Flow Monitoring Explained: From Packet Capture to Data Analysis With NetFlow and IPFIX
Flow monitoring has become a prevalent method for monitoring traffic in high-speed networks. By focusing on the analysis of flows, rather than individual packets, it is often said to be more scalable than traditional packet-based traffic analysis. Flow monitoring embraces the complete chain of packet observation, flow export using protocols such as NetFlow and IPFIX, data collection, and data analysis. In contrast to what is often assumed, all stages of flow monitoring are closely intertwined. Each of these stages therefore has to be thoroughly understood before sound flow measurements can be performed; otherwise, flow data artifacts and data loss can result, potentially without being noticed. This paper is the first of its kind to provide an integrated tutorial on all stages of a flow monitoring setup. As shown throughout this paper, flow monitoring has evolved from the early 1990s into a powerful tool, and additional functionality will certainly be added in the future. We show, for example, how the previously opposing approaches of deep packet inspection and flow monitoring have been united into novel monitoring approaches.
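The packet-observation and flow-export stages can be illustrated with a minimal aggregator: packets are grouped by their 5-tuple and a flow record is exported once the flow has been idle too long, mirroring the idle-timeout rule NetFlow and IPFIX exporters apply. The timeout value here is an arbitrary assumption; real exporters make it configurable:

```python
from collections import namedtuple

Packet = namedtuple("Packet", "ts saddr daddr sport dport proto size")

IDLE_TIMEOUT = 15.0  # seconds; illustrative value

def aggregate(packets):
    """Aggregate observed packets into flow records keyed by the 5-tuple.

    A flow is expired (exported) when no packet with its key has been seen
    for IDLE_TIMEOUT seconds; remaining flows are flushed at end of trace.
    """
    active, finished = {}, []
    for p in sorted(packets, key=lambda p: p.ts):
        key = (p.saddr, p.daddr, p.sport, p.dport, p.proto)
        f = active.get(key)
        if f and p.ts - f["last"] > IDLE_TIMEOUT:
            finished.append(f)        # idle too long: export the old flow
            f = None
        if f is None:
            f = {"key": key, "first": p.ts, "last": p.ts,
                 "packets": 0, "bytes": 0}
            active[key] = f
        f["last"] = p.ts
        f["packets"] += 1
        f["bytes"] += p.size
    finished.extend(active.values())  # flush whatever is still active
    return finished
```

Real exporters also apply an active timeout to long-lived flows and cache-eviction on memory pressure; those details are omitted here.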
Towards Informative Statistical Flow Inversion
This is the accepted version of 'Towards Informative Statistical Flow Inversion', archived originally at arXiv:0705.1939v1 [cs.NI] 14 May 2007. A problem which has recently attracted research attention is that of estimating the distribution of flow sizes in Internet traffic. On high-traffic links it is sometimes impossible to record every packet. Researchers have approached the problem of estimating flow lengths from sampled packet data in two separate ways. Firstly, different sampling methodologies can be tried to measure the desired system parameters more accurately. One such method is sample-and-hold, where, if a packet is sampled, all subsequent packets in that flow are sampled. Secondly, statistical methods can be used to "invert" the sampled data and produce an estimate of flow lengths from a sample. In this paper we propose, implement and test two variants on the sample-and-hold method. In addition, we show how the sample-and-hold method can be inverted to obtain an estimate of the genuine distribution of flow sizes. Experiments are carried out on real network traces to compare standard packet sampling with three variants of sample-and-hold. The methods are compared for their ability to reconstruct the genuine distribution of flow sizes in the traffic.
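The basic sample-and-hold rule is compact enough to sketch directly; the variants and the inversion step studied in the paper build on this core (the sampling probability below is an arbitrary illustration):

```python
import random

def sample_and_hold(flow_ids, p=0.01, seed=0):
    """Sample-and-hold over a packet stream given as one flow id per packet:
    a packet of an as-yet-unheld flow is sampled with probability p; once a
    flow is sampled ("held"), every later packet of that flow is counted."""
    rng = random.Random(seed)
    held = {}
    for fid in flow_ids:
        if fid in held:
            held[fid] += 1    # held flow: count unconditionally
        elif rng.random() < p:
            held[fid] = 1     # first sampled packet starts the counter
    return held
```

A flow of n packets is held with probability 1 - (1 - p)^n, so large flows are captured almost surely while small ones are mostly missed; this known bias is exactly what a statistical inversion then corrects for.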
CHID: conditional hybrid intrusion detection system for reducing false positives and resource consumption on malicious datasets
Inspecting packets to detect intrusions faces challenges when coping with a high volume of network traffic. Packet-based detection processes every payload on the wire, which degrades the performance of a network intrusion detection system (NIDS). This issue motivates flow-based NIDS, which reduce the amount of data to be processed by examining aggregated information about related packets.
However, flow-based detection still suffers from false positive alerts due to its incomplete data input. This study proposes Conditional Hybrid Intrusion Detection (CHID), which combines flow-based with packet-based detection and also aims to improve the resource consumption of the packet-based approach. CHID applies attribute wrapper feature evaluation algorithms that mark malicious flows for further analysis by the packet-based detection. An Input Framework approach is employed to pass packet flows between the packet-based and flow-based detections. A controlled testbed experiment was conducted to evaluate the performance of CHID using datasets captured at different traffic rates. The evaluation showed that CHID achieves a significant performance improvement in terms of resource consumption and packet drop rate compared to the default packet-based detection implementation. At 200 Mbps in an IRC-bot scenario, CHID reduces memory usage by 50.6% and CPU utilization by 18.1% without dropping packets. The CHID approach can mitigate the false positive rate of flow-based detection and reduce the resource consumption of packet-based detection while preserving detection accuracy, and can be considered a generic system applicable to intrusion detection monitoring.
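The conditional-hybrid control flow can be sketched as a cheap flow stage gating an expensive packet stage. The heuristic and thresholds below are illustrative stand-ins, not the feature-evaluation model the study actually uses:

```python
def flow_stage(flow: dict) -> bool:
    """Toy flow-based heuristic standing in for a trained feature model:
    flag flows with many small packets (a rough bot-like pattern).
    The thresholds are illustrative assumptions."""
    return flow["packets"] > 100 and flow["bytes"] / flow["packets"] < 120

def hybrid_inspect(flows: dict, packet_stage) -> dict:
    """Run the cheap flow stage over everything, then hand only the
    marked flows to the expensive packet (payload) stage."""
    marked = [fid for fid, f in flows.items() if flow_stage(f)]
    return {fid: packet_stage(fid) for fid in marked}

flows = {
    "irc-bot":  {"packets": 500, "bytes": 40_000},   # avg 80 B/pkt -> marked
    "download": {"packets": 300, "bytes": 420_000},  # avg 1400 B/pkt -> skipped
}
verdicts = hybrid_inspect(flows, packet_stage=lambda fid: "alert")
print(verdicts)  # {'irc-bot': 'alert'}
```

The resource savings come from the gate: payload inspection runs only on the marked subset rather than on every packet on the wire.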
Traffic Analysis Attacks and Defenses in Low Latency Anonymous Communication
The recent public disclosure of mass surveillance of electronic communication, involving powerful government authorities, has drawn the public's attention to issues regarding Internet privacy. For almost a decade now, there have been several research efforts towards designing and deploying open source, trustworthy and reliable systems that ensure users' anonymity and privacy. These systems operate by hiding the true network identity of communicating parties against eavesdropping adversaries. Tor, acronym for The Onion Router, is an example of such a system. Such systems relay the traffic of their users through an overlay of nodes that are called Onion Routers and are operated by volunteers distributed across the globe. Such systems have served well as anti-censorship and anti-surveillance tools. However, recent publications have disclosed that powerful government organizations are seeking means to de-anonymize such systems and have deployed distributed monitoring infrastructure to aid their efforts.
Attacks against anonymous communication systems, like Tor, often involve traffic analysis. In such attacks, an adversary capable of observing network traffic statistics in several different networks correlates the traffic patterns in these networks and associates otherwise seemingly unrelated network connections. The process can lead an adversary to the source of an anonymous connection. However, due to their design, consisting of globally distributed relays, the users of anonymity networks like Tor can route their traffic via virtually any network, hiding their tracks and true identities from their communication peers and eavesdropping adversaries. De-anonymization of a random anonymous connection is hard, as the adversary is required to correlate traffic patterns in one network link with those in virtually all other networks. Past research has mostly involved reducing the complexity of this process by first reducing the set of relays or network routers to monitor, and then identifying the actual source of anonymous traffic among the network connections routed via this reduced set. A study of various research efforts in this field reveals that there have been many more efforts to reduce the set of relays or routers to be searched than to explore methods for actually identifying an anonymous user amidst the network connections using these routers and relays. Few have tried to comprehensively study a complete attack that involves both reducing the set of relays and routers to monitor and identifying the source of an anonymous connection. Although it is believed that systems like Tor are trivially vulnerable to traffic analysis, there are various technical challenges and issues that can become obstacles to accurately identifying the source of an anonymous connection. It is hard to judge the vulnerability of anonymous communication systems without adequately exploring the issues involved in identifying the source of anonymous traffic.
We take steps to fill this gap by exploring two novel active traffic analysis attacks that rely solely on measurements of network statistics. In these attacks, the adversary tries to identify the source of an anonymous connection arriving at a server from an exit node. This generally involves correlating traffic entering and leaving the Tor network, linking otherwise unrelated connections. To increase the accuracy of identifying the victim connection among several connections, the adversary injects a traffic perturbation pattern into the connection, arriving at the server from a Tor node, that it wants to de-anonymize. One way to achieve this is by colluding with the server and injecting the pattern using common traffic shaping tools. Our first attack involves a novel remote bandwidth estimation technique to confirm the identity of Tor relays and network routers along the path connecting a Tor client and a server, by observing network bandwidth fluctuations deliberately injected by the server. The second attack involves correlating network statistics for connections entering and leaving the Tor network, available from existing network infrastructure such as Cisco's NetFlow, to identify the source of an anonymous connection. Additionally, we explored a novel technique to defend against the latter attack. Most proposed defenses against traffic analysis attacks involve the transmission of dummy traffic and have not been implemented due to fears of potential performance degradation. Our technique involves transmitting dummy traffic consisting of packets whose IP headers carry small Time-to-Live (TTL) values. Such packets are discarded by routers before they reach their destination; they distort NetFlow statistics without degrading the client's performance.
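The low-TTL dummy-traffic idea can be sketched with a plain UDP socket: on Linux the per-socket IP_TTL option caps how many hops the datagrams survive. The TTL value of 3 is an arbitrary assumption for illustration:

```python
import socket

def make_low_ttl_socket(ttl: int = 3) -> socket.socket:
    """UDP socket whose datagrams carry a small IP TTL.

    Routers discard the packets after `ttl` hops, so they inflate
    flow statistics on the first few links of the path but never
    reach, or burden, the destination."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
    return sock

sock = make_low_ttl_socket(3)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TTL))  # 3
```

Everything sent through this socket pads the NetFlow byte and packet counters observed on-path, which is precisely the distortion the defense relies on.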
Finally, we present a strategy that employs transmission of unique plain-text decoy traffic that appears sensitive, such as fake user credentials, through Tor nodes to decoy servers under our control. Periodic tallying of client and server logs to determine unsolicited connection attempts at the server is used to identify the eavesdropping nodes. Such malicious Tor node operators, eavesdropping on users' traffic, are potential traffic analysis attackers.
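The decoy-tallying strategy reduces to making each credential unique per exit node, so that any later unsolicited use of it names the node that eavesdropped. A minimal sketch under that assumption:

```python
import secrets

def issue_decoys(exit_nodes):
    """Give every exit node its own fake credential; the credential later
    doubles as a fingerprint of which node leaked it."""
    return {secrets.token_hex(8): node for node in exit_nodes}

def tally(decoys, attempted_credentials):
    """Cross-reference unsolicited login attempts seen in the decoy
    server's logs against the issued credentials to name the
    eavesdropping exit nodes."""
    return sorted({decoys[c] for c in attempted_credentials if c in decoys})
```

Since legitimate clients never reuse the decoy credentials, any hit in the tally is, by construction, evidence of eavesdropping on that node's exit traffic.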
Performance Evaluation of Network Anomaly Detection Systems
Nowadays, there is a huge and growing concern about security in information and communication technology (ICT) among the scientific community, because any attack or anomaly in the network can greatly affect many domains such as national security, private data storage, social welfare, and economic issues. Anomaly detection is therefore a broad research area, and many different techniques and approaches for this purpose have emerged through the years.
Attacks, problems, and internal failures, when not detected early, may badly harm an entire network system. This thesis therefore presents an autonomous profile-based anomaly detection system based on the statistical method Principal Component Analysis (PCADS-AD). This approach creates a network profile called Digital Signature of Network Segment using Flow Analysis (DSNSF) that denotes the predicted normal behavior of network traffic activity through historical data analysis. That digital signature is used as a threshold for volume anomaly detection, to detect disparities in the normal traffic trend. The proposed system uses seven traffic flow attributes: bits, packets and number of flows to detect problems, and source and destination IP addresses and ports to provide the network administrator the information necessary to solve them.
Via evaluation techniques, the addition of a different anomaly detection approach, and comparisons to other methods performed in this thesis using real network traffic data, the results showed good traffic prediction by the DSNSF and encouraging false alarm generation and detection accuracy in the detection schema.
The observed results seek to contribute to the advance of the state of the art in methods and strategies for anomaly detection, aiming to surpass some of the challenges that emerge from the constant growth in complexity, speed and size of today's large-scale networks, while also providing high-value results for better detection in real time. The low complexity and agility of the proposed system further allow it to be applied to real-time detection.
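The signature-as-threshold idea can be sketched compactly. The real DSNSF is derived with PCA over historical traffic; the per-time-bin mean/deviation profile below is a simplified stand-in that only illustrates how a predicted-normal signature flags volume disparities (the 3-sigma band is an assumption):

```python
import statistics

def dsnsf(history):
    """Build a per-time-bin signature (predicted normal volume) and a
    tolerance band from historical days of traffic.

    `history` is a list of days, each a list of per-bin volumes
    (e.g. bits per interval). PCA-based profiling is replaced here by
    a mean/standard-deviation profile for illustration."""
    bins = list(zip(*history))                      # same time bin across days
    signature = [statistics.mean(b) for b in bins]  # predicted normal volume
    band = [3 * statistics.stdev(b) for b in bins]  # illustrative 3-sigma band
    return signature, band

def detect(today, signature, band):
    """Flag time bins whose observed volume deviates from the signature
    by more than the tolerance band."""
    return [i for i, v in enumerate(today)
            if abs(v - signature[i]) > band[i]]

history = [[100, 200, 150], [110, 190, 160], [90, 210, 140]]  # three days
signature, band = dsnsf(history)
print(detect([105, 500, 155], signature, band))  # [1]: spike in bin 1
```

The same threshold structure extends to each of the monitored attributes (bits, packets, number of flows), with the source/destination addresses and ports of the offending bins reported to the administrator for diagnosis.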