
    Flow Monitoring Explained: From Packet Capture to Data Analysis With NetFlow and IPFIX

    Flow monitoring has become a prevalent method for monitoring traffic in high-speed networks. By focusing on the analysis of flows, rather than individual packets, it is often said to be more scalable than traditional packet-based traffic analysis. Flow monitoring embraces the complete chain of packet observation, flow export using protocols such as NetFlow and IPFIX, data collection, and data analysis. In contrast to what is often assumed, all stages of flow monitoring are closely intertwined. Each of these stages therefore has to be thoroughly understood before sound flow measurements can be performed. Otherwise, flow data artifacts and data loss can result, potentially without being noticed. This paper is the first of its kind to provide an integrated tutorial on all stages of a flow monitoring setup. As shown throughout this paper, flow monitoring has evolved from the early 1990s into a powerful tool, and additional functionality will certainly be added in the future. We show, for example, how the previously opposing approaches of deep packet inspection and flow monitoring have been united into novel monitoring approaches.
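    As a minimal illustration of the flow-export stage described above, the sketch below aggregates packet observations into flow records keyed by the usual 5-tuple. The Packet structure, field names, and timeout handling are simplifying assumptions for this example, not the tutorial's reference implementation.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    # Minimal stand-in for a packet observation; a real exporter reads these
    # fields from the packet headers at the observation point.
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int
    length: int
    timestamp: float

def aggregate_flows(packets, active_timeout=300.0):
    """Group packet observations into flow records keyed by the 5-tuple.

    A flow whose age exceeds the active timeout is expired (a real exporter
    would hand it to NetFlow/IPFIX export at this point); idle timeouts and
    TCP-FIN expiry are omitted to keep the sketch short.
    """
    active = {}
    expired = []
    for pkt in packets:
        key = (pkt.src_ip, pkt.dst_ip, pkt.src_port, pkt.dst_port, pkt.protocol)
        flow = active.get(key)
        if flow and pkt.timestamp - flow["start"] > active_timeout:
            expired.append(active.pop(key))
            flow = None
        if flow is None:
            flow = {"key": key, "start": pkt.timestamp, "end": pkt.timestamp,
                    "packets": 0, "bytes": 0}
            active[key] = flow
        flow["end"] = pkt.timestamp
        flow["packets"] += 1
        flow["bytes"] += pkt.length
    # Flush whatever is still active at the end of the capture.
    return expired + list(active.values())
```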

    No NAT'd User left Behind: Fingerprinting Users behind NAT from NetFlow Records alone

    It is generally recognized that the traffic generated by an individual connected to a network acts as his biometric signature. Several tools exploit this fact to fingerprint and monitor users. Often, though, these tools assume access to the entire traffic, including IP addresses and payloads. This is not feasible, on the grounds that both performance and privacy would be negatively affected. In reality, most ISPs convert user traffic into NetFlow records for a concise representation that does not include, for instance, any payloads. More importantly, large and distributed networks are usually NAT'd, so a few IP addresses may be associated with thousands of users. We devised a new fingerprinting framework that overcomes these hurdles. Our system is able to analyze a huge amount of network traffic represented as NetFlows, with the intent to track people. It does so by accurately inferring when users are connected to the network and which IP addresses they are using, even though thousands of users are hidden behind NAT. Our prototype implementation was deployed and tested within an existing large metropolitan WiFi network serving about 200,000 users, with an average load of more than 1,000 users simultaneously connected behind only 2 NAT'd IP addresses. Our solution turned out to be very effective, with an accuracy greater than 90%. We also devised new tools, and refined existing ones, that may be applied to other contexts related to NetFlow analysis.
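    The paper's inference model is not reproduced here; the sketch below only illustrates the kind of input such a framework sees, namely payload-free NetFlow records, and how the flows of a single NAT'd public IP can be collapsed into coarse activity windows. The record layout, field names, and gap threshold are assumptions for this example.

```python
from dataclasses import dataclass

@dataclass
class NetFlowRecord:
    # Typical payload-free NetFlow fields: addresses, ports, counters, timing.
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    bytes: int
    first_seen: float
    last_seen: float

def activity_windows(records, nat_ip, gap=60.0):
    """Collapse all flows originating from one NAT'd public IP into coarse
    activity windows, i.e. periods with no silence longer than `gap` seconds.
    Attributing each window to an individual user (the hard part addressed by
    the paper) is out of scope for this sketch."""
    spans = sorted((r.first_seen, r.last_seen)
                   for r in records if r.src_ip == nat_ip)
    windows = []
    for start, end in spans:
        if windows and start - windows[-1][1] <= gap:
            windows[-1][1] = max(windows[-1][1], end)
        else:
            windows.append([start, end])
    return windows
```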

    Harmonization of Computer Network Monitoring (Tietoverkkojen valvonnan yhdenmukaistaminen)

    As modern society becomes increasingly dependent on computer networks, especially with the Internet of Things gaining popularity, the need to monitor computer networks and their associated devices grows. Additionally, the number of cyber attacks is increasing, and certain malware, such as Mirai, specifically targets network devices. To monitor computer networks and devices effectively, solutions are required for collecting and storing the information. This thesis designs and implements a novel network monitoring system. The presented system is capable of utilizing state-of-the-art network monitoring protocols and harmonizing the collected information using a common data model. This design allows effective queries and further processing of the collected information. The presented system is evaluated by comparing it against the requirements imposed on it, by assessing the amount of information harmonized from several protocols, and by assessing the suitability of the chosen data model. Additionally, the protocol overheads of the network monitoring protocols used are evaluated. The presented system was found to fulfil the imposed requirements. Approximately 21% of the information provided by the chosen network monitoring protocols could be harmonized into the chosen data model format. This is sufficient for effectively querying, combining, and further processing the information, and the result can be improved by extending the data model and improving the information processing. Additionally, the chosen data model was shown to be suitable for the use case presented in this thesis.
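    A hedged sketch of what such a harmonization step might look like: protocol-specific field names are mapped onto a common schema, and the achievable coverage (the thesis reports roughly 21%) follows from how many source fields have a counterpart in the model. The source names below are real SNMP/IPFIX identifiers, but the target schema is an illustrative assumption, not the thesis' actual data model.

```python
# Hypothetical harmonization step: map protocol-specific field names onto one
# common data model.  The target schema is an assumption for illustration.
FIELD_MAP = {
    "snmp":  {"sysName": "device", "ifName": "interface",
              "ifHCInOctets": "rx_bytes", "ifHCOutOctets": "tx_bytes"},
    "ipfix": {"exporterIPv4Address": "device", "ingressInterface": "interface",
              "octetDeltaCount": "rx_bytes"},
}

def harmonize(protocol, raw):
    """Return the part of `raw` expressible in the common model, plus the
    share of input fields that could be harmonized."""
    mapping = FIELD_MAP.get(protocol, {})
    harmonized = {mapping[k]: v for k, v in raw.items() if k in mapping}
    coverage = len(harmonized) / len(raw) if raw else 0.0
    return harmonized, coverage

# Example: two of three SNMP fields have a counterpart in the model.
# harmonize("snmp", {"sysName": "sw1", "ifHCInOctets": 123, "ifAlias": "up"})
# -> ({"device": "sw1", "rx_bytes": 123}, 0.666...)
```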

    Flow Data Collection in Large Scale Networks

    In this chapter, we present flow-based network traffic monitoring of large-scale networks. The continuous increase in Internet traffic requires the deployment of advanced monitoring techniques that provide near real-time and long-term network visibility. Collected flow data can further be used for network behavior analysis to distinguish legitimate from malicious traffic, to prove cyber threats, etc. An early warning system should integrate flow-based monitoring to ensure network situational awareness.

    Insights into the issue in IPv6 adoption: a view from the Chinese IPv6 Application mix

    Although IPv6 was standardized more than 15 years ago, its deployment is still very limited. China has been strongly pushing IPv6, especially due to its limited IPv4 address space. In this paper, we describe measurements from a large Chinese academic network serving a significant population of IPv6 hosts. We show that, despite its expected strength, China is struggling as much as the western world to increase the share of IPv6 traffic. To understand the reasons behind this, we examine the IPv6 applicative ecosystem. We observe significant IPv6 traffic growth over the past 3 years, with P2P file transfers responsible for more than 80% of the IPv6 traffic, compared with only 15% of IPv4 traffic. Checking the top websites for IPv6 explains the dominance of P2P, with popular P2P trackers appearing systematically among the top visited sites, followed by popular Chinese services (e.g., Tencent), as well as surprisingly popular third-party analytics, including Google. Finally, we compare the throughput of IPv6 and IPv4 flows. We find that a larger share of IPv4 flows achieve high throughput compared with IPv6 flows, despite IPv6 traffic not being rate limited. We explain this through the limited amount of HTTP traffic in IPv6 and the presence of Web caches in IPv4. Our findings highlight the main issue in IPv6 adoption, that is, the lack of commercial content, which biases the geographic pattern and flow throughput of IPv6 traffic.
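    The throughput comparison can be reproduced in spirit from flow records alone: per-flow throughput is bytes times eight divided by flow duration, and the IPv4/IPv6 comparison reduces to the share of flows above some threshold. A minimal sketch, with the threshold and minimum-duration cut-off as assumptions.

```python
def throughput_mbps(flow_bytes, duration_s):
    """Average per-flow throughput in Mbit/s."""
    return flow_bytes * 8 / duration_s / 1e6

def high_throughput_share(flows, threshold_mbps=1.0, min_duration_s=0.001):
    """Fraction of flows whose average throughput exceeds the threshold.
    `flows` is an iterable of (bytes, duration_seconds) pairs; very short
    flows are skipped because their duration is unreliable."""
    rates = [throughput_mbps(b, d) for b, d in flows if d >= min_duration_s]
    if not rates:
        return 0.0
    return sum(1 for r in rates if r > threshold_mbps) / len(rates)

# Comparing high_throughput_share(ipv4_flows) with high_throughput_share(ipv6_flows)
# mirrors the kind of comparison reported in the paper.
```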

    Practical Experiences of Building an IPFIX Based Open Source Botnet Detector

    The academic study of flow-based malware detection has primarily focused on NetFlow v5 and v9. In 2013, IPFIX was ratified as the flow export standard. As part of a larger project to develop methods for protecting Cloud Service Providers from botnet threats, this paper considers the challenges involved in designing an open source, IPFIX-based botnet detection function. The paper describes how these challenges were overcome and presents an open source system, built upon the Xen hypervisor and Open vSwitch, that is able to display botnet traffic within Cloud Service Provider-style virtualised environments. The system utilises Euler property graphs to display suspect “botnests”. The conceptual framework presented provides a vendor-neutral, real-time detection mechanism for monitoring botnet communication traffic within cloud architectures and the Internet of Things.
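    The detector's Euler property graphs are not reconstructed here; the sketch below only shows the underlying idea of turning IPFIX-style flow records into a host communication graph and flagging hosts that talk to suspect endpoints. The record layout and the suspect list are assumptions for this example.

```python
from collections import defaultdict

def build_comm_graph(flow_records, suspect_ips):
    """Build a simple host-to-host communication graph from flow records
    (assumed here to be (src_ip, dst_ip, bytes) tuples) and flag hosts that
    talked to any address on a suspect list, e.g. known C&C endpoints.
    This is a plain adjacency-list sketch, not the Euler property graphs
    used by the paper's detector."""
    graph = defaultdict(set)
    for src, dst, _ in flow_records:
        graph[src].add(dst)
    flagged = {host for host, peers in graph.items() if peers & set(suspect_ips)}
    return graph, flagged
```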

    Exporting IP flows using IPFIX: Master Thesis

    Today's computer networks are continuously expanding, both in size and capacity, to accommodate the demands of the traffic they are designed to handle. Depending on the needs of the network operator, different aspects of this traffic need to be measured and analyzed. Processing the full amount of data on the network would be a daunting task, and to avoid this, only certain statistics describing the individual packets are collected. This data is then aggregated into "flows", based on criteria from the network operator. IPFIX is a recent IETF effort to standardize a protocol for exporting such flows to a central node for analysis. But to effectively utilize a system implementing this protocol, one needs to know the impact of the protocol itself on the underlying network and, consequently, on the traffic that flows through it. This document explores the performance, capabilities and limitations of the IPFIX protocol. A packet-capture system utilizing the IPFIX protocol for flow export is set up in a controlled environment, and traffic is generated in a predictable manner. Measurements indicate that IPFIX is a fairly flexible protocol for exporting various traffic characteristics, but that it also has scalability issues when deployed in larger, high-capacity networks.
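    Part of the protocol impact the thesis examines is a fixed export cost that can be estimated directly from RFC 7011: every IPFIX message carries a 16-byte message header and each data set a 4-byte set header, so packing more records per message amortizes the headers. A back-of-the-envelope sketch, with the 48-byte record size as an assumed example, follows.

```python
# Back-of-the-envelope estimate of the fixed IPFIX export cost, using the
# sizes from RFC 7011: a 16-byte message header and a 4-byte set header per
# data set.  The per-record size depends entirely on the template in use;
# 48 bytes below is only an assumed example.
IPFIX_MESSAGE_HEADER = 16   # version, length, export time, sequence, domain ID
IPFIX_SET_HEADER = 4        # set ID + length

def export_overhead(records_per_message, record_size=48):
    """Header bytes as a fraction of the whole IPFIX message."""
    payload = records_per_message * record_size
    headers = IPFIX_MESSAGE_HEADER + IPFIX_SET_HEADER
    return headers / (headers + payload)

# Packing more records per message amortizes the headers:
# export_overhead(1) ~= 0.29, export_overhead(30) ~= 0.014
```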

    Video Quality Monitoring Using NetFlow

    This bachelor's thesis presents tools for monitoring the quality of video transfers on the Internet. The goal is to create tools that automatically recognize a video transfer on the Internet from a provided file in a given format and subsequently analyze it. The tools follow a client-server architecture: the client gathers selected video quality statistics and sends them to a collector, where the server processes the statistics and enriches the relevant NetFlow/IPFIX records with them. The thesis also discusses video encoding, packet encapsulation, and the Internet protocols related to this topic. The system is written in C for a UNIX operating system.
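    A minimal sketch of the client side of such a client-server split, assuming a plain UDP channel and JSON-encoded statistics; the address, port, and statistic names are illustrative assumptions, and the actual system is written in C and enriches IPFIX records on the collector side.

```python
import json
import socket

def send_stats(collector_addr=("127.0.0.1", 9999), **stats):
    """Serialize the gathered video-quality statistics and ship them to the
    collector over UDP; the collector side would then match them to the
    relevant flow records and extend those records."""
    message = json.dumps(stats).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, collector_addr)

if __name__ == "__main__":
    # Report which flow the statistics belong to and the measured quality.
    send_stats(src_ip="10.0.0.5", dst_ip="203.0.113.9", dst_port=554,
               frames_lost=3, bitrate_kbps=2480)
```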