
    Design and Evaluation of HTTP Protocol Parsers for IPFIX Measurement

    In this paper, we analyze HTTP protocol parsers that provide web traffic visibility to IP flow monitoring. Despite extensive work, flow meters generally fall short of performance goals because extracting application-layer data is costly. Constructing an effective protocol parser for in-depth analysis is a challenging and error-prone affair. We designed and evaluated several HTTP protocol parsers representing the state-of-the-art approaches used in today's flow meters. We show the packet rates achieved by the respective parsers, including the throughput decrease (the performance impact of the application parser), which is of the utmost importance for high-speed deployments.
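    As a rough illustration of what such a parser must do, the following minimal Python sketch (our own illustration, not one of the evaluated parsers) extracts the method, URI, and Host header from a reassembled TCP payload:

```python
def parse_http_request(payload: bytes):
    """Best-effort extraction of method, URI and Host from a TCP payload.

    Returns None if the payload does not start with an HTTP request line.
    Header names are matched case-sensitively; a real parser would not.
    """
    head, _, _ = payload.partition(b"\r\n\r\n")   # headers end at the blank line
    lines = head.split(b"\r\n")
    try:
        method, uri, version = lines[0].split(b" ", 2)
    except ValueError:
        return None                               # not a request line
    if not version.startswith(b"HTTP/"):
        return None
    headers = dict(line.split(b": ", 1) for line in lines[1:] if b": " in line)
    return method, uri, headers.get(b"Host")
```

    A production parser must additionally bound how many bytes per flow it inspects and handle requests split across packets, which is where much of the throughput cost discussed above comes from.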

    Next Generation Application-Aware Flow Monitoring

    Deep packet inspection (DPI) and IP flow monitoring are frequently used network monitoring approaches. Although DPI provides application visibility, the detailed examination of every packet is computationally intensive. IP flow monitoring achieves high performance by processing only packet headers, but provides fewer details about the traffic itself. Application-aware flow monitoring has been proposed as an attempt to combine the accuracy of DPI with the performance of IP flow monitoring. However, the impacts, benefits, and disadvantages of application flow monitoring have not yet been studied in detail. The work proposed in this paper attempts to rectify this lack of research. We also propose a next-generation flow measurement for application monitoring, in which a flow represents an event within the application protocol, e.g., a web page download, instead of a packet stream. Finally, we will investigate the performance of different approaches to application classification and application parsing with computational complexity in mind.
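    To make the contrast concrete, the following sketch (our own illustration, not the paper's implementation) extends a classic 5-tuple flow cache with an application-layer field; the header-only counters stay cheap, while the application field is filled only when a parser fires:

```python
from collections import defaultdict
from dataclasses import dataclass, field

# Classic flow key: (src_ip, dst_ip, src_port, dst_port, protocol)
FlowKey = tuple

@dataclass
class FlowRecord:
    packets: int = 0
    octets: int = 0
    http_hosts: list = field(default_factory=list)  # application-aware extension

cache = defaultdict(FlowRecord)

def update(key: FlowKey, length: int, http_host: bytes | None = None):
    record = cache[key]
    record.packets += 1           # header-only counters: maintained for every packet
    record.octets += length
    if http_host is not None:     # application field: set only when a parser fires
        record.http_hosts.append(http_host)
```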

    Cyber Situation Awareness via IP Flow Monitoring

    Cyber situation awareness has been recognized as a vital requirement for effective cyber defense. It allows cybersecurity operators to identify, understand, and anticipate incoming threats. Achieving and maintaining cyber situation awareness is a challenging task given the continuous evolution of computer networks, the increasing volume and speed of data in a network, and the rising number of threats to network security. Our work contributes to the continuous evolution of cyber situation awareness through research on novel approaches to the perception and comprehension of a computer network. We concentrate our research efforts on the domain of IP flow network monitoring. We propose improvements to IP flow monitoring techniques that enable enhanced perception of a computer network. Further, we conduct detailed analyses of network traffic, which allow for an in-depth understanding of host behavior in a computer network. Last but not least, we propose a novel approach to IP flow network monitoring that enables real-time cyber situation awareness

    EventFlow: Network Flow Aggregation Based on User Actions

    Network flow monitoring is being supplemented with application flow visibility to provide more detailed information about network traffic. However, the current concept of flows provides no mechanism for keeping track of the semantic relations between individual flows that are created as part of a single user action. We propose an extension to flow measurement, called EventFlow, which preserves the relations between HTTP and DNS application flows that are part of a single user action, most typically the browsing of a web page. We describe the architecture of the EventFlow extension and its limitations. A prototype implementation of EventFlow is introduced and evaluated on a packet trace from an ISP network. We show that a significant number of flow records can be recognised as part of a single user action
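    As an illustration of the idea (the names and matching rule below are our simplification, not the EventFlow prototype), a DNS answer can open an event, and a subsequent HTTP flow to the resolved address can join it:

```python
import itertools

_event_ids = itertools.count(1)
_resolved = {}  # answered IP address -> (event id, query name)

def on_dns_answer(query_name: str, answer_ip: str):
    # A DNS answer opens a new candidate event (e.g. a page download).
    _resolved[answer_ip] = (next(_event_ids), query_name)

def on_http_flow(dst_ip: str, host: str):
    # An HTTP flow to a freshly resolved address joins that event when
    # its Host header matches the DNS query name.
    entry = _resolved.get(dst_ip)
    if entry and entry[1] == host:
        return entry[0]   # event id shared by the DNS and HTTP flow records
    return None
```

    The prototype described in the paper handles more relations and edge cases than this toy matcher.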

    Security Monitoring of HTTP Traffic Using Extended Flows

    In this paper, we present an analysis of HTTP traffic in a large-scale environment that uses network flow monitoring extended by the parsing of HTTP requests. In contrast to previously published analyses, we were the first to classify patterns of HTTP traffic that are relevant to network security. We describe three classes of HTTP traffic covering brute-force password attacks, connections to proxies, HTTP scanners, and web crawlers. Using this classification, we were able to detect up to 16 previously undetectable brute-force password attacks and 19 HTTP scans per day in our campus network. The activity of proxy servers and web crawlers was also observed. Symptoms of these attacks may be detected by other methods based on traditional flow monitoring, but detection through the analysis of HTTP requests is more straightforward. We thus confirm the added value of extended flow monitoring in comparison to the traditional method
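    For example, one of the described classes, brute-force password attacks, lends itself to a simple heuristic over extended flow records; the field names and threshold below are illustrative, not the paper's values:

```python
from collections import Counter

def flag_bruteforce(flows, threshold=100):
    """Flag sources that repeatedly POST to one URI, e.g. a login form.

    `flows` is an iterable of dicts carrying extended-flow fields
    (hypothetical names: src_ip, host, uri, method).
    """
    attempts = Counter(
        (f["src_ip"], f["host"], f["uri"])
        for f in flows
        if f["method"] == "POST"
    )
    return [key for key, count in attempts.items() if count >= threshold]
```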

    Harmonization of Computer Network Monitoring (Tietoverkkojen valvonnan yhdenmukaistaminen)

    As modern society becomes increasingly dependent on computer networks, especially with the Internet of Things gaining popularity, the need to monitor computer networks and the associated devices grows. Additionally, the number of cyber attacks is increasing, and certain malware, such as Mirai, targets network devices in particular. To monitor computer networks and devices effectively, solutions are required for collecting and storing the information. This thesis designs and implements a novel network monitoring system. The presented system is capable of utilizing state-of-the-art network monitoring protocols and of harmonizing the collected information using a common data model. This design allows effective queries and further processing of the collected information. The presented system is evaluated by comparing it against the requirements imposed on it, by assessing the amount of information that can be harmonized from several protocols, and by assessing the suitability of the chosen data model. Additionally, the protocol overheads of the network monitoring protocols used are evaluated. The presented system was found to fulfil the imposed requirements. Approximately 21% of the information provided by the chosen network monitoring protocols could be harmonized into the chosen data model format. This result is sufficient for effectively querying, combining, and further processing the information, and it can be improved by extending the data model and improving the information processing. The chosen data model was also shown to be suitable for the use case presented in this thesis.
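    A minimal sketch of such harmonization, with invented field mappings standing in for the thesis' actual data model:

```python
# Illustrative field mappings; the thesis' data model is richer than this.
MAPPINGS = {
    "netflow": {"IPV4_SRC_ADDR": "source_address", "IN_BYTES": "octets"},
    "snmp":    {"ifInOctets": "octets", "sysName": "device_name"},
}

def harmonize(source: str, record: dict) -> dict:
    """Project a protocol-specific record onto a common data model."""
    common = {"source_protocol": source}
    for native_field, common_field in MAPPINGS[source].items():
        if native_field in record:
            common[common_field] = record[native_field]
    return common   # unmapped fields are dropped (cf. the ~21% figure above)
```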

    Monitoring Network Flows in Containerized Environments

    With the progressive implementation of digital services over virtualized infrastructures and smart devices, the inspection of network traffic becomes more challenging than ever because of the difficulty of running legacy cybersecurity tools in novel cloud models and computing paradigms. The main issues concern i) the portability of the service across heterogeneous public and private infrastructures, which usually lack hardware and software acceleration for efficient packet processing, and ii) the difficulty of integrating monolithic appliances into modular and agile containerized environments. In this chapter, we investigate the usage of the extended Berkeley Packet Filter (eBPF) for effective and efficient packet inspection in virtualized environments. Our preliminary implementation demonstrates that we can achieve the same performance as well-known packet inspection tools, but with far less resource consumption. This motivates further research to extend the capabilities of our framework and to integrate it into Kubernetes
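    A minimal sketch of the approach, using the bcc frontend to load an eBPF socket filter that counts IPv4 packets per transport protocol; it assumes a Linux host with bcc installed, and the map, function, and interface names are illustrative, not the chapter's framework:

```python
from time import sleep
from bcc import BPF  # bcc frontend; the eBPF program itself is embedded C

prog = r"""
#include <bcc/proto.h>

BPF_HASH(proto_count, u64, u64);

int count_packets(struct __sk_buff *skb) {
    u8 *cursor = 0;
    struct ethernet_t *eth = cursor_advance(cursor, sizeof(*eth));
    if (eth->type != 0x0800)      /* IPv4 only */
        return 0;
    struct ip_t *ip = cursor_advance(cursor, sizeof(*ip));
    u64 proto = ip->nextp;        /* 6 = TCP, 17 = UDP, ... */
    proto_count.increment(proto);
    return 0;                     /* do not pass packets to the raw socket */
}
"""

b = BPF(text=prog)
fn = b.load_func("count_packets", BPF.SOCKET_FILTER)
BPF.attach_raw_socket(fn, "eth0")  # interface name is illustrative
sleep(10)
for proto, count in b["proto_count"].items():
    print(proto.value, count.value)
```

    Because the filter runs in the kernel and exports only aggregated counters, the user-space process stays lightweight, which is the resource-consumption advantage the chapter reports.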

    Network-based HTTPS Client Identification Using SSL/TLS Fingerprinting

    The growing share of encrypted network traffic complicates network traffic analysis and network forensics. In this paper, we present real-time, lightweight identification of HTTPS clients based on network monitoring and SSL/TLS fingerprinting. Our experiment shows that it is possible to estimate the User-Agent of a client in HTTPS communication by analysing the SSL/TLS handshake. The fingerprints of SSL/TLS handshakes, including the list of supported cipher suites, differ among clients and correlate with User-Agent values from the HTTP header. We built a dictionary of SSL/TLS cipher suite lists and HTTP User-Agents and assigned the User-Agents to observed SSL/TLS connections to identify the communicating clients. We discuss host-based and network-based methods of dictionary retrieval and estimate the quality of the data. The usability of the proposed method is demonstrated in two case studies of network forensics
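    To illustrate the mechanism (a simplified sketch, not the paper's implementation), the cipher-suite list can be pulled out of a raw ClientHello record and matched against a prebuilt dictionary; the dictionary entry below is hypothetical:

```python
import struct

def clienthello_cipher_suites(record: bytes):
    """Return the cipher-suite codes offered in a raw TLS ClientHello record.

    Truncated or malformed records are not handled in this sketch.
    """
    if len(record) < 44 or record[0] != 0x16 or record[5] != 0x01:
        return None              # not a handshake record carrying a ClientHello
    pos = 9 + 2 + 32             # skip handshake header, client version, random
    pos += 1 + record[pos]       # skip the variable-length session id
    (n,) = struct.unpack_from(">H", record, pos)
    pos += 2
    return [struct.unpack_from(">H", record, pos + i)[0] for i in range(0, n, 2)]

# Hypothetical dictionary pairing cipher-suite lists with observed User-Agents.
FINGERPRINTS = {
    (0x1301, 0x1302, 0x1303, 0xC02B, 0xC02F): "an example desktop browser UA",
}

def estimate_user_agent(record: bytes) -> str:
    suites = clienthello_cipher_suites(record)
    return FINGERPRINTS.get(tuple(suites or ()), "unknown client")
```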

    From the Edge to the Core: Towards Informed Vantage Point Selection for Internet Measurement Studies

    Since the early days of the Internet, measurement scientists have been trying to keep up with its fast-paced development. As the Internet grew organically over time and without built-in measurability, this process requires many workarounds and due diligence. As a result, every measurement study is only as good as the data it relies on. Moreover, data quality is relative to the research question: a data set suitable for analyzing one problem may be insufficient for another. This is entirely expected, as the Internet is decentralized, i.e., there is no single observation point from which we can assess the complete state of the Internet. Because of this, every measurement study needs specifically selected vantage points that fit the research question. In this thesis, we present three different vantage points across the Internet topology, from the edge to the Internet core. We discuss their specific features, their suitability for different kinds of research questions, and how to work with the corresponding data. The data sets obtained at the presented vantage points allow us to conduct three different measurement studies and shed light on the following aspects: (a) the prevalence of IP source address spoofing at a large European Internet Exchange Point (IXP), (b) the propagation distance of BGP communities, an optional transitive BGP attribute used for traffic engineering, and (c) the impact of the global COVID-19 pandemic on Internet usage behavior at a large Internet Service Provider (ISP) and three IXPs