252 research outputs found

    Netodyssey: a framework for real-time windowed analysis of network traffic

    Traffic monitoring and analysis is of critical importance for managing and designing modern computer networks, and is nowadays a very active research field. In most of their studies, researchers use techniques and tools that follow a statistical approach to obtain deeper knowledge about traffic behaviour. Network administrators also find great value in statistical analysis tools. Many of those tools return similar metrics calculated for common properties of network packets. This dissertation presents NetOdyssey, a framework for the statistical analysis of network traffic. One of the crucial points of differentiation of NetOdyssey from other analysis frameworks is its windowed analysis philosophy, which allows researchers seeking deeper knowledge about networks to look at traffic as if through a window. This approach is crucial to avoid the biasing effects of statistically analyzing the traffic as a whole. Small fluctuations and irregularities in the network can now be analyzed, because one is always looking through a window of fixed size: either in the number of observations or in their temporal duration. NetOdyssey is able to capture live traffic from a network card or from a pre-collected trace, thus allowing for real-time analysis as well as delayed and repeatable analysis. NetOdyssey has a modular architecture, making it possible for researchers with limited programming experience to create analysis modules which can be tweaked and easily shared among those who use the framework. These modules are designed so that their implementation is optimized for the windowed analysis philosophy behind NetOdyssey. This optimization makes the analysis process independent of the size of the analysis window, because it only considers the observations entering and leaving the window.
Besides presenting the framework, its architecture and its validation, this dissertation also presents four analysis modules: average and standard deviation, entropy, auto-correlation and Hurst parameter estimators. Each of these modules is presented and validated throughout the dissertation. Funded by Fundação para a Ciência e a Tecnologia (FCT).
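The window-size-independent update described above can be illustrated with a minimal sketch (not NetOdyssey's actual code; class and method names are illustrative): a sliding window whose running sums are updated in constant time per observation, touching only the value entering and the value leaving the window.

```python
from collections import deque
import math

class WindowedStats:
    """Fixed-size sliding window with O(1) cost per observation:
    only the incoming and outgoing values touch the running sums,
    so the cost is independent of the window size."""
    def __init__(self, size):
        self.size = size
        self.window = deque()
        self.total = 0.0
        self.total_sq = 0.0

    def push(self, x):
        self.window.append(x)
        self.total += x
        self.total_sq += x * x
        if len(self.window) > self.size:   # slide: drop the oldest value
            old = self.window.popleft()
            self.total -= old
            self.total_sq -= old * old

    def mean(self):
        return self.total / len(self.window)

    def std(self):
        n = len(self.window)
        var = self.total_sq / n - self.mean() ** 2
        return math.sqrt(max(var, 0.0))    # clamp tiny negative rounding
```

The same entering/leaving discipline extends to the other module metrics: each module only needs to know which observation arrived and which departed.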

    Evaluating the impact of traffic sampling in network analysis

    Integrated master's dissertation in Informatics Engineering. The sampling of network traffic is a very effective method for understanding the behaviour and flow of a network, and is essential for building network management tools that support Service Level Agreements (SLAs), Quality of Service (QoS), traffic engineering, and the planning of both network capacity and network security. With the exponential rise in traffic caused by the growing number of devices connected to the Internet, it becomes increasingly hard and expensive to understand the behaviour of a network through analysis of the total volume of traffic. The use of sampling techniques, or selective analysis, which consists in the selection of a small number of packets in order to estimate the expected behaviour of a network, then becomes essential. Even though these techniques drastically reduce the amount of data to be analyzed, the fact that the sampling tasks have to be performed on the network equipment can cause a significant impact on the performance of that equipment, and a reduction in the accuracy of the estimation of network state. This dissertation evaluates the impact of selective analysis of network traffic, both on the performance of network state estimation and on statistical properties, such as self-similarity and Long-Range Dependence (LRD), that exist in the original traffic, allowing a better understanding of the behaviour of sampled network traffic.
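A common form of the selective analysis described above is 1-in-N systematic sampling; the following is an illustrative sketch (not the dissertation's implementation): select every n-th packet from a random start, then scale sampled totals back up by the sampling rate to estimate the full traffic volume.

```python
import random

def systematic_sample(packet_sizes, n):
    """1-in-n systematic sampling with a random start offset,
    as commonly performed on network equipment to cut analysis cost."""
    start = random.randrange(n)
    return packet_sizes[start::n]

def estimate_total_bytes(sample, n):
    """Invert the sampling rate to estimate the total byte volume."""
    return n * sum(sample)
```

The estimator is unbiased for uniform traffic, but, as the abstract notes, sampling can distort statistical properties such as LRD, which is precisely what the dissertation sets out to measure.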

    Implementation of 4kUHD HEVC-content transmission

    The Internet of Things (IoT) has received a great deal of attention in recent years, and is still being approached from a wide range of views. At the same time, video data now accounts for over half of Internet traffic. With content beyond high definition now available, it is worth understanding the performance effects, especially for real-time applications. High Efficiency Video Coding (HEVC) aims to reduce bandwidth utilisation while maintaining perceived video quality in comparison with its predecessor codecs. Its adoption aims to bring significant improvements to areas such as television broadcast, multimedia streaming/storage, and mobile communications. Although there have been attempts at HEVC streaming, the implementations offered in the literature do not take into consideration changes in the HEVC specifications, and little research seems to exist on live streaming of real-time HEVC-coded content. Our contribution fills this gap by enabling compliant, real-time networked HEVC visual applications. This is done by implementing a technique for real-time HEVC encapsulation in MPEG-2 Transport Stream (MPEG-2 TS) and HTTP Live Streaming (HLS), thereby removing the need for multi-platform clients to receive and decode HEVC streams. We go further by evaluating the transmission of 4k UHDTV HEVC-coded content in a typical wireless environment using both computers and mobile devices, while considering well-known factors such as obstruction, interference and other unseen factors that affect network performance and video quality. Our results suggest that 4kUHD can be streamed at 13.5 Mb/s and delivered to multiple devices without loss in perceived quality.
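As a rough illustration of the MPEG-2 TS encapsulation mentioned above (a generic sketch based on the container format itself, not the authors' code): every TS packet is 188 bytes, begins with the sync byte 0x47, and carries a 13-bit PID identifying which elementary stream (e.g. the HEVC video) the payload belongs to.

```python
def parse_ts_header(pkt: bytes):
    """Parse the 4-byte MPEG-2 Transport Stream packet header
    (ISO/IEC 13818-1). TS packets are 188 bytes, sync byte 0x47."""
    if len(pkt) < 4 or pkt[0] != 0x47:
        raise ValueError("not a TS packet")
    return {
        "payload_unit_start": bool(pkt[1] & 0x40),  # new PES packet begins here
        "pid": ((pkt[1] & 0x1F) << 8) | pkt[2],     # 13-bit stream identifier
        "adaptation_field": (pkt[3] >> 4) & 0x3,    # 1=payload, 2=AF, 3=both
        "continuity_counter": pkt[3] & 0x0F,        # detects lost TS packets
    }
```

A receiver demultiplexes the HEVC stream by filtering on its PID and reassembling payloads in continuity-counter order.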

    Graph-based Temporal Analysis in Digital Forensics

    Establishing a timeline as part of a digital forensics investigation is a vital part of understanding the order in which system events occurred. However, most digital forensics tools present timelines as histograms or as raw artifacts. Consequently, digital forensics examiners are forced to rely on manual, labor-intensive practices to reconstruct system events. Current digital forensics analysis tools are at their technological limit given the increasing volume and complexity of data. A graph-based timeline can present digital forensics evidence in a structure that can be immediately understood and easily focused. This paper presents the Temporal Analysis Integration Management Application (TAIMA) to enhance digital forensics analysis via information visualization (infovis) techniques. TAIMA is a prototype application that provides a graph-based timeline for event reconstruction using abstraction and visualization techniques. A workflow illustration and a pilot usability study provided evidence that TAIMA assisted digital forensics specialists in identifying key system events during digital forensics analysis.
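The core of any graph-based timeline can be sketched minimally as follows (illustrative names, not TAIMA's implementation): sort forensic events by timestamp and link each event to its successor, yielding the node and edge sets a visualization layer would render.

```python
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float   # e.g. epoch seconds recovered from a forensic artifact
    label: str         # e.g. "file created", "USB device inserted"

def build_timeline_graph(events):
    """Order events chronologically and link each to its successor,
    producing the edge list of a simple linear timeline graph."""
    ordered = sorted(events, key=lambda e: e.timestamp)
    edges = [(a.label, b.label) for a, b in zip(ordered, ordered[1:])]
    return ordered, edges
```

A real tool would additionally abstract runs of low-level events into higher-level nodes, which is where the reduction in examiner effort comes from.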

    Network distributed 3D video quality monitoring system

    This project description presents research and development work whose primary goal was the design and implementation of an Internet Protocol (IP) network distributed video quality assessment tool. Even though the system was designed to monitor H.264 three-dimensional (3D) stereo video quality, it is also applicable to different formats of 3D video (such as texture plus depth) and can use different video quality assessment models, making it easily customizable and adaptable to varying conditions and transmission scenarios. The system uses packet-level data collection done by a set of network probes located at convenient network points, which carry out packet monitoring, inspection and analysis to obtain information about 3D video packets passing through the probes' locations. The information gathered is sent to a central server for further processing, including 3D video quality estimation based on packet-level information. First, an overview of current 3D video standards, their evolution and features is presented, strongly focused on H.264/AVC and HEVC. Then follows a description of video quality assessment metrics, describing in more detail the quality estimator used in the work. Video transport methods over the Internet Protocol are also explained in detail, as thorough knowledge of video packetization schemes is important to understand the information retrieval and parsing performed at the front stage of the system, the probes. After those introductory themes are addressed, a general system architecture is shown, explaining all its components and how they interact with each other. The development steps of each of the components are then thoroughly described. In addition to the main project, a 3D video streamer was created to be used in the implementation tests of the system. This streamer was purposely built for the present work, as currently available free-domain streamers do not support 3D video streaming.
The overall result is a system that can be deployed in any IP network and is flexible enough to help in future video quality assessment research, since it can be used as a testing platform to validate any proposed new quality metrics, serve as a network monitoring tool for video transmission, or help to understand the impact that some network characteristics may have on video quality.
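As an example of the packet-level parsing such probes perform (a generic sketch, not the project's code), a probe inspecting RTP-carried video would first decode the fixed 12-byte RTP header (RFC 3550) before looking into the video payload; sequence numbers and timestamps alone already support loss and jitter estimation.

```python
import struct

def parse_rtp_header(data: bytes):
    """Parse the fixed 12-byte RTP header (RFC 3550): the first step a
    network probe takes before inspecting the carried video payload."""
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", data[:12])
    return {
        "version": b0 >> 6,          # always 2 for RTP
        "marker": bool(b1 & 0x80),   # often set on the last packet of a frame
        "payload_type": b1 & 0x7F,   # identifies the codec mapping
        "sequence": seq,             # gap here => packet loss
        "timestamp": ts,             # media clock, used for jitter
        "ssrc": ssrc,                # identifies the stream source
    }
```

Gaps in `sequence` across a probe's capture feed directly into packet-level quality estimation at the central server.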

    GPU Accelerated protocol analysis for large and long-term traffic traces

    This thesis describes the design and implementation of GPF+, a complete general packet classification system developed using Nvidia CUDA for Compute Capability 3.5+ GPUs. This system was developed with the aim of accelerating the analysis of arbitrary network protocols within network traffic traces using inexpensive, massively parallel commodity hardware. GPF+ and its supporting components are specifically intended to support the processing of large, long-term network packet traces such as those produced by network telescopes, which are currently difficult and time-consuming to analyse. The GPF+ classifier is based on prior research in the field, which produced a prototype classifier called GPF, targeted at Compute Capability 1.3 GPUs. GPF+ greatly extends the GPF model, improving runtime flexibility and scalability whilst maintaining high execution efficiency. GPF+ incorporates a compact, lightweight register-based state machine that supports massively parallel, multi-match filter predicate evaluation, as well as efficient arbitrary field extraction. GPF+ tracks packet composition during execution and adjusts processing at runtime through warp voting to avoid redundant memory transactions and unnecessary computation. GPF+ additionally incorporates a 128-bit in-thread cache, accelerated through register shuffling, to speed up access to packet data in slow GPU global memory. GPF+ uses a high-level DSL to simplify protocol and filter creation, whilst better facilitating protocol reuse. The system is supported by a pipeline of multi-threaded high-performance host components, which communicate asynchronously through 0MQ messaging middleware to buffer, index, and dispatch packet data on the host system. The system was evaluated using high-end Kepler (Nvidia GTX Titan) and entry-level Maxwell (Nvidia GTX 750) GPUs. The results of this evaluation showed high system performance, limited only by device-side IO (600 MBps) in all tests.
GPF+ maintained high occupancy and device utilisation in all tests, without significant serialisation, and showed improved scaling to more complex filter sets. Results were used to visualise captures of up to 160 GB in seconds, and to extract and pre-filter captures small enough to be easily analysed in applications such as Wireshark.
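The multi-match filter predicate evaluation that GPF+ performs in parallel on the GPU can be illustrated serially with a simple sketch (illustrative code, not GPF+; the byte offsets assume an untagged Ethernet II + IPv4 frame): every filter is evaluated against the packet, and all matching labels are returned rather than just the first.

```python
def classify(packet: bytes, filters):
    """Multi-match classification: check every filter and return all
    matching labels. Each filter is a list of (byte offset, value)
    predicates that must all hold. GPF+ evaluates such predicates
    massively in parallel on the GPU; this sketch does so serially."""
    matches = []
    for label, predicates in filters.items():
        if all(packet[off] == val for off, val in predicates):
            matches.append(label)
    return matches

# Illustrative filters over an untagged Ethernet II + IPv4 frame:
# EtherType 0x0800 at offsets 12-13, IPv4 protocol byte at offset 23.
FILTERS = {
    "ipv4": [(12, 0x08), (13, 0x00)],
    "tcp":  [(12, 0x08), (13, 0x00), (23, 6)],
}
```

The multi-match property matters for trace analysis: a single packet can contribute to several protocol counters (e.g. both "ipv4" and "tcp") in one pass.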

    Interactive visualization of event logs for cybersecurity

    Hidden cyber threats revealed with new visualization software Eventpa