
    Empirical analysis of traffic to establish a profiled flow termination timeout

    The exponential growth of bandwidth on the Internet has made online traffic classification a highly demanding task. All the operations in the classification process must be efficiently implemented in order to deal with an enormous amount of data. A key point in this process is the selection of a flow termination criterion, a decision that has important consequences for several traffic classification techniques (e.g., DPI-based, Machine Learning-based). For instance, properly expiring flows reduces the amount of memory required and avoids the erroneous computation of flow features. In addition, the heterogeneous behaviour of applications on the Internet has rendered the traditional techniques for determining flow termination (i.e., TCP 3/4-way handshake, TCP timeout) insufficient. In this paper, we first perform a comprehensive study of flow termination by application group. The results confirm that traditional techniques are no longer sufficient to determine flow termination (i.e., fewer than 50% of flows finish with a TCP handshake for some groups). To address this new scenario, we propose a profiled (i.e., per application group) flow termination timeout. This solution has been evaluated in a well-known commercial DPI tool (Ipoque's PACE engine), achieving a drastic reduction in memory usage while keeping the same computational cost and classification accuracy. In order to obtain representative results, two completely different traces have been analysed: one from the core network of a large ISP and another from the edge link of a mobile operator.
    Peer Reviewed
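    The idea of a profiled timeout can be illustrated with a minimal sketch: a flow table that expires each flow according to the idle timeout of its application group, rather than a single global timeout. The group names and timeout values below are hypothetical placeholders; the actual profiled values are derived empirically in the paper and are not reproduced here.

    ```python
    # Hypothetical per-application-group idle timeouts (seconds).
    # The paper derives the real values empirically; these are placeholders.
    GROUP_TIMEOUTS = {"web": 15, "p2p": 60, "streaming": 30}
    DEFAULT_TIMEOUT = 30

    class FlowTable:
        """Expires flows with a profiled (per-group) idle timeout
        instead of one global TCP timeout."""

        def __init__(self):
            self.flows = {}  # flow 5-tuple -> (app_group, last_seen_timestamp)

        def update(self, key, group, now):
            # Record the latest packet seen for this flow.
            self.flows[key] = (group, now)

        def expire(self, now):
            # Remove flows idle longer than their group's profiled timeout.
            expired = [k for k, (g, last) in self.flows.items()
                       if now - last > GROUP_TIMEOUTS.get(g, DEFAULT_TIMEOUT)]
            for k in expired:
                del self.flows[k]
            return expired
    ```

    A web flow idle for 20 seconds would be expired (timeout 15), while a p2p flow with the same idle time would be kept (timeout 60) — this differentiated expiration is what reduces the flow-table memory footprint.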

    Network traffic classification: from theory to practice

    Since its inception, the Internet has been in constant transformation. The analysis and monitoring of data networks try to shed some light on this huge black box of interconnected computers. In particular, the classification of network traffic has become crucial for understanding the Internet. In recent years, the research community has proposed many solutions to accurately identify and classify network traffic. However, the continuous evolution of Internet applications and their techniques to avoid detection make their identification a very challenging task, which is far from being completely solved. This thesis addresses the network traffic classification problem from a more practical point of view, filling the gap between the real-world requirements of the network industry and the research carried out.

    The first block of this thesis aims to facilitate the deployment of existing techniques in production networks. To achieve this goal, we study the viability of using NetFlow, a monitoring protocol already implemented in most routers, as input to our classification technique. Since the application of packet sampling has become almost mandatory in large networks, we also study its impact on classification and propose a method to improve accuracy in this scenario. Our results show that it is possible to achieve high accuracy with both sampled and unsampled NetFlow data, despite the limited information provided by NetFlow.

    Once a classification solution is deployed, it is important to maintain its accuracy over time. Current network traffic classification techniques have to be regularly updated to adapt to traffic changes. The second block of this thesis focuses on this issue, with the goal of automatically maintaining the classification solution without human intervention. Using the knowledge from the first block, we propose a classification solution that combines several techniques using only Sampled NetFlow as input. We then show that classification models suffer from temporal and spatial obsolescence and, therefore, design an autonomic retraining system that automatically updates the models and keeps the classifier accurate over time. Going one step further, we next introduce the use of stream-based Machine Learning techniques for network traffic classification. In particular, we propose a classification solution based on Hoeffding Adaptive Trees. Apart from the inherent advantages of stream-based techniques (i.e., processing one instance at a time and inspecting it only once, with a predefined amount of memory and a bounded amount of time), our technique automatically adapts to changes in the traffic using only NetFlow data as input.

    The third block of this thesis aims to be a first step towards the impartial validation of state-of-the-art classification techniques. The wide range of techniques, datasets, and ground-truth generators makes the comparison of different traffic classifiers a very difficult task. To this end, we evaluate the reliability of different Deep Packet Inspection (DPI) techniques commonly used in the literature for ground-truth generation. Our results show that some well-known DPI techniques present several limitations that make them, in their current state, not recommendable as ground-truth generators. In addition, we publish some of the datasets used in our evaluations to address the lack of publicly available datasets and to make the comparison and validation of existing techniques easier.
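    The stream-based requirements mentioned above (one pass per instance, bounded memory, incremental model updates) can be illustrated with a toy sketch. The classifier below is a deliberately simple nearest-class-mean model standing in for the Hoeffding Adaptive Tree used in the thesis, and the flow features and application labels are hypothetical.

    ```python
    from collections import defaultdict

    class StreamingMeanClassifier:
        """Toy stand-in for a stream-based classifier such as a Hoeffding
        Adaptive Tree: each instance is inspected exactly once, memory is
        bounded by (classes x features), and the model updates incrementally."""

        def __init__(self, n_features):
            self.counts = defaultdict(int)                      # class -> instance count
            self.sums = defaultdict(lambda: [0.0] * n_features) # class -> feature sums

        def learn_one(self, x, y):
            # Single-pass update: accumulate feature sums for the class.
            self.counts[y] += 1
            s = self.sums[y]
            for i, v in enumerate(x):
                s[i] += v

        def predict_one(self, x):
            # Predict the class whose running mean is closest (squared distance).
            best, best_d = None, float("inf")
            for y, s in self.sums.items():
                mean = [v / self.counts[y] for v in s]
                d = sum((a - b) ** 2 for a, b in zip(x, mean))
                if d < best_d:
                    best, best_d = y, d
            return best
    ```

    Each NetFlow record would be fed through `learn_one` exactly once and never stored, which is the property that keeps memory bounded regardless of the length of the traffic stream; the Hoeffding Adaptive Tree additionally detects and adapts to concept drift, which this sketch does not attempt.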