
    Network Digest analysis by means of association rules

    The continuous growth in connection speed allows huge amounts of data to be transferred through a network. An important issue in this context is network traffic analysis to profile communications and detect security threats. Association rule extraction is a widely used exploratory technique that has been exploited in different contexts (e.g., network traffic characterization). However, discovering potentially relevant knowledge requires enforcing a very low support threshold, which generates an unmanageably large number of rules. To address this issue in network traffic analysis, an efficient technique to reduce traffic volume is needed. This paper presents the NEtwork Digest (NED) framework, which performs network traffic analysis by means of data mining techniques to characterize traffic data and detect anomalies. NED exploits continuous queries to efficiently perform real-time aggregation of captured network data and supports filtering operations to further reduce traffic volume by focusing on relevant data. Furthermore, NED provides an efficient algorithm to perform refinement analysis by means of association rules to discover traffic features. The extracted rules characterize traffic data in terms of the correlation and recurrence of feature patterns. Preliminary experimental results on different network dumps show the efficiency and effectiveness of the NED framework in characterizing traffic data.
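    The refinement step described above can be illustrated with a minimal association-rule miner over aggregated flow records. The feature items, thresholds, and pairwise-only rules below are illustrative simplifications of the general technique, not NED's actual algorithm:

```python
from itertools import combinations
from collections import Counter

def mine_rules(transactions, min_support, min_confidence):
    """Extract pairwise association rules (A -> B) from feature itemsets."""
    n = len(transactions)
    item_counts = Counter(i for t in transactions for i in set(t))
    pair_counts = Counter(p for t in transactions
                          for p in combinations(sorted(set(t)), 2))
    rules = []
    for (a, b), c in pair_counts.items():
        support = c / n                      # fraction of records with both items
        if support < min_support:
            continue
        for lhs, rhs in ((a, b), (b, a)):    # try the rule in both directions
            confidence = c / item_counts[lhs]
            if confidence >= min_confidence:
                rules.append((lhs, rhs, support, confidence))
    return rules

# Hypothetical aggregated flow records, each a set of feature items.
flows = [
    {"dport=80", "proto=TCP", "len=short"},
    {"dport=80", "proto=TCP", "len=long"},
    {"dport=53", "proto=UDP", "len=short"},
    {"dport=80", "proto=TCP", "len=short"},
]
for lhs, rhs, s, c in mine_rules(flows, 0.5, 0.9):
    print(f"{lhs} -> {rhs}  support={s:.2f} confidence={c:.2f}")
```

    With these thresholds, the only surviving rules pair web traffic with TCP; lowering the support threshold would surface more (and noisier) rules, which is exactly the rule-explosion problem the abstract mentions.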

    Control of transport dynamics in overlay networks

    Transport control is an important factor in the performance of Internet protocols, particularly in next-generation network applications involving computational steering, interactive visualization, instrument control, and transfer of large data sets. The widely deployed Transmission Control Protocol (TCP) is inadequate for these tasks due to its performance drawbacks. The purpose of this dissertation is to conduct a rigorous analytical study on the design and performance of transport protocols, and to systematically develop a new class of protocols that overcomes the limitations of current methods. Various sources of randomness exist in network performance measurements due to the stochastic nature of network traffic. We propose a new class of transport protocols that explicitly accounts for this randomness based on dynamic stochastic approximation methods. These protocols use the congestion window and idle time to dynamically control the source rate to achieve transport objectives. We conduct statistical analyses to determine the main effects of these two control parameters and their interaction effects. The application of stochastic approximation methods enables us to show the analytical stability of the transport protocols and to avoid pre-selecting the flow and congestion control parameters. These new protocols are successfully applied to transport control for both goodput stabilization and maximization. The experimental results show superior performance compared to current methods, particularly for Internet applications. To effectively deploy these protocols over the Internet, we develop an overlay network, which resides at the application level to provide data transmission service using the User Datagram Protocol (UDP). The overlay network, together with the new UDP-based protocols, provides an effective environment for implementing transport control using application-level modules.
    We also study problems in overlay networks such as path bandwidth estimation and multiple quickest path computation. In wireless networks, most packet losses are caused by physical signal losses and do not necessarily indicate network congestion. Furthermore, the physical link connectivity in ad-hoc networks deployed in unstructured areas is unpredictable. We develop the Connectivity-Through-Time protocols that exploit node movements to deliver data under dynamic connectivity. We integrate these protocols into overlay networks and present experimental results using the network to support a team of mobile robots.
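    The core idea of goodput stabilization via stochastic approximation can be sketched as a Robbins-Monro iteration: the congestion window is nudged toward the target using noisy goodput measurements, with a decreasing gain so no control parameter must be pre-selected. The goodput model, gain schedule, and single control variable below are toy assumptions, not the dissertation's actual protocol:

```python
import random

def stabilize_rate(target, steps=500, seed=1):
    """Robbins-Monro iteration: drive the source rate (set via the congestion
    window) toward a target goodput observed through noisy measurements."""
    rng = random.Random(seed)
    window = 1.0                        # congestion window, in packets (toy units)
    for k in range(1, steps + 1):
        # Toy measurement model: goodput tracks the window plus Gaussian noise.
        goodput = window + rng.gauss(0, 0.5)
        gain = 1.0 / k                  # decreasing step size ensures convergence
        window += gain * (target - goodput)
    return window

print(stabilize_rate(target=10.0))
```

    With the 1/k gain, the residual error is essentially the average of the measurement noise, so the iterate settles near the target without any hand-tuned flow-control constants.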

    DSL-based triple-play services

    This research examines the triple-play service based on ADSL technology. Voice over IP is tested and combined with Internet data traffic by two monitoring programs in order to examine the performance that this service offers, which is then compared with the usual method of Internet connection.

    Design, Implementation, and Evaluation of Network Monitoring Tasks with the TelegraphCQ Data Stream Management System: Master's Thesis

    Data stream management systems (DSMSs) provide a new and alternative way to perceive and analyze data streams. Similar to database management systems (DBMSs), DSMSs use a declarative query language to handle data. One of the main differences is that a DSMS obtains its data from streaming sources, for example a local area network (LAN), instead of a database. Such an approach opens up a set of tasks that can be described using, e.g., SQL-like queries. To begin with, we discuss the networking application and introduce a set of requirements that might be useful for DSMSs in general. Some of these requirements are further discussed as we describe the issues in DSMSs. This thesis focuses on one particular DSMS, TelegraphCQ, and we give a thorough description and discussion of its features. We have designed and implemented a set of tasks that may be of value for the network monitoring application as described in this thesis. We discuss these tasks, investigate their qualities and propose how to implement them in the declarative language provided by TelegraphCQ. Finally, we run a performance analysis of some of the tasks to see how TelegraphCQ manages data streams at varying loads. We focus on two metrics: throughput relative to the number of packets received, and the accuracy of the results. These metrics are very important with respect to the reliability and applicability of TelegraphCQ. In this context, we implement an experiment setup for network monitoring with DSMSs, such that the results can be easily re-tested and verified. We show that TelegraphCQ only manages a network load of approximately 2.5 Mbit/s before it starts dropping packets. We end the discussion by evaluating TelegraphCQ's support for the requirements described in the beginning of the thesis, and point out some of the requirements TelegraphCQ does not support. We discuss the results from the performance evaluation and conclude that the accuracy is satisfying.
    The conclusion is that, due to the low relative throughput, TelegraphCQ is not suited for network traffic monitoring at higher network loads.
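    The kind of network monitoring task described above can be mirrored outside a DSMS. The following sketch approximates a sliding-window, SQL-like continuous query (per-source packet counts) in plain Python; the class and field names are hypothetical, and the query in the docstring is illustrative rather than exact TelegraphCQ syntax:

```python
from collections import Counter, deque

class WindowedPacketCounter:
    """Mimics a continuous query along the lines of
    SELECT src, count(*) FROM packets [window of N seconds] GROUP BY src
    evaluated incrementally over a live packet stream."""
    def __init__(self, window_secs):
        self.window = window_secs
        self.packets = deque()          # (timestamp, src) in arrival order
        self.counts = Counter()

    def on_packet(self, ts, src):
        self.packets.append((ts, src))
        self.counts[src] += 1
        # Expire packets that have fallen out of the sliding window.
        while self.packets and self.packets[0][0] <= ts - self.window:
            _, old = self.packets.popleft()
            self.counts[old] -= 1

    def result(self):
        return {s: c for s, c in self.counts.items() if c > 0}

q = WindowedPacketCounter(window_secs=5)
for ts, src in [(0, "10.0.0.1"), (1, "10.0.0.2"), (2, "10.0.0.1"),
                (6, "10.0.0.2")]:
    q.on_packet(ts, src)
print(q.result())   # packets at ts=0 and ts=1 have expired from the window
```

    The thesis's throughput concern maps directly onto `on_packet`: every captured packet must pass through this per-tuple path, which is why packet drops appear once the arrival rate exceeds what the query engine can sustain.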

    Ultra-reliable Low-latency, Energy-efficient and Computing-centric Software Data Plane for Network Softwarization

    Network softwarization plays a significant role in the development and deployment of the latest communication systems for 5G and beyond. A more flexible and intelligent network architecture enables agile network management and the rapid launch of innovative network services, with substantial reductions in Capital Expense (CAPEX) and Operating Expense (OPEX). Despite these benefits, the 5G system also raises unprecedented challenges, as emerging machine-to-machine and human-to-machine communication use cases require Ultra-Reliable Low-Latency Communication (URLLC). According to empirical measurements performed by the author of this dissertation on a practical testbed, State of the Art (STOA) technologies and systems are not able to achieve the one-millisecond end-to-end latency requirement of the 5G standard on Commercial Off-The-Shelf (COTS) servers. This dissertation gives a comprehensive introduction to three innovative approaches that improve different aspects of the current software-driven network data plane. All three approaches are carefully designed, professionally implemented and rigorously evaluated. According to the measurement results, these novel approaches advance research on the design and implementation of an ultra-reliable, low-latency, energy-efficient and computing-centric software data plane for 5G communication systems and beyond.

    A Security Model and Fully Verified Implementation for the IETF QUIC Record Layer

    We investigate the security of the QUIC record layer, as standardized by the IETF in draft version 30. This version features major differences compared to Google's original protocol and prior IETF drafts. We model packet and header encryption, which uses a custom construction for privacy. To capture its goals, we propose a security definition for authenticated encryption with semi-implicit nonces. We show that QUIC uses an instance of a generic construction parameterized by a standard AEAD-secure scheme and a PRF-secure cipher. We formalize and verify the security of this construction in F*. The proof uncovers interesting limitations of nonce confidentiality, due to the malleability of short headers and the ability to choose the number of least significant bits included in the packet counter. We propose improvements that simplify the proof and increase robustness against strong attacker models. In addition to the verified security model, we also give a concrete functional specification for the record layer, and prove that it satisfies important functionality properties (such as successful decryption of encrypted packets) after fixing more errors in the draft. We then provide a high-performance implementation of the record layer that we prove to be memory safe, correct with respect to our concrete specification (inheriting its functional correctness properties), and secure with respect to our verified model. To evaluate this component, we develop a provably-safe implementation of the rest of the QUIC protocol. Our record layer achieves nearly 2 GB/s throughput, and our QUIC implementation's performance is within 21% of an unverified baseline.
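    The semi-implicit nonce at the heart of the analyzed construction can be shown in a few lines. This sketch mirrors the standard QUIC AEAD nonce derivation (the static IV XORed with the left-padded packet number, so the full nonce never travels on the wire); the sample IV is arbitrary:

```python
def quic_nonce(iv: bytes, packet_number: int) -> bytes:
    """Per-packet AEAD nonce as in the QUIC record layer: the packet number
    is left-padded with zeros to the IV length and XORed with the static IV.
    Only (part of) the packet number appears in the packet header, which is
    what makes the nonce 'semi-implicit'."""
    pn = packet_number.to_bytes(len(iv), "big")
    return bytes(a ^ b for a, b in zip(iv, pn))

iv = bytes.fromhex("6b26114b9cba2b63a9e8dd4f")   # arbitrary 12-byte static IV
print(quic_nonce(iv, 0).hex())                   # XOR with zero: equals the IV
print(quic_nonce(iv, 654360564).hex())
```

    Because the receiver reconstructs the packet number from the few low-order bits carried in the protected header, the malleability of those bits is exactly what drives the nonce-confidentiality limitations the abstract describes.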

    Practical Encryption Gateways to Integrate Legacy Industrial Machinery

    Future industrial networks will consist of a mixture of old and new components, due to the very long life-cycles of industrial machines on the one hand and the need to change in the face of trends like Industry 4.0 or the industrial Internet of Things on the other. These networks will be very heterogeneous and will serve legacy as well as new use cases in parallel. This will result in an increased demand for network security, and precisely within this domain, this thesis tries to answer one specific question: how to make it possible for legacy industrial machines to run securely in those future heterogeneous industrial networks. The need for such a solution arises from the fact that legacy machines are very outdated and hence vulnerable systems when assessed from an IT security standpoint. For various reasons, they cannot be easily replaced or upgraded, and with the opening up of industrial networks to the Internet, they become prime attack targets. The only way to provide security for them is by protecting their network traffic. The concept of encryption gateways forms the basis of our solution. These are special network devices placed between the legacy machine and the network. The gateways encrypt data traffic from the machine before it is put on the network and decrypt traffic coming from the network accordingly. This separates the machine from the network, since only traffic from other authenticated gateways is decrypted and passed through. In effect, the gateways protect communication data in transit and shield the legacy machines from potential attackers within the rest of the network, while at the same time retaining their functionality. Additionally, through the specific placement of gateways inside the network, fine-grained security policies become possible. This approach can considerably reduce the attack surface of the industrial network as a whole. As a concept, this idea is straightforward and not new.
    Yet the devil is in the details, and no solution specifically tailored to the needs of the industrial environment and its legacy components existed prior to this work. Therefore, we present in this thesis concrete building blocks toward a generally applicable encryption gateway solution that allows legacy industrial machinery to be integrated securely while respecting industrial requirements. This entails not only work on network security, but also work on guaranteeing the availability of the communication links protected by the gateways, on simplifying the usability of the gateways, and on managing industrial data flows through them.
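    The wrap/unwrap a gateway performs on each frame can be sketched as follows. This is a toy encrypt-then-MAC construction for illustration only; a real gateway would use a vetted AEAD cipher, and the function names are hypothetical, not from the thesis:

```python
import hashlib
import hmac
import secrets

def gw_seal(key: bytes, payload: bytes) -> bytes:
    """Toy gateway framing: encrypt-then-MAC a legacy machine's payload
    before it leaves for the network. NOT production crypto -- this only
    illustrates the wrap/unwrap done on each side of the protected link."""
    nonce = secrets.token_bytes(12)
    stream = hashlib.shake_256(key + nonce).digest(len(payload))
    ct = bytes(p ^ s for p, s in zip(payload, stream))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def gw_open(key: bytes, frame: bytes) -> bytes:
    """Verify and decrypt a frame; reject anything not produced by a peer
    gateway holding the shared key (this is the 'only pass through traffic
    from authenticated gateways' behavior)."""
    nonce, ct, tag = frame[:12], frame[12:-32], frame[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("frame from unauthenticated peer -- dropped")
    stream = hashlib.shake_256(key + nonce).digest(len(ct))
    return bytes(c ^ s for c, s in zip(ct, stream))

key = secrets.token_bytes(32)
frame = gw_seal(key, b"legacy Modbus request")
print(gw_open(key, frame))
```

    The legacy machine never sees this framing: the gateway applies `gw_seal` on egress and `gw_open` on ingress, so the machine's existing, unmodified protocol stack keeps working while everything on the wire is protected.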