202 research outputs found

    A Study on TCP Throughput Prediction and Adaptive Rate Control in Mobile Networks

    Get PDF
    Waseda University degree record number: Shin 8115

    On the constancy of internet path properties

    Get PDF

    System identification of computer networks with random service

    Get PDF
    [no abstract]

    Detecting network attacks using high-resolution time series

    Get PDF
    Research in the detection of cyber-attacks has sky-rocketed in the recent past. However, there remains a striking gap between the usage of the proposed algorithms in academic research versus industrial applications. Leading researchers have argued that efforts toward understanding the proposed detectors are lacking. By digging deeper into their inner workings and critically evaluating their underlying assumptions, better detectors may be built. The aim of this thesis is therefore to provide an underlying theory for understanding a single class of detection algorithms, in particular, anomaly-based network intrusion detection algorithms that utilise high-resolution time series data. A framework is proposed to deconstruct the algorithms into their constituent components (windows, representations, and deviations). The framework is applied to a class of algorithms, allowing one to construct a “space” of algorithms spanned by five variables: windowing procedure, information availability, single- or multi-aggregated representation, marginal distribution model, and deviation. The detection of a simple class of Denial-of-Service (DoS) attacks is modelled as a detection theoretic problem. It is shown that the effect of incomplete information is greatest when detecting low-intensity attacks (less than 5%), though the effect slowly decays as the attack intensity increases. Next, the representation and deviation components are jointly analysed via a proposed experimental procedure using network traffic from two publicly available datasets: the Measurement and Analysis on the WIDE Internet (MAWI) archive and the Booters dataset. The experimental analysis shows that varying the representation (single- versus multi-aggregated) has little effect on detection accuracy, and that the likelihood deviation is superior to the L2 distance deviation, although the difference is negligible for large-intensity attacks (approximately 80%).
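The deconstruction described above (windowing procedure, marginal distribution model, deviation) can be sketched on synthetic data. This is an illustrative toy example, not the thesis's actual experimental setup: the packet-count series, window size, Gaussian marginal model, and the ~5% attack intensity are all assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical high-resolution packet-count series (counts per bin),
# with a low-intensity flood injected into the second half.
baseline = rng.poisson(lam=100, size=2000).astype(float)
attack = baseline.copy()
attack[1000:] += rng.poisson(lam=5, size=1000)  # ~5% intensity increase

def windows(x, size=200):
    """Non-overlapping windowing procedure (one component of the framework)."""
    return x[: len(x) // size * size].reshape(-1, size)

# Marginal distribution model (here: Gaussian) fitted on attack-free data.
train = windows(baseline)
mu, sigma = train.mean(), train.std()

def l2_deviation(w):
    # L2 distance between the window's empirical mean and the model mean.
    return (w.mean() - mu) ** 2

def likelihood_deviation(w):
    # Negative average Gaussian log-likelihood of the window under the model.
    return np.mean(0.5 * ((w - mu) / sigma) ** 2 + np.log(sigma))

for name, dev in [("L2", l2_deviation), ("likelihood", likelihood_deviation)]:
    scores = np.array([dev(w) for w in windows(attack)])
    print(name, "clean-half mean:", scores[:5].mean().round(3),
          "attacked-half mean:", scores[5:].mean().round(3))
```

Swapping out the `dev` function while keeping the windowing and model fixed is exactly the kind of component-wise comparison the framework enables.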

    From statistical- to machine learning-based network traffic prediction

    Get PDF
    Nowadays, due to the continuous and exponential expansion of new paradigms such as the Internet of Things (IoT), the Internet of Vehicles (IoV), and 6G, the world is witnessing a sharp increase in network traffic. In such large-scale, heterogeneous, and complex networks, the volume of transferred data, as big data, poses a challenge that causes various networking inefficiencies. To overcome these challenges, various techniques, collectively called Network Traffic Monitoring and Analysis (NTMA), have been introduced to monitor network performance. Network Traffic Prediction (NTP) is a significant subfield of NTMA focused mainly on predicting future network load and its behavior. NTP techniques can generally be realized in two ways, that is, statistical- and Machine Learning (ML)-based. In this paper, we provide a study of existing NTP techniques by reviewing, investigating, and classifying the recent relevant works in this field. Additionally, we discuss the challenges and future directions of NTP, showing how ML and statistical techniques can be used to address them.

    The Efficiency of Transport Protocols in Current and Future Mobile Networks

    Get PDF
    Legacy transport protocols like TCP and its variants were designed mainly for static, fixed networks and perform inefficiently over cellular networks. Although these protocols have seen major improvements over the past years, they still suffer from the high channel variability of cellular networks. Thomas Pötsch investigates the channel properties of cellular networks and analyzes the effects that cause performance degradation of transport protocols. Inspired by these findings, a novel delay-based congestion control protocol called Verus is proposed and evaluated across a variety of network scenarios. Further, the author develops a stochastic two-dimensional discrete-time Markov modeling approach that dramatically simplifies the understanding of delay-based congestion control protocols.

    Predicting Internet Traffic Bursts Using Extreme Value Theory

    Get PDF
    Computer networks play an important role in today’s organizations and in people’s lives. These interconnected devices share a common medium and tend to compete for it. Quality of Service (QoS) comes into play to define what level of service users get, so accurately defining QoS metrics is important. Bursts and serious deteriorations are omnipresent in Internet traffic and are considered important aspects of it. This thesis examines bursts and serious deteriorations in Internet traffic and applies Extreme Value Theory (EVT) to their prediction and modelling. EVT is a field of statistics that has long been applied in areas such as hydrology and finance, with only a recent introduction to telecommunications. Model fitting is based on real traces from the Bellcore laboratory, along with simulated traces based on fractional Gaussian noise and linear fractional alpha-stable motion. QoS traces from the University of Napoli are also used in the prediction stage. Three EVT methods are successfully applied to the burst-prediction problem: the Block Maxima (BM) method, the Peaks Over Threshold (POT) method, and the R-Largest Order Statistics (RLOS) method. A clear methodology is developed for the burst-prediction problem, and new QoS metrics are suggested based on Return Level and Return Period. Thus, robust QoS metrics can be defined, and in turn a superior QoS will be obtained that would support mission-critical applications.
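The POT method and the return-level metric mentioned above can be sketched as follows. This is a minimal illustration, not the thesis's procedure: the heavy-tailed synthetic traffic stands in for the Bellcore traces, and the 99th-percentile threshold is an arbitrary choice.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)

# Hypothetical per-interval traffic volumes with heavy-tailed bursts
# (a classical Pareto with tail index 3, so the GPD shape is about 1/3).
traffic = rng.pareto(a=3.0, size=50_000) + 1.0

# Peaks Over Threshold (POT): model exceedances above a high threshold
# with the Generalised Pareto Distribution (GPD).
u = np.quantile(traffic, 0.99)
exceedances = traffic[traffic > u] - u
xi, _, beta = genpareto.fit(exceedances, floc=0.0)

def return_level(m, u, xi, beta, p_exceed):
    """Burst size exceeded on average once every m observations
    (the Return-Level style QoS metric, for GPD shape xi != 0)."""
    return u + (beta / xi) * ((m * p_exceed) ** xi - 1.0)

p_u = exceedances.size / traffic.size  # empirical P(X > u)
print("10k-observation return level:",
      round(return_level(10_000, u, xi, beta, p_u), 2))
```

The return period is the inverse view of the same quantity: given a burst size, it is the average number of observations between exceedances of that size.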

    Performance modelling with adaptive hidden Markov models and discriminatory processor sharing queues

    Get PDF
    In modern computer systems, workload varies at different times and locations. It is important to model the performance of such systems via workload models that are both representative and efficient. For example, model-generated workloads represent realistic system behaviour, especially during peak times, when it is crucial to predict and address performance bottlenecks. In this thesis, we model performance, namely throughput and delay, using adaptive models and discrete queues. Hidden Markov models (HMMs) parsimoniously capture the correlation and burstiness of workloads with spatiotemporal characteristics. By adapting the batch training of standard HMMs to incremental learning, online HMMs act as benchmarks on workloads obtained from live systems (i.e. storage systems and financial markets) and reduce time complexity of the Baum-Welch algorithm. Similarly, by extending HMM capabilities to train on multiple traces simultaneously it follows that workloads of different types are modelled in parallel by a multi-input HMM. Typically, the HMM-generated traces verify the throughput and burstiness of the real data. Applications of adaptive HMMs include predicting user behaviour in social networks and performance-energy measurements in smartphone applications. Equally important is measuring system delay through response times. For example, workloads such as Internet traffic arriving at routers are affected by queueing delays. To meet quality of service needs, queueing delays must be minimised and, hence, it is important to model and predict such queueing delays in an efficient and cost-effective manner. Therefore, we propose a class of discrete, processor-sharing queues for approximating queueing delay as response time distributions, which represent service level agreements at specific spatiotemporal levels. 
We adapt discrete queues to model job arrivals with distributions given by a Markov-modulated Poisson process (MMPP) and served under discriminatory processor-sharing scheduling. Further, we propose a dynamic strategy of service allocation to minimise delays in UDP traffic flows whilst maximising a utility function.