    Congestion Prediction in Internet of Things Network using Temporal Convolutional Network: A Centralized Approach

    The unprecedented growth of network traffic, specifically Internet of Things (IoT) network traffic, has placed significant congestion stress on today's Internet. Non-recurring network traffic flows may be caused by temporary disruptions such as packet drops, poor quality of service, and delay. Hence, network traffic flow estimation is important in IoT networks to predict congestion. Because the data in IoT networks is collected from a large number of diverse devices that produce data in differing formats and exhibit complex correlations, the generated data is heterogeneous and nonlinear in nature. Conventional machine learning approaches are unable to deal with nonlinear datasets and suffer from misclassification of real network traffic due to overfitting. It therefore becomes very hard for conventional machine learning tools, such as shallow neural networks, to predict congestion accurately. The accuracy of a congestion prediction algorithm plays an important role in controlling congestion by regulating the send rate of the source. Various deep learning methods (LSTM, CNN, GRU, etc.) have been considered in designing network traffic flow predictors and have shown promising results. In this work, we propose a novel congestion predictor for IoT that uses a Temporal Convolutional Network (TCN). Furthermore, we use the Taguchi method to optimize the TCN model, which reduces the number of experimental runs. We compare the TCN with four other deep learning-based models with respect to Mean Absolute Error (MAE) and Mean Relative Error (MRE). The experimental results show that the TCN-based deep learning framework achieves improved performance, with 95.52% accuracy in predicting network congestion. Further, we design a home IoT network testbed to capture real network traffic flows, as no standard dataset is available.
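    The abstract does not include an implementation, so the following is a minimal sketch of the core TCN building block, a stack of dilated causal convolutions, applied to one-step traffic prediction. The class names, layer sizes, and input shape are illustrative assumptions, not the authors' configuration.

        import torch
        import torch.nn as nn

        class CausalConv1d(nn.Module):
            """1D convolution padded on the left so outputs never see future samples."""
            def __init__(self, in_ch, out_ch, kernel_size, dilation):
                super().__init__()
                self.pad = (kernel_size - 1) * dilation
                self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

            def forward(self, x):
                x = nn.functional.pad(x, (self.pad, 0))  # pad the past side only
                return self.conv(x)

        class TrafficTCN(nn.Module):
            """Hypothetical TCN: dilated causal conv stack plus a linear head
            that predicts the next traffic-flow value from a history window."""
            def __init__(self, channels=32, levels=4, kernel_size=3):
                super().__init__()
                layers, in_ch = [], 1
                for i in range(levels):
                    layers += [CausalConv1d(in_ch, channels, kernel_size, 2 ** i),
                               nn.ReLU()]
                    in_ch = channels
                self.body = nn.Sequential(*layers)
                self.head = nn.Linear(channels, 1)

            def forward(self, x):                 # x: (batch, 1, window)
                h = self.body(x)                  # (batch, channels, window)
                return self.head(h[:, :, -1])     # predict from the last step

        model = TrafficTCN()
        window = torch.randn(8, 1, 64)            # 8 windows of 64 past samples
        next_flow = model(window)                 # (8, 1) predicted next values

    Doubling the dilation at each level grows the receptive field exponentially, which is what lets a TCN cover long traffic histories with few layers.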

    Performance Analytics of Cloud Networks

    As the world becomes more interconnected and dependent on the Internet, networks become ever more pervasive, and the stresses placed upon them more demanding. The expectations placed on networks to maintain a high level of performance have likewise increased. Network performance is highly important to any business that operates online, depends on web traffic, runs any part of its infrastructure in a cloud environment, or hosts its own network infrastructure. Depending upon the exact nature of a network, whether it is local or wide-area, 10 or 100 Gigabit, it will have distinct performance characteristics, and it is important for a business or individual operating on the network to understand those characteristics and how they affect operations. To better understand our networks, we must test them to measure their performance capabilities and track these metrics over time. In our work, we provide an in-depth analysis of how best to run cloud benchmarks to increase our network intelligence, and of how the results of those benchmarks can be used to predict future performance and identify performance anomalies. To achieve this, we explain how to effectively run cloud benchmarks and propose a scheduling algorithm for running large numbers of cloud benchmarks daily. We then use the performance data gathered with this method to conduct a thorough analysis of the performance characteristics of a cloud network, train neural networks to forecast future throughput from historical results, and detect performance anomalies as they occur.
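    The forecasting and anomaly detection models themselves are not specified in the abstract; as a minimal sketch under that caveat, the snippet below flags throughput measurements that deviate sharply from a rolling baseline, the simplest form of the anomaly detection described. The window size and threshold are illustrative assumptions.

        import numpy as np

        def zscore_anomalies(series, window=24, threshold=3.0):
            """Flag samples more than `threshold` std-devs from the rolling mean."""
            flags = np.zeros(len(series), dtype=bool)
            for i in range(window, len(series)):
                hist = series[i - window:i]
                mu, sigma = hist.mean(), hist.std()
                if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
                    flags[i] = True
            return flags

        # Synthetic daily throughput benchmark results in Mbit/s
        rng = np.random.default_rng(0)
        throughput = 900 + 25 * rng.standard_normal(365)
        throughput[200] = 400          # injected dip, e.g. a congested path
        print(np.where(zscore_anomalies(throughput))[0])  # includes index 200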

    IoT Device Identification Using Device Fingerprint and Deep Learning

    The foundation of security in IoT devices lies in their identity. However, traditional identification parameters, such as MAC address, IP address, and IMEI, are vulnerable to sniffing and spoofing attacks. To address this issue, this paper proposes a novel approach to device identification using device fingerprinting and deep learning. Device fingerprints are generated by analyzing the inter-arrival time (IAT), round trip time (RTT), or IAT/RTT outliers of packets used for communication in networks. We trained deep learning models, namely a convolutional neural network (CNN) and a CNN + LSTM (long short-term memory) network, using device fingerprints generated from TCP, UDP, and ICMP packet types and their outliers. Our results show that the CNN model performs better than the CNN + LSTM model. Specifically, the CNN model achieves an accuracy of 0.97 using the IAT device fingerprint of the ICMP packet type, and 0.9648 using the IAT outlier device fingerprint of the ICMP packet type, on a publicly available dataset from the CRAWDAD repository.
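    The abstract does not define how an IAT fingerprint is assembled, so the sketch below makes one concrete assumption: the fingerprint is a fixed-size histogram of a device's packet inter-arrival times, with outliers taken via Tukey fences. The function names, bin count, and thresholds are hypothetical.

        import numpy as np

        def iat_fingerprint(timestamps, bins=64, max_iat=1.0):
            """Hypothetical IAT fingerprint: a normalized histogram of a device's
            packet inter-arrival times, usable as a fixed-size model input."""
            iats = np.diff(np.sort(np.asarray(timestamps)))
            hist, _ = np.histogram(np.clip(iats, 0, max_iat),
                                   bins=bins, range=(0, max_iat))
            return hist / max(hist.sum(), 1)   # normalize across devices

        def iat_outliers(timestamps, k=1.5):
            """IATs outside the Tukey fences (Q1 - k*IQR, Q3 + k*IQR)."""
            iats = np.diff(np.sort(np.asarray(timestamps)))
            q1, q3 = np.percentile(iats, [25, 75])
            iqr = q3 - q1
            return iats[(iats < q1 - k * iqr) | (iats > q3 + k * iqr)]

        # Synthetic ICMP packet timestamps (seconds) captured from one device
        ts = np.cumsum(np.random.default_rng(1).exponential(0.05, size=500))
        fp = iat_fingerprint(ts)        # 64-dim vector fed to the CNN
        print(fp.shape, iat_outliers(ts).size)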

    Network traffic data analysis

    The desire to characterize network traffic in an operational communication network underlies many types of network research. In this research, real traffic traces collected over trans-Pacific backbone links (the MAWI repository, which provides publicly available anonymized traces) are analyzed to study the underlying traffic patterns. All data analysis and visualization is carried out using Matlab (Matlab is a trademark of The MathWorks, Inc.). At the packet level, we first measure parameters such as the distribution of packet lengths and the distribution of protocol types, and then fit analytical models to them. Next, the concept of a flow is introduced and flow-based analysis is studied. We consider flow-related parameters such as the top ports seen, flow durations, the distribution of flow lengths, and the number of flows under different timeout values, and provide analytical models to fit the flow lengths. Further, we study the amount of data flowing between source-destination pairs. Finally, we focus on TCP-specific aspects of the captured traces, such as retransmissions and packet round-trip times. From the results obtained, we infer the Zipf-type nature of the distribution of the number of flows, the heavy-tailedness of flow sizes, and the contribution of well-known ports at the packet and flow level. Our study helps a network analyst to deepen their knowledge and to optimize network resources while performing efficient traffic engineering.
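    As a concrete illustration of the flow concept used above, the sketch below groups packets into flows by their 5-tuple and splits a flow when the idle gap exceeds a timeout, the parameter varied in the study. The packet-tuple layout and the 60-second default are assumptions for illustration.

        from collections import defaultdict

        def packets_to_flows(packets, timeout=60.0):
            """Group packets into flows keyed by the 5-tuple, starting a new
            flow whenever the idle gap exceeds `timeout` seconds.
            Each packet is (time, src, dst, sport, dport, proto, length)."""
            last_seen = {}       # 5-tuple -> (flow id, time of last packet)
            flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
            next_id = 0
            for t, src, dst, sport, dport, proto, length in sorted(packets):
                key = (src, dst, sport, dport, proto)
                fid, last = last_seen.get(key, (None, None))
                if fid is None or t - last > timeout:
                    fid, next_id = next_id, next_id + 1   # new flow starts
                last_seen[key] = (fid, t)
                flows[fid]["packets"] += 1
                flows[fid]["bytes"] += length
            return flows

        pkts = [(0.0, "10.0.0.1", "10.0.0.2", 1234, 80, "TCP", 60),
                (0.5, "10.0.0.1", "10.0.0.2", 1234, 80, "TCP", 1500),
                (120.0, "10.0.0.1", "10.0.0.2", 1234, 80, "TCP", 60)]
        print(len(packets_to_flows(pkts)))   # -> 2: the 120 s gap splits the flow

    Shrinking the timeout splits long-lived conversations into more, shorter flows, which is why the study reports flow counts under several timeout values.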

    QoE-Based Low-Delay Live Streaming Using Throughput Predictions

    Recently, HTTP-based adaptive streaming has become the de facto standard for video streaming over the Internet. It allows clients to dynamically adapt media characteristics to network conditions in order to ensure a high quality of experience, that is, to minimize playback interruptions while maximizing video quality at a reasonable level of quality changes. In the case of live streaming, this task becomes particularly challenging due to latency constraints. The challenge increases further if a client uses a wireless network, where the throughput is subject to considerable fluctuations. Consequently, live streams often exhibit latencies of up to 30 seconds. In the present work, we introduce an adaptation algorithm for HTTP-based live streaming called LOLYPOP (Low-Latency Prediction-Based Adaptation) that is designed to operate with a transport latency of a few seconds. To reach this goal, LOLYPOP leverages TCP throughput predictions on multiple time scales, from 1 to 10 seconds, along with an estimate of the prediction error distribution. In addition to satisfying the latency constraint, the algorithm heuristically maximizes the quality of experience by maximizing the average video quality as a function of the number of skipped segments and quality transitions. In order to select an efficient prediction method, we studied the performance of several time series prediction methods in IEEE 802.11 wireless access networks. We evaluated LOLYPOP under a large set of experimental conditions, limiting the transport latency to 3 seconds, against a state-of-the-art adaptation algorithm from the literature called FESTIVE. We observed that the average video quality is up to a factor of 3 higher than with FESTIVE. We also observed that LOLYPOP is able to reach a broader region in the quality of experience space, and thus is better adjustable to the user profile or service provider requirements.

    Comment: Technical Report TKN-16-001, Telecommunication Networks Group, Technische Universitaet Berlin. This TR updates TR TKN-15-00
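    LOLYPOP's actual adaptation logic is more involved than any short excerpt can show; the sketch below only illustrates the central idea of discounting a throughput prediction by an empirical error quantile before committing to a bitrate. The EWMA predictor, the 10% risk level, and the function names are assumptions, not the paper's method.

        import numpy as np

        def predict_throughput(samples, alpha=0.9):
            """Toy EWMA predictor standing in for the paper's time-series methods."""
            est = samples[0]
            for s in samples[1:]:
                est = alpha * est + (1 - alpha) * s
            return est

        def safe_bitrate(samples, bitrates, risk=0.1):
            """Pick the highest bitrate that fits the prediction after
            discounting it by an empirical relative-error quantile."""
            preds = np.array([predict_throughput(samples[:i])
                              for i in range(2, len(samples))])
            rel_err = (np.array(samples[2:]) - preds) / preds
            margin = np.quantile(rel_err, risk)      # e.g. 10th-percentile error
            usable = predict_throughput(samples) * (1 + min(margin, 0.0))
            feasible = [b for b in sorted(bitrates) if b <= usable]
            return feasible[-1] if feasible else min(bitrates)

        tput = [4.2, 3.8, 5.1, 4.0, 2.9, 4.4, 3.6]   # Mbit/s throughput samples
        print(safe_bitrate(tput, bitrates=[0.5, 1.0, 2.5, 4.0]))

    Using a low error quantile as the safety margin trades some average quality for fewer skipped segments, which mirrors the quality-of-experience trade-off the paper optimizes.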