
    iTeleScope: Intelligent Video Telemetry and Classification in Real-Time using Software Defined Networking

    Video continues to dominate network traffic, yet operators today have poor visibility into the number, duration, and resolutions of the video streams traversing their domain. Current approaches are inaccurate, expensive, or unscalable, as they rely on statistical sampling, middle-box hardware, or packet inspection software. We present iTeleScope, the first intelligent, inexpensive, and scalable SDN-based solution for identifying and classifying video flows in real-time. Our solution is novel in combining dynamic flow rules with telemetry and machine learning, and is built on commodity OpenFlow switches and open-source software. We develop a fully functional system, train it in the lab using multiple machine learning algorithms, and validate its performance to show over 95% accuracy in identifying and classifying video streams from many providers including YouTube and Netflix. Lastly, we conduct tests to demonstrate its scalability to tens of thousands of concurrent streams, and deploy it live on a campus network serving several hundred real users. Our system gives unprecedented fine-grained real-time visibility of video streaming performance to operators of enterprise and carrier networks at very low cost. Comment: 12 pages, 16 figures.
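    The telemetry-plus-machine-learning pipeline the abstract describes can be caricatured in a few lines: derive simple features from per-flow counters (the kind an OpenFlow switch exports) and classify against labelled traffic classes. This is a minimal sketch, not the paper's system; the features, centroid values, and flow numbers below are all invented for illustration, and a nearest-centroid rule stands in for the multiple ML algorithms the authors actually train.

    ```python
    # Hypothetical sketch: labelling a flow as video / non-video from
    # per-flow telemetry counters. All numbers are illustrative.

    def extract_features(byte_count, pkt_count, duration_s):
        """Reduce raw flow counters to (avg bitrate in kbit/s, avg packet size)."""
        bitrate_kbps = byte_count * 8 / duration_s / 1000
        avg_pkt_size = byte_count / pkt_count
        return (bitrate_kbps, avg_pkt_size)

    def classify(features, centroids):
        """Nearest-centroid classifier over labelled flow classes."""
        def dist2(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(centroids, key=lambda label: dist2(features, centroids[label]))

    # Invented training centroids: video flows tend to have sustained high
    # bitrate and large, near-MTU packets; ordinary web flows are lighter.
    CENTROIDS = {
        "video": (3000.0, 1400.0),   # ~3 Mbit/s, MTU-sized packets
        "web":   (200.0, 600.0),
    }

    flow = extract_features(byte_count=45_000_000, pkt_count=32_000, duration_s=120)
    label = classify(flow, CENTROIDS)   # -> "video" for this synthetic flow
    ```

    In the real system the counters would come from dynamically installed flow rules rather than being passed in by hand, which is what keeps the approach inexpensive compared with packet inspection.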

    Queue Dynamics With Window Flow Control

    This paper develops a new model that describes the queueing process of a communication network when data sources use window flow control. The model takes into account the burstiness on sub-round-trip time (RTT) timescales and the instantaneous rate differences of a flow at different links. It is generic and independent of actual source flow control algorithms. Basic properties of the model and its relation to existing work are discussed. In particular, for a general network with multiple links, it is demonstrated that spatial interaction of oscillations allows queue instability to occur even when all flows have the same RTTs and maintain constant windows. The model is used to study the dynamics of delay-based congestion control algorithms. It is found that the ratios of RTTs are critical to the stability of such systems, and previously unknown modes of instability are identified. Packet-level simulations and testbed measurements are provided to verify the model and its predictions.
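    The classical RTT-averaged view that this paper refines can be stated in two lines: with window flow control and self-clocking, each flow keeps exactly its window W in flight, so the standing bottleneck backlog is the excess of the total window over the bandwidth-delay product. A minimal back-of-envelope sketch (not the paper's sub-RTT model; the link rate and RTT below are illustrative):

    ```python
    # Coarse, RTT-averaged backlog left by constant-window flows at a
    # single bottleneck. The paper's contribution is precisely that this
    # averaged view misses sub-RTT burstiness and multi-link interactions.

    def standing_queue_pkts(windows, link_rate_pps, base_rtt_s):
        """Backlog (packets) left at the bottleneck by constant-window flows."""
        bdp = link_rate_pps * base_rtt_s          # packets absorbed by the pipe
        return max(0.0, sum(windows) - bdp)

    def queueing_delay_s(windows, link_rate_pps, base_rtt_s):
        """Extra delay the standing queue adds on top of the base RTT."""
        return standing_queue_pkts(windows, link_rate_pps, base_rtt_s) / link_rate_pps

    # Illustrative numbers: 10k pkt/s link, 50 ms base RTT -> BDP = 500 pkts.
    q = standing_queue_pkts([300, 300], link_rate_pps=10_000, base_rtt_s=0.05)
    d = queueing_delay_s([300, 300], link_rate_pps=10_000, base_rtt_s=0.05)
    # q = 100 packets standing queue, d = 10 ms added delay
    ```

    In this averaged picture constant windows can never destabilize the queue; the paper's point is that sub-RTT burstiness and spatial interaction across links break exactly that intuition.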

    Measuring Performance of Web Protocol with Updated Transport Layer Techniques for Faster Web Browsing

    The author acknowledges the Electronics Research Group of the University of Aberdeen, UK, for all the support in conducting these experiments. This research was completed as part of the University of Aberdeen dot.rural project (EP/G066051/1).

    The Effect of the Buffer Size in QoS for Multimedia and Bursty Traffic: When an Upgrade Becomes a Downgrade

    This work presents an analysis of the buffer features of an access router, especially the size, the impact on delay and the packet loss rate. In particular, we study how these features can affect the Quality of Service (QoS) of multimedia applications when generating traffic bursts in local networks. First, we show how in a typical SME (Small and Medium Enterprise) network in which several multimedia flows (VoIP, videoconferencing and video surveillance) share access, the upgrade of the bandwidth of the internal network may cause the appearance of a significant amount of packet loss caused by buffer overflow. Secondly, the study shows that the bursty nature of the traffic of some applications (video surveillance) may impair their QoS and that of other services (VoIP and videoconferencing), especially when a certain number of bursts overlap. Various tests have been developed with the aim of characterizing the problems that may appear when network capacity is increased in these scenarios. In some cases, especially when applications generating bursty traffic are present, increasing the network speed may lead to a deterioration in the quality. It has been found that the cause of this quality degradation is buffer overflow, which depends on the bandwidth relationship between the access and the internal networks. In addition, it has been necessary to describe the packet loss distribution by means of a histogram since, although most of the communications present good QoS results, a few of them have worse outcomes. Finally, in order to complete the study we present the MOS results for VoIP calculated from the delay and packet loss rate.
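    The core mechanism of the "upgrade becomes a downgrade" effect can be reproduced with a toy FIFO buffer model: when the internal network is upgraded, traffic bursts arrive in a shorter time than the access link needs to drain them, and a small buffer overflows. This is a hedged sketch of that mechanism only, not the paper's testbed; the burst sizes, drain rate, and buffer sizes below are illustrative.

    ```python
    # Toy discrete-time FIFO buffer fed by periodic bursts (e.g. video
    # surveillance frames). Each burst lands in a single slot; between
    # bursts the access link drains the buffer. All numbers illustrative.

    def burst_loss_rate(burst_pkts, bursts, gap_slots, drain_per_slot, buf_pkts):
        """Fraction of packets dropped at a FIFO buffer of buf_pkts capacity."""
        q = dropped = sent = 0
        for _ in range(bursts):
            sent += burst_pkts
            accepted = min(burst_pkts, buf_pkts - q)   # tail-drop on overflow
            dropped += burst_pkts - accepted
            q += accepted
            for _ in range(gap_slots):                 # access link drains
                q = max(0, q - drain_per_slot)
        return dropped / sent

    # Same offered load, two buffer sizes: the small buffer drops half
    # of every burst, the large one absorbs the bursts entirely.
    small = burst_loss_rate(burst_pkts=40, bursts=50, gap_slots=4,
                            drain_per_slot=10, buf_pkts=20)   # -> 0.5
    large = burst_loss_rate(burst_pkts=40, bursts=50, gap_slots=4,
                            drain_per_slot=10, buf_pkts=80)   # -> 0.0
    ```

    The same model also shows why overlapping bursts from several cameras are worse than the sum of their parts: the instantaneous burst size, not the average rate, is what has to fit in the buffer.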

    A Longitudinal Study of Small-Time Scaling Behavior of Internet Traffic


    Packet Loss Burstiness: Measurements and Implications for Distributed Applications

    Many modern massively distributed systems deploy thousands of nodes to cooperate on a computation task. Network congestion occurs in these systems. Most applications rely on congestion control protocols such as TCP to protect the systems from congestion collapse. Most TCP congestion control algorithms use packet loss as a signal to detect congestion. In this paper, we study the packet loss process at the sub-round-trip-time (sub-RTT) timescale and its impact on loss-based congestion control algorithms. Our study suggests that packet loss at the sub-RTT timescale is very bursty. This burstiness leads to two effects. First, the sub-RTT burstiness in the packet loss process leads to complicated interactions between different loss-based algorithms. Second, the sub-RTT burstiness in the packet loss process makes the latency of data transfers under TCP hard to predict. Our results suggest that the design of a distributed system has to seriously consider the nature of the packet loss process and carefully select the congestion control algorithms best suited for the distributed computation environment.
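    A standard way to quantify the burstiness the abstract describes is to compare the unconditional loss rate of a packet trace with the probability of loss given that the previous packet was also lost; for clustered (bursty) loss the conditional probability is much higher. The sketch below shows that measurement on a synthetic trace (the trace and its numbers are invented, not the paper's data).

    ```python
    # Burstiness of a loss trace: 0 = delivered, 1 = lost, per packet.

    def loss_rates(trace):
        """Return (unconditional loss rate, P[loss | previous packet lost])."""
        p_loss = sum(trace) / len(trace)
        pairs = sum(1 for a, b in zip(trace, trace[1:]) if a == 1 and b == 1)
        prev_lost = sum(trace[:-1])
        p_cond = pairs / prev_lost if prev_lost else 0.0
        return p_loss, p_cond

    # Synthetic trace: 100 packets, 10 losses clustered into two bursts of 5.
    trace = [0] * 40 + [1] * 5 + [0] * 40 + [1] * 5 + [0] * 10
    p, pc = loss_rates(trace)   # p = 0.10, pc = 0.80
    ```

    For an independent (Bernoulli) loss process the two numbers would coincide; the large gap (0.10 vs 0.80 here) is exactly the sub-RTT clustering that makes TCP transfer latency hard to predict.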

    Modeling memory effects in activity-driven networks

    Activity-driven networks (ADNs) have recently emerged as a powerful paradigm to study the temporal evolution of stochastic networked systems. All the information on the time-varying nature of the system is encapsulated into a constant activity parameter, which represents the propensity to generate connections. This formulation has enabled the scientific community to perform effective analytical studies on temporal networks. However, the hypothesis that the whole dynamics of the system is summarized by constant parameters might be excessively restrictive. Empirical studies suggest that activity evolves in time, intertwined with the system evolution, causing burstiness and clustering phenomena. In this paper, we propose a novel model for temporal networks, in which a self-excitement mechanism governs the temporal evolution of the activity, linking it to the evolution of the networked system. We investigate the effect of self-excitement on the epidemic inception by comparing the epidemic threshold of a Susceptible-Infected-Susceptible model in the presence and in the absence of the self-excitement mechanism. Our results suggest that the temporal nature of the activity favors the epidemic inception. Hence, neglecting self-excitement mechanisms might lead to harmful underestimation of the risk of an epidemic outbreak. Extensive numerical simulations are presented to support and extend our analysis, exploring parameter heterogeneities and noise, transient dynamics, and immunization processes. Our results constitute a first, necessary step toward a theory of ADNs that accounts for memory effects in the network evolution.
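    The memoryless ADN baseline that the paper extends is easy to simulate: at each step every node activates with probability given by its activity and contacts m random peers, and an SIS process spreads over the instantaneous edges. The Monte Carlo sketch below shows that baseline only (constant activity, no self-excitement); all sizes and rates are illustrative, not the paper's.

    ```python
    # Memoryless activity-driven network with an SIS epidemic on top.
    import random

    def sis_on_adn(n=200, m=2, activity=0.1, beta=0.5, mu=0.2,
                   steps=50, seed=1):
        """Return the number of infected nodes after `steps` time steps."""
        rng = random.Random(seed)
        infected = set(range(10))                 # initial seed infections
        for _ in range(steps):
            edges = []
            for i in range(n):
                if rng.random() < activity:       # node i activates...
                    for j in rng.sample(range(n), m):   # ...contacts m peers
                        if j != i:
                            edges.append((i, j))
            new_inf = set()
            for i, j in edges:                    # infection along live edges
                if i in infected and j not in infected and rng.random() < beta:
                    new_inf.add(j)
                if j in infected and i not in infected and rng.random() < beta:
                    new_inf.add(i)
            recovered = {i for i in infected if rng.random() < mu}
            infected = (infected | new_inf) - recovered
        return len(infected)

    final = sis_on_adn()
    ```

    The paper's modification would make `activity` a per-node state that jumps when the node participates in an interaction and then relaxes; comparing epidemic thresholds with and without that self-excitement is the comparison the abstract describes.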