
    iTeleScope: Intelligent Video Telemetry and Classification in Real-Time using Software Defined Networking

    Video continues to dominate network traffic, yet operators today have poor visibility into the number, duration, and resolutions of the video streams traversing their domain. Current approaches are inaccurate, expensive, or unscalable, as they rely on statistical sampling, middle-box hardware, or packet inspection software. We present iTeleScope, the first intelligent, inexpensive, and scalable SDN-based solution for identifying and classifying video flows in real-time. Our solution is novel in combining dynamic flow rules with telemetry and machine learning, and is built on commodity OpenFlow switches and open-source software. We develop a fully functional system, train it in the lab using multiple machine learning algorithms, and validate its performance to show over 95% accuracy in identifying and classifying video streams from many providers, including YouTube and Netflix. Lastly, we conduct tests to demonstrate its scalability to tens of thousands of concurrent streams, and deploy it live on a campus network serving several hundred real users. Our system gives operators of enterprise and carrier networks unprecedented fine-grained real-time visibility into video streaming performance at very low cost. (12 pages, 16 figures)
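    The core idea of telemetry-driven classification can be sketched in a few lines: periodic per-flow byte counters (as exposed by OpenFlow flow statistics) are turned into rate features, and a classifier labels each flow. The feature names and the rate threshold below are illustrative assumptions for a toy stand-in classifier, not the paper's trained models.

```python
# Toy sketch of telemetry-based video-flow classification: successive
# per-flow byte-counter samples become rate features, and a simple
# threshold rule stands in for the paper's trained ML classifiers.
# Feature names and the 100 kB/s threshold are illustrative assumptions.

def flow_features(byte_counts, interval_s=1.0):
    """Derive per-flow features from successive byte-counter samples."""
    rates = [(b2 - b1) / interval_s
             for b1, b2 in zip(byte_counts, byte_counts[1:])]
    mean = sum(rates) / len(rates)
    var = sum((r - mean) ** 2 for r in rates) / len(rates)
    return {"mean_rate": mean, "rate_var": var}

def classify(features, video_rate_threshold=100_000.0):
    """Toy rule: sustained high throughput suggests a video stream."""
    return "video" if features["mean_rate"] > video_rate_threshold else "non-video"

# A flow downloading ~1 MB/s in bursts vs. a light keep-alive flow.
video_flow = flow_features([0, 2_000_000, 2_000_000, 4_000_000, 4_000_000])
web_flow = flow_features([0, 10_000, 12_000, 15_000, 15_500])
print(classify(video_flow))  # video
print(classify(web_flow))    # non-video
```

    In the real system, the feature vector would be richer (burst timing, chunk sizes) and the threshold rule replaced by the trained model; the point is that only counters, not packet payloads, are needed.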

    An adaptable fuzzy-based model for predicting link quality in robot networks

    It is often essential for robots to maintain wireless connectivity with other systems so that commands, sensor data, and other situational information can be exchanged. Unfortunately, maintaining sufficient connection quality between these systems can be problematic. Robot mobility, combined with the attenuation and rapid dynamics associated with radio wave propagation, can cause frequent link quality (LQ) issues such as degraded throughput, temporary disconnects, or even link failure. In order to proactively mitigate such problems, robots must possess the capability, at the application layer, to gauge the quality of their wireless connections. However, many of the existing approaches lack adaptability or the framework necessary to rapidly build and sustain an accurate LQ prediction model. The primary contribution of this dissertation is the introduction of a novel way of blending machine learning with fuzzy logic so that an adaptable yet intuitive LQ prediction model can be formed. Another significant contribution is the evaluation of a unique active and incremental learning framework for quickly constructing and maintaining prediction models in robot networks with minimal sampling overhead.
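    The fuzzy half of such a model is easy to illustrate: a measured signal value is fuzzified with triangular membership functions and defuzzified into a quality score by a weighted average of rule outputs. The membership breakpoints and rule weights below are illustrative assumptions, not the dissertation's learned model.

```python
# Minimal fuzzy link-quality sketch: RSSI (dBm) is fuzzified with
# triangular membership functions ("weak"/"medium"/"strong") and mapped
# to a 0..1 quality score by a weighted average of rule consequents.
# All breakpoints and weights are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership: rises over [a, b], falls over [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def link_quality(rssi_dbm):
    """Defuzzify with a weighted average of rule outputs (0=bad .. 1=good)."""
    weak   = tri(rssi_dbm, -100, -90, -75)
    medium = tri(rssi_dbm, -90, -75, -60)
    strong = tri(rssi_dbm, -75, -55, -40)
    # Rule consequents: weak -> 0.1, medium -> 0.5, strong -> 0.9
    num = 0.1 * weak + 0.5 * medium + 0.9 * strong
    den = weak + medium + strong
    return num / den if den else 0.0

print(round(link_quality(-85), 2))  # 0.23 (mostly "weak")
print(round(link_quality(-55), 2))  # 0.9  (fully "strong")
```

    The dissertation's contribution is making such a rule base adaptable, i.e. learning and updating the memberships and rules online rather than fixing them by hand as done here.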

    Information Theory Filters for Wavelet Packet Coefficient Selection with Application to Corrosion Type Identification from Acoustic Emission Signals

    The damage caused by corrosion in chemical process installations can lead to unexpected plant shutdowns and the leakage of potentially toxic chemicals into the environment. When subjected to corrosion, structural changes in the material occur, leading to energy releases as acoustic waves. This acoustic activity can in turn be used for corrosion monitoring, and even for predicting the type of corrosion. Here we apply wavelet packet decomposition to extract features from acoustic emission signals. We then use the extracted wavelet packet coefficients for distinguishing between the most important types of corrosion processes in the chemical process industry: uniform corrosion, pitting, and stress corrosion cracking. The local discriminant basis selection algorithm can be considered a standard for the selection of the most discriminative wavelet coefficients. However, it does not take the statistical dependencies between wavelet coefficients into account. We show that, when these dependencies are ignored, a lower accuracy is obtained in predicting the corrosion type. We compare several mutual information filters that take these dependencies into account in order to arrive at a more accurate prediction.
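    The pipeline the abstract describes can be sketched end to end: a Haar wavelet split (a minimal one-level stand-in for a full wavelet packet tree) extracts coefficients from each signal, then each coefficient position is scored against the class label with a plug-in mutual-information estimate. The toy signals, the Haar filter, and the mean-based binarization are illustrative choices, not the paper's setup.

```python
# Sketch: one-level Haar split (stand-in for wavelet packet decomposition)
# plus a plug-in mutual-information filter for coefficient selection.
# Toy data and the mean-based binarization are illustrative assumptions.

import math
from collections import Counter

def haar_split(signal):
    """One decomposition level: approximation then detail coefficients."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx + detail

def mutual_information(xs, ys):
    """Plug-in MI estimate (in nats) between two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

# Two toy classes: smooth signals (class 0) vs. high-frequency ones (class 1).
signals = [[1, 1, 1, 1], [2, 2, 2, 2], [2, 0, 2, 0], [3, -1, 3, -1]]
labels = [0, 0, 1, 1]
coeffs = [haar_split(s) for s in signals]

def score(pos):
    """MI between a binarized coefficient position and the class label."""
    col = [c[pos] for c in coeffs]
    mean = sum(col) / len(col)
    return mutual_information([v > mean for v in col], labels)

scores = [score(i) for i in range(len(coeffs[0]))]
best = max(range(len(scores)), key=scores.__getitem__)
print(best)  # 2: a detail coefficient separates smooth from oscillating signals
```

    A multivariate filter, as compared in the paper, would go further and score coefficient *sets* jointly, so that redundant coefficients carrying the same information are not selected twice.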

    KFREAIN: Design of A Kernel-Level Forensic Layer for Improving Real-Time Evidence Analysis Performance in IoT Networks

    An exponential increase in the number of attacks in IoT networks makes it essential to formulate attack-level mitigation strategies. This paper proposes the design of a scalable kernel-level forensic layer that improves real-time evidence analysis performance and supports efficient pattern analysis of the collected data samples. It has an inbuilt Temporal Blockchain Cache (TBC), which is refreshed after analysis of every set of evidence. The model uses a multidomain feature extraction engine that combines lightweight Fourier, Wavelet, Convolutional, Gabor, and Cosine feature sets, selected by a stochastic Bacterial Foraging Optimizer (BFO) to identify high-variance features. The selected features are processed by an ensemble learning (EL) classifier that uses low-complexity classifiers, reducing energy consumption during analysis by 8.3% when compared with application-level forensic models. The model also showed 3.5% higher accuracy, 4.9% higher precision, and 4.3% higher recall in attack-event identification when compared with standard forensic techniques. Due to kernel-level integration, the model also reduces the delay needed for forensic analysis on different network types by 9.5%, making it useful for real-time and heterogeneous network scenarios.
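    The ensemble step is the easiest part of this design to illustrate: several low-complexity classifiers each cast a vote, and the majority label wins. The one-feature threshold rules and feature names below are illustrative assumptions, not the paper's BFO-selected feature sets.

```python
# Sketch of an ensemble of low-complexity classifiers deciding by majority
# vote whether an event is an attack. The stump rules and feature names
# are illustrative assumptions, not the paper's actual model.

from collections import Counter

def make_threshold_clf(feature, threshold):
    """A one-feature stump: 'attack' if the feature exceeds the threshold."""
    return lambda sample: "attack" if sample[feature] > threshold else "benign"

def majority_vote(classifiers, sample):
    votes = Counter(clf(sample) for clf in classifiers)
    return votes.most_common(1)[0][0]

ensemble = [
    make_threshold_clf("pkt_rate", 1000),   # flooding indicator
    make_threshold_clf("entropy", 0.9),     # payload-randomness indicator
    make_threshold_clf("fail_ratio", 0.5),  # failed-auth-ratio indicator
]

flood = {"pkt_rate": 5000, "entropy": 0.95, "fail_ratio": 0.1}
idle  = {"pkt_rate": 20,   "entropy": 0.30, "fail_ratio": 0.0}
print(majority_vote(ensemble, flood))  # attack
print(majority_vote(ensemble, idle))   # benign
```

    Each stump is O(1) per sample, which is the property that lets such an ensemble run cheaply enough for kernel-level, real-time use.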