16 research outputs found

    Effective Feature Selection for 5G IM Applications Traffic Classification


    Enhancing User Experience by Extracting Application Intelligence from Network Traffic

    Internet Service Providers (ISPs) continue to get complaints from users about poor experience for diverse Internet applications, ranging from video streaming and gaming to social media and teleconferencing. Identifying and rectifying the root cause of these experience events requires the ISP to know more than just coarse-grained measures like link utilizations and packet losses. Application classification and experience measurement using traditional deep packet inspection (DPI) techniques is starting to fail with the increasing adoption of traffic encryption, and is not cost-effective with the explosive growth in traffic rates. This thesis leverages the emerging paradigms of machine learning and programmable networks to design and develop systems that can deliver application-level intelligence to ISPs at a scale, cost, and accuracy not achieved before. This thesis makes four new contributions. Our first contribution develops a novel transformer-based neural network model that classifies applications based on their traffic shape, agnostic to encryption. We show that this approach achieves over 97% F1-score for diverse application classes such as video streaming and gaming. Our second contribution builds and validates algorithmic and machine learning models to estimate user experience metrics for on-demand and live video streaming applications, such as bitrate, resolution, buffer states, and stalls. For our third contribution, we analyse ten popular latency-sensitive online multiplayer games and develop data structures and algorithms to rapidly and accurately detect each game using automatically generated signatures. By combining this with active latency measurement and geolocation analysis of the game servers, we help ISPs determine better routing paths to reduce game latency. Our fourth and final contribution develops a prototype of a self-driving network that autonomously intervenes just in time to relieve applications that are being impacted by transient congestion. We design and build a complete system that extracts application-aware network telemetry from programmable switches and dynamically adapts the QoS policies to manage the bottleneck resources in an application-fair manner. We show that it outperforms known queue management techniques in various traffic scenarios. Taken together, our contributions allow ISPs to measure and tune their networks in an application-aware manner to offer their users the best possible experience.
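    A minimal sketch of the traffic-shape idea described above, assuming flows are represented as fixed-length sequences of (packet size, inter-arrival time) pairs; the class set, dimensions, and model below are illustrative and are not the thesis's actual architecture.

```python
# Minimal sketch of shape-based traffic classification with a transformer encoder.
# Assumptions (not from the thesis): each flow is a fixed-length sequence of
# (packet size, inter-arrival time) pairs; class set and dimensions are illustrative.
import torch
import torch.nn as nn

class TrafficShapeClassifier(nn.Module):
    def __init__(self, n_classes: int, d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(2, d_model)          # project (size, gap) pairs
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)   # per-flow class logits

    def forward(self, x):                           # x: (batch, seq_len, 2)
        h = self.encoder(self.embed(x))             # (batch, seq_len, d_model)
        return self.head(h.mean(dim=1))             # mean-pool over packets

# Toy usage: 8 flows, 128 packets each, 4 application classes.
model = TrafficShapeClassifier(n_classes=4)
flows = torch.randn(8, 128, 2)                      # stand-in for real measurements
logits = model(flows)
print(logits.shape)                                 # torch.Size([8, 4])
```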

    Internet of Things and Intelligent Technologies for Efficient Energy Management in a Smart Building Environment

    The Internet of Things (IoT) is attempting to transform modern buildings into energy-efficient, smart, and connected buildings by imparting capabilities such as real-time monitoring, situational awareness and intelligence, and intelligent control. Digitizing the modern-day building environment using IoT improves asset visibility and generates energy savings. This dissertation provides a survey of the role and impact of IoT for smart buildings, together with its challenges and recommended solutions. It also presents an IoT-based solution to overcome the challenge of inefficient energy management in a smart building environment. The proposed solution consists of an Intelligent Computational Engine (ICE), composed of various IoT devices and technologies, for efficient energy management in an IoT-driven building environment. ICE's capabilities, viz. energy consumption prediction and optimized control of electric loads, have been developed, deployed, and dispatched in the Real-Time Power and Intelligent Systems (RTPIS) laboratory, which serves as the IoT-driven building case-study environment. Two energy consumption prediction models, viz. an exponential model and an Elman recurrent neural network (RNN) model, were developed and compared to determine the most accurate model for use in the development of ICE's energy consumption prediction capability. ICE's prediction model was developed in MATLAB using the cellular computational network (CCN) technique, whereas the optimized control model was developed jointly in MATLAB and the Metasys Building Automation System (BAS) using the particle swarm optimization (PSO) algorithm and the logic connector tool (LCT), respectively. The developed CCN-based energy consumption prediction model was shown to be highly accurate, with a low error percentage, by comparing the predicted and measured energy consumption data over a period of one week. The predicted energy consumption values generated from the CCN model served as a reference for the PSO algorithm to generate control parameters for the optimized control of the electric loads. The LCT model used these control parameters to regulate the electric loads to save energy (increase energy efficiency) without violating any operational constraints. ICE's energy consumption prediction and optimized control capabilities are extremely useful for efficient energy management, as they ensure that sufficient energy is generated to meet the demands of the electric loads optimally at any time, thereby reducing wasted energy due to excess generation. This, in turn, reduces carbon emissions and generates energy and cost savings. While ICE was tested in a small case-study environment, it could be scaled to any smart building environment.
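    The abstract names particle swarm optimization as the technique behind ICE's optimized load control. A generic PSO sketch is shown below; the objective, constraint, and bounds are illustrative placeholders rather than the thesis's actual model of the laboratory loads.

```python
# Generic particle swarm optimisation sketch in the spirit of ICE's optimised load
# control. The objective and constraint below are illustrative placeholders, not
# the thesis's actual model of the RTPIS laboratory loads.
import numpy as np

def energy_cost(setpoints):
    """Hypothetical energy cost of a vector of load setpoints (0..1)."""
    return np.sum(setpoints ** 2)

def constraint_penalty(setpoints, min_service=2.0):
    """Penalise solutions that deliver less than a required total service level."""
    shortfall = max(0.0, min_service - np.sum(setpoints))
    return 1e3 * shortfall ** 2

def pso(n_particles=30, n_dims=5, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 1.0, (n_particles, n_dims))
    vel = np.zeros_like(pos)
    fitness = lambda x: energy_cost(x) + constraint_penalty(x)
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)            # respect setpoint bounds
        vals = np.array([fitness(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

print(pso())   # setpoints meeting the service constraint at near-minimum cost
```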

    Enabling energy-awareness for internet video

    Continuous improvements to the state of the art have made it easier to create, send, and receive vast quantities of video over the Internet. Catalysed by these developments, video is now the largest and fastest-growing type of traffic on modern IP networks. In 2015, video was responsible for 70% of all traffic on the Internet, with a compound annual growth rate of 27%. At the same time, concerns about the growing energy consumption of ICT in general continue to rise. It is not surprising that there is a significant energy cost associated with these extensive video usage patterns. In this thesis, I examine the energy consumption of typical video configurations during decoding (playback) and encoding through empirical measurements on an experimental test-bed. I then make extrapolations to a global scale to show the opportunity for significant energy savings, achievable by simple modifications to these video configurations. Based on insights gained from these measurements, I propose a novel, energy-aware Quality of Experience (QoE) metric for digital video: the Energy-Video Quality Index (EnVI). Then, I present and evaluate vEQ-benchmark, a benchmarking and measurement tool for generating EnVI scores. The tool enables fine-grained resource-usage analyses on video playback systems and facilitates the creation of statistical models of power usage for these systems. I propose GreenDASH, an energy-aware extension of the existing Dynamic Adaptive Streaming over HTTP (DASH) standard. GreenDASH incorporates relevant energy-usage and video quality information into the existing standard. It could enable dynamic, energy-aware adaptation of video in response to energy usage and user 'green' preferences. I also evaluate the subjective perception of such energy-aware, adaptive video streaming by means of a user study featuring 36 participants, examining how video may be adapted to save energy without a significant impact on the users' Quality of Experience. In summary, this thesis highlights the significant opportunities for energy savings if Internet users gain an awareness of their energy usage, and presents a technical discussion of how this can be achieved by straightforward extensions to the current state of the art.
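    GreenDASH's exact adaptation logic is not given in the abstract; the sketch below only illustrates the general idea of weighing video quality against estimated playback energy when selecting a DASH representation, using made-up representation and energy figures.

```python
# Minimal sketch of energy-aware bitrate adaptation in the spirit of GreenDASH.
# The representation ladder, energy figures and scoring weights are illustrative
# placeholders, not values from the thesis.
from dataclasses import dataclass

@dataclass
class Representation:
    bitrate_kbps: int
    quality: float         # perceptual quality score, 0..1
    energy_j_per_s: float  # estimated playback energy on this device

LADDER = [
    Representation(1000, 0.60, 1.8),
    Representation(2500, 0.75, 2.4),
    Representation(5000, 0.88, 3.3),
    Representation(8000, 0.95, 4.5),
]

def choose_representation(throughput_kbps: float, green_weight: float):
    """Pick the representation maximising quality minus an energy penalty,
    among those the current throughput can sustain."""
    feasible = [r for r in LADDER if r.bitrate_kbps <= throughput_kbps] or [LADDER[0]]
    max_energy = max(r.energy_j_per_s for r in LADDER)
    score = lambda r: r.quality - green_weight * (r.energy_j_per_s / max_energy)
    return max(feasible, key=score)

# A user with a strong 'green' preference accepts a lower rung at the same throughput.
print(choose_representation(6000, green_weight=0.0))   # picks the 5000 kbps rung
print(choose_representation(6000, green_weight=1.0))   # picks the 2500 kbps rung
```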

    Towards auto-scaling in the cloud: online resource allocation techniques

    Cloud computing provides easy access to computing resources. Customers can acquire and release resources at any time. However, it is not trivial to determine when and how many resources to allocate. Many applications running in the cloud face workload changes that affect their resource demand. The first thought is to plan capacity either for the average load or for the peak load. In the first case less cost is incurred, but performance will be affected if the peak load occurs. The second case leads to wasted money, since resources will remain underutilized most of the time. There is therefore a need for more sophisticated resource provisioning techniques that can automatically scale the application resources according to workload demand and performance constraints. Large cloud providers and platforms such as Amazon, Microsoft, and RightScale provide auto-scaling services. However, without proper configuration and testing such services can do more harm than good. In this work I investigate application-specific online resource allocation techniques that dynamically adapt to the incoming workload, minimize the cost of virtual resources, and meet user-specified performance objectives.
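    As one concrete illustration of the problem the thesis addresses, the sketch below shows a simple reactive auto-scaler that sizes the replica count from observed workload, with a cooldown to avoid oscillation; the thresholds and capacity figures are assumptions, and the thesis's online techniques are more sophisticated than this.

```python
# Minimal sketch of a reactive auto-scaler: keep per-instance utilisation near a
# target by resizing the replica count. Capacity, target and cooldown are
# illustrative assumptions, not the thesis's actual allocation techniques.
import math

class AutoScaler:
    def __init__(self, capacity_rps=100.0, target_util=0.6,
                 min_replicas=1, max_replicas=20, cooldown_steps=2):
        self.capacity = capacity_rps       # requests/s one instance can serve
        self.target = target_util          # desired utilisation per instance
        self.min, self.max = min_replicas, max_replicas
        self.cooldown = cooldown_steps     # steps to wait between scale actions
        self.replicas, self._since_change = min_replicas, cooldown_steps

    def step(self, workload_rps: float) -> int:
        """Observe the current workload and return the new replica count."""
        desired = math.ceil(workload_rps / (self.capacity * self.target))
        desired = max(self.min, min(self.max, desired))
        self._since_change += 1
        if desired != self.replicas and self._since_change >= self.cooldown:
            self.replicas, self._since_change = desired, 0
        return self.replicas

scaler = AutoScaler()
for load in [50, 120, 400, 800, 800, 300, 60]:       # a bursty workload trace
    print(load, "rps ->", scaler.step(load), "replicas")
```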

    Online learning on the programmable dataplane

    This thesis makes the case for managing computer networks with data-driven methods (automated statistical inference and control based on measurement data and runtime observations) and argues for their tight integration with programmable dataplane hardware to make management decisions faster and from more precise data. Optimisation, defence, and measurement of networked infrastructure are each challenging tasks in their own right, which are currently dominated by the use of hand-crafted heuristic methods. These become harder to reason about and deploy as networks scale in rates and number of forwarding elements, and their design requires expert knowledge and care around unexpected protocol interactions. This makes tailored, per-deployment or per-workload solutions infeasible to develop. Recent advances in machine learning offer capable function approximation and closed-loop control which suit many of these tasks. New, programmable dataplane hardware enables more agility in the network: runtime reprogrammability, precise traffic measurement, and low-latency on-path processing. The synthesis of these two developments allows complex decisions to be made on previously unusable state, and made quicker by offloading inference to the network. To justify this argument, I advance the state of the art in data-driven defence of networks, novel dataplane-friendly online reinforcement learning algorithms, and in-network data reduction to allow classification of switch-scale data. Each requires co-design aware of the network, and of the failure modes of systems and carried traffic. To make online learning possible in the dataplane, I use fixed-point arithmetic and modify classical (non-neural) approaches to take advantage of the SmartNIC compute model and make use of rich device-local state. I show that data-driven solutions still require great care to design correctly, but with the right domain expertise they can improve on pathological cases in DDoS defence, such as protecting legitimate UDP traffic. In-network aggregation into histograms is shown to enable accurate classification from fine temporal effects, and allows hosts to scale such classification to far larger flow counts and traffic volumes. Moving reinforcement learning to the dataplane is shown to offer substantial benefits in state-action latency and online learning throughput versus host machines, allowing policies to react faster to fine-grained network events. The dataplane environment is key to making reactive online learning feasible; to port further algorithms and learnt functions, I collate and analyse the strengths of current and future hardware designs, as well as individual algorithms.
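    The abstract mentions fixed-point arithmetic and classical (non-neural) methods as the route to online learning on SmartNICs. A minimal illustration of that combination is sketched below: a tabular Q-learning update carried out entirely in integer Q8.8 arithmetic. The number format, environment size, and parameters are assumptions, not the thesis's algorithms.

```python
# Minimal sketch of a tabular Q-learning update done entirely in integer
# (fixed-point) arithmetic, in the spirit of dataplane-friendly online learning.
# The Q8.8 format, environment size and parameters are illustrative assumptions.
import random

FRAC_BITS = 8
ONE = 1 << FRAC_BITS                     # 1.0 in Q8.8

def fmul(a: int, b: int) -> int:
    """Multiply two Q8.8 fixed-point numbers."""
    return (a * b) >> FRAC_BITS

N_STATES, N_ACTIONS = 16, 4
ALPHA = int(0.125 * ONE)                 # learning rate 1/8
GAMMA = int(0.875 * ONE)                 # discount factor 7/8
Q = [[0] * N_ACTIONS for _ in range(N_STATES)]   # Q-values stored in Q8.8

def update(s: int, a: int, reward: float, s_next: int) -> None:
    r = int(reward * ONE)
    target = r + fmul(GAMMA, max(Q[s_next]))
    Q[s][a] += fmul(ALPHA, target - Q[s][a])      # integer-only TD update

# Toy loop on random transitions; on hardware, s/a/r would come from per-packet state.
random.seed(0)
for _ in range(1000):
    s, a = random.randrange(N_STATES), random.randrange(N_ACTIONS)
    update(s, a, reward=random.choice([0.0, 1.0]), s_next=random.randrange(N_STATES))
print(Q[0])                              # learned Q8.8 values for state 0
```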

    A collaborative, distributed, scalable, and low-cost platform based on microservices, containers, mobile devices, and Cloud services for compute-intensive tasks

    When solving compute-intensive tasks in a distributed and parallel fashion, x86 hardware resources (CPU/GPU) and specialised infrastructure (Grid, Cluster, Cloud) are typically used to achieve high performance. Initially, x86 processors, coprocessors, and chips were developed to solve complex problems without regard for their energy consumption. Given its direct impact on costs and the environment, optimising energy use, cooling, and expenditure, as well as analysing alternative architectures, became a primary concern for organisations. As a result, companies and institutions have proposed different architectures to provide scalability, flexibility, and concurrency. To offer an alternative to the traditional schemes, this thesis proposes executing processing tasks by reusing the idle capacity of mobile devices. These devices integrate ARM processors which, in contrast to traditional x86 architectures, were designed with energy efficiency as a founding principle, since they are mostly battery powered. In recent years these devices have increased their capacity, efficiency, stability, and power, as well as their ubiquity and market share, while retaining a low price, small size, and reduced energy consumption. They also have idle periods while charging, which represents significant potential that can be reused. To manage and exploit these resources properly, and to turn them into an intensive-processing data centre, a distributed, collaborative, elastic, and low-cost platform was designed, developed, and evaluated, based on an architecture composed of microservices and containers orchestrated with Kubernetes in both Cloud and local environments, integrated with DevOps tools, methodologies, and practices. The microservices paradigm allowed the developed functionality to be split into small services with narrow responsibilities. DevOps practices enabled automated processes for testing, traceability, monitoring, integration of changes, and release of new service versions. Finally, packaging the functions with all their dependencies and libraries into containers helped keep the services small, immutable, portable, secure, and standardised, allowing them to run independently of the underlying architecture. Including Kubernetes as the container orchestrator allowed the services to be managed, deployed, and scaled in an integrated and transparent way, both locally and in the Cloud, guaranteeing efficient use of infrastructure, spending, and energy. To validate the system's performance, scalability, energy consumption, and flexibility, several concurrent video-transcoding scenarios were executed. This made it possible, on the one hand, to test the behaviour and performance of various mobile and x86 devices under different stress conditions and, on the other, to show how, under a variable task load, the architecture adjusts, flexes, and scales to meet processing demand. The experimental results, across the various performance, load, and saturation scenarios considered, show useful improvements over the study's baseline and indicate that the developed architecture is robust enough to be considered a scalable, economical, and elastic alternative to traditional models.
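    As a rough illustration of the kind of containerised worker such a platform could schedule onto idle mobile or x86 nodes, the sketch below polls a hypothetical job service and runs an ffmpeg transcode; the service URL, job format, and encoding options are assumptions, not the thesis's actual microservices.

```python
# Minimal sketch of a containerised transcoding worker. The queue URL, job format
# and ffmpeg options are illustrative assumptions, not the thesis's services.
import subprocess
import time
import requests

QUEUE_URL = "http://job-queue.default.svc.cluster.local/jobs"   # hypothetical service

def fetch_job():
    """Ask the (hypothetical) job service for the next pending transcoding job."""
    resp = requests.get(QUEUE_URL, timeout=5)
    return resp.json() if resp.status_code == 200 else None     # e.g. {"in": ..., "out": ...}

def transcode(job):
    """Run a simple H.264 transcode; a real worker would also report progress and errors."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", job["in"], "-c:v", "libx264", "-preset", "fast", job["out"]],
        check=True,
    )

if __name__ == "__main__":
    while True:
        job = fetch_job()
        if job:
            transcode(job)
        else:
            time.sleep(2)                  # idle until the orchestrator has more work
```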

    VANET-enabled eco-friendly road characteristics-aware routing for vehicular traffic

    There is growing awareness of the dangers of climate change caused by greenhouse gases. In the coming decades this could result in numerous disasters such as heat-waves, flooding, and crop failures. A major contributor to total greenhouse gas emissions is the transport sector, particularly private vehicles. Traffic congestion involving private vehicles also causes a lot of wasted time and stress to commuters. At the same time, new wireless technologies such as Vehicular Ad-Hoc Networks (VANETs) are being developed which could allow vehicles to communicate with each other. These could enable a number of innovative schemes to reduce traffic congestion and greenhouse gas emissions. 1) EcoTrec is a VANET-based system which allows vehicles to exchange messages regarding traffic congestion and road conditions, such as roughness and gradient. Each vehicle uses the messages it has received to build a model of nearby roads and the traffic on them. The EcoTrec Algorithm then recommends the most fuel-efficient route for the vehicles to follow. 2) Time-Ants is a swarm-based algorithm that considers not only the number of cars in the spatial domain but also the number in the time domain. This allows the system to build a model of the traffic congestion throughout the day. As traffic patterns are broadly similar across weekdays, this gives a good idea of what traffic will be like, allowing the Time-Ants Algorithm to route vehicles more efficiently. 3) Electric Vehicle enhanced Dedicated Bus Lanes (E-DBL) proposes allowing electric vehicles onto bus lanes. Such an approach could reduce traffic congestion in the regular lanes without greatly impeding the buses, and would also encourage uptake of electric vehicles. 4) A comprehensive survey of issues associated with communication-centred traffic management systems was also carried out.
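    The abstract describes EcoTrec recommending fuel-efficient routes from per-road congestion, roughness, and gradient reports. The sketch below captures that idea as a shortest-path search over a toy road graph with an illustrative fuel-cost edge weight; the cost model and graph are assumptions, not EcoTrec's.

```python
# Minimal sketch of fuel-aware routing in the spirit of EcoTrec: shortest path over
# a road graph whose edge weights estimate fuel use from length, gradient and
# congestion. The cost model and the toy graph are illustrative assumptions.
import heapq

def fuel_cost(length_km, gradient_pct, congestion):
    """Toy fuel estimate: longer, steeper and more congested roads cost more."""
    return length_km * (1.0 + 0.05 * max(gradient_pct, 0.0)) * (1.0 + congestion)

# Road graph: node -> list of (neighbour, length_km, gradient_pct, congestion 0..1).
ROADS = {
    "A": [("B", 2.0, 0.0, 0.8), ("C", 3.0, 1.0, 0.1)],
    "B": [("D", 2.0, 0.0, 0.7)],
    "C": [("D", 2.5, -1.0, 0.1)],
    "D": [],
}

def eco_route(src, dst):
    """Dijkstra over the fuel-cost weights; returns the route and its cost."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, length, grad, cong in ROADS[u]:
            nd = d + fuel_cost(length, grad, cong)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

print(eco_route("A", "D"))     # the longer but uncongested A-C-D route wins here
```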