
    A machine learning-based framework for preventing video freezes in HTTP adaptive streaming

    HTTP Adaptive Streaming (HAS) represents the dominant technology to deliver videos over the Internet, due to its ability to adapt the video quality to the available bandwidth. Despite that, HAS clients can still suffer from freezes in the video playout, the main factor influencing users' Quality of Experience (QoE). To reduce video freezes, we propose a network-based framework, where a network controller prioritizes the delivery of particular video segments to prevent freezes at the clients. This framework is based on OpenFlow, a widely adopted protocol to implement the software-defined networking principle. The main element of the controller is a Machine Learning (ML) engine based on the random undersampling boosting algorithm and fuzzy logic, which can detect when a client is close to a freeze and drive the network prioritization to avoid it. This decision is based on measurements collected from the network nodes only, without any knowledge of the streamed videos or of the clients' characteristics. In this paper, we detail the design of the proposed ML-based framework and compare its performance with other benchmark HAS solutions under various video streaming scenarios. In particular, we show through extensive experimentation that the proposed approach can reduce video freezes and freeze time by about 65% and 45%, respectively, compared to the benchmark algorithms. These results represent a major improvement for the QoE of users watching multimedia content online.
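
    As a rough illustration of the detection stage described in this abstract, the sketch below trains a random-undersampling-boosting classifier (the RUSBoostClassifier from the imbalanced-learn library) on per-client measurements and gates segment prioritization on the predicted freeze risk. All feature names, thresholds, and data here are invented, and the paper's fuzzy-logic stage is reduced to a simple probability threshold.

        # Hypothetical sketch: flag clients at risk of a playout freeze from
        # network-node measurements only, then prioritize their next segment.
        import numpy as np
        from imblearn.ensemble import RUSBoostClassifier  # random undersampling boosting

        # Assumed per-client features (placeholders, not the paper's actual set):
        # [avg_throughput_kbps, throughput_variance, queue_delay_ms, seg_interarrival_s]
        rng = np.random.default_rng(0)
        X_train = rng.random((1000, 4))
        y_train = rng.random(1000) > 0.9          # 1 = freeze followed (rare class)

        clf = RUSBoostClassifier(n_estimators=50, random_state=0)
        clf.fit(X_train, y_train)

        def should_prioritize(sample, risk_threshold=0.6):
            # Crude stand-in for the fuzzy-logic stage: prioritize when the
            # predicted freeze probability exceeds a fixed threshold.
            risk = clf.predict_proba(sample.reshape(1, -1))[0, 1]
            return risk >= risk_threshold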

    Why (and How) Networks Should Run Themselves

    The proliferation of networked devices, systems, and applications that we depend on every day makes managing networks more important than ever. The increasing security, availability, and performance demands of these applications suggest that these increasingly difficult network management problems must be solved in real time, across a complex web of interacting protocols and systems. Alas, just as the importance of network management has increased, the network has grown so complex that it is seemingly unmanageable. In this new era, network management requires a fundamentally new approach. Instead of optimizations based on closed-form analysis of individual protocols, network operators need data-driven, machine-learning-based models of end-to-end and application performance based on high-level policy goals and a holistic view of the underlying components. Instead of anomaly detection algorithms that operate on offline analysis of network traces, operators need classification and detection algorithms that can make real-time, closed-loop decisions. Networks should learn to drive themselves. This paper explores this concept, discussing how we might attain this ambitious goal by more closely coupling measurement with real-time control and by relying on learning for inference and prediction about a networked application or system, as opposed to closed-form analysis of individual protocols.
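
    The closed-loop idea sketched in this abstract can be illustrated with a minimal control skeleton. Everything below (function names, telemetry fields, the toy decision rule) is an assumption for illustration, not the authors' design: measurements feed a learned model, whose inference drives a real-time control action.

        # Illustrative measure -> infer -> act loop for a self-driving network.
        import time

        def measure():
            # Placeholder: pull telemetry (counters, probes) from network elements.
            return {"loss": 0.01, "rtt_ms": 40.0, "util": 0.83}

        def act(decision):
            # Placeholder: push a routing or configuration change via the controller.
            print("applying:", decision)

        def control_loop(predict, period_s=1.0, steps=3):
            # 'predict' stands in for a learned model mapping observations
            # to a control decision in real time.
            for _ in range(steps):
                act(predict(measure()))
                time.sleep(period_s)

        # Toy stand-in model: reroute when predicted congestion is high.
        control_loop(lambda obs: "reroute" if obs["util"] > 0.8 else "hold")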

    NeuRoute: Predictive Dynamic Routing for Software-Defined Networks

    This paper introduces NeuRoute, a dynamic routing framework for Software-Defined Networks (SDN) based entirely on machine learning, specifically neural networks. Current SDN/OpenFlow controllers use default shortest-path routing based on Dijkstra's algorithm and provide APIs to develop custom routing applications. NeuRoute is a controller-agnostic dynamic routing framework that (i) predicts the traffic matrix in real time, (ii) uses a neural network to learn traffic characteristics, and (iii) generates forwarding rules accordingly to optimize network throughput. NeuRoute achieves the same results as the most efficient dynamic routing heuristic but in much less execution time. (Comment: Accepted for CNSM 2017.)
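
    A minimal sketch of the idea, with all dimensions, window sizes, and model choices assumed for illustration rather than taken from the paper: a small neural network predicts the next traffic matrix from a sliding window of past matrices, and the predicted link load then weights path computation.

        # Hedged sketch: traffic-matrix prediction driving path selection.
        import numpy as np
        import networkx as nx
        from sklearn.neural_network import MLPRegressor

        N, window = 4, 3                    # nodes; past matrices per model input
        rng = np.random.default_rng(0)
        history = [rng.random(N * N) for _ in range(100)]   # flattened traffic matrices

        X = [np.concatenate(history[i:i + window]) for i in range(len(history) - window)]
        y = history[window:]
        model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=1000).fit(X, y)

        # Predict the next matrix and penalize links expected to carry heavy load.
        nxt = model.predict([np.concatenate(history[-window:])])[0].reshape(N, N)
        G = nx.complete_graph(N)            # toy topology
        for u, v in G.edges:
            G[u][v]["weight"] = 1.0 + nxt[u, v] + nxt[v, u]
        path = nx.shortest_path(G, 0, N - 1, weight="weight")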

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions pertaining to the proper functioning of the networks. Among these tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches to perform network-data analysis and enable automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth of network complexity faced by optical networks in the last few years. Such complexity increase is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes, etc.) that are enabled by the usage of coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper, we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature dealing with the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing new possible research directions.
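
    One concrete flavor of the network-data analysis described here, with every detail invented for illustration: a classifier that estimates whether a candidate lightpath configuration, described by adjustable parameters of the kind listed above, would meet a quality-of-transmission target.

        # Toy quality-of-transmission (QoT) classifier; the features and the
        # labeling rule are placeholders, not results from the survey.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        # Assumed features: [path_length_km, num_spans, modulation_order, symbol_rate_gbaud]
        X = rng.random((500, 4)) * np.array([2000, 25, 6, 64])
        y = X[:, 0] / 2000 + X[:, 2] / 6 < 1.0    # placeholder feasibility rule

        qot_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        candidate = np.array([[800, 10, 4, 32]])
        print(qot_model.predict(candidate))        # True -> configuration deemed feasible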

    Wireless Communications in the Era of Big Data

    The rapidly growing wave of wireless data services is pushing against the boundary of our communication networks' processing power. The pervasive and exponentially increasing data traffic presents imminent challenges to all aspects of wireless system design, such as spectrum efficiency, computing capabilities, and fronthaul/backhaul link capacity. In this article, we discuss the challenges and opportunities in the design of scalable wireless systems to embrace this "big data" era. On the one hand, we review state-of-the-art networking architectures and signal processing techniques adaptable for managing big data traffic in wireless networks. On the other hand, instead of viewing mobile big data as an unwanted burden, we introduce methods to capitalize on the vast data traffic, for building a big-data-aware wireless network with better wireless service quality and new mobile applications. We highlight several promising future research directions for wireless communications in the mobile big data era. (Comment: This article is accepted and to appear in IEEE Communications Magazine.)

    Cognition-Based Networks: A New Perspective on Network Optimization Using Learning and Distributed Intelligence

    IEEE Access, vol. 3, 2015, article no. 7217798, pp. 1512-1530 (Open Access). M. Zorzi, A. Zanella (Department of Information Engineering, University of Padua, Padua, Italy), A. Testolin, M. De Filippo De Grazia, M. Zorzi (Department of General Psychology, University of Padua, Padua, Italy; IRCCS San Camillo Foundation, Venice-Lido, Italy).
    In response to the new challenges in the design and operation of communication networks, and taking inspiration from how living beings deal with complexity and scalability, in this paper we introduce an innovative system concept called COgnition-BAsed NETworkS (COBANETS). The proposed approach develops around the systematic application of advanced machine learning techniques and, in particular, unsupervised deep learning and probabilistic generative models for system-wide learning, modeling, optimization, and data representation. Moreover, in COBANETS, we propose to combine this learning architecture with the emerging network virtualization paradigms, which make it possible to actuate automatic optimization and reconfiguration strategies at the system level, thus fully unleashing the potential of the learning approach. Compared with past and current research efforts in this area, the technical approach outlined in this paper is deeply interdisciplinary and more comprehensive, calling for the synergic combination of the expertise of computer scientists, communications and networking engineers, and cognitive scientists, with the ultimate aim of breaking new ground through a profound rethinking of how the modern understanding of cognition can be used in the management and optimization of telecommunication networks.
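
    In the spirit of the unsupervised generative modeling mentioned above, and with all specifics assumed for illustration, the sketch below learns a compact representation of binary network-state observations with a Restricted Boltzmann Machine, the standard building block of deep belief networks.

        # Minimal unsupervised-representation sketch (illustrative only).
        import numpy as np
        from sklearn.neural_network import BernoulliRBM

        rng = np.random.default_rng(0)
        # Placeholder binary observations, e.g., per-link congestion indicators.
        states = (rng.random((1000, 32)) > 0.7).astype(float)

        rbm = BernoulliRBM(n_components=8, learning_rate=0.05, n_iter=20, random_state=0)
        rbm.fit(states)
        features = rbm.transform(states)   # compact features for system-wide decisions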

    Exploring Path Computation Techniques in Software-Defined Networking: A Review and Performance Evaluation of Centralized, Distributed, and Hybrid Approaches

    Software-Defined Networking (SDN) is a networking paradigm that allows network administrators to dynamically manage network traffic flows and optimize network performance. One of the key benefits of SDN is the ability to compute and direct traffic along efficient paths through the network. In recent years, researchers have proposed various SDN-based path computation techniques to improve network performance and reduce congestion. This review paper provides a comprehensive overview of SDN-based path computation techniques, including both centralized and distributed approaches. We discuss the advantages and limitations of each approach and provide a critical analysis of the existing literature. In particular, we focus on recent advances in SDN-based path computation techniques, including Dynamic Shortest Path (DSP), Distributed Flow-Aware Path Computation (DFAPC), and Hybrid Path Computation (HPC). We evaluate three SDN-based path computation algorithms: centralized, distributed, and hybrid, focusing on optimal path determination for network nodes. Test scenarios with random graph simulations are used to compare their performance. The centralized algorithm employs global network knowledge, the distributed algorithm relies on local information, and the hybrid approach combines both. Experimental results demonstrate the hybrid algorithm's superiority in minimizing path costs, striking a balance between optimization and efficiency. The centralized algorithm ranks second, while the distributed algorithm incurs higher costs due to limited local knowledge. This research offers insights into efficient path computation and informs future SDN advancements. We also discuss the challenges associated with implementing SDN-based path computation techniques, including scalability, security, and interoperability. Furthermore, we highlight the potential applications of SDN-based path computation techniques in various domains, including data center networks, wireless networks, and the Internet of Things (IoT). Finally, we conclude that SDN-based path computation techniques have the potential to significantly improve network performance and reduce congestion. However, further research is needed to evaluate the effectiveness of these techniques under different network conditions and traffic patterns. With the rapid growth of SDN technology, we expect to see continued development and refinement of SDN-based path computation techniques in the future.
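
    To make the centralized-versus-distributed contrast concrete (the paper's DSP, DFAPC, and HPC algorithms are not reproduced; everything below is an invented stand-in), the sketch compares Dijkstra's algorithm with global knowledge against a greedy next-hop rule that sees only local link costs on a random graph.

        # Centralized vs. local path computation on a random graph (toy example).
        import random
        import networkx as nx

        G = nx.gnp_random_graph(20, 0.3, seed=1)   # assumed connected for this demo
        random.seed(1)
        for u, v in G.edges:
            G[u][v]["cost"] = random.uniform(1, 10)

        # Centralized: a controller with global knowledge runs Dijkstra.
        central = nx.dijkstra_path(G, 0, 19, weight="cost")

        # Distributed stand-in: each hop greedily picks its cheapest unvisited
        # neighbor, illustrating how a purely local view can cost more or fail.
        def greedy_local_path(G, src, dst, max_hops=50):
            path = [src]
            while path[-1] != dst and len(path) <= max_hops:
                cur = path[-1]
                nbrs = [n for n in G[cur] if n not in path]
                if not nbrs:
                    return None                    # dead end under local knowledge
                path.append(min(nbrs, key=lambda n: G[cur][n]["cost"]))
            return path if path[-1] == dst else None

        local = greedy_local_path(G, 0, 19)
        print("central:", central)
        print("local:  ", local)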