
    Exploiting the power of multiplicity: a holistic survey of network-layer multipath

    The Internet is inherently a multipath network: had the underlying network offered only a single path between nodes, it would have been debilitatingly fragile. Unfortunately, traditional Internet technologies have been designed around the restrictive assumption of a single working path between a source and a destination. The lack of native multipath support constrains network performance even though the underlying network is richly connected and offers multiple redundant paths. Computer networks can exploit the power of multiplicity, through which a diverse collection of paths is pooled as a single resource, to unlock the inherent redundancy of the Internet. This opens up a new vista of opportunities, promising increased throughput (through concurrent use of multiple paths) and increased reliability and fault tolerance (through the use of multiple paths in backup/redundant arrangements). Many emerging trends in networking signify that the Internet's future will be multipath, including the use of multipath technology in data center computing; the ready availability of multiple heterogeneous radio interfaces (such as Wi-Fi and cellular) in wireless devices; the ubiquity of mobile devices that are multihomed with heterogeneous access networks; and the development and standardization of multipath transport protocols such as Multipath TCP. The aim of this paper is to provide a comprehensive survey of the literature on network-layer multipath solutions. We present a detailed investigation of two important design issues, namely the control plane problem of how to compute and select routes and the data plane problem of how to split a flow across the computed paths. The main contribution of this paper is a systematic articulation of the main design issues in network-layer multipath routing, along with a broad-ranging survey of the vast literature on network-layer multipathing. We also highlight open issues and identify directions for future work.
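
    As a concrete illustration of the data plane problem of splitting a flow across computed paths (not a scheme from the survey itself), the sketch below shows weighted, hash-based splitting at flow granularity, which keeps each flow's packets on one path to avoid reordering; the path names and weights are made-up placeholders.

```python
# Illustrative sketch of weighted, hash-based flow splitting across multiple
# paths (a common data plane strategy; path names and weights are made up).
import hashlib

def pick_path(flow_key: str, paths: list[tuple[str, int]]) -> str:
    """Map a flow (e.g. its 5-tuple string) onto one path, proportionally to
    per-path weights, while keeping all packets of the flow on the same path."""
    total = sum(weight for _, weight in paths)
    # Stable hash so the same flow always lands in the same bucket.
    digest = hashlib.sha256(flow_key.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % total
    for name, weight in paths:
        if bucket < weight:
            return name
        bucket -= weight
    return paths[-1][0]  # unreachable, kept for safety

paths = [("path-A", 3), ("path-B", 1)]  # roughly 75% / 25% split of flows
print(pick_path("10.0.0.1:443->10.0.0.2:51234/tcp", paths))
```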

    Concurrent Multipath Transfer: Scheduling, Modelling, and Congestion Window Management

    Multihomed devices such as smartphones (e.g., the iPhone and BlackBerry) can connect to Wi-Fi and 4G LTE networks simultaneously. Unfortunately, due to the architectural constraints of standard transport-layer protocols like the transmission control protocol (TCP), an Internet application (e.g., a file transfer) can use only one access network at a time. Due to recent developments, however, concurrent multipath transfer (CMT) using the stream control transmission protocol (SCTP) can enable multihomed devices to exploit additional network resources for transport-layer communications. In this thesis we explore a variety of techniques aimed at CMT and multihomed devices, including packet scheduling, transport-layer modelling, and resource management. Our contributions include, but are not limited to: enhanced performance of CMT under delay-based disparity, a tractable framework for modelling the throughput of CMT, a comparison of modelling techniques for SCTP, a new congestion window update policy for CMT, and efficient use of system resources through optimization. Since the demand for better communication systems is always on the horizon, our goal is to further this research and inspire others to embrace CMT as a viable network architecture, in the hope that CMT will someday become a standard part of smartphone technology.
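
    As a hedged illustration of the packet scheduling problem studied here (not the thesis's actual scheduler), the sketch below sends each packet on the CMT subflow with the smallest estimated delivery time; the subflow fields and the delivery-time estimate are simplifying assumptions.

```python
# Minimal sketch of a delay-aware CMT packet scheduler: each packet is sent on
# the subflow expected to deliver it earliest. The Subflow fields are
# illustrative assumptions, not the thesis's actual scheduler.
from dataclasses import dataclass

@dataclass
class Subflow:
    name: str
    srtt_ms: float      # smoothed round-trip time estimate
    cwnd_pkts: int      # congestion window, in packets
    inflight_pkts: int  # packets sent but not yet acknowledged

    def est_delivery_ms(self) -> float:
        # Rough estimate: half an RTT per "round" of packets queued ahead,
        # plus half an RTT for the packet's own one-way trip.
        rounds_queued = (self.inflight_pkts + 1) / max(self.cwnd_pkts, 1)
        return (self.srtt_ms / 2) * rounds_queued + self.srtt_ms / 2

def schedule(subflows: list[Subflow]) -> Subflow:
    # Prefer subflows with congestion window space available.
    ready = [s for s in subflows if s.inflight_pkts < s.cwnd_pkts] or subflows
    return min(ready, key=Subflow.est_delivery_ms)

wifi = Subflow("wifi", srtt_ms=20.0, cwnd_pkts=10, inflight_pkts=9)
lte = Subflow("lte", srtt_ms=60.0, cwnd_pkts=20, inflight_pkts=2)
print(schedule([wifi, lte]).name)
```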

    Connection robustness for wireless moving networks using transport layer multi-homing

    Given any form of mobility management over wireless communication, one useful enhancement is improving the reliability and robustness of transport-layer connections in a heterogeneous mobile environment. This is particularly true for mobile networks undergoing multiple vertical handovers. In this thesis, issues and challenges in mobility management for mobile terminals in such a scenario are addressed, and a number of techniques to facilitate such handovers and to improve their efficiency and QoS are proposed and investigated. These are initially considered in an end-to-end context, with all protocol changes confined to the middleware of the connection, so that the network handles the handover and transparency to the end user is preserved. The thesis begins by investigating mobility management solutions, particularly transport-layer models, and makes significant observations pertinent to multi-homing for moving networks in general. A new scheme for transport-layer tunnelling based on SCTP is proposed, leading to a novel protocol, named nSCTP, for handling seamless network mobility in heterogeneous mobile networks. The efficiency of this protocol with respect to QoS handover parameters is considered for end-to-end connections in which both wired and wireless networks are available. It is shown analytically and experimentally that the new scheme can significantly increase throughput, particularly when the mobile networks roam frequently. A detailed plan for future improvements and extensions is also provided.
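
    As a hedged illustration of the kind of multi-homed handover decision this thesis targets (not the nSCTP mechanism itself), the sketch below switches the primary path of a multihomed association when the current path becomes lossy or clearly slower; the thresholds and field names are assumptions.

```python
# Hedged sketch of a simple primary-path switchover policy for a multihomed
# transport association (thresholds and field names are illustrative, not nSCTP).
from dataclasses import dataclass

@dataclass
class PathState:
    name: str
    rtt_ms: float
    loss_rate: float   # fraction of probes lost over the last window

def choose_primary(paths: list[PathState], current: str,
                   loss_threshold: float = 0.05, hysteresis_ms: float = 15.0) -> str:
    """Keep the current primary unless it is lossy or clearly slower than the
    best alternative; hysteresis avoids flapping between similar paths."""
    healthy = [p for p in paths if p.loss_rate < loss_threshold] or paths
    best = min(healthy, key=lambda p: p.rtt_ms)
    cur = next((p for p in paths if p.name == current), None)
    if cur and cur.loss_rate < loss_threshold and cur.rtt_ms <= best.rtt_ms + hysteresis_ms:
        return current
    return best.name

paths = [PathState("wlan", rtt_ms=35.0, loss_rate=0.12),
         PathState("cellular", rtt_ms=70.0, loss_rate=0.00)]
print(choose_primary(paths, current="wlan"))  # degraded WLAN -> switch to cellular
```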

    Towards video streaming in IoT environments: vehicular communication perspective

    Multimedia-oriented Internet of Things (IoT) enables pervasive and real-time communication of video, audio, and image data among devices in their immediate surroundings. Today's vehicles are capable of real-time multimedia acquisition. Vehicles with high-illumination infrared cameras and customized sensors can communicate with other on-road devices using dedicated short-range communication (DSRC) and 5G-enabled communication technologies. Real-time incidents in both urban and highway vehicular traffic environments can be captured and transmitted using vehicle-to-vehicle and vehicle-to-infrastructure communication modes. Video streaming in vehicular IoT (VSV-IoT) environments is still at an early stage, with several challenges that need to be addressed, ranging from limited resources in IoT devices, intermittent connectivity in vehicular networks, and device heterogeneity, to dynamism and scalability in video encoding, bandwidth underutilization in video delivery, and attaining application-precise quality of service in video streaming. In this context, this paper presents a comprehensive review of video streaming in IoT environments from a vehicular communication perspective. Specifically, the significance of video streaming in vehicular IoT environments is highlighted, focusing on the integration of vehicular communication with 5G-enabled IoT technologies and on smart-city-oriented application areas for VSV-IoT. A taxonomy is presented for the classification of the related literature on video streaming in vehicular network environments. Following the taxonomy, a critical review of the literature is performed, focusing on major functional models, strengths, and weaknesses. Metrics for video streaming in vehicular IoT environments are derived and comparatively analyzed in terms of their usage and evaluation capabilities. Open research challenges in VSV-IoT are identified as future directions of research in the area. The survey will benefit both IoT and vehicle industry practitioners and researchers by augmenting their understanding of vehicular video streaming and its IoT-related trends and issues.
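
    As an illustrative sketch of one challenge listed above, bandwidth-aware video delivery over an intermittent vehicular link (not a scheme from the surveyed literature), the snippet below picks a bitrate from a hypothetical encoding ladder against a throughput estimate with a safety margin.

```python
# Illustrative sketch (not from the survey): picking a video bitrate for a
# vehicular link from a throughput estimate, with a safety margin to absorb
# short connectivity gaps. Ladder values are assumptions.
BITRATE_LADDER_KBPS = [250, 500, 1000, 2500, 5000]

def select_bitrate(throughput_kbps: float, safety: float = 0.8) -> int:
    """Return the highest ladder rung that fits within a margin of the
    estimated throughput; fall back to the lowest rung otherwise."""
    budget = throughput_kbps * safety
    feasible = [r for r in BITRATE_LADDER_KBPS if r <= budget]
    return max(feasible) if feasible else BITRATE_LADDER_KBPS[0]

print(select_bitrate(1800.0))  # -> 1000 (2500 exceeds the 1440 kbps budget)
```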

    Next-Generation Self-Organizing Networks through a Machine Learning Approach

    Doctoral thesis defence date: 17 December 2018. To reduce the management costs of cellular networks, which were growing in complexity over time, the concept of self-organizing networks (SON) emerged: the automation of the management tasks of a cellular network in order to lower infrastructure (CAPEX) and operating (OPEX) costs. SON tasks fall into three categories: self-configuration, self-optimization, and self-healing. The goal of this thesis is to improve SON functions through the development and use of machine learning (ML) tools for network management. On the one hand, self-healing is addressed through the proposal of a novel tool for automatic root cause analysis (RCA), consisting of the combination of multiple independent RCA systems to build an improved composite RCA system. In addition, to increase the accuracy of RCA tools while reducing both CAPEX and OPEX, this thesis proposes and evaluates dimensionality-reduction ML tools in combination with RCA tools. On the other hand, the thesis studies multi-link functionalities within self-optimization and proposes techniques for their automatic management. In the field of enhanced mobile broadband communications, a tool for radio carrier management is proposed that enables the implementation of operator policies, while in the field of low-latency vehicular communications, a multipath mechanism is proposed for redirecting traffic across multiple radio interfaces. Many of the methods proposed in this thesis have been evaluated using data from real cellular networks, demonstrating their validity in realistic environments as well as their readiness to be deployed in current and future mobile networks.
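
    As a hedged sketch of the idea of combining multiple independent RCA systems into a composite diagnoser (the thesis's actual combination method is not reproduced here), the snippet below takes a majority vote over toy diagnosers driven by hypothetical KPIs.

```python
# Hedged sketch of combining several independent root-cause-analysis (RCA)
# classifiers by majority vote; the diagnoser functions and KPI names are
# illustrative placeholders, not the thesis's actual systems.
from collections import Counter
from typing import Callable

KPIs = dict[str, float]
Diagnoser = Callable[[KPIs], str]

def combined_rca(kpis: KPIs, diagnosers: list[Diagnoser]) -> str:
    """Run every independent RCA system on the same KPI snapshot and return
    the most frequently reported fault cause."""
    votes = Counter(d(kpis) for d in diagnosers)
    cause, _count = votes.most_common(1)[0]
    return cause

# Toy diagnosers keyed on a couple of hypothetical KPIs.
def rule_based(k: KPIs) -> str:
    return "coverage_hole" if k["rsrp_dbm"] < -115 else "normal"

def threshold_based(k: KPIs) -> str:
    return "interference" if k["sinr_db"] < 0 else "normal"

def history_based(k: KPIs) -> str:
    return "coverage_hole" if k["drop_rate"] > 0.05 else "normal"

snapshot = {"rsrp_dbm": -120.0, "sinr_db": 3.0, "drop_rate": 0.08}
print(combined_rca(snapshot, [rule_based, threshold_based, history_based]))
```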

    Application of cognitive radio based sensor network in smart grids for efficient, holistic monitoring and control.

    Doctoral Degree. University of KwaZulu-Natal, Durban. This thesis is directed towards the application of a cognitive radio based sensor network (CRSN) in the smart grid (SG) for efficient, holistic monitoring and control. The work involves enabling sensor-network and wireless communication devices to utilize spectrum via the Dynamic Spectrum Access (DSA) capability of a cognitive radio (CR), as well as end-to-end communication access technology for unified monitoring and control in smart grids. The smart grid is a new power grid paradigm that can provide predictive information and recommendations to utilities, their suppliers, and their customers on how best to manage power delivery and consumption. The SG can greatly reduce air pollution in our surroundings through renewable power sources such as wind farms, solar plants, and large hydro stations, and it also reduces electricity blackouts and surges. The communication network is the foundation of the modern SG, and implementing an improved communication solution will help address the problems of the existing grid. Hence, this study proposes and implements an improved CRSN model intended to overcome the inherent problems of the SG communication network, such as energy inefficiency, interference, spectrum inefficiency, poor quality of service (QoS), high latency, and low throughput. The predominant existing approach is the use of wireless sensor networks (WSNs) for communication needs in the SG. However, WSNs have low battery power, limited computational capability, low bandwidth support, and high latency due to multihop transmission in existing WSN topologies. Consequently, goals for energy efficiency, bandwidth or throughput, and latency have not been fully realized because of the limitations of WSNs and of the existing network topology, so the existing approach does not fully address the communication needs of the SG. The SG can be fully realized by integrating communication network technologies and infrastructure into the power grid. A CRSN is considered a feasible solution for enhancing various aspects of the electric power grid, such as real-time communication with end and remote devices for efficient monitoring, and for realizing the maximum benefits of a smart grid system. CRSN in the SG aims to address the problems of spectrum inefficiency and interference that WSNs could not. However, CRSNs face numerous challenges due to the harsh wireless environment of a smart grid system; as a result, latency, throughput, and reliability become critical issues. To overcome these challenges, a range of approaches can be adopted, from the integration of CRSNs into SGs, proper implementation design models for the SG, reliable communication access devices, and key immunity requirements for the SG communication infrastructure, up to communication network protocol optimization. To this end, this study utilizes the National Institute of Standards and Technology (NIST) framework for SG interoperability in the design of a unified communication network architecture, including an implementation model for guaranteed QoS of smart grid applications. This involves a virtualized network in the form of multi-homing, comprising low-power wide-area network (LPWAN) devices such as LTE Cat 1/LTE-M and TV white space band devices (TVBDs).
    Simulation and analysis show that the developed architecture outperforms legacy wireless systems in terms of latency, blocking probability, and throughput under harsh SG environmental conditions. In addition, the problem of correlated fading across the multi-antenna channels of the sensor nodes in a CRSN-based SG is addressed through a performance analysis of the moment generating function (MGF) based M-QAM error probability over Nakagami-q dual correlated fading channels with a maximal ratio combining (MRC) receiver, including a derivation and a novel algorithmic approach. The results of the MATLAB simulation are provided as a guide for sensor node deployment that avoids the problem of channel correlation in CRSN-based SGs. SG applications require reliable, efficient, and timely low-latency communication, as well as an adequate sensor node deployment topology, to guarantee QoS. Another important requirement is optimized protocols and algorithms for energy efficiency and cross-layer spectrum awareness that enable opportunistic spectrum access in the CRSN nodes. Consequently, an optimized cross-layer interaction of the physical and MAC layer protocols was developed using various novel algorithms and techniques. This includes a novel energy-efficient distributed heterogeneous clustered spectrum-aware (EDHC-SA) multichannel sensing signal model with a novel algorithm, called the equilateral triangulation algorithm, for guaranteed network connectivity in a CRSN-based SG. Further simulation results confirm that the EDHC-SA CRSN model outperforms a conventional ZigBee WSN in terms of bit error rate (BER), end-to-end delay (latency), and energy consumption, validating the suitability of the developed model for the SG.
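
    As background for the MGF-based error analysis mentioned above, the sketch below reproduces the standard textbook form (for independent diversity branches) of the average square M-QAM symbol error probability with L-branch MRC, together with the per-branch Nakagami-q (Hoyt) MGF of the SNR; the thesis treats the dual correlated case, which modifies the product term and is not reproduced here.

```latex
% Standard MGF-based average SEP of square M-QAM with L-branch MRC
% (independent-branch form; the correlated-branch variant used in the
% thesis replaces the product of per-branch MGFs with a joint MGF).
\[
  \bar{P}_s \;=\;
  \frac{4}{\pi}\Bigl(1-\tfrac{1}{\sqrt{M}}\Bigr)
    \int_{0}^{\pi/2} \prod_{\ell=1}^{L}
      M_{\gamma_\ell}\!\Bigl(-\frac{g}{\sin^{2}\theta}\Bigr)\, d\theta
  \;-\;
  \frac{4}{\pi}\Bigl(1-\tfrac{1}{\sqrt{M}}\Bigr)^{2}
    \int_{0}^{\pi/4} \prod_{\ell=1}^{L}
      M_{\gamma_\ell}\!\Bigl(-\frac{g}{\sin^{2}\theta}\Bigr)\, d\theta,
  \qquad g = \frac{3}{2(M-1)},
\]
% with the per-branch Nakagami-q (Hoyt) MGF of the SNR:
\[
  M_{\gamma_\ell}(s) =
  \Bigl(1 - 2s\bar{\gamma}_\ell
        + \frac{(2s\bar{\gamma}_\ell)^{2}\, q_\ell^{2}}{(1+q_\ell^{2})^{2}}
  \Bigr)^{-1/2}.
\]
```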

    Latency-bandwidth tradeoffs in Internet applications

    Get PDF
    Wide-area Internet links are slow, expensive, and unreliable. This affects applications in two distinct ways. Back-end data processing applications, which need to transfer large amounts of data between data centers across the world, are primarily constrained by the limited capacity of Internet links. Front-end user-facing applications, on the other hand, are primarily latency-sensitive and are bottlenecked by the high, unpredictably variable delays in the wide-area network. Our work exploits this asymmetry in applications' requirements by developing techniques that trade off one of bandwidth or latency to improve the other. We first consider the problem of supporting analytics over the large volumes of geographically dispersed data produced by global-scale organizations. Current solutions for analyzing this data as a whole operate by copying it to a single central data center, an approach that incurs substantial data transfer costs. We instead propose an alternative geo-distributed approach, orchestrating distributed execution across data centers. Our system, Geode, incorporates two key optimizations: a low-level syntactic network redundancy elimination mechanism and a high-level semantically aware workload optimization process. Both operate by trading off increased processing overhead (and computation latency) within data centers for a reduction in cross-data-center bandwidth usage. In experiments we find that Geode achieves up to a 360x cost reduction compared to the current centralized baseline on a range of workloads, both real and synthetic. Next, we evaluate a simple, general-purpose technique for trading off bandwidth for reduced latency: initiate redundant copies of latency-sensitive operations and take the first copy to complete. While redundancy has been explored in some past systems, its use is typically avoided because of a fear of the overhead it adds. We study the latency-bandwidth tradeoff due to redundancy and (i) show via empirical evaluation that its use is indeed a net positive in a number of important applications, and (ii) provide a theoretical characterization of its effect, identifying when it should and should not be used and how systems can tune their use of redundancy to maximum effect. Our results suggest that redundancy should be used much more widely than it currently is.
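
    As a hedged sketch of the redundancy technique described above, the snippet below issues the same latency-sensitive operation to several replicas, returns the first response, and cancels the rest; fetch_from() and the replica names are placeholders.

```python
# Hedged sketch of the "initiate redundant copies and take the first to
# complete" technique discussed above; fetch_from() is a placeholder for any
# latency-sensitive operation (e.g., a DNS lookup or a key-value read).
import asyncio
import random

async def fetch_from(replica: str) -> str:
    # Simulated wide-area request with unpredictable latency.
    await asyncio.sleep(random.uniform(0.01, 0.2))
    return f"response from {replica}"

async def redundant_request(replicas: list[str]) -> str:
    """Issue the same request to every replica, return the first response,
    and cancel the rest (spending extra bandwidth to cut tail latency)."""
    tasks = [asyncio.create_task(fetch_from(r)) for r in replicas]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    return done.pop().result()

print(asyncio.run(redundant_request(["replica-a", "replica-b"])))
```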