13 research outputs found

    Dual Queue Coupled AQM: Deployable Very Low Queuing Delay for All

    On the Internet, sub-millisecond queueing delay and capacity-seeking have traditionally been considered mutually exclusive. We introduce a service that offers both: Low Latency Low Loss Scalable throughput (L4S). When tested under a wide range of conditions emulated on a testbed using real residential broadband equipment, queue delay remained both low (median 100--300 μs) and consistent (99th percentile below 2 ms even under highly dynamic workloads), without compromising other metrics (zero congestion loss and close to full utilization). L4S exploits the properties of 'Scalable' congestion controls (e.g., DCTCP, TCP Prague). Flows using such congestion controls are, however, very aggressive, which poses a deployment challenge because L4S has to coexist with so-called 'Classic' flows (e.g., Reno, CUBIC). This paper introduces an architectural solution: 'Dual Queue Coupled Active Queue Management', which enables balance between Scalable and Classic flows. It counterbalances the more aggressive response of Scalable flows with more aggressive marking, without having to inspect flow identifiers. The Dual Queue structure has been implemented as a Linux queuing discipline. It acts like a semi-permeable membrane, isolating the latency of Scalable and Classic traffic, but coupling their capacity into a single bandwidth pool. This paper justifies the design and implementation choices, and visualizes a representative selection of hundreds of thousands of experiment runs to test our claims. Comment: Preprint. 17 pp, 12 figures, 60 references. Submitted to IEEE/ACM Transactions on Networking.
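The coupling mechanism described in the abstract can be made concrete with a minimal sketch. The snippet below assumes the square-law coupling commonly described for DualQ Coupled AQM (Classic drop probability is the square of a base probability, L4S ECN marking is a linear multiple of it); the coupling gain k and the native L-queue probability p_l_native are illustrative parameters rather than values taken from the paper, and this is Python pseudocode, not the authors' Linux qdisc.

    def dualq_mark_decisions(p_base, k=2.0, p_l_native=0.0):
        """Toy sketch of the DualQ coupling law (assumed from the L4S literature):
        one base AQM probability p_base drives both queues.
        - Classic queue: drop with probability p_base**2 (squared, Reno/CUBIC-friendly)
        - L4S queue: ECN-mark with probability max(k * p_base, p_l_native),
          i.e. the coupled probability or the L queue's own shallow-target marking,
          whichever is larger."""
        p_classic_drop = min(1.0, p_base ** 2)
        p_l4s_mark = min(1.0, max(k * p_base, p_l_native))
        return p_classic_drop, p_l4s_mark

    # Example: a base probability of 10% yields ~1% Classic drop vs ~20% L4S marking,
    # which is how the more aggressive response of Scalable flows is counterbalanced.
    print(dualq_mark_decisions(0.10))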

    Distributed Real-time Systems - Deterministic Protocols for Wireless Networks and Model-Driven Development with SDL

    In a networked system, the communication system is indispensable, but it is often the weakest link with respect to performance and reliability. This holds particularly for wireless communication systems, where the error- and interference-prone medium and the character of the network topologies pose special challenges. However, there are many wireless network scenarios in which a certain quality of service has to be provided despite these conditions. In this regard, distributed real-time systems, which are increasingly realized with wireless multi-hop networks, are a particular challenge. For such systems, it is of crucial importance that communication protocols are deterministic and provide the required degree of efficiency and predictability, while additionally considering the scarce hardware resources that are a major limiting factor of wireless sensor nodes. This, in turn, places demands not only on the behavior of a protocol but also on its implementation, which has to comply with timing and resource constraints.
The first part of this thesis presents a deterministic protocol for wireless multi-hop networks with time-critical behavior. The protocol, referred to as the Arbitrating and Cooperative Transfer Protocol (ACTP), is an instance of a binary countdown protocol. It enables the reliable transfer of bit sequences of adjustable length and deterministically resolves contention among nodes based on a flexible priority assignment, with constant delays, and within configurable arbitration radii. The protocol's key requirement is the collision-resistant encoding of bits, which is achieved by incorporating black bursts. Besides revisiting black bursts and proposing measures to optimize their detection, robustness, and implementation on wireless sensor nodes, the first part of this thesis presents the mode of operation and time behavior of ACTP. In addition, possible applications of ACTP are illustrated, presenting solutions to well-known problems of distributed systems such as leader election and data dissemination. Furthermore, results of experimental evaluations with off-the-shelf wireless transceivers are outlined to provide evidence of the protocol's implementability and benefits.
In the second part of this thesis, the focus shifts from concrete deterministic protocols to their model-driven development with the Specification and Description Language (SDL). Though SDL is well established in the domain of telecommunication and distributed systems, the predictability of its implementations is often insufficient, as previous projects have shown. To increase this predictability and to improve SDL's applicability to time-critical systems, real-time tasks, an established concept in the design of real-time systems, are transferred to SDL and extended to cover node-spanning system tasks. In this regard, a priority-based execution and suspension model is introduced in SDL, which enables task-specific priority assignments in the SDL specification that are orthogonal to the static structure of SDL systems and control transition execution orders at design level as well as at implementation level. Both the formal incorporation of real-time tasks into SDL and their implementation in a novel scheduling strategy are discussed in this context. By means of evaluations on wireless sensor nodes, evidence is provided that these extensions substantially reduce worst-case execution times and improve the predictability of SDL implementations and the language's applicability to real-time systems.
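To make the arbitration idea above tangible, here is a minimal Python sketch of a generic binary countdown round, in which a '1' bit is sent as a dominant black burst and a silent contender that senses a burst withdraws. It illustrates only the general mechanism that ACTP instantiates; the bit width and priority values are illustrative assumptions, and real black-burst timing, arbitration radii and radio details are omitted.

    def binary_countdown_arbitration(priorities, bits=8):
        """Toy simulation of binary countdown arbitration.
        Each contender announces its priority MSB-first. In a bit slot, a '1' is sent
        as a dominant busy tone (a black burst); a contender whose own bit is '0' but
        which senses a burst from someone else withdraws. The highest priority wins."""
        contenders = set(range(len(priorities)))
        for bit in reversed(range(bits)):          # MSB first
            sent = {i for i in contenders if (priorities[i] >> bit) & 1}
            if sent:                               # a dominant burst is on the medium
                contenders = sent                  # silent (recessive) contenders withdraw
        return contenders                          # survivors share the highest priority

    # Example: three nodes contending with priorities 5, 12 and 9 -> the node with 12 wins.
    print(binary_countdown_arbitration([5, 12, 9]))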

    Analysis of discrete-time queueing systems with multidimensional state space


    Sustainable scheduling policies for radio access networks based on LTE technology

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy.
In LTE access networks, Radio Resource Management (RRM) is one of the most important modules, responsible for the overall management of radio resources. The packet scheduler is the sub-module which assigns the existing radio resources to each user in order to deliver the requested services in the most efficient manner. Data packets are scheduled dynamically at every Transmission Time Interval (TTI), a time window used to take the users' requests and to respond to them accordingly. The scheduling procedure is conducted by using scheduling rules which select different users to be scheduled at each TTI based on some priority metrics. Various scheduling rules exist, and they behave differently by balancing the scheduler performance in the direction imposed by one of the following objectives: increasing the system throughput, maintaining user fairness, or respecting the Guaranteed Bit Rate (GBR), Head of Line (HoL) packet delay, packet loss rate and queue stability requirements. Most static scheduling rules follow sequential multi-objective optimization, in the sense that when the first targeted objective is satisfied, other objectives can be prioritized. When the targeted scheduling objective(s) can be satisfied at each TTI, the LTE scheduler is considered optimal or feasible. The scheduling performance therefore depends on the exploited rule and the objectives it focuses on.
This study aims to increase the percentage of feasible TTIs for a given downlink transmission by applying a mixture of scheduling rules instead of using one discipline adopted across the entire scheduling session. Two types of optimization problems are proposed in this sense: Dynamic Scheduling Rule based Sequential Multi-Objective Optimization (DSR-SMOO), when the applied scheduling rules address the same objective, and Dynamic Scheduling Rule based Concurrent Multi-Objective Optimization (DSR-CMOO), if the pool of rules addresses different scheduling objectives. The best way of solving such complex optimization problems is to adapt and refine scheduling policies which are able to call different rules at each TTI based on the best matching scheduler conditions (states). The idea is to develop a set of non-linear functions which map the scheduler state at each TTI into optimal probability distributions for selecting the best scheduling rule. Due to the multi-dimensional and continuous characteristics of the scheduler state space, the scheduling functions must be approximated. Moreover, the function approximations are learned through interaction with the RRM environment. Reinforcement Learning (RL) algorithms are used in this sense to evaluate and refine the scheduling policies for the considered DSR-SMOO/CMOO optimization problems. Neural networks are used to train the non-linear mapping functions based on the interaction among the intelligent controller, the LTE packet scheduler and the RRM environment. In order to enhance convergence to the feasible state and to reduce the dimension of the scheduler state space, meta-heuristic approaches are used for channel statement aggregation. Simulation results show that the proposed aggregation scheme is able to outperform other heuristic methods.
When the aggregation scheme of the channel statements is exploited, the proposed DSR-SMOO/CMOO problems focusing on different objectives, solved by using various RL approaches, are able to: increase the mean percentage of feasible TTIs, minimize the number of TTIs in which the RL approaches punish the actions taken TTI-by-TTI, and minimize the variation of the performance indicators when different simulations are launched in parallel. This way, the obtained scheduling policies focused on the multi-objective criteria are sustainable.
Keywords: LTE, packet scheduling, scheduling rules, multi-objective optimization, reinforcement learning, channel aggregation, scheduling policies, sustainable
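As a rough illustration of the policy idea described above (and not the thesis's actual controller), the following Python sketch maps a scheduler-state feature vector to a softmax distribution over a small pool of scheduling rules and applies a REINFORCE-style update when a TTI turns out feasible. The rule names, the linear function approximation and the reward definition are all assumptions made for the example.

    import numpy as np

    RULES = ["proportional_fair", "max_throughput", "earliest_deadline_first"]  # assumed rule pool

    class RulePolicy:
        def __init__(self, state_dim, n_rules, lr=0.01):
            self.w = np.zeros((n_rules, state_dim))    # linear function approximation
            self.lr = lr

        def probabilities(self, state):
            scores = self.w @ state
            e = np.exp(scores - scores.max())          # numerically stable softmax
            return e / e.sum()

        def select_rule(self, state):
            p = self.probabilities(state)
            return np.random.choice(len(p), p=p), p

        def update(self, state, rule, p, reward):
            grad = -p.reshape(-1, 1) * state           # d log pi / d w rows for non-chosen rules
            grad[rule] += state                        # plus the chosen-rule term
            self.w += self.lr * reward * grad          # REINFORCE step (no baseline)

    # One TTI of the loop: observe an aggregated scheduler state, pick a rule, learn.
    policy = RulePolicy(state_dim=4, n_rules=len(RULES))
    state = np.array([0.3, 0.8, 0.1, 0.5])             # e.g. aggregated CQI/queue features
    rule, p = policy.select_rule(state)
    policy.update(state, rule, p, reward=1.0)          # reward 1 if the TTI was feasible
    print(RULES[rule], p)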

    Abstracting Application Development for Resource Constrained Wireless Sensor Networks

    Ubiquitous computing is a concept whereby computing is distributed across smart objects surrounding users, creating ambient intelligence. Ubiquitous applications use technologies such as the Internet, sensors, actuators, embedded computers, wireless communication, and new user interfaces. The Internet-of-Things (IoT) is one of the key concepts in the realization of ubiquitous computing, whereby smart objects communicate with each other and the Internet. Further, Wireless Sensor Networks (WSNs) are a sub-group of IoT technologies that consist of geographically distributed devices, or nodes, capable of sensing and actuating the environment.
WSNs typically contain tens to thousands of nodes that organize and operate autonomously to perform application-dependent sensing and sensor data processing tasks. The projected applications require nodes to be small in physical size and low-cost, and to have a long lifetime with limited energy resources, while performing complex computing and communications tasks. As a result, WSNs are complex distributed systems that are constrained by communication, computing and energy resources. WSN functionality is dynamic according to the environment and application requirements. Dynamic multitasking, task distribution, task injection, and software updates are required in field experiments for possibly thousands of nodes functioning in harsh environments.
The development of WSN application software requires the abstraction of computing, communication, data access, and heterogeneous sensor data sources to reduce these complexities. Abstractions enable the faster development of new applications with better reuse of existing software, as applications are composed of high-level tasks that use the services provided by the devices to execute the application logic.
The main research question of this thesis is: What abstractions are needed for application development for resource constrained WSNs? This thesis models WSN abstractions in three levels that build on top of each other: 1) node abstraction, 2) network abstraction, and 3) infrastructure abstraction. The node abstraction hides the details of using the sensing, communication, and processing hardware. The network abstraction specifies methods of discovering and accessing services, and of distributing processing in the network. The infrastructure abstraction unifies different sensing technologies and infrastructure computing platforms.
As a contribution, this thesis presents the abstraction model with a review of each abstraction level. Several designs for each of the levels are tested and verified with proofs of concept and analyses of field experiments. The resulting designs consist of an operating system kernel, a software update method, a data unification interface, and an abstraction combining all the levels, called an embedded cloud. The presented operating system kernel has a scalable overhead and provides a programming approach similar to a desktop computer operating system, with threads and processes. An over-the-air update method combines low-overhead and robust software updating with application task dissemination. The data unification interface homogenizes access to the data of heterogeneous sensor networks; a single unification model covers various use cases by mapping everything to measurements. The embedded cloud allows resource constrained WSNs to share services and data, and to expand resources with other technologies. It also allows the distributed processing of applications according to the available services, where applications are implemented as processes in a hardware-independent description language that can be executed on resource constrained WSNs.
The lessons of practical field experimenting are analyzed to study the importance of the abstractions; the software complexities encountered in the field experiments highlight the need for suitable abstractions. The results of this thesis are tested using proof-of-concept implementations on real WSN hardware, which is constrained by computing power in the order of a few MIPS, memory sizes of a few kilobytes, and small batteries. The results will remain usable in the future, as the vast number, tight integration, and low cost of future IoT devices require combining complex computation with resource constrained platforms.
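A possible reading of the "everything is mapped to measurements" unification idea is sketched below in Python. The Measurement record and the unify helper are hypothetical names invented for illustration; the thesis's actual data unification interface may differ.

    from dataclasses import dataclass
    from typing import Any

    @dataclass
    class Measurement:
        # Hypothetical common record: heterogeneous sensor readings, node status
        # values and similar items are all exposed in one form, so higher layers
        # never need to know which sensor technology or node produced a value.
        source: str        # node or infrastructure identifier
        quantity: str      # e.g. "temperature", "battery_voltage", "link_quality"
        value: Any
        unit: str
        timestamp: float   # seconds since epoch

    def unify(raw_readings):
        """Map technology-specific readings into the common Measurement form."""
        return [
            Measurement(source=r["node"], quantity=r["type"], value=r["val"],
                        unit=r.get("unit", ""), timestamp=r["ts"])
            for r in raw_readings
        ]

    readings = [{"node": "wsn-17", "type": "temperature", "val": 21.4, "unit": "C", "ts": 1.0}]
    print(unify(readings))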

    Investigation of EDFA power transients in circuit-switched and packet-switched optical networks

    Erbium-doped fibre amplifiers (EDFAs) are a key technology for the design of all-optical communication systems and networks. The superiority of EDFAs lies in their negligible intermodulation distortion across high-speed multichannel signals, low intrinsic losses, slow gain dynamics, and gain over a wide range of optical wavelengths. Due to the long lifetime of the excited state, EDFAs are unable to oppose the effect of cross-gain saturation; the time characteristics of the gain saturation and recovery effects lie between a few hundred microseconds and 10 milliseconds. However, in wavelength division multiplexed (WDM) optical networks with EDFAs, the number of channels traversing an EDFA can change due to a faulty link or a reconfiguration of the system. It has been found that, due to the variation in channel number along the EDFA chain, the output powers of the surviving channels can change in a very short time. The resulting power transient is therefore one of the problems that deteriorate system performance.
In this thesis, the transient phenomenon in wavelength-routed WDM optical networks with EDFA chains was investigated. The task was performed using different input signal powers for circuit-switched networks. A simulator for the EDFA gain dynamic model was developed to compute the magnitude and speed of the power transients in non-self-saturated EDFAs, both single and chained. The dynamic model of the self-saturated EDFA chain and its simulator were also developed to compute the magnitude and speed of the power transients and the optical signal-to-noise ratio (OSNR). We found that the OSNR transient magnitude and speed are a function of both the output power transient and the number of EDFAs in the chain; the OSNR value predicts the level of quality of service in the related network. It was also found that the power transients for self-saturated and non-self-saturated EDFAs are close in magnitude in the case of gain-saturated EDFA networks.
Moreover, cross-gain saturation also degrades the performance of packet-switched networks due to their varying traffic characteristics, and the magnitude and speed of the output power transients increase along the EDFA chain. An investigation was carried out on asynchronous transfer mode (ATM) and WDM Internet protocol (WDM-IP) traffic networks using different traffic patterns based on the Pareto and Poisson distributions. The simulator was used to examine the magnitude and speed of the power transients in Pareto- and Poisson-distributed traffic at different bit rates, with a specific focus on 2.5 Gb/s. It was found from numerical and statistical analysis that the power swing increases if the time interval of the burst-ON/burst-OFF periods in the packet bursts is long. This is because the gain dynamics are fast during strong or long-duration signal pulses, owing to the stimulated-emission avalanche depletion of the excited ions. An increase in output power level could thus lead to error bursts, which affect system performance.
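To illustrate why a channel drop produces a transient in the surviving channels, here is a deliberately simplified toy model in Python: the saturated gain relaxes with a first-order time constant towards a new, higher steady-state value once most of the input power disappears. The saturation formula, the time constant and all numbers are illustrative assumptions, not the model or parameters used in the thesis.

    import numpy as np

    def surviving_channel_power(t_ms, n_before=8, n_after=2, p_ch_mw=1.0,
                                g0_db=30.0, p_sat_mw=10.0, tau_ms=1.0):
        """Toy EDFA transient: output power of one surviving channel after a drop
        from n_before to n_after channels at t = 0 (all parameters illustrative)."""
        g0 = 10 ** (g0_db / 10)
        def steady_gain(n_ch):
            # Simple saturated-gain model: gain drops as total input power grows.
            return g0 / (1 + n_ch * p_ch_mw / p_sat_mw)
        g_old, g_new = steady_gain(n_before), steady_gain(n_after)
        # First-order relaxation of the gain towards its new steady-state value.
        g = g_new + (g_old - g_new) * np.exp(-t_ms / tau_ms)
        return p_ch_mw * g

    t = np.linspace(0, 5, 6)           # milliseconds after 6 of 8 channels are dropped
    print(surviving_channel_power(t))  # surviving-channel power climbs towards the new level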
