    Scheduling for next generation WLANs: filling the gap between offered and observed data rates

    In wireless networks, opportunistic scheduling is used to increase system throughput by exploiting multi-user diversity. Although recent advances have increased the physical layer data rates supported in wireless local area networks (WLANs), the actual throughput realized is significantly lower due to overhead. Accordingly, the frame aggregation concept is used in next generation WLANs to improve efficiency. However, with frame aggregation, traditional opportunistic schemes are no longer optimal. In this paper, we propose schedulers that jointly take queue and channel conditions into account to maximize the throughput observed at the users of next generation WLANs. We also extend this work to design two schedulers that perform block scheduling to maximize network throughput over multiple transmission sequences. For these schedulers, which make decisions over long time durations, we model the system using queueing theory and determine users' temporal access proportions according to this model. Through detailed simulations, we show that all our proposed algorithms offer significant throughput improvement, better fairness, and much lower delay compared with traditional opportunistic schedulers, facilitating the practical use of the evolving standard for next generation wireless networks.
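
    As an illustration of the joint queue- and channel-aware selection described above, the following Python sketch ranks users by the throughput an aggregated transmission would actually achieve once a fixed per-transmission overhead is counted. The aggregation limit, overhead value, and the specific metric are assumptions made for illustration, not the schedulers proposed in the paper.

```python
# Illustrative joint queue- and channel-aware scheduler for a frame-aggregating
# WLAN. The aggregation limit, overhead and metric are assumptions, not the
# schedulers proposed in the paper.

MAX_AGGREGATE_BYTES = 65_535   # A-MPDU-style aggregation limit (assumed)
FIXED_OVERHEAD_S = 100e-6      # per-transmission overhead: preamble, IFS, ACK (assumed)

def effective_throughput(queue_bytes, phy_rate_bps):
    """Bits delivered per second of airtime, counting the fixed overhead.

    A user with a fast channel but an almost empty queue cannot amortise the
    overhead, so a slower user with a full aggregate may be the better choice.
    """
    burst = min(queue_bytes, MAX_AGGREGATE_BYTES)
    if burst == 0:
        return 0.0
    airtime = FIXED_OVERHEAD_S + burst * 8 / phy_rate_bps
    return burst * 8 / airtime

def select_user(users):
    """Pick the user whose aggregate maximises the observed throughput."""
    return max(users, key=lambda u: effective_throughput(u['queue_bytes'],
                                                         u['phy_rate_bps']))

users = [
    {'id': 'A', 'queue_bytes': 500,    'phy_rate_bps': 600e6},  # fast link, little data
    {'id': 'B', 'queue_bytes': 60_000, 'phy_rate_bps': 150e6},  # slow link, full queue
]
print(select_user(users)['id'])  # 'B': a channel-only scheduler would have picked 'A'
```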

    Simulation of Mixed Critical In-vehicular Networks

    Future automotive applications, ranging from advanced driver assistance to autonomous driving, will greatly increase the demands on in-vehicular networks. Data flows with high bandwidth or low latency requirements, and in particular many additional communication relations, will introduce a new level of complexity to the in-car communication system. It is expected that future communication backbones, which interconnect sensors, actuators, and ECUs in cars, will be built on Ethernet technologies. However, signalling from different application domains demands network services with tailored attributes, including the real-time transmission protocols defined in the TSN Ethernet extensions. These QoS constraints will increase network complexity even further. Event-based simulation is a key technology for mastering the challenges of in-car network design. This chapter introduces the domain-specific aspects and simulation models for in-vehicular networks and presents an overview of the car-centric network design process. Starting from a domain-specific description language, we cover the corresponding simulation models with their workflows and apply our approach to a related case study for an in-car network of a premium car.
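
    To make "event-based simulation" of a mixed-critical in-car network concrete, here is a toy discrete-event model of a single switched Ethernet backbone link where periodic control frames get strict (non-preemptive) priority over bulk camera-like traffic. The link rate, frame sizes, periods and scheduling rule are illustrative assumptions, not the chapter's simulation models or description language.

```python
import heapq
import random

# Toy discrete-event simulation of one Ethernet backbone link with a
# mixed-critical load: periodic control frames get strict priority over
# bulk traffic. All rates, sizes and periods are illustrative assumptions.

LINK_BPS = 1e9        # assumed 1 Gbit/s backbone link
SIM_TIME = 0.05       # simulate 50 ms

def arrivals():
    """Generate (time, priority, bits) arrival events for two traffic classes."""
    events = []
    t = 0.0
    while t < SIM_TIME:                        # critical control frames, 500 us period
        events.append((t, 0, 128 * 8))
        t += 500e-6
    random.seed(1)
    t = 0.0
    while t < SIM_TIME:                        # best-effort bulk frames, ~20 us spacing
        t += random.expovariate(1 / 20e-6)
        events.append((t, 1, 1500 * 8))
    return sorted(events)

def simulate():
    pending, waiting = arrivals(), []
    delays = {0: [], 1: []}
    now = link_free_at = 0.0
    i = 0
    while i < len(pending) or waiting:
        # Admit every frame that has arrived by 'now' into the priority queue.
        while i < len(pending) and pending[i][0] <= now:
            arr, prio, bits = pending[i]
            heapq.heappush(waiting, (prio, arr, bits))
            i += 1
        if not waiting:                        # link idle: jump to the next arrival
            now = pending[i][0]
            continue
        prio, arr, bits = heapq.heappop(waiting)
        link_free_at = max(now, link_free_at) + bits / LINK_BPS
        delays[prio].append(link_free_at - arr)
        now = link_free_at
    for prio, name in ((0, "critical"), (1, "best-effort")):
        print(f"{name}: {len(delays[prio])} frames, "
              f"max latency {max(delays[prio]) * 1e6:.1f} us")

simulate()
```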

    Data Aggregation and Packet Bundling of Uplink Small Packets for Monitoring Applications in LTE

    In cellular massive Machine-Type Communications (MTC), a device can transmit directly to the base station (BS) or through an aggregator (intermediate node). While direct device-BS communication has recently been the focus of 5G/3GPP research and standardization efforts, the use of aggregators remains a less explored topic. In this paper, we analyze the deployment scenarios in which aggregators can perform cellular access on behalf of multiple MTC devices. We study the effect of packet bundling at the aggregator, which alleviates overhead and resource waste when sending small packets. The aggregators give rise to a tradeoff between access congestion and resource starvation, and we show that packet bundling can minimize resource starvation, especially for smaller numbers of aggregators. Under the limitations of the considered model, we investigate the optimal settings of the network parameters, in terms of number of aggregators and packet-bundle size. Our results show that, in general, data aggregation can benefit the uplink massive MTC in LTE, by reducing the signalling overhead. Comment: to appear in IEEE Network
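
    The congestion-versus-starvation trade-off can be sketched with a back-of-the-envelope count: fewer aggregators and larger bundles mean fewer random-access attempts, and bundling amortises the per-transmission header over several small payloads. The device count, payload and header sizes, resource-unit granularity and counting rule below are illustrative assumptions, not the model analysed in the paper.

```python
import math

# Back-of-the-envelope view of uplink aggregation with packet bundling.
# All sizes and counts are illustrative assumptions.

def uplink_cost(n_devices, n_aggregators, bundle_size,
                payload_bytes=20, header_bytes=40, ru_bytes=100):
    """Return (access_attempts, resource_units) per reporting round.

    Fewer aggregators and larger bundles mean fewer random-access attempts
    (less congestion); bundling also amortises the per-transmission header,
    so fewer uplink resource units are wasted on small packets.
    """
    packets_per_aggregator = math.ceil(n_devices / n_aggregators)
    bundles_per_aggregator = math.ceil(packets_per_aggregator / bundle_size)
    access_attempts = n_aggregators * bundles_per_aggregator
    bytes_per_bundle = header_bytes + bundle_size * payload_bytes
    resource_units = access_attempts * math.ceil(bytes_per_bundle / ru_bytes)
    return access_attempts, resource_units

for bundle_size in (1, 4, 16):
    attempts, rus = uplink_cost(n_devices=10_000, n_aggregators=50,
                                bundle_size=bundle_size)
    print(f"bundle size {bundle_size:2d}: {attempts} access attempts, {rus} resource units")
```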

    Optical Network Virtualisation using Multi-technology Monitoring and SDN-enabled Optical Transceiver

    We introduce real-time multi-technology transport layer monitoring to facilitate the coordinated virtualisation of optical and Ethernet networks supported by optical virtualise-able transceivers (V-BVT). A monitoring and network resource configuration scheme is proposed that includes hardware monitoring in both the Ethernet and optical layers. The scheme depicts the data and control interactions among multiple network layers under a software defined networking (SDN) framework, as well as the application that analyses the monitored data obtained from the database. We also present a re-configuration algorithm that adaptively modifies the composition of virtual optical networks based on two criteria. The proposed monitoring scheme is experimentally demonstrated with OpenFlow (OF) extensions for a holistic (re-)configuration across both layers in Ethernet switches and V-BVTs.

    Resource dimensioning through buffer sampling

    Link dimensioning, i.e., selecting a (minimal) link capacity such that the users’ performance requirements are met, is a crucial component of network design. It requires insight into the interrelationship between the traffic offered (in terms of the mean offered load M, but also its fluctuation around the mean, i.e., ‘burstiness’), the envisioned performance level, and the capacity needed. We first derive, for different performance criteria, theoretical dimensioning formulae that estimate the required capacity C as a function of the input traffic and the performance target. For the special case of Gaussian input traffic these formulae reduce to C = M + αV, where α directly relates to the performance requirement (as agreed upon in a service level agreement) and V reflects the burstiness (at the timescale of interest). We also observe that Gaussianity applies for virtually all realistic scenarios; notably, already for a relatively low aggregation level the Gaussianity assumption is justified. As estimating M is relatively straightforward, the remaining open issue concerns the estimation of V. We argue that, particularly if V corresponds to small time-scales, it may be inaccurate to estimate it directly from the traffic traces. Therefore, we propose an indirect method that samples the buffer content, estimates the buffer content distribution, and ‘inverts’ this to the variance. We validate the inversion through extensive numerical experiments (using a sizeable collection of traffic traces from various representative locations); the resulting estimate of V is then inserted in the dimensioning formula. These experiments show that both the inversion and the dimensioning formula are remarkably accurate.
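
    Once M, V and the performance target are known, the dimensioning rule C = M + αV is straightforward to apply. The sketch below treats V as the traffic's standard deviation at the timescale of interest and takes α as the standard-normal quantile for the target exceedance probability; both choices, and all numeric values, are assumptions for illustration rather than the paper's exact definitions.

```python
from statistics import NormalDist

def required_capacity(mean_load, burstiness, epsilon):
    """Dimensioning rule C = M + alpha * V from the abstract.

    Assumption for this sketch: V is the standard deviation of the offered
    traffic at the timescale of interest, and alpha is the standard-normal
    quantile for the target exceedance probability epsilon, so that for
    Gaussian traffic P(offered traffic > C) <= epsilon.
    """
    alpha = NormalDist().inv_cdf(1 - epsilon)
    return mean_load + alpha * burstiness

# Illustrative numbers only: 400 Mbit/s mean load, 50 Mbit/s of burstiness,
# and a 0.1% exceedance target give roughly 555 Mbit/s of required capacity.
print(required_capacity(mean_load=400.0, burstiness=50.0, epsilon=1e-3))
```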

    Enabling RAN Slicing Through Carrier Aggregation in mmWave Cellular Networks

    The ever-increasing number of connected devices and of new and heterogeneous mobile use cases implies that 5G cellular systems will face demanding technical challenges. For example, Ultra-Reliable Low-Latency Communication (URLLC) and enhanced Mobile Broadband (eMBB) scenarios present orthogonal Quality of Service (QoS) requirements that 5G aims to satisfy with a unified Radio Access Network (RAN) design. Network slicing and mmWave communications have been identified as possible enablers for 5G. They provide, respectively, the necessary scalability and flexibility to adapt the network to each specific use case environment, and low latency and multi-gigabit-per-second wireless links, which tap into a vast, currently unused portion of the spectrum. The optimization and integration of these technologies is still an open research challenge, which requires innovations at different layers of the protocol stack. This paper proposes to combine them in a RAN slicing framework for mmWaves, based on carrier aggregation. Notably, we introduce MilliSlice, a cross-carrier scheduling policy that exploits the diversity of the carriers and maximizes their utilization, thus simultaneously guaranteeing high throughput for the eMBB slices and low latency and high reliability for the URLLC flows. Comment: 8 pages, 8 figures. Proc. of the 18th Mediterranean Communication and Computer Networking Conference (MedComNet 2020), Arona, Italy, 2020
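
    The abstract does not spell out the MilliSlice policy, so the following sketch only illustrates the general idea of cross-carrier slice scheduling over aggregated component carriers: URLLC flows are placed first on the carrier with the most spare capacity to keep their queueing delay low, and eMBB flows then fill the remaining capacity. Carrier names, capacities, demands and the allocation rule are hypothetical.

```python
# Hypothetical illustration of cross-carrier slice scheduling over aggregated
# mmWave component carriers. Carrier names, capacities, demands and the
# allocation rule are assumptions, not the MilliSlice policy itself.

def schedule(carriers, flows):
    """Place URLLC flows first on the carrier with the most spare capacity
    (to keep their queueing delay low), then fill the rest with eMBB."""
    allocation = {c: [] for c in carriers}
    load = {c: 0.0 for c in carriers}
    ordered = ([f for f in flows if f['slice'] == 'URLLC'] +
               [f for f in flows if f['slice'] == 'eMBB'])
    for f in ordered:
        best = max(carriers, key=lambda c: carriers[c] - load[c])
        if carriers[best] - load[best] >= f['demand_gbps']:
            allocation[best].append(f['id'])
            load[best] += f['demand_gbps']
    return allocation

carriers = {'CC0': 2.0, 'CC1': 2.0, 'CC2': 2.0}   # Gbit/s per component carrier
flows = [
    {'id': 'urllc-1', 'slice': 'URLLC', 'demand_gbps': 0.1},
    {'id': 'embb-1',  'slice': 'eMBB',  'demand_gbps': 1.5},
    {'id': 'embb-2',  'slice': 'eMBB',  'demand_gbps': 1.5},
]
print(schedule(carriers, flows))  # URLLC lands on CC0; eMBB spreads over CC1 and CC2
```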