
    Scheduling for next generation WLANs: filling the gap between offered and observed data rates

    In wireless networks, opportunistic scheduling is used to increase system throughput by exploiting multi-user diversity. Although recent advances have increased the physical-layer data rates supported in wireless local area networks (WLANs), the actual throughput realized is significantly lower due to overhead. Accordingly, frame aggregation is used in next generation WLANs to improve efficiency. However, with frame aggregation, traditional opportunistic schemes are no longer optimal. In this paper, we propose schedulers that take queue and channel conditions into account jointly, to maximize the throughput observed at the users in next generation WLANs. We also extend this work to design two schedulers that perform block scheduling to maximize network throughput over multiple transmission sequences. For these schedulers, which make decisions over long time durations, we model the system using queueing theory and determine users' temporal access proportions according to this model. Through detailed simulations, we show that all our proposed algorithms offer significant throughput improvement, better fairness, and much lower delay compared with traditional opportunistic schedulers, facilitating the practical use of the evolving standard for next generation wireless networks.
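The joint queue-and-channel criterion can be sketched as a toy scheduler (all names and numbers below are illustrative, not the paper's actual metric): a purely opportunistic scheduler would rank users by PHY rate alone, whereas capping the score by the backlog prevents a fast user with a nearly empty queue from winning the aggregated transmission opportunity.

```python
def schedule(users, txop_ms=2.0):
    """Pick the user whose aggregated frame would carry the most payload.

    Each user dict holds:
      rate_mbps - current PHY data rate (channel state)
      queue_kb  - backlog available for aggregation (queue state)
    """
    def payload_kb(u):
        # Mbps * ms = kilobits; divide by 8 for kilobytes,
        # then cap by the queued backlog.
        cap_kb = u["rate_mbps"] * txop_ms / 8.0
        return min(cap_kb, u["queue_kb"])
    return max(users, key=payload_kb)

users = [
    {"name": "fast_but_empty", "rate_mbps": 300.0, "queue_kb": 4.0},
    {"name": "slow_but_full", "rate_mbps": 60.0, "queue_kb": 500.0},
]
winner = schedule(users)  # the slower user with a full backlog wins
```

With a short transmission opportunity the cap rarely binds and the rule degenerates to plain opportunistic scheduling, which is why the decision depends on both queue and channel.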

    An Analytical Solution for Probabilistic Guarantees of Reservation Based Soft Real-Time Systems

    We present a methodology for computing the probability of a deadline miss for a periodic real-time task scheduled by a resource reservation algorithm. We propose a modelling technique that reduces this computation to finding the steady-state probability of an infinite-state Discrete Time Markov Chain with a periodic structure. This structure is exploited to develop an efficient numeric solution in which different accuracy/computation-time trade-offs can be obtained by operating on the granularity of the model. More importantly, we offer a closed-form conservative bound for the probability of a deadline miss. Our experiments reveal that the bound remains reasonably close to the experimental probability in a real-time application of practical interest. When this bound is used to optimise the overall Quality of Service for a set of tasks sharing the CPU, it produces a good sub-optimal solution in a small amount of time.
    Comment: IEEE Transactions on Parallel and Distributed Systems, Volume 27, Issue 3, March 201
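As a minimal numeric illustration of the reduction, the stationary distribution of a small finite chain (and hence a deadline-miss probability, read off a designated "miss" state) can be obtained by power iteration. The transition probabilities below are invented for illustration; the paper's chain is infinite and periodic, which is what makes its efficient solution non-trivial.

```python
def steady_state(P, iters=10_000):
    """Power iteration for the stationary distribution of a small,
    finite, row-stochastic transition matrix P (toy chain only)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Toy 3-state backlog chain: state 2 represents a missed deadline.
P = [
    [0.7, 0.3, 0.0],
    [0.5, 0.3, 0.2],
    [0.4, 0.4, 0.2],
]
pi = steady_state(P)
p_miss = pi[2]  # steady-state probability of being in the miss state
```

For this matrix the exact stationary distribution is (8/13, 4/13, 1/13), so the toy deadline-miss probability is 1/13.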

    Autonomous Algorithms for Centralized and Distributed Interference Coordination: A Virtual Layer Based Approach

    Interference mitigation techniques are essential for improving the performance of interference-limited wireless networks. In this paper, we introduce novel interference mitigation schemes for wireless cellular networks with space division multiple access (SDMA). The schemes are based on a virtual layer that captures and simplifies the complicated interference situation in the network and is used for power control. We show how optimization in this virtual layer generates gradually adapting power control settings that lead to autonomous interference minimization. The granularity of control ranges from frequency sub-band power, through per-beam power, down to merely enforcing average power constraints per beam. In conjunction with suitable short-term scheduling, our algorithms gradually steer the network towards higher utility. We use extensive system-level simulations to compare three distributed algorithms and evaluate their applicability under different user-mobility assumptions. In particular, it turns out that larger gains can be achieved by imposing average power constraints and allowing instantaneous opportunistic scheduling, rather than controlling the power strictly. Furthermore, we introduce a centralized algorithm, which directly solves the underlying optimization and shows fast convergence, as a performance benchmark for the distributed solutions. Moreover, we investigate the deviation from global optimality by comparing against a branch-and-bound-based solution.
    Comment: revised versio
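For flavour, the classic Foschini-Miljanic distributed power-control update shows how purely local iterations can gradually steer link powers to a stable operating point. Note this is an analogy, not the paper's algorithm: the virtual-layer scheme optimizes a utility-based objective rather than fixed SINR targets, and all gains below are invented.

```python
def update_powers(p, gains, noise, target_sinr, steps=200):
    """Foschini-Miljanic update: each link i scales its transmit power
    by target_sinr / current_sinr, using only locally measurable
    quantities. gains[j][i] is the gain from transmitter j to
    receiver i (illustrative values)."""
    n = len(p)
    for _ in range(steps):
        nxt = []
        for i in range(n):
            interference = noise + sum(
                gains[j][i] * p[j] for j in range(n) if j != i)
            sinr = gains[i][i] * p[i] / interference
            nxt.append(p[i] * target_sinr / sinr)
        p = nxt
    return p

# Two symmetric links with weak cross-gain; a target SINR of 2.0
# is feasible, so the iteration converges to the minimal powers.
powers = update_powers([1.0, 1.0], [[1.0, 0.1], [0.1, 1.0]], 0.1, 2.0)
```

When the target is feasible, the update is a contraction and converges to the component-wise minimal power vector meeting the target, which is the same "gradual, autonomous adaptation" character the abstract describes.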

    Dynamic, Latency-Optimal vNF Placement at the Network Edge

    Future networks are expected to support low-latency, context-aware and user-specific services in a highly flexible and efficient manner. One approach to supporting emerging use cases such as virtual reality and in-network image processing is to introduce virtualized network functions (vNFs) at the edge of the network, placed in close proximity to the end users to reduce end-to-end latency, time-to-response, and unnecessary utilisation of the core network. While placement of vNFs has been studied before, prior work has mostly focused on reducing the utilisation of server resources (i.e., minimising the number of servers required in the network to run a specific set of vNFs), without taking network conditions into consideration, such as end-to-end latency, constantly changing network dynamics, or user mobility patterns. In this paper, we formulate the Edge vNF placement problem to allocate vNFs to a distributed edge infrastructure, minimising end-to-end latency from all users to their associated vNFs. We present a way to dynamically re-schedule the optimal placement of vNFs based on temporal network-wide latency fluctuations, using optimal stopping theory. We then evaluate our dynamic scheduler over a simulated nation-wide backbone network using real-world ISP latency characteristics. We show that our proposed dynamic placement scheduler minimises vNF migrations compared with other schedulers (e.g., periodic and always-on scheduling of a new placement), and offers Quality of Service guarantees by not exceeding a maximum number of latency violations that can be tolerated by certain applications.
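A heavily simplified threshold rule in the spirit of the migrate-or-wait decision (all parameters hypothetical; the paper derives its stopping rule from a model of the latency process, not this heuristic): migrate only when the saving expected over a planning horizon outweighs the one-off cost of a vNF migration.

```python
def should_migrate(latency_history_ms, current_ms, migrate_cost_ms,
                   horizon=10):
    """Toy stop/continue decision: compare the latency saving expected
    over `horizon` future intervals against the one-off migration cost.

    latency_history_ms - recently observed latencies at candidate sites
    current_ms         - latency of the currently chosen placement
    migrate_cost_ms    - illustrative cost of moving the vNF
    """
    best_alternative = min(latency_history_ms)
    expected_saving = (current_ms - best_alternative) * horizon
    return expected_saving > migrate_cost_ms
```

The point of a stopping rule here is exactly what the abstract claims empirically: reacting to every fluctuation (always-on re-placement) triggers many migrations, while a cost-aware threshold suppresses the unprofitable ones.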

    Radio Resource Management Optimization For Next Generation Wireless Networks

    The prominent versatility of today’s mobile broadband services and the rapid advancements in the cellular phone industry have led to a tremendous expansion in the wireless market volume. Despite the continuous progress in radio-access technologies to cope with that expansion, many challenges remain to be addressed by both the research and industrial sectors. One of them is the efficient allocation and management of wireless network resources when using the latest cellular radio technologies (e.g., 4G). The importance of the problem stems from the scarcity of wireless spectral resources, the large number of users sharing these resources, the dynamic behavior of generated traffic, and the stochastic nature of wireless channels. These limitations are further tightened by the provider’s commitment to high quality-of-service (QoS) levels, especially data rate, delay, and delay jitter, as well as to the system’s spectral and energy efficiency. In this dissertation, we strive to solve this problem by presenting novel cross-layer resource allocation schemes that address the efficient utilization of available resources under QoS constraints, using various optimization techniques. The main objective of this dissertation is to propose a new predictive resource allocation methodology using an agile ray tracing (RT) channel prediction approach. It is divided into two parts. The first part deals with the theoretical and implementation aspects of the ray tracing prediction model and its validation. In the second part, a novel RT-based scheduling system within the evolving cloud radio access network (C-RAN) architecture is proposed. The impact of the proposed model on addressing the limitations of long term evolution (LTE) networks is then rigorously investigated in the form of optimization problems.
The main contributions of this dissertation encompass the design of several heuristic solutions based on our novel RT-based scheduling model, developed to meet the aforementioned objectives while considering the co-existing limitations in the context of LTE networks. Both analytical and numerical methods are used within this thesis framework. Theoretical results are validated with numerical simulations. The obtained results demonstrate the effectiveness of our proposed solutions in meeting the objectives, subject to limitations and constraints, compared with other published works.

    Resource Allocation in Uplink Long Term Evolution

    One of the most crucial goals of future cellular systems is to minimize transmission power while increasing system performance. This master thesis presents two channel- and queue-aware scheduling schemes to allocate channels among active users in uplink LTE. Transmission power, packet delays and data rates are three of the most important criteria affecting resource allocation designs. Therefore, each of the two scheduling algorithms proposes a practical method that assigns resources so as to maximize data rate and minimize transmission power and packet delays while ensuring the QoS requirements. After converting the resource allocation problem into an optimization problem, the objective function and associated constraints are derived. Because of the contiguity constraint imposed by SC-FDMA in uplink LTE, binary integer programming is employed to solve the optimization problem. Heuristic algorithms that approximate the optimal schemes are also presented to reduce the computational complexity.
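The SC-FDMA contiguity constraint can be illustrated with a tiny exhaustive search over contiguous, in-order resource-block allocations (the thesis solves this with binary integer programming; the brute-force sketch below, with invented per-block rates and a fixed user ordering, is only for intuition):

```python
from itertools import combinations_with_replacement

def best_contiguous(rates, n_rb):
    """Split n_rb resource blocks among len(rates) users, in order,
    into contiguous (possibly empty) chunks, maximising total rate.
    rates[u][b] is user u's achievable rate on block b."""
    n_users = len(rates)
    best_rate, best_bounds = -1.0, None
    # Choose n_users - 1 non-decreasing cut points in [0, n_rb];
    # bounds[u]..bounds[u+1] is user u's contiguous chunk.
    for cuts in combinations_with_replacement(range(n_rb + 1), n_users - 1):
        bounds = (0,) + cuts + (n_rb,)
        total = sum(
            sum(rates[u][b] for b in range(bounds[u], bounds[u + 1]))
            for u in range(n_users)
        )
        if total > best_rate:
            best_rate, best_bounds = total, bounds
    return best_rate, best_bounds

# Two users, four blocks: user 0 is strong on blocks 0-1,
# user 1 on blocks 2-3, so the best cut is at block 2.
rates = [[3, 3, 1, 1], [1, 1, 3, 3]]
best_rate, bounds = best_contiguous(rates, 4)
```

Enumeration is exponential in the number of users, which is why the thesis resorts to binary integer programming and heuristics for realistic problem sizes.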