
    Marginal Productivity Indices and Linear Programming Relaxations for Dynamic Resource Allocation in Queueing Systems

    Many problems concerning resource management in modern communication systems can be reduced to queueing models under Markovian assumptions. The computation of the optimal policy is, however, often hindered by the curse of dimensionality, especially for models that support multiple traffic or job classes. The research focus therefore naturally turns to computationally efficient bounds and high-performance heuristics. In this thesis, we apply indexability theory to the study of admission control of a single-server queue and to the buffer sharing problem for a multi-class queueing system. Our main contributions are the following: we derive the Marginal Productivity Index (MPI) and give a sufficient indexability condition for the admission control model by viewing the buffer as the resource; and we construct hierarchical Linear Programming (LP) relaxations for the buffer sharing problem and propose an MPI-based heuristic whose performance is evaluated by discrete-event simulation. In our study, the admission control model is used as the building block for the MPI heuristic deployed for the buffer sharing problem. Our condition for indexability only requires that the reward function is concave-like, and we give an explicit non-recursive expression for the MPI calculation. We compare with the previously known indexability condition and MPI for the admission control model that penalizes the rejection action. The study of hierarchical LP relaxations for the buffer sharing problem is based on the exact but intractable LP formulation of the continuous-time Markov Decision Process (MDP). The number of hierarchy levels equals the number of job classes; the last level in the hierarchy is exact and corresponds to the exponentially sized LP formulation of the MDP. The first-order relaxation is obtained by relaxing the constraint that no buffer overflow may occur in any sample path to the constraint that the average buffer utilization does not exceed the available capacity. Based on the Lagrangian decomposition of the first-order relaxation, we propose a heuristic policy built on the concept of the MPI. Each of the decomposed subproblems corresponds to the admission control model described above, and the subproblems are linked through the Lagrange multiplier for the relaxed buffer size constraint in the first-order relaxation. Our simulation study indicates near-optimal performance of the heuristic in the (randomly generated) instances investigated.
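
    To give a feel for how such an MPI-based heuristic operates, here is a minimal sketch; the reward structure, the rates and the finite-difference index surrogate below are illustrative assumptions, not the thesis's closed-form MPI. Each class is modeled as an M/M/1/k admission-control queue, the index of a buffer slot is the marginal gain in long-run average reward from granting it, and a shared buffer is handed out greedily in decreasing index order.

```python
# Hypothetical illustration (not the thesis's exact MPI formula): approximate the
# marginal productivity of the k-th buffer slot in an M/M/1/k admission-control
# queue, then allocate a shared buffer across classes greedily by index.

def mm1k_stationary(rho, K):
    """Stationary queue-length distribution of an M/M/1/K queue."""
    if abs(rho - 1.0) < 1e-12:
        return [1.0 / (K + 1)] * (K + 1)
    norm = (1 - rho ** (K + 1)) / (1 - rho)
    return [rho ** n / norm for n in range(K + 1)]

def average_gain(lam, mu, K, reward, hold_cost):
    """Long-run average reward: paid per accepted job, minus a linear holding cost."""
    if K == 0:
        return 0.0
    pi = mm1k_stationary(lam / mu, K)
    throughput = lam * (1 - pi[K])            # jobs accepted per unit time
    mean_queue = sum(n * p for n, p in enumerate(pi))
    return reward * throughput - hold_cost * mean_queue

def marginal_index(lam, mu, K, reward, hold_cost):
    """MPI surrogate: the value of granting the K-th buffer slot."""
    return (average_gain(lam, mu, K, reward, hold_cost)
            - average_gain(lam, mu, K - 1, reward, hold_cost))

def share_buffer(classes, total_slots):
    """Greedy buffer sharing: hand out slots in decreasing marginal-index order."""
    alloc = [0] * len(classes)
    for _ in range(total_slots):
        best, best_idx = None, 0.0
        for i, (lam, mu, r, c) in enumerate(classes):
            idx = marginal_index(lam, mu, alloc[i] + 1, r, c)
            if best is None or idx > best_idx:
                best, best_idx = i, idx
        if best_idx <= 0:        # no remaining slot is worth granting
            break
        alloc[best] += 1
    return alloc

# Two job classes (lambda, mu, reward, holding cost) sharing 10 buffer slots.
print(share_buffer([(0.8, 1.0, 5.0, 1.0), (0.5, 1.0, 3.0, 0.5)], 10))
```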

    Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions

    The ever-increasing number of resource-constrained Machine-Type Communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among the application scenarios that the upcoming 5G and beyond cellular networks are expected to support, namely enhanced Mobile Broadband (eMBB), massive Machine-Type Communications (mMTC) and Ultra-Reliable Low-Latency Communications (URLLC), mMTC brings the unique technical challenge of supporting a huge number of MTC devices, which is the main focus of this paper. The related challenges include QoS provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead and Radio Access Network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. Along with highlighting the inefficiency of the legacy Random Access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms in the emerging cellular IoT standards, namely LTE-M and NB-IoT. Subsequently, we present a framework for the performance analysis of transmission scheduling with QoS support, along with the issues involved in short data packet transmission. Next, we provide a detailed overview of the existing and emerging solutions towards addressing the RAN congestion problem, and then identify potential advantages, challenges and use cases for the application of emerging Machine Learning (ML) techniques in ultra-dense cellular networks. Out of several ML techniques, we focus on the application of a low-complexity Q-learning approach in mMTC scenarios. Finally, we discuss some open research challenges and promising future research directions. (37 pages, 8 figures, 7 tables; submitted for possible publication in IEEE Communications Surveys and Tutorials.)
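
    As a toy illustration of the kind of low-complexity Q-learning discussed for the mMTC random access scenario (the device count, preamble count, learning rate and reward scheme below are assumptions, not the paper's setup): each device runs an independent, bandit-style Q-learning rule over the RACH preambles, so that collisions thin out as devices settle on distinct choices.

```python
# Minimal sketch of distributed Q-learning for random access (assumed parameters):
# each device keeps one Q-value per RACH preamble and learns, without any
# coordination, to transmit on a preamble that no other device uses.
import random
from collections import Counter

N_DEVICES, N_PREAMBLES = 20, 20
ALPHA, EPSILON, ROUNDS = 0.1, 0.1, 2000

Q = [[0.0] * N_PREAMBLES for _ in range(N_DEVICES)]

def pick(q):
    """Epsilon-greedy preamble selection for one device."""
    if random.random() < EPSILON:
        return random.randrange(N_PREAMBLES)
    return max(range(N_PREAMBLES), key=q.__getitem__)

for _ in range(ROUNDS):
    choices = [pick(Q[d]) for d in range(N_DEVICES)]
    load = Counter(choices)
    for d, p in enumerate(choices):
        reward = 1.0 if load[p] == 1 else -1.0   # success only without collision
        Q[d][p] += ALPHA * (reward - Q[d][p])     # stateless (bandit-style) update

successes = sum(1 for p in choices if load[p] == 1)
print(f"devices collision-free in final round: {successes}/{N_DEVICES}")
```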

    A simulation-based algorithm for solving the resource-assignment problem in satellite telecommunication networks

    This paper proposes a heuristic for the scheduling of capacity requests and the periodic assignment of radio resources in geostationary (GEO) satellite networks with star topology, using the Demand Assigned Multiple Access (DAMA) protocol in the link layer, and Multi-Frequency Time Division Multiple Access (MF-TDMA) and Adaptive Coding and Modulation (ACM) in the physical layer.
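
    The following sketch shows a hypothetical first-fit heuristic in the same spirit, not the paper's algorithm; the frame geometry and symbol counts are assumed, and real MF-TDMA constraints (for example, that a terminal transmits on only one carrier at a time) are ignored. Each terminal's ACM operating point fixes how many bits one timeslot carries, and capacity requests are packed into the frame largest-first.

```python
# Illustrative first-fit DAMA assignment on an MF-TDMA frame (assumed geometry):
# a terminal's ACM spectral efficiency converts its bit request into timeslots,
# and requests are served in decreasing slot demand to pack the frame tightly.
from math import ceil

N_CARRIERS, SLOTS_PER_CARRIER = 4, 16   # hypothetical frame geometry
SLOT_SYMBOLS = 1000                      # symbols per timeslot (assumed)

# (terminal, requested_bits, ACM spectral efficiency in bits/symbol)
requests = [("t1", 12000, 3.0), ("t2", 9000, 1.5), ("t3", 20000, 2.0)]

free = [SLOTS_PER_CARRIER] * N_CARRIERS          # remaining slots per carrier
plan = {}                                        # terminal -> [(carrier, slots)]

for term, bits, eff in sorted(requests, key=lambda r: -r[1] / r[2]):
    need = ceil(bits / (eff * SLOT_SYMBOLS))     # timeslots this request needs
    grants = []
    for c in range(N_CARRIERS):                  # first-fit over carriers
        take = min(need, free[c])
        if take:
            free[c] -= take
            grants.append((c, take))
            need -= take
        if need == 0:
            break
    plan[term] = grants                          # may be partial if frame is full

print(plan, "free slots per carrier:", free)
```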

    Delay Bound: Fractal Traffic Passes through Network Servers

    Delay analysis plays an important role in real-time systems in computer communication networks. This paper presents our results on the delay analysis of fractal traffic passing through servers, and makes three contributions. First, we explain why the conventional theory of queueing systems ceases to apply in the general sense when the arrival traffic is fractal. Then, we propose a concise method of delay computation for hard real-time systems. Finally, we present the delay computation for fractal traffic passing through servers.
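
    To convey the style of such a delay bound, here is a sketch in the network-calculus spirit; the envelope form, its parameters and the numbers below are assumptions for illustration, not the paper's exact method. If the fractal arrivals are bounded by a self-similar envelope A(t) = lam*t + k*sigma*t^H with Hurst parameter H, then a work-conserving server of constant rate C > lam yields the FIFO delay bound D = sup over t >= 0 of (A(t) - C*t)/C.

```python
# Hedged network-calculus-style bound (assumed envelope, not the paper's result):
# numerically maximize the backlog envelope A(t) - C*t and drain it at rate C.

def delay_bound(lam, sigma, k, H, C, horizon=1_000.0, steps=100_000):
    """FIFO delay bound for envelope A(t) = lam*t + k*sigma*t**H at capacity C."""
    assert lam < C, "stability requires the mean rate to be below capacity"
    best = 0.0
    for i in range(1, steps + 1):
        t = horizon * i / steps
        backlog = lam * t + k * sigma * t ** H - C * t
        best = max(best, backlog)
    return best / C

# Hypothetical numbers: mean rate 0.6, capacity 1.0, Hurst parameter H = 0.8.
# Analytically the maximum sits at t* = (k*sigma*H/(C-lam))**(1/(1-H)) = 32,
# giving A(32) - 32 = 19.2 + 16 - 32 = 3.2, hence D = 3.2; the loop agrees.
print(delay_bound(lam=0.6, sigma=1.0, k=1.0, H=0.8, C=1.0))
```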

    Recent Advances in Embedded Computing, Intelligence and Applications

    The latest proliferation of Internet of Things deployments and edge computing, combined with artificial intelligence, has led to new exciting application scenarios in which embedded digital devices are essential enablers. Moreover, new powerful and efficient devices are appearing to cope with workloads formerly reserved for the cloud, such as deep learning. These devices allow processing close to where data are generated, avoiding bottlenecks due to communication limitations. The efficient integration of hardware, software and artificial intelligence capabilities deployed in real sensing contexts empowers the edge intelligence paradigm, which will ultimately foster the offloading of processing functionalities to the edge. In this Special Issue, researchers have contributed nine peer-reviewed papers covering a wide range of topics in the area of edge intelligence. Among them are hardware-accelerated implementations of deep neural networks, IoT platforms for extreme edge computing, neuro-evolvable and neuromorphic machine learning, and embedded recommender systems.

    Performance modeling and control of web servers

    This thesis deals with the task of modeling a web server and designing a mechanism that can prevent the web server from being overloaded. Four papers are presented. The first paper gives an M/G/1/K processor sharing model of a single web server. The model is validated against measurements and simulations on the commonly used web server Apache. A description is given on how to calculate the necessary parameters in the model. The second paper introduces an admission control mechanism for the Apache web server based on a combination of queueing theory and control theory. The admission control mechanism is tested in the laboratory, implemented as a stand-alone application in front of the web server. The third paper continues the work from the second paper by discussing stability. This time, the admission control mechanism is implemented as a module within the Apache source code. Experiments show the stability and settling time of the controller. Finally, the fourth paper investigates the concept of service level agreements for a web site. The agreements allow a maximum response time and a minimal throughput to be set. The requests are sorted into classes, where each class is assigned a weight (representing the income for the web site owner). Then an optimization algorithm is applied so that the total profit for the web site during overload is maximized.
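
    As a sketch of the first paper's modeling idea (the arrival rate, mean service time and connection limit below are assumed, not taken from the thesis): under processor sharing, the M/G/1/K queue-length distribution is insensitive to the service-time distribution beyond its mean, so the blocking probability and mean response time follow from the matching M/M/1/K law.

```python
# M/G/1/K processor-sharing web-server model, sketched under the insensitivity
# property: the queue-length law depends on service times only via the mean,
# so it coincides with M/M/1/K at rho = lambda * E[S]. Parameters are assumed.

def mg1k_ps(lam, mean_service, K):
    rho = lam * mean_service
    if abs(rho - 1.0) < 1e-12:
        pi = [1.0 / (K + 1)] * (K + 1)
    else:
        norm = (1 - rho ** (K + 1)) / (1 - rho)
        pi = [rho ** n / norm for n in range(K + 1)]
    blocking = pi[K]                              # arriving request is rejected
    mean_jobs = sum(n * p for n, p in enumerate(pi))
    accepted = lam * (1 - blocking)
    mean_response = mean_jobs / accepted          # Little's law on accepted load
    return blocking, mean_response

# E.g. 80 requests/s, 10 ms mean service, limit of 30 concurrent requests.
b, t = mg1k_ps(lam=80.0, mean_service=0.010, K=30)
print(f"blocking: {b:.4f}, mean response time: {t * 1000:.1f} ms")
```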