
    Review on Radio Resource Allocation Optimization in LTE/LTE-Advanced using Game Theory

    Recently, there has been a growing trend toward applying game theory (GT) to various engineering fields in order to solve optimization problems involving competing entities, contributors, or players. Research in the fourth generation (4G) wireless network field has also exploited this theory to address long term evolution (LTE) challenges such as resource allocation, one of the most important research topics in the area. Indeed, an efficient design of resource allocation schemes is key to higher performance. However, the standard does not specify the optimization approach for radio resource management, so it was left open for study. This paper presents a survey of existing game-theory-based solutions to the 4G-LTE radio resource allocation problem and its optimization.
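
    As a minimal illustration of the game-theoretic view of resource allocation surveyed here, the sketch below models two users choosing between two shared channels and finds the pure Nash equilibria by brute force. The two-channel setup and the per-channel rates are assumptions for illustration, not taken from the paper:

```python
from itertools import product

# Hypothetical 2-player channel-selection game: each player picks one of
# two channels; sharing a channel halves each player's rate.
RATES = {0: 10.0, 1: 6.0}  # assumed per-channel rates (illustrative)

def payoff(own, other):
    """Rate a player obtains given its own and the rival's channel choice."""
    return RATES[own] / (2 if own == other else 1)

def pure_nash_equilibria():
    """Brute-force search over the 2x2 strategy space."""
    eqs = []
    for a, b in product(RATES, RATES):
        # An equilibrium: neither player gains by unilaterally deviating.
        a_best = all(payoff(a, b) >= payoff(x, b) for x in RATES)
        b_best = all(payoff(b, a) >= payoff(y, a) for y in RATES)
        if a_best and b_best:
            eqs.append((a, b))
    return eqs

print(pure_nash_equilibria())  # the two equilibria split players across channels
```

    In this toy game the equilibria place the players on different channels, which is the kind of stable, interference-avoiding outcome that game-theoretic resource allocation schemes aim for.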

    Resource and power management in next generation networks

    The limits of today’s cellular communication systems are constantly being tested by the exponential increase in mobile data traffic, a trend which is poised to continue well into the next decade. Densification of cellular networks, by overlaying smaller cells, i.e., micro, pico and femtocells, over the traditional macrocell, is seen as an inevitable step in enabling future networks to support the expected increases in data rate demand. Next generation networks will most certainly be more heterogeneous, as services will be offered via various types of points of access (PoAs). Indeed, besides the traditional macro base station, it is expected that users will also be able to access the network through a wide range of other PoAs: WiFi access points, remote radio-heads (RRHs), small cell (i.e., micro, pico and femto) base stations or even other users, when device-to-device (D2D) communications are supported, thus creating a multi-tiered network architecture. This approach is expected to enhance the capacity of current cellular networks, while patching up potential coverage gaps. However, since available radio resources will be fully shared, inter-cell interference as well as interference between the different tiers will pose a significant challenge. To avoid severe degradation of network performance, properly managing the interference is essential. In particular, techniques that mitigate interference, such as Inter Cell Interference Coordination (ICIC) and enhanced ICIC (eICIC), have been proposed in the literature to address the issue. In this thesis, we argue that interference may also be addressed during radio resource scheduling tasks, by enabling the network to make interference-aware resource allocation decisions.
Carrier aggregation technology, which allows the simultaneous use of several component carriers, on the other hand, targets the lack of sufficiently large portions of frequency spectrum; a problem that severely limits the capacity of wireless networks. The aggregated carriers may, in general, belong to different frequency bands and have different bandwidths, and thus may also have very different signal propagation characteristics. Integrating carrier aggregation into the network introduces additional tasks and further complicates interference management, but also opens up a range of possibilities for improving spectrum efficiency in addition to enhancing capacity, which we aim to exploit. In this thesis, we first look at the resource allocation problem in dense multi-tiered networks with support for advanced features such as carrier aggregation and device-to-device communications. For two-tiered networks with D2D support, we propose a centralised, near-optimal algorithm, based on dynamic programming principles, that allows a central scheduler to make interference- and traffic-aware scheduling decisions, while taking into consideration the short-lived nature of D2D links. As the complexity of the central scheduler increases exponentially with the number of component carriers, we further propose a distributed heuristic algorithm to tackle the resource allocation problem in carrier-aggregation-enabled dense networks. We show that the solutions we propose perform significantly better than standard solutions adopted in cellular networks, such as eICIC coupled with Proportional Fair scheduling, in several key metrics such as user throughput, timely delivery of content, and spectrum and energy efficiency, while ensuring fairness for backward compatible devices.
Next, we investigate the potential to enhance network performance by enabling the different nodes of the network to reduce and dynamically adjust the transmit power of the different carriers to mitigate interference. Considering that the different carriers may have different coverage areas, we propose to leverage this diversity to obtain high-performing network configurations. Thus, we model the problem of carrier downlink transmit power setting as a competitive game between teams of PoAs, which enables us to derive distributed dynamic power setting algorithms. Using these algorithms we reach stable configurations in the network, known as Nash equilibria, which we show perform significantly better than fixed power strategies coupled with eICIC.
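
    The competitive power-setting game described above can be sketched with best-response dynamics: each PoA repeatedly picks the power level that maximizes its own utility given the other's choice, and a fixed point of this process is a pure Nash equilibrium. The two-PoA setup, the candidate power levels, the channel gains and the power-cost term below are illustrative assumptions, not the thesis' actual model:

```python
import math

# Assumed parameters for a two-PoA toy game (not from the thesis).
POWERS = [0.5, 1.0, 2.0]        # candidate downlink transmit powers (W)
GAIN, CROSS, NOISE = 1.0, 0.3, 0.1  # direct gain, cross-gain, noise power

def utility(p_own, p_other):
    """Shannon-style log-rate utility minus a linear power cost."""
    sinr = GAIN * p_own / (NOISE + CROSS * p_other)
    return math.log2(1 + sinr) - 0.5 * p_own

def best_response(p_other):
    """Power level maximizing this PoA's utility against the rival's power."""
    return max(POWERS, key=lambda p: utility(p, p_other))

def best_response_dynamics(p=(POWERS[0], POWERS[0]), rounds=20):
    """Iterate simultaneous best responses until a fixed point is reached."""
    for _ in range(rounds):
        new = (best_response(p[1]), best_response(p[0]))
        if new == p:        # fixed point = pure Nash equilibrium
            return p
        p = new
    return p
```

    With these particular numbers the dynamics settle on both PoAs using the highest power; with stronger cross-interference or a higher power cost, the equilibrium shifts to lower power levels, which is the kind of stable configuration the thesis' algorithms search for.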

    Stable Matching based Resource Allocation for Service Provider's Revenue Maximization in 5G Networks

    5G technology is foreseen to have a heterogeneous architecture in which service providers (SPs) with various computational capabilities and radio-enabled service requesters (SRs) work together in a cellular model. However, the coexistence of heterogeneous network models spawns several research challenges, such as serving diverse SRs with uneven service deadlines, interference management, and revenue maximization for SPs with non-uniform computational capacities. Thus, we propose a cellular 5G network in which heterogeneous SPs and SRs coexist, formulate the SPs' revenue maximization via resource allocation, jointly considering different kinds of interference, data rate, and latency, as an optimization problem, and further propose a distributed many-to-many stable-matching-based solution. Moreover, we offer an adaptive stable-matching-based distributed algorithm to solve the formulated problem in a dynamic network model. Through extensive theoretical and simulation analysis, we show the effect of different parameters on the resource allocation objectives and achieve 94 percent of the optimum network performance.
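
    The many-to-many stable matching used in the paper generalizes the classic deferred-acceptance idea. A minimal one-to-one Gale-Shapley sketch conveys the mechanism; the SR/SP names and preference lists here are invented for illustration and the paper's actual algorithm handles quotas on both sides:

```python
# Simplified one-to-one deferred acceptance (Gale-Shapley): service
# requesters (SRs) propose, service providers (SPs) keep their best offer.
def stable_match(sr_prefs, sp_prefs):
    free = list(sr_prefs)                    # SRs not yet matched
    next_choice = {sr: 0 for sr in sr_prefs} # index of next SP to propose to
    engaged = {}                             # sp -> currently accepted sr
    rank = {sp: {sr: i for i, sr in enumerate(prefs)}
            for sp, prefs in sp_prefs.items()}
    while free:
        sr = free.pop(0)
        sp = sr_prefs[sr][next_choice[sr]]
        next_choice[sr] += 1
        if sp not in engaged:
            engaged[sp] = sr                 # SP accepts its first proposer
        elif rank[sp][sr] < rank[sp][engaged[sp]]:
            free.append(engaged[sp])         # SP trades up; old SR is freed
            engaged[sp] = sr
        else:
            free.append(sr)                  # proposal rejected, try next SP
    return engaged

# Illustrative preference lists (assumed, e.g. ordered by offered revenue).
sr_prefs = {"sr1": ["sp1", "sp2"], "sr2": ["sp1", "sp2"]}
sp_prefs = {"sp1": ["sr2", "sr1"], "sp2": ["sr1", "sr2"]}
print(stable_match(sr_prefs, sp_prefs))
```

    The resulting matching is stable: no SP-SR pair would both prefer each other over their assigned partners, which is the property the paper exploits to decentralize its revenue-maximizing allocation.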

    Channel-quality-based resource allocation for the downlink of LTE systems

    This research takes place in the context of Private Mobile Radio (PMR) network evolution, which aims at designing a new LTE-based PMR technology dedicated to public security services. As the frequency bands dedicated to this service are scarce and the needs of public safety forces differ from those of commercial users, we revisit the resource allocation problem in this thesis with two main objectives: designing new allocation algorithms that improve spectrum efficiency, and serving users fairly instead of maximizing the global network throughput. This thesis proposes new Resource Block (RB) allocation strategies for LTE downlink systems. Unlike the well-known resource allocation algorithms, which assume that the RB capacity has already been estimated, our RB allocation schemes improve the potential channel capacity, using Beamforming cooperation and game-theoretic models.
    1. With MIMO (Multiple-Input-Multiple-Output) antennas, the Beamforming technique improves the received signal in order to increase the SINR (Signal-to-Interference-plus-Noise-Ratio), but the improved signal may also increase the inter-cell interference in neighbouring cells. As inter-cell interference is the main interference in an OFDMA system, a smart scheduler can choose UEs (User Equipments) in adjacent cells so as to control the interference increment caused by Beamforming. In traditional methods, the scheduler allocates RBs to UEs depending on the RB capacities and other parameters, and the system then applies the Beamforming technique to the chosen UEs. After Beamforming, the RB capacity varies but the scheduler keeps the same allocation. Our scheme instead allocates the RBs and chooses the Beamforming vectors jointly, enhancing the performance of the Beamforming technique. It increases the average throughput by increasing the RBs' average capacity. Because more parameters are taken into account, however, the complexity also increases exponentially. In this thesis we develop an iterative method to reduce the complexity. In our simulations, this iterative method performs well and improves cell-edge throughput by more than 10%.
    2. In contrast to performance-first algorithms, game-theoretic allocation schemes maximize the UEs' utility function from an economic point of view. The NBS (Nash Bargaining Solution) offers a Pareto-optimal solution for the utility function. The traditional NBS allocation in an OFDMA system optimizes the subcarrier allocation at each time slot, but in the OFDMA system the subcarriers are grouped into Resource Blocks (RBs) in time series. We propose an RB NBS approach, which is more efficient than the existing subcarrier NBS allocation scheme. We analyze the fast-fading channels and compare them without the path-loss influence. Because of the large path loss at the cell edge, an edge UE always has a lower RB capacity than a cell-centre UE. Our idea is to introduce a compensating factor to overcome this path-loss influence; the compensating factors are carefully chosen to maximize the NBS function. However, computing these factors has a high complexity, and we develop four approximate solutions which give the same performance and accuracy. The performance evaluation confirms that our method and its approximate solutions are able to spread resources fairly over the entire cell.
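
    The NBS-based RB allocation discussed above can be approximated greedily: maximizing the product of UE utilities is equivalent to maximizing the sum of their logarithms, and a path-loss compensating factor can be folded in as a per-UE weight. The capacities, weights and greedy marginal-gain rule below are an illustrative sketch, not the thesis' actual algorithm:

```python
import math

def nbs_allocate(capacity, n_rbs, weight=None):
    """Greedy NBS-style allocation.

    capacity[u][rb] is the rate UE u would get on that RB; weight[u] is an
    assumed compensating factor favouring path-loss-disadvantaged UEs.
    """
    ues = list(capacity)
    weight = weight or {u: 1.0 for u in ues}
    total = {u: 1e-9 for u in ues}   # tiny seed throughput avoids log(0)
    alloc = {}
    for rb in range(n_rbs):
        # Assign the RB to the UE with the largest increase in weighted
        # log-throughput (the log-sum stands in for the NBS product).
        def gain(u):
            return weight[u] * (math.log(total[u] + capacity[u][rb])
                                - math.log(total[u]))
        best = max(ues, key=gain)
        total[best] += capacity[best][rb]
        alloc[rb] = best
    return alloc, total
```

    With a compensation weight of 2 on a cell-edge UE, the greedy rule hands most RBs to the edge user despite its lower per-RB capacity, mimicking the fairness effect the compensating factors are designed to achieve.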

    Prediction-based techniques for the optimization of mobile networks

    Mobile cellular networks are complex systems whose behavior is characterized by the superposition of several random phenomena, most of them related to human activities such as mobility, communication and network usage. However, when observed in their totality, the many individual components merge into more deterministic patterns, and trends start to be identifiable and predictable. In this thesis we analyze a recent branch of network optimization commonly referred to as anticipatory networking, which combines prediction solutions and network optimization schemes. The main intuition behind anticipatory networking is that knowing in advance what is going on in the network can help identify potentially severe problems and mitigate their impact by applying solutions while the problems are still in their initial state. Conversely, a network forecast might also indicate a future improvement in the overall network condition (i.e., load reduction or better signal quality reported by users). In such a case, resources can be assigned more sparingly, requiring users to rely on buffered information while waiting for the better condition, when it will be more convenient to grant more resources. At the beginning of this thesis we survey the current anticipatory networking panorama and the many prediction and optimization solutions proposed so far. In the main body of the work, we propose our novel solutions to the problem, together with the tools and methodologies we designed to evaluate them and to perform a real-world evaluation of our schemes. By the end of this work it will be clear not only that anticipatory networking is a very promising theoretical framework, but also that it is feasible and can deliver substantial benefits to current and next-generation mobile networks. In fact, both our theoretical and practical results show evidence that more than one third of the resources can be saved, and that even larger gains can be achieved for data rate enhancements.
    Programa Oficial de Doctorado en Ingeniería Telemática. Thesis committee: Albert Banchs Roca (president), Pablo Serrano Yañez-Mingot (president), Jorge Ortín Gracia (secretary), Guevara Noubi (vocal).
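
    The core anticipatory-networking loop, predict and then allocate ahead of time, can be sketched as follows. The moving-average predictor, the fixed demand figure and the buffering rule are simplifying assumptions for illustration, not the thesis' actual prediction or optimization schemes:

```python
def moving_average_forecast(history, window=3):
    """Predict the next sample as the mean of the last `window` samples."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def anticipatory_schedule(rates, demand=2.0, window=3):
    """Grab capacity while conditions are good; coast on the buffer during
    predicted dips, so fewer resources are granted when they are costly."""
    buffer, grants = 0.0, []
    for t, rate in enumerate(rates[:-1]):
        predicted = moving_average_forecast(rates[:t + 1], window)
        if predicted < demand:
            # Predicted dip: serve the demand from the buffer if possible.
            served = min(buffer, demand)
            buffer -= served
            grants.append(max(demand - served, 0.0))
        else:
            grants.append(rate)              # good conditions: fill the buffer
            buffer += max(rate - demand, 0.0)
    return grants
```

    In a run where the achievable rate collapses after a few good slots, the scheduler grants zero resources during the dip and the user plays out buffered data, which is exactly the resource-saving behaviour quantified in the thesis.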

    Cellular networks for smart grid communication

    The next-generation electric power system, known as smart grid, relies on a robust and reliable underlying communication infrastructure to improve the efficiency of electricity distribution. Cellular networks, e.g., LTE/LTE-A systems, appear as a promising technology to facilitate the smart grid evolution. Their inherent performance characteristics and well-established ecosystem could potentially unlock unprecedented use cases, enabling real-time and autonomous distribution grid operations. However, cellular technology was not originally intended for smart grid communication, which requires highly reliable message exchange and massive device connectivity. The fundamental differences between smart grid and human-type communication challenge the classical design of cellular networks and introduce important research questions that have not been sufficiently addressed so far. Motivated by these challenges, this doctoral thesis investigates novel radio access network (RAN) design principles and performance analysis for the seamless integration of smart grid traffic in future cellular networks. Specifically, we focus on the fundamental RAN problems of network scalability in massive smart grid deployments and of radio resource management for smart grid and human-type traffic. The main objective of the thesis lies in the design, analysis and performance evaluation of RAN mechanisms that would render cellular networks the key enabler for emerging smart grid applications. The first part of the thesis addresses the radio access limitations in LTE-based networks for reliable and scalable smart grid communication. We first identify the congestion problem in LTE random access that arises in large-scale smart grid deployments. To overcome this, a novel random access mechanism is proposed that can efficiently support real-time distribution automation services with negligible impact on the background traffic.
Motivated by the stringent reliability requirements of various smart grid operations, we then develop an analytical model of the LTE random access procedure that allows us to assess the performance of event-based monitoring traffic under various load conditions and network configurations. We further extend our analysis to include the relation between the cell size and the availability of orthogonal random access resources and we identify an additional challenge for reliable smart grid connectivity. To this end, we devise an interference- and load-aware cell planning mechanism that enhances reliability in substation automation services. Finally, we couple the problem of state estimation in wide-area monitoring systems with the reliability challenges in information acquisition. Using our developed analytical framework, we quantify the impact of imperfect communication reliability in the state estimation accuracy and we provide useful insights for the design of reliability-aware state estimators. The second part of the thesis builds on the previous one and focuses on the RAN problem of resource scheduling and sharing for smart grid and human-type traffic. We introduce a novel scheduler that achieves low latency for distribution automation traffic while resource allocation is performed in a way that keeps the degradation of cellular users at a minimum level. In addition, we investigate the benefits of Device-to-Device (D2D) transmission mode for event-based message exchange in substation automation scenarios. We design a joint mode selection and resource allocation mechanism which results in higher data rates with respect to the conventional transmission mode via the base station. An orthogonal resource partition scheme between cellular and D2D links is further proposed to prevent the underutilization of the scarce cellular spectrum. 
The research findings of this thesis aim to deliver novel solutions to important RAN performance issues that arise when cellular networks support smart grid communication.
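
    The random access congestion analyzed in the first part of the thesis can be approximated with a simple collision model: each of n devices picks one of m contention preambles uniformly at random and succeeds only if no other device picks the same one. The default of 54 contention-based preambles is the commonly cited LTE figure, and this closed-form model is a simplification of the thesis' analytical framework:

```python
def success_probability(n_devices, n_preambles=54):
    """P(a given device's preamble is chosen by none of the other devices)."""
    return (1 - 1 / n_preambles) ** (n_devices - 1)

def expected_successes(n_devices, n_preambles=54):
    """Expected number of devices completing random access without collision."""
    return n_devices * success_probability(n_devices, n_preambles)
```

    Expected successes first grow with the number of devices and then collapse as collisions dominate, which is precisely the scalability problem in massive smart grid deployments that the proposed random access mechanism targets.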

    Autonomous Component Carrier Selection for 4G Femtocells
