10 research outputs found

    Scalable Spectrum Allocation for Large Networks Based on Sparse Optimization

    Joint allocation of spectrum and user association is considered for a large cellular network. The objective is to optimize a network utility function such as average delay given traffic statistics collected over a slow timescale. A key challenge is scalability: given n Access Points (APs), there are O(2^n) ways in which the APs can share the spectrum. The number of variables is reduced from O(2^n) to O(nk), where k is the number of users, by optimizing over local overlapping neighborhoods, defined by interference conditions, and by exploiting the existence of sparse solutions in which the spectrum is divided into k+1 segments. We reformulate the problem by optimizing the assignment of subsets of active APs to those segments. An \ell_0 constraint enforces a one-to-one mapping of subsets to spectrum, and an iterative (reweighted \ell_1) algorithm is used to find an approximate solution. Numerical results for a network with 100 APs serving several hundred users show the proposed method achieves a substantial increase in total throughput relative to benchmark schemes.
Comment: Submitted to the IEEE International Symposium on Information Theory (ISIT), 201
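The reweighted \ell_1 heuristic mentioned above can be illustrated with a small, self-contained sketch: a sparse vector satisfying linear constraints is recovered by repeatedly solving a weighted \ell_1 minimization, with weights set inversely proportional to the current magnitudes. This is not the paper's spectrum-allocation formulation; the generic linear system, problem sizes, and helper names (weighted_l1_min, reweighted_l1) are illustrative assumptions used only to show the reweighting idea behind the approximate \ell_0 constraint.

```python
# Minimal sketch of the iterative reweighted-l1 heuristic on a generic sparse
# recovery problem: find a sparse x with A x = b. Illustrative only; not the
# paper's spectrum-allocation model.
import numpy as np
from scipy.optimize import linprog

def weighted_l1_min(A, b, w):
    """Solve min sum_i w_i |x_i| s.t. A x = b, as an LP with x = xp - xn, xp, xn >= 0."""
    m, n = A.shape
    c = np.concatenate([w, w])        # objective weights on [xp, xn]
    A_eq = np.hstack([A, -A])         # A (xp - xn) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    xp, xn = res.x[:n], res.x[n:]
    return xp - xn

def reweighted_l1(A, b, iters=5, eps=1e-3):
    """Iteratively reweight to approximate the sparsest (l0-minimal) solution of A x = b."""
    n = A.shape[1]
    w = np.ones(n)
    x = np.zeros(n)
    for _ in range(iters):
        x = weighted_l1_min(A, b, w)
        w = 1.0 / (np.abs(x) + eps)   # small entries get large weights and are pushed to zero
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, k = 40, 15, 3               # illustrative sizes, not values from the paper
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    A = rng.normal(size=(m, n))
    b = A @ x_true
    x_hat = reweighted_l1(A, b)
    print("nonzeros recovered:", int(np.sum(np.abs(x_hat) > 1e-6)))
```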

    Distributed Linear Precoder Optimization and Base Station Selection for an Uplink Heterogeneous Network


    D13.2 Techniques and performance analysis on energy- and bandwidth-efficient communications and networking

    Deliverable D13.2 of the European project NEWCOM#. The report presents the status of the research work of the various Joint Research Activities (JRA) in WP1.3 and the results developed up to the second year of the project. For each activity there is a description, an illustration of its adherence and relevance to the identified fundamental open issues, a short presentation of the main results, and a roadmap for future joint research. In the Annex, the main technical details of the specific scientific activities of each JRA are described.
Peer Reviewed. Postprint (published version)

    Channel Access Management for Massive Cellular IoT Applications

    As part of the steps taken towards improving quality of life, many everyday activities as well as technological advancements rely more and more on smart devices. In the future, it is expected that every electric device will be a smart device that can be connected to the internet. This gives rise to a new network paradigm known as the massive cellular IoT, where a large number of simple, battery-powered, heterogeneous devices work collectively for the betterment of humanity in all aspects. However, unlike traditional cellular communication networks, IoT applications produce uplink-heavy traffic composed of a large number of small data packets with different quality of service (QoS) requirements. These unique characteristics pose a challenge to the current cellular channel access process, and hence new and revolutionary access mechanisms are much needed. These mechanisms need to be cost-effective, scalable, practical, energy- and radio-resource-efficient, and able to support a massive number of devices. Furthermore, due to their low computational capabilities, the devices cannot handle heavy networking intelligence, so the designed channel access should be simple and lightweight. Accordingly, in this research, we evaluate the suitability of the current channel access mechanism for massive applications and propose an energy-efficient and resource-preserving clustering and data aggregation solution tailored to the needs of future IoT applications.
First, we recognize that for many anticipated cellular IoT applications, providing energy-efficient and delay-aware access is crucial. However, in cellular networks, before devices transmit their data, they use a contention-based association protocol, known as the random access channel (RACH) procedure, which introduces extensive access delays and energy wastage as the number of contending devices increases. Modeling the performance of the RACH protocol is challenging due to the complexity of uplink transmission, which exhibits a wide range of interference components; nonetheless, it is an essential step that helps determine the applicability of the cellular IoT communication paradigm and sheds light on the main challenges. Consequently, we develop a novel mathematical framework based on stochastic geometry to evaluate the RACH protocol and identify its limitations in the context of cellular IoT applications with a massive number of devices. To do so, we study the traditional cellular association process and establish a mathematical model for its association success probability. The model accounts for device density, the spatial characteristics of the network, the power control employed, and mutual interference among the devices. Our analysis and results highlight the shortcomings of the RACH protocol and give insights into the potential of power control techniques.
Second, based on the analysis of the RACH procedure, we determine that, as the number of devices increases, contention over the limited network radio resources increases, leading to network congestion. Accordingly, to avoid network congestion while supporting a large number of devices, we propose to use node clustering and data aggregation.
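To make the RACH contention problem analyzed above concrete, the following minimal Monte Carlo sketch estimates the probability that a device's randomly chosen preamble is picked by no other device. This toy collision model is not the stochastic-geometry framework developed in the thesis (it ignores interference, power control, and retransmissions), and the preamble and device counts are illustrative assumptions.

```python
# Minimal sketch of RACH preamble contention: an attempt succeeds only if the chosen
# preamble is unique among contending devices. Toy model, illustration only.
import numpy as np

def rach_success_prob(n_devices, n_preambles, trials=10_000, rng=None):
    """Estimate P(a given device picks a preamble that no other device picked)."""
    if rng is None:
        rng = np.random.default_rng(0)
    successes = 0
    for _ in range(trials):
        choices = rng.integers(0, n_preambles, size=n_devices)
        counts = np.bincount(choices, minlength=n_preambles)
        successes += np.sum(counts[choices] == 1)   # devices whose preamble is unique
    return successes / (trials * n_devices)

if __name__ == "__main__":
    K = 54                              # contention-based preambles (a typical LTE setting)
    for N in (10, 50, 100, 200):        # illustrative numbers of contending devices
        p = rach_success_prob(N, K)
        # Closed-form benchmark for this simple model: (1 - 1/K)^(N-1)
        print(f"N={N:4d}  simulated={p:.3f}  analytic={(1 - 1/K) ** (N - 1):.3f}")
```

Even this simplified model shows how quickly access success degrades as the number of contending devices grows, which is the congestion effect that motivates the clustering and aggregation solution.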
As the number of supported devices increases and their QoS requirements become vast, optimizing the node clustering and data aggregation processes becomes critical to handle the many trade-offs that arise among different network performance metrics. Furthermore, for cost effectiveness, we propose that the data aggregator nodes be cellular devices; it is therefore desirable to keep the number of aggregators to a minimum, so as to avoid congesting the RACH channel while maximizing the number of successfully supported devices. Consequently, to tackle these issues, we explore the possibility of combining data aggregation and non-orthogonal multiple access (NOMA) and propose a novel two-hop NOMA-enabled network architecture. Concepts from queuing theory and stochastic geometry are jointly exploited to derive mathematical expressions for different network performance metrics such as coverage probability, two-hop access delay, and the number of served devices per transmission frame. The established models characterize the relations among the various network metrics and hence facilitate the design of the two-stage transmission architecture. Numerical results demonstrate that the proposed solution improves the overall access delay and energy efficiency compared to traditional OMA-based clustered networks.
Last, we recognize that under the proposed two-hop network architecture, devices are subject to access point association decisions; that is, the access point to which a device associates plays a major role in determining the overall network performance and the service perceived by the devices. Accordingly, in the third part of the work, we consider the optimization of the two-hop network from the point of view of user association, such that the number of QoS-satisfied devices is maximized while minimizing the overall device energy consumption. We formulate the problem as a joint access point association, resource utilization, and energy-efficient communication optimization problem that takes into account various networking factors such as the number of devices, the number of data aggregators, the number of available resource units, interference, the transmission power limitation of the devices, aggregator transmission performance, and channel conditions. The objective is to show the usefulness of data aggregation and shed light on the importance of network design when the number of devices is massive. We propose a coalition-game-theory-based algorithm, PAUSE, to transform the optimization problem into a simpler form that can be solved in polynomial time. Different network scenarios are simulated to showcase the effectiveness of PAUSE and to draw observations on cost-effective, data-aggregation-enabled two-hop network design.
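As a rough illustration of why NOMA can help on the aggregation hop, the sketch below computes achievable rates for two devices transmitting simultaneously to an aggregator with successive interference cancellation (SIC) and compares them with time-shared OMA. The channel gains, powers, and noise level are illustrative assumptions, not values or models from the thesis.

```python
# Minimal sketch of two-device uplink NOMA with SIC at the receiving aggregator,
# compared to time-shared OMA. All numbers are illustrative assumptions.
import math

def noma_rates(p1, g1, p2, g2, noise):
    """Receiver decodes device 1 (stronger received signal) first, cancels it, then device 2."""
    r1 = math.log2(1 + p1 * g1 / (p2 * g2 + noise))   # device 2 still treated as interference
    r2 = math.log2(1 + p2 * g2 / noise)               # after SIC, interference-free
    return r1, r2

def oma_rates(p1, g1, p2, g2, noise):
    """Each device gets half of the frame exclusively."""
    return (0.5 * math.log2(1 + p1 * g1 / noise),
            0.5 * math.log2(1 + p2 * g2 / noise))

if __name__ == "__main__":
    p1 = p2 = 0.2          # transmit powers [W] (illustrative)
    g1, g2 = 1e-6, 2e-7    # channel power gains; device 1 is received more strongly
    noise = 1e-9           # noise power [W]
    print("NOMA rates [bit/s/Hz]:", noma_rates(p1, g1, p2, g2, noise))
    print("OMA  rates [bit/s/Hz]:", oma_rates(p1, g1, p2, g2, noise))
```

In this toy setting the NOMA sum rate exceeds the OMA sum rate because both devices use the whole frame, which is the basic effect the two-hop NOMA-enabled architecture exploits.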

    Distributed radio resource allocation in wireless heterogeneous networks

    This dissertation studies the problem of resource allocation in the radio access network of heterogeneous small-cell networks (HetSNets). A HetSNet is constructed by introducing small cells (SCs) into a geographical area that is served by a well-structured macrocell network. These SCs reuse the frequency bands of the macro network and operate in the interference-limited regime; thus, complex radio resource allocation schemes are required to manage interference and improve spectral efficiency. Both centralized and distributed approaches have been suggested by researchers to solve this problem. This dissertation follows the distributed approach under the self-organizing networks (SON) paradigm. In particular, it develops game-theoretic and learning-theoretic modeling, analysis, and algorithms. Even though SONs may perform worse than a centralized optimal controller, they are highly scalable and fault-tolerant. There are many facets to the problem of wireless resource allocation; they vary by application, solution, methodology, and resource type. Therefore, this thesis restricts the treatment to four subproblems chosen for their significant impact on network performance and their suitability to our interests and expertise. Game theory and mechanism design are the main tools used, since they provide a sufficiently rich environment to model the SON problem.
Firstly, this thesis considers the problem of uplink orthogonal channel access in a dense cluster of SCs deployed in a macrocell service area. Two variations of this problem are modeled as noncooperative Bayesian games, and the existence of pure Bayesian Nash symmetric equilibria is demonstrated.
Secondly, this thesis presents the generalized satisfaction equilibrium (GSE) for games in satisfaction form. Each wireless agent has a constraint to satisfy, and the GSE is a mixed-strategy profile from which no unsatisfied agent can unilaterally deviate to satisfaction. The objective of the GSE is to propose an alternative equilibrium concept designed specifically to model wireless users. The existence of the GSE, its computational complexity, and its performance compared to the Nash equilibrium are discussed.
Thirdly, this thesis introduces verification mechanisms for dynamic self-organization of wireless access networks. The main focus of verification mechanisms is to replace the monetary transfers that are prevalent in current research. In the wireless environment, particular private information of the wireless agents, such as block error rate and application class, can be verified at the access points. This verification capability can be used to deter false reports with the threat of backhaul throttling. The agents then learn the truthful equilibrium over time by observing the rewards and punishments.
Finally, the problem of admission control in the interfering multiple-access channel with rate constraints is addressed. In the incomplete-information setting, with compact convex channel power gains, the resulting Bayesian game possesses at least one pure Bayesian Nash equilibrium in on-off threshold strategies.
The results summarized above demonstrate that HetSNets are amenable to self-organization, albeit with incentives and equilibria adapted to fit the wireless environment. Further research problems that expand these results are identified at the end of this document.
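The satisfaction-form idea behind the GSE can be sketched with a toy two-link power-control game in which each agent only needs its SINR to meet a target, and an unsatisfied agent deviates to any strategy that would satisfy it given the others' current choices; the process stops when no unsatisfied agent can deviate to satisfaction. This is only an illustration under assumed gains, targets, and power levels, not the thesis's GSE analysis or algorithm, and convergence is not guaranteed in general.

```python
# Toy satisfaction-seeking dynamic in a two-link power-control game.
# Each agent requires SINR >= target; an unsatisfied agent switches to any power
# level that satisfies it given the others' current choices. Illustration only.

POWERS = [0.1, 0.5, 1.0, 2.0]          # discrete strategy set (W), assumed
GAIN = [[1.0, 0.20],                   # GAIN[i][j]: gain from transmitter j to receiver i
        [0.15, 0.9]]
NOISE = 0.1
TARGET = [2.0, 2.0]                    # required SINR per link, assumed

def sinr(i, powers):
    interference = sum(GAIN[i][j] * powers[j] for j in range(len(powers)) if j != i)
    return GAIN[i][i] * powers[i] / (NOISE + interference)

def satisfied(i, powers):
    return sinr(i, powers) >= TARGET[i]

def find_satisfaction_point(max_rounds=100):
    powers = [POWERS[0], POWERS[0]]
    for _ in range(max_rounds):
        moved = False
        for i in range(2):
            if not satisfied(i, powers):
                for p in POWERS:       # deviate to satisfaction if possible
                    if satisfied(i, powers[:i] + [p] + powers[i + 1:]):
                        powers[i] = p
                        moved = True
                        break
        if not moved:                  # no unsatisfied agent can deviate to satisfaction
            break
    return powers, [satisfied(i, powers) for i in range(2)]

if __name__ == "__main__":
    p, sat = find_satisfaction_point()
    print("powers:", p, "satisfied:", sat)
```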

    Contribución a la planificación sistémica de redes móviles 4G [Contribution to the systemic planning of 4G mobile networks]

    The goal of this Thesis is the design, implementation, and testing of algorithms, both conventional ones currently used in industry and novel ones based on Evolutionary Computation, that constitute a new contribution to the systemic planning of LTE mobile networks. In particular, it focuses on the strategic dimensioning of the LTE access network, since it accounts for approximately 60% of the total investment and an even larger share of the operating expense (OPEX). The Thesis presents a novel proposal for the problem of assigning, or associating, users to eNBs (evolved NodeBs) in LTE. The contribution in this respect is twofold. On the one hand, a new method is proposed to associate N_U users with N_B eNBs in LTE networks. It consists of modeling the user-eNB association as a combinatorial optimization problem in which the function to be minimized is a novel metric known as the Download Time of the complete System (DTS). Minimizing the DTS yields a more efficient user-to-cell assignment than that achieved with conventional methods such as those based on maximizing the CQI (Channel Quality Indicator) or on Load Balancing (LB). It allows users to be reassigned from cells that would otherwise be overloaded to less loaded ones. This benefits both the operator and the users: on the one hand, it helps the network operator use its resources in a more balanced and cost-effective way; on the other hand, the proposed strategy reduces the download time for most users and introduces a degree of fairness in the distribution of resources. Minimizing the DTS is a computationally very complex problem, which is precisely what motivates the second contribution of this Thesis: tackling the DTS minimization problem with an evolutionary algorithm (EA). The most interesting aspect of the proposed EA is the way candidate solutions (user-cell assignments) are encoded. The chromosome is a vector of dimension N_U in which each element represents a user; the element at position j contains information about user u_j, namely an integer indicating to which of the N_B available eNBs that user has been assigned. The mutation, crossover, and selection operators have been designed to work with this encoding; the crossover operator, in particular, is an all-against-all tournament. The other novel aspect of the proposed evolutionary algorithm lies in the initial population. Since a sub-optimal solution to the problem is available (the one provided by the conventional CQI-based method, which assigns each user to the eNB for which it has the best CQI), it is included in the initial population, and the remaining individuals are generated, essentially, by applying the mutation and crossover operators to that solution. In any case, the solution found (the association of each user with an eNB) is better (lower DTS) than the assignment produced by conventional methods.
The second set of contributions of the Thesis is the design of an LTE strategic planning tool, focused on the network dimensioning process in a multi-user, multi-service environment, into which the EA-based user-eNB association module has been integrated.
Through this integration, the tool makes it possible to compare the proposed EA-based assignment approach with other conventional methods integrated into the tool, such as those based on CQI and LB. Specifically, the tool provides (1) simple and effective parameterization of the many input parameters and of the initial eNB locations, (2) the ability to simulate a multi-service, multi-user environment, using (3) different user-eNB association algorithms (such as the one proposed in the Thesis) and (4) several scheduling algorithms, so that (5) the Minimum Download Rate requirement of each service is guaranteed. To meet requirements (1)-(5), the tool computes the average rate of the offered services, taking into account the download times of each user. If the previously computed number of eNBs satisfies the rate requirement demanded by the different simulated services, that value is accepted; otherwise, eNBs are added iteratively until the requirement is met. None of the tools available on the market, whether commercial or based on free software, is able to meet these requirements. To quantify the extent to which the developed tool in general, and our user-cell assignment proposal in particular, are useful for LTE dimensioning, a broad and varied set of simulations has been carried out in different realistic scenarios, both urban and dense urban, and compared against two conventional user-cell assignment methods based on CQI and LB. These experiments show that the proposed method clearly outperforms the conventional ones, especially in urban and dense urban areas (environments where assignment is most critical in terms of capacity) with LTE macro-cells, and even in scenarios modeling heterogeneous and ultra-dense networks. The tool and algorithms proposed in this Thesis can help operators improve their pre-deployment designs and quantify the potential improvement the network would gain from adding a new node. The results of the research carried out in the Thesis have been published in several international journals and conferences.
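The chromosome encoding described in the abstract above (an integer vector of length N_U whose j-th gene is the eNB serving user u_j, with the CQI-based assignment seeding the initial population) can be sketched as follows. The fitness function here is only a crude stand-in for the DTS metric (users sharing an eNB split its capacity equally), and all sizes, demands, and rates are illustrative assumptions rather than values from the Thesis.

```python
# Minimal sketch of the user-to-eNB assignment encoding: chromosome[j] = index of the
# eNB serving user j. The fitness is a crude proxy for a download-time metric; it is
# NOT the Thesis's DTS formula, and all sizes/rates are illustrative.
import random

N_U, N_B = 30, 4                                           # users, eNBs (illustrative)
DEMAND = [random.uniform(1, 5) for _ in range(N_U)]        # Mb to download per user
RATE = [[random.uniform(2, 20) for _ in range(N_B)]        # Mb/s user j would get alone at eNB b
        for _ in range(N_U)]

def total_download_time(chrom):
    load = [chrom.count(b) for b in range(N_B)]            # users per eNB
    return sum(DEMAND[j] / (RATE[j][chrom[j]] / load[chrom[j]]) for j in range(N_U))

def mutate(chrom, prob=0.05):
    return [random.randrange(N_B) if random.random() < prob else g for g in chrom]

def crossover(a, b):
    cut = random.randrange(1, N_U)                         # one-point crossover
    return a[:cut] + b[cut:]

def evolve(pop_size=40, generations=200):
    # Seed with the CQI-like assignment (each user to its best-rate eNB), as described above.
    cqi_like = [max(range(N_B), key=lambda b: RATE[j][b]) for j in range(N_U)]
    population = [cqi_like] + [mutate(cqi_like, 0.3) for _ in range(pop_size - 1)]
    for _ in range(generations):
        population.sort(key=total_download_time)
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=total_download_time)

if __name__ == "__main__":
    best = evolve()
    print("best assignment total time:", round(total_download_time(best), 2))
```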