12 research outputs found

    Modeling and Analysis of Cellular Networks Using Stochastic Geometry: A Tutorial

    This paper presents a tutorial on stochastic geometry (SG)-based analysis for cellular networks. The tutorial is distinguished by its depth with respect to wireless communication details and by its focus on cellular networks. The paper starts by modeling and analyzing the baseband interference in a baseline single-tier downlink cellular network with single-antenna base stations and universal frequency reuse. It then characterizes the signal-to-interference-plus-noise ratio (SINR) and its related performance metrics. In particular, a unified approach to error probability, outage probability, and transmission rate analysis is presented. Although the main focus is on cellular networks, the unified approach applies to other types of wireless networks that impose interference protection around receivers. The paper then extends the unified approach to capture cellular network characteristics (e.g., frequency reuse, multiple antennas, power control). Numerical examples with demonstrations and discussions are also presented. Finally, the paper highlights state-of-the-art research and points out future research directions.
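
As a concrete illustration of the kind of analysis such a tutorial covers, the following sketch (our own illustration, not code from the paper; all parameter values and names such as lam, alpha, and T are arbitrary assumptions) estimates the downlink coverage probability by Monte Carlo when base stations form a homogeneous Poisson point process, the typical user associates with the nearest base station, and fading is Rayleigh.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not values from the paper)
lam = 1e-5            # base-station density per m^2
alpha = 4.0           # path-loss exponent
T = 1.0               # SINR threshold (0 dB)
noise = 1e-15         # near-zero noise: interference-limited regime
R = 5_000.0           # simulation window radius in metres
trials = 2_000

covered = 0
for _ in range(trials):
    # Homogeneous PPP of base stations on a disc of radius R around the user
    n = rng.poisson(lam * np.pi * R**2)
    if n == 0:
        continue
    d = np.sort(R * np.sqrt(rng.uniform(size=n)))   # ordered distances to the origin
    h = rng.exponential(size=n)                     # Rayleigh fading power gains
    p = h * d**(-alpha)                             # received powers, unit Tx power
    if p[0] / (p[1:].sum() + noise) > T:            # nearest-BS association
        covered += 1

print("coverage probability ~", covered / trials)
```

For alpha = 4 in the interference-limited regime, the estimate can be checked against the well-known closed-form coverage probability 1/(1 + sqrt(T)*(pi/2 - arctan(1/sqrt(T)))), roughly 0.56 at T = 1.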

    Performance evaluation of future wireless networks: node cooperation and aerial networks

    Perhaps future historians will refer to this era simply as the \emph{information age}, and will recognize it as a paramount milestone in mankind's progress. One of the main pillars of this age is the ability to transmit and communicate information effectively and reliably, and wireless radio technology has become one of the most vital enablers of such communication. Demand for radio communication is accelerating at a never-resting pace, posing a great challenge not only to service providers but also to researchers and innovators, who are pressed to explore out-of-the-box technologies. These challenges mainly concern providing faster data communication over seamless, reliable, and cost-efficient wireless networks, given the limited availability of physical radio resources and the environmental impact of ever-increasing energy consumption.

Traditional wireless communication is usually deployed in a cellular manner, where fixed base stations coordinate radio resources and act as intermediate data handlers. The concept of cellular networks and hotspots is widely adopted as the current stable scheme of wireless communication. However, in many situations this fixed infrastructure can be severely damaged by natural disasters, or can suffer congestion and traffic blockage. Moreover, in current networks any mobile-to-mobile data session must pass through the serving base station, which can cause unnecessary energy consumption.

In order to enhance the performance and reliability of future wireless networks and to reduce their environmental footprint, we explore two complementary concepts: node cooperation and aerial networks. The ability of wireless nodes to cooperate opens two main opportunities. The first is the direct delivery of information between communicating nodes without relaying traffic through the serving base station, thus reducing energy consumption and alleviating traffic congestion. The second is that a node can help a farther one by relaying its traffic towards the base station, thus extending network coverage and reliability. Both schemes can introduce significant energy savings and can enhance the overall availability of wireless networks in case of natural disasters. Complementary to node cooperation, we explore \emph{aerial networks}, where base stations are carried aloft on aerial platforms such as airships, UAVs, or blimps. Aerial networks can provide rapidly deployable coverage for remote areas or regions afflicted by natural disasters, or even patch surge traffic demand at public events; node cooperation can in turn complement both regular terrestrial coverage and aerial networks.

In this research, we explore these two complementary technologies from both an experimental and an analytic approach. From the experimental perspective, we shed light on the properties of the radio channels that host terrestrial node cooperation and air-to-ground communication: we utilize both simulation results and practical measurements to formulate radio propagation models for device-to-device communication and for air-to-ground links. Furthermore, we investigate radio spectrum availability for node cooperation in different urban environments by conducting an extensive mobile measurement survey.
Within the experimental approach, we also investigate a novel concept of a temporary cognitive femtocell network as an applied solution for public safety communication in the aftermath of a natural disaster. From the analytical perspective, we utilize mathematical tools from stochastic geometry to formulate novel analytical methodologies that explain some of the most important theoretical bounds on the network performance enhancements promised by node cooperation. We start by determining the coverage and rate received by mobile users from conventional cellular networks and from aerial platforms. We then optimize this coverage and rate, ensuring that relay nodes and users can fully exploit the available coverage efficiently. We continue by analytically quantifying cellular network performance during massive infrastructure failure, where some nodes act as low-power relays forming multi-hop communication links to assist farther nodes outside the reach of the healthy network coverage. In addition, we lay out a mathematical framework for estimating the energy saving offered by a mediating relay assisting a pair of wireless devices, deriving closed-form expressions for the geometrical zone in which relaying is energy efficient. Furthermore, we introduce a novel analytic approach to the energy consumption of wireless nodes on the ground backhauled by an aerial base station; this framework is based on the Matérn hard-core point process, and we shed light on the interaction of the points of such processes, quantifying their main properties.

Throughout this thesis we rely on verifying the analytic results and formulas against computer simulations using Monte Carlo analysis, and we present practical numerical examples to reflect the usefulness of the presented methodologies and results in real-life scenarios. Most of the work presented in this dissertation has been published, in part or as a whole, in highly ranked peer-reviewed journals, conference proceedings, and book chapters, or is currently undergoing review; these publications are highlighted and identified in the course of this thesis. Finally, we wish the reader an enjoyable journey through this thesis, and hope it adds to the understanding of the promising new technologies of aerial networks and node cooperation.
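
As a small illustration of one analytic tool named above, the following sketch (our own, with arbitrary assumed parameters; the function name matern_type2 is ours, and nothing here is code from the thesis) samples a Matérn type-II hard-core point process by dependent thinning of a Poisson point process and checks the empirical intensity against the classical formula (1 - exp(-lam*pi*r^2)) / (pi*r^2).

```python
import numpy as np

rng = np.random.default_rng(1)

def matern_type2(lam, r_min, side):
    """Matern type-II hard-core process on a [0, side]^2 window: thin a
    homogeneous PPP of intensity lam by keeping a point only if no other
    point within distance r_min carries a smaller independent uniform mark."""
    n = rng.poisson(lam * side**2)
    pts = rng.uniform(0.0, side, size=(n, 2))
    marks = rng.uniform(size=n)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        d = np.hypot(*(pts - pts[i]).T)
        rivals = (d < r_min) & (d > 0)
        if np.any(marks[rivals] < marks[i]):
            keep[i] = False
    return pts[keep]

# Illustrative parameters (assumptions, not values from the thesis)
lam, r_min, side = 0.01, 5.0, 100.0
pts = matern_type2(lam, r_min, side)
# Classical retained intensity; the empirical value runs slightly high
# because points near the window edge have fewer potential rivals.
print("empirical intensity  :", len(pts) / side**2)
print("theoretical intensity:", (1 - np.exp(-lam * np.pi * r_min**2)) / (np.pi * r_min**2))
```

Hard-core processes of this type are natural models for transmitters that must keep a minimum separation, which is presumably why they suit the aerial-backhaul setting described above.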

    Random Matrices for Information Processing – A Democratic Vision


    Diversity Combining under Interference Correlation in Wireless Networks

    A theoretical framework is developed for analyzing the performance of diversity combining under interference correlation. Stochastic models for different types of diversity combining and networks are presented and used for the analysis. These models capture relevant system aspects such as network density, path loss, channel fading, number of antennas, and transmitter/receiver processing. Theoretical results are derived, performance comparisons are presented, and design insights are obtained.
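
The correlation effect can be made concrete with a small Monte Carlo sketch (our own illustration under assumed parameters, not a model taken from the paper): in two-branch selection combining, both antennas see independent fading but the same interferer locations, which correlates the branch SIRs and erodes the diversity gain relative to the idealized case where each branch sees an independently drawn interference field.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters (assumptions)
lam, alpha, T, R, trials = 1e-4, 4.0, 1.0, 2_000.0, 4_000
d0 = 50.0   # distance to the desired transmitter

def branch_sir(interferer_r):
    """SIR on one antenna branch: fading is drawn independently per branch,
    but the interferer locations are whatever is passed in."""
    h = rng.exponential()
    g = rng.exponential(size=interferer_r.size)
    return (h * d0**-alpha) / np.sum(g * interferer_r**-alpha)

succ_corr = succ_indep = 0
for _ in range(trials):
    n = max(rng.poisson(lam * np.pi * R**2), 1)
    r = R * np.sqrt(rng.uniform(size=n))
    # Correlated: both branches see the same interferer locations
    if max(branch_sir(r), branch_sir(r)) > T:
        succ_corr += 1
    # Idealized: second branch sees an independently redrawn network
    n2 = max(rng.poisson(lam * np.pi * R**2), 1)
    r2 = R * np.sqrt(rng.uniform(size=n2))
    if max(branch_sir(r), branch_sir(r2)) > T:
        succ_indep += 1

print("selection combining, correlated interference :", succ_corr / trials)
print("selection combining, independent interference:", succ_indep / trials)
```

The first printed success probability is typically noticeably lower than the second, quantifying what the shared interference geometry costs.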

    Statistical learning with the determinantal point process

    This thesis presents the determinantal point process, a probabilistic model that captures repulsion between the points of a certain space. The repulsion is encoded by a similarity matrix, the kernel matrix of the process, which specifies which points are more similar and therefore less likely to appear in the same subset. In contrast to uniform random sampling, this point process gives more weight to subsets whose elements are diverse and heterogeneous. Diversity has become a key concept in domains such as medicine, sociology, forensic sciences, and behavioral sciences. The determinantal point process therefore offers a promising alternative to traditional sampling methods, since it takes the diversity of the selected elements into account, and it is already actively used in machine learning as a subset selection model. Its application in statistics is illustrated with three papers.
The first paper considers consensus clustering, which consists of running a clustering algorithm on the same data a large number of times. To sample the initial points of the algorithm, we propose the determinantal point process instead of uniform random sampling, and we show that the former produces better clustering results. The second paper extends the methodology of the first paper to datasets with a large number of observations. Such datasets impose an additional computational burden, since sampling with the determinantal point process requires the spectral decomposition of a large kernel matrix. We present two different approaches to this problem and show that both yield better results than clustering based on a uniform selection of initial points. The third paper addresses Bayesian variable selection for linear and logistic regression when the number of covariates is large, using Markov chain Monte Carlo methods with the Metropolis-Hastings algorithm. We show that setting the determinantal point process as the prior distribution on the model space yields a better final subset of variables than a uniform prior.
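
For readers unfamiliar with how such samples are drawn, here is a compact sketch (our own, not code from the thesis) of the standard spectral sampling algorithm of Hough et al. for a determinantal point process with kernel K; the toy Gaussian-similarity kernel and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_dpp(K):
    """Sample a subset from a DPP with symmetric kernel K (eigenvalues in
    [0, 1]) using the spectral algorithm of Hough et al."""
    vals, vecs = np.linalg.eigh(K)
    # Phase 1: keep eigenvector i independently with probability lambda_i.
    V = vecs[:, rng.uniform(size=len(vals)) < vals]
    items = []
    while V.shape[1] > 0:
        # Phase 2: pick item i with probability ||row_i(V)||^2 / #columns.
        p = (V ** 2).sum(axis=1)
        i = rng.choice(len(p), p=p / p.sum())
        items.append(int(i))
        # Condition on i: restrict span(V) to vectors vanishing at coordinate i,
        # via an orthonormal basis of the coefficient space orthogonal to row i.
        e = V[i] / np.linalg.norm(V[i])
        Q, _ = np.linalg.qr(np.column_stack([e, np.eye(V.shape[1])]))
        V = V @ Q[:, 1:]   # orthonormal basis, one dimension fewer
    return sorted(items)

# Toy example: Gaussian similarity between 20 points on a line, rescaled so
# the eigenvalues lie in [0, 1]; nearby (similar) points repel each other.
x = np.linspace(0.0, 1.0, 20)
L = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.02)
K = L / (np.linalg.eigvalsh(L).max() + 1e-9)
print("sampled diverse subset:", sample_dpp(K))
```

The sampled indices tend to be well spread along the line, which is exactly the diversity property the thesis exploits for seeding clustering algorithms and for priors over models.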

    From Polar to Reed-Muller Codes: Unified Scaling, Non-standard Channels, and a Proven Conjecture

    The year 2016, in which I am writing these words, marks the centenary of Claude Shannon, the father of information theory. In his landmark 1948 paper "A Mathematical Theory of Communication", Shannon established the largest rate at which reliable communication is possible, and he referred to it as the channel capacity. Since then, researchers have focused on the design of practical coding schemes that could approach such a limit. The road to channel capacity has been almost 70 years long and, after many ideas, occasional detours, and some rediscoveries, it has culminated in the description of low-complexity and provably capacity-achieving coding schemes, namely, polar codes and iterative codes based on sparse graphs. However, next-generation communication systems require an unprecedented performance improvement, and the number of transmission settings relevant in applications is rapidly increasing. Hence, although Shannon's limit seems finally close at hand, new challenges are just around the corner. In this thesis, we trace a road that goes from polar to Reed-Muller codes and, by doing so, we investigate three main topics: unified scaling, non-standard channels, and capacity via symmetry.

First, we consider unified scaling. A coding scheme is capacity-achieving when, for any rate smaller than capacity, the error probability tends to 0 as the block length becomes increasingly larger. However, the practitioner is often interested in more specific questions such as, "How much do we need to increase the block length in order to halve the gap between rate and capacity?". We focus our analysis on polar codes and develop a unified framework to rigorously analyze the scaling of the main parameters, i.e., block length, rate, error probability, and channel quality. Furthermore, in light of the recent success of a list decoding algorithm for polar codes, we provide scaling results on the performance of list decoders.

Next, we deal with non-standard channels. When we say that a coding scheme achieves capacity, we typically consider binary memoryless symmetric channels. However, practical transmission scenarios often involve more complicated settings. For example, the downlink of a cellular system is modeled as a broadcast channel, and the communication on fiber links is inherently asymmetric. We propose provably optimal low-complexity solutions for these settings. In particular, we present a polar coding scheme that achieves the best known rate region for the broadcast channel, and we describe three paradigms to achieve the capacity of asymmetric channels. To do so, we develop general coding "primitives", such as the chaining construction that has already proved to be useful in a variety of communication problems.

Finally, we show how to achieve capacity via symmetry. In the early days of coding theory, a popular paradigm consisted in exploiting the structure of algebraic codes to devise practical decoding algorithms. However, proving the optimality of such coding schemes remained an elusive goal. In particular, the conjecture that Reed-Muller codes achieve capacity dates back to the 1960s. We solve this open problem by showing that Reed-Muller codes and, in general, codes with sufficient symmetry are capacity-achieving over erasure channels under optimal MAP decoding. As the proof does not rely on the precise structure of the codes, we are able to show that symmetry alone guarantees optimal performance.
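
The kinship between the two code families can be made concrete: both select rows of the same Kronecker power G_n = [[1,0],[1,1]]^(kron n), Reed-Muller by row weight and polar by sub-channel reliability. The toy sketch below (our own illustration, not code from the thesis; the indexing convention and the parameters n = 4, eps = 0.4 are arbitrary choices) compares the two row-selection rules over a binary erasure channel, where the polarization recursion is exact.

```python
import numpy as np

def bec_polarization(n, eps):
    """Bhattacharyya parameters of the 2^n polarized BEC(eps) sub-channels.
    For the erasure channel the recursion is exact:
    z -> 2z - z^2 (degraded branch), z -> z^2 (upgraded branch)."""
    z = np.array([eps])
    for _ in range(n):
        z = np.concatenate([2 * z - z**2, z**2])
    return z

n, eps = 4, 0.4                 # toy parameters, chosen only for illustration
N, k = 2**n, 11                 # k = 11 = dim RM(2,4) = C(4,0)+C(4,1)+C(4,2)

# Polar rule: keep the k sub-channels with the smallest Bhattacharyya parameter.
polar_set = set(int(i) for i in np.argsort(bec_polarization(n, eps))[:k])

# Reed-Muller rule: row i of G_n has Hamming weight 2^wt(i), so RM(2, 4)
# keeps the rows whose index has binary weight at least n - 2 = 2.
rm_set = {i for i in range(N) if bin(i).count("1") >= n - 2}

print("polar rows:", sorted(polar_set))
print("RM rows   :", sorted(rm_set))
print("overlap   :", len(polar_set & rm_set), "of", k)
```

With these toy parameters the two rules happen to select exactly the same 11 rows; at larger block lengths the sets generally begin to differ, since the polar rule adapts to the channel while the Reed-Muller rule is channel-independent.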

    Proceedings of the 26th International Symposium on Theoretical Aspects of Computer Science (STACS'09)

    The Symposium on Theoretical Aspects of Computer Science (STACS) is held alternately in France and in Germany. The conference of February 26-28, 2009, held in Freiburg, is the 26th in this series. Previous meetings took place in Paris (1984), Saarbrücken (1985), Orsay (1986), Passau (1987), Bordeaux (1988), Paderborn (1989), Rouen (1990), Hamburg (1991), Cachan (1992), Würzburg (1993), Caen (1994), München (1995), Grenoble (1996), Lübeck (1997), Paris (1998), Trier (1999), Lille (2000), Dresden (2001), Antibes (2002), Berlin (2003), Montpellier (2004), Stuttgart (2005), Marseille (2006), Aachen (2007), and Bordeaux (2008).