Modeling and Analysis of Cellular Networks Using Stochastic Geometry: A Tutorial
This paper presents a tutorial on stochastic geometry (SG)-based analysis for cellular networks. The tutorial is distinguished by its depth with respect to wireless communication details and its focus on cellular networks. It starts by modeling and analyzing the baseband interference in a baseline single-tier downlink cellular network with single-antenna base stations and universal frequency reuse. It then characterizes the signal-to-interference-plus-noise ratio (SINR) and its related performance metrics. In particular, a unified approach to error probability, outage probability, and transmission rate analysis is presented. Although the main focus is on cellular networks, the unified approach applies to other types of wireless networks that impose interference protection around receivers. The paper then extends the unified approach to capture cellular network characteristics (e.g., frequency reuse, multiple antennas, and power control). It also presents numerical examples with accompanying demonstrations and discussions. Finally, it highlights state-of-the-art research and points out future research directions.
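SG-based SINR analysis of this kind is routinely sanity-checked by simulation. As a minimal sketch (not taken from the paper), the following Monte Carlo estimate of downlink coverage probability assumes a Poisson field of base stations, nearest-base-station association, Rayleigh fading, and an interference-limited regime; the intensity, threshold, and path-loss exponent are illustrative choices.

```python
import numpy as np

def coverage_probability(lam=1.0, alpha=4.0, T=1.0, radius=10.0,
                         n_trials=20000, rng=None):
    """Monte Carlo SINR coverage for a downlink PPP network
    (interference-limited, Rayleigh fading, nearest-BS association)."""
    rng = np.random.default_rng(0) if rng is None else rng
    covered = 0
    area = np.pi * radius**2
    for _ in range(n_trials):
        n = rng.poisson(lam * area)
        if n == 0:
            continue
        # Uniform radii in a disk around the typical user at the origin
        r = radius * np.sqrt(rng.random(n))
        d = np.sort(r)                  # nearest BS first
        h = rng.exponential(1.0, n)     # Rayleigh fading -> exponential power gains
        p = h * d**(-alpha)             # received powers
        signal = p[0]                   # serve from the nearest BS
        interference = p[1:].sum()
        if interference == 0 or signal / interference > T:
            covered += 1
    return covered / n_trials
```

For these parameters (T = 1, alpha = 4, no noise) the classic closed-form result for Rayleigh fading gives a coverage probability of roughly 0.56, which the simulation should approach up to boundary truncation.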
Performance evaluation of future wireless networks: node cooperation and aerial networks
Perhaps future historians will refer to this era as the \emph{information age} and recognize it as a paramount milestone in mankind's progress. One of the main pillars of this age is the ability to transmit and communicate information effectively and reliably, and wireless radio technology has become one of the most vital enablers of such communication. Demand for radio communication is accelerating at a never-resting pace, posing a great challenge not only to service providers but also to researchers and innovators to explore out-of-the-box technologies. These challenges mainly concern providing faster data communication over seamless, reliable, and cost-efficient wireless networks, given the limited availability of physical radio resources and the environmental impact of increasing energy consumption. Traditional wireless communication is usually deployed in a cellular manner, where fixed base stations coordinate radio resources and act as intermediate data handlers. The concept of cellular networks and hotspots is widely adopted as the current stable scheme of wireless communication. However, in many situations this fixed infrastructure can be severely damaged by natural disasters, or can suffer congestion and traffic blockage. Moreover, in current networks any mobile-to-mobile data session must pass through the serving base station, which may cause unnecessary energy consumption. In order to enhance the performance and reliability of future wireless networks and to reduce their environmental footprint, we explore two complementary concepts: node cooperation and aerial networks.
The ability of wireless nodes to cooperate opens two main opportunities. The first is the direct delivery of information between communicating nodes without relaying traffic through the serving base station, thus reducing energy consumption and alleviating traffic congestion. The second is that one node can help a farther one by relaying its traffic towards the base station, thus extending network coverage and reliability. Both schemes can introduce significant energy savings and can enhance the overall availability of wireless networks in case of natural disasters. In addition to node cooperation, a complementary technology to explore is \emph{aerial networks}, where base stations are carried on aerial platforms such as airships, UAVs, or blimps. Aerial networks can provide rapidly deployable coverage for remote areas or regions afflicted by natural disasters, or even patch surge traffic demand at public events; node cooperation can complement both regular terrestrial coverage and aerial networks. In this research, we explore these two complementary technologies from both an experimental and an analytical approach. From the experimental perspective, we shed light on the properties of the radio channels that host terrestrial node cooperation and air-to-ground communication: we use both simulation results and practical measurements to formulate radio propagation models for device-to-device communication and for air-to-ground links. Furthermore, we investigate radio spectrum availability for node cooperation in different urban environments by conducting an extensive mobile measurement survey. Within the experimental approach, we also investigate a novel concept of a temporary cognitive femtocell network as an applied solution for public safety communication networks in the aftermath of a natural disaster.
From the analytical perspective, we use mathematical tools from stochastic geometry to formulate novel analytical methodologies that explain some of the most important theoretical bounds on the network performance enhancements promised by node cooperation. We start by determining the estimated coverage and rate received by mobile users from conventional cellular networks and from aerial platforms. We then optimize this coverage and rate, ensuring that relay nodes and users can fully exploit the coverage efficiently. We continue by analytically quantifying cellular network performance during massive infrastructure failure, where some nodes act as low-power relays forming multi-hop communication links to assist farther nodes outside the reach of the healthy network coverage. In addition, we lay out a mathematical framework for estimating the energy savings of a mediating relay assisting a pair of wireless devices, deriving closed-form expressions for the geometric zone in which relaying is energy efficient. Furthermore, we introduce a novel analytic approach to the energy consumption of wireless nodes on the ground backhauled through an aerial base station; this framework is based on the Mat\'{e}rn hard-core point process. We then shed light on the interactions of the points of these processes, quantifying their main properties. Throughout this thesis, we verify the analytic results and formulas against Monte Carlo computer simulations. We also present practical numerical examples to reflect the usefulness of the presented methodologies and results in real-life scenarios. Most of the work presented in this dissertation has been published, in part or as a whole, in highly ranked peer-reviewed journals, conference proceedings, and book chapters, or is currently undergoing review.
These publications are highlighted and identified throughout this thesis. Finally, we wish the reader an enjoyable journey through this thesis, and hope it adds to the understanding of the promising new technologies of aerial networks and node cooperation.
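One of the tools named above, the Matérn hard-core point process, is commonly sampled by dependent thinning of a Poisson parent process. The following type-II sketch is illustrative (the window size, intensity, and hard-core radius are assumptions, not the thesis's parameters):

```python
import numpy as np

def matern_type2(lam, r_min, side=10.0, rng=None):
    """Matérn type-II hard-core process: thin a parent PPP so that no two
    retained points are closer than r_min, keeping the 'older' of any pair."""
    rng = np.random.default_rng(1) if rng is None else rng
    n = rng.poisson(lam * side * side)
    pts = rng.random((n, 2)) * side
    marks = rng.random(n)                 # independent "age" marks
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        d = np.hypot(*(pts - pts[i]).T)   # distances from point i to all points
        # delete i if some neighbour within r_min carries a smaller mark
        if np.any((d < r_min) & (d > 0) & (marks < marks[i])):
            keep[i] = False
    return pts[keep]
```

The retained pattern has, by construction, no pair of points closer than `r_min`, which is what makes it a natural model for transmitters with minimum-separation constraints.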
Spatial stochastic models for network analysis
This thesis proposes new stochastic interacting particle models for networks and studies some fundamental properties of these models. It considers two application areas of networking: engineering design questions in future wireless systems, and algorithmic tasks on large-scale graph-structured data. The key innovation introduced in this thesis is to bring tools and ideas from stochastic geometry to bear on problems in both application domains. We identify certain fundamental questions in the design and engineering of both wireless systems and large-scale graph-structured data processing systems. Subsequently, we identify novel stochastic geometric models that capture the fundamental properties of these networks, which forms the first research contribution. We then rigorously study these models, bringing to bear tools from stochastic geometry, random graphs, percolation, and Markov processes to establish structural results and fundamental phase transitions. Using the developed mathematical methodology, we identify design insights and develop algorithms that we demonstrate to be instructive in many practical settings. In the setting of wireless systems, this thesis studies both ad-hoc and cellular networks. In the ad-hoc setting, we aim to understand the fundamental limits of the simplest possible protocol for spectrum access: a link transmits whenever it has data to send, treating all interference as noise. Surprisingly, this basic question was not previously understood, as the system dynamics are coupled spatially, through the interference links cause one another, and temporally, through randomness in traffic arrivals. We propose a novel interacting particle model, the spatial birth-death wireless network model, to understand the stability properties of this simple spectrum access protocol. Using tools from Palm calculus and fluid limit theory, we establish a tight characterization of when this model is stable.
Furthermore, we show that whenever the model is stable, the links in steady state exhibit a form of clustering. Leveraging these structural results, we propose two mean-field heuristics to obtain formulas for key performance metrics such as the average delay experienced by a link. We empirically find that the proposed delay formulas predict the system behavior accurately. We subsequently study scalability properties of this model by introducing an appropriate infinite-dimensional version, which we call the Interference Queueing Networks model. It consists of a queue located at each point of an infinite regular integer lattice, with the queues interacting with each other in a translation-invariant fashion. We prove several structural properties of this model, namely tight conditions for the existence of stationary solutions and sufficient conditions for their uniqueness. Remarkably, we obtain an exact formula for the mean delay in this model, unlike in the continuum model, where we relied on mean-field-type heuristics to obtain insights. In the setting of cellular networks, we study optimal association schemes for mobile phones when several base station technologies operate on orthogonal bands. We show that this choice leads to a performance gain we term technology diversity. Interestingly, the gain depends on the amount of instantaneous information a user has about the various base station technologies when making the association decision. We outline optimal association schemes under various information settings that a user may have on the network. Moreover, we propose simple association heuristics that rely on a user obtaining minimal instantaneous information and are thus practical to implement.
We prove that in a certain natural asymptotic regime of parameters, our proposed heuristic policy is also optimal, thus quantifying the value of fine-grained information at a user for association. We empirically observe that the asymptotic result remains valid at finite parameter regimes typical of today's networks. In the application of analyzing large-scale graph-structured data, we consider the graph clustering problem with side information. Graph clustering is a standard and widely used task that consists in partitioning the nodes of a graph into underlying clusters, where nodes in the same cluster are similar to each other and nodes in different clusters are dissimilar. Motivated by applications in social and biological networks, we consider the task of clustering the nodes of a graph when there is side information on the nodes beyond that contained in the graph. For instance, in social networks one has access to metadata about a person (a node in the social graph), such as age, location, and income, along with the combinatorial data of who their friends are. Similarly, in biological networks there is often metadata about an experiment that provides additional context about a node, in addition to the combinatorial data. In this thesis, we propose a generative model for such graph-structured data with side information, inspired both by random graph models in stochastic geometry, such as the random connection model, and by generative models for networks with clusters but without contexts, such as the stochastic block model and the planted partition model. We call the proposed model the planted partition random connection model. Roughly speaking, each node has two labels: an observable R^d-valued feature label (for some fixed d) and an unobservable binary-valued community label.
Conditional on the node labels, edges are drawn at random in the graph depending on both the feature and community labels of the two endpoints. The clustering task consists in recovering the underlying partition of nodes corresponding to the community labels better than a random assignment, given an observation of the generated graph and the features of all nodes. We show that if the density of nodes, i.e., the average number of nodes with features in a unit volume of R^d, is small, then no algorithm can asymptotically beat a random assignment of community labels. On the contrary, if the density of nodes is sufficiently high, we give a simple algorithm that recovers the true underlying partition strictly better than a random assignment. We then apply the proposed algorithm to a problem in computational biology called haplotype phasing and observe empirically that it obtains state-of-the-art results. This demonstrates both the validity of our generative model and the effectiveness of our new algorithm.
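The planted partition random connection model described above can be sketched generatively. The connection kernel used below (connect within radius r with probability p_in for same-community pairs, p_out otherwise) is an illustrative assumption, not the thesis's exact kernel, and all parameter values are hypothetical:

```python
import numpy as np

def planted_partition_rcm(lam=4.0, side=5.0, r=1.0, p_in=0.8, p_out=0.2,
                          rng=None):
    """Toy planted-partition random connection model: nodes form a PPP in a
    box, each gets a hidden binary community label, and two nodes within
    distance r connect with probability p_in (same community) or p_out."""
    rng = np.random.default_rng(2) if rng is None else rng
    n = rng.poisson(lam * side * side)
    x = rng.random((n, 2)) * side        # observable feature labels in R^2
    z = rng.integers(0, 2, n)            # unobservable community labels
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if np.hypot(*(x[i] - x[j])) <= r:
                p = p_in if z[i] == z[j] else p_out
                if rng.random() < p:
                    edges.append((i, j))
    return x, z, edges
```

A clustering algorithm then sees only `x` and `edges` and must recover `z` better than a coin flip, which per the abstract is possible only above a critical node density.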
Diversity Combining under Interference Correlation in Wireless Networks
A theoretical framework is developed for analyzing the performance of diversity combining under interference correlation. Stochastic models for different types of diversity combining and networks are presented and used for analysis. These models consider relevant system aspects such as network density, path loss, channel fading, number of antennas, and transmitter/receiver processing. Theoretical results are derived, performance comparisons are presented, and design insights are obtained.
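The effect studied in this abstract, correlation of interference across diversity branches, can be illustrated with a toy Monte Carlo: when every antenna sees the same interference realization, selection combining helps less than when interference is independent per branch. The combining scheme and all numbers below are illustrative assumptions, not the paper's models:

```python
import numpy as np

def outage(n_ant=2, T=1.0, n_int=5, inr=0.5, correlated=True,
           n_trials=200000, rng=None):
    """Outage probability of selection combining over n_ant antennas,
    with interference either fully shared across antennas (correlated)
    or drawn independently per antenna."""
    rng = np.random.default_rng(4) if rng is None else rng
    s = rng.exponential(1.0, (n_trials, n_ant))        # desired Rayleigh powers
    if correlated:
        # one interference realization shared by all antennas
        i = rng.exponential(inr, (n_trials, n_int)).sum(axis=1, keepdims=True)
    else:
        # independent interference per antenna
        i = rng.exponential(inr, (n_trials, n_ant, n_int)).sum(axis=2)
    sir = s / i
    return np.mean(sir.max(axis=1) < T)                # best branch below threshold
```

By a Jensen-type argument, shared interference makes the branch failures positively dependent, so the correlated outage exceeds the independent-branch outage; the simulation reproduces this gap.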
Apprentissage statistique avec le processus ponctuel déterminantal
This thesis presents the determinantal point process, a probabilistic model that captures
repulsion between points of a certain space. This repulsion is encoded by a similarity matrix, the kernel matrix, which specifies which points are more similar and therefore less likely to appear in the same subset. This point process gives more weight to subsets with a large diversity of elements, which is not the case with the traditional uniform random
sampling. Diversity has become a key concept in domains such as medicine, sociology,
forensic sciences and behavioral sciences. The determinantal point process is considered
a promising alternative to traditional sampling methods, since it takes into account the
diversity of selected elements. It is already actively used in machine learning as a subset
selection method. Its application in statistics is illustrated with three papers. The first
paper presents consensus clustering, which consists in running a clustering algorithm on the same data a large number of times. To sample the initial points of the algorithm,
we propose the determinantal point process as a sampling method instead of a uniform
random sampling and show that the former option produces better clustering results. The
second paper extends the methodology developed in the first paper to large datasets. Such
datasets impose a computational burden since sampling with the determinantal point process
is based on the spectral decomposition of the large kernel matrix. We introduce two methods
to deal with this issue. These methods also produce better clustering results than consensus
clustering based on a uniform sampling of initial points. The third paper addresses the
problem of variable selection for the linear model and the logistic regression, when the
number of predictors is large. A Bayesian approach is adopted, using Markov Chain Monte
Carlo methods with the Metropolis-Hastings algorithm. We show that setting the determinantal
point process as the prior distribution for the model space selects a better final model than
the model selected by a uniform prior on the model space.
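The spectral decomposition mentioned above is the heart of the standard DPP sampling algorithm: eigenvectors of the kernel are kept independently with probability eig/(1+eig), and items are then drawn sequentially from the resulting projection. A compact sketch, assuming a symmetric positive semidefinite L-ensemble kernel:

```python
import numpy as np

def sample_dpp(L, rng=None):
    """Sample a subset from a determinantal point process with L-ensemble
    kernel L, via the standard spectral (eigendecomposition) algorithm."""
    rng = np.random.default_rng(3) if rng is None else rng
    vals, vecs = np.linalg.eigh(L)
    keep = rng.random(len(vals)) < vals / (1.0 + vals)
    V = vecs[:, keep]
    items = []
    while V.shape[1] > 0:
        # P(item i) is proportional to the squared norm of row i of V
        p = (V**2).sum(axis=1)
        p /= p.sum()
        i = rng.choice(len(p), p=p)
        items.append(i)
        # project onto the subspace of span(V) vanishing at coordinate i
        j = np.argmax(np.abs(V[i]))      # a column with a nonzero entry at i
        col = V[:, j] / V[i, j]
        V = V - np.outer(col, V[i])      # zero out row i in every column
        V = np.delete(V, j, axis=1)      # drop the now-zero column
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)       # re-orthonormalize for stability
    return sorted(items)
```

Once an item is selected, its row of `V` is zeroed, so its probability drops to zero; this is what enforces the without-replacement, repulsive behavior. For a rank-1 kernel such as the all-ones matrix, at most one item can ever be returned.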
From Polar to Reed-Muller Codes: Unified Scaling, Non-standard Channels, and a Proven Conjecture
The year 2016, in which I am writing these words, marks the centenary of Claude Shannon, the father of information theory. In his landmark 1948 paper "A Mathematical Theory of Communication", Shannon established the largest rate at which reliable communication is possible, and he referred to it as the channel capacity. Since then, researchers have focused on the design of practical coding schemes that could approach such a limit. The road to channel capacity has been almost 70 years long and, after many ideas, occasional detours, and some rediscoveries, it has culminated in the description of low-complexity and provably capacity-achieving coding schemes, namely, polar codes and iterative codes based on sparse graphs. However, next-generation communication systems require an unprecedented performance improvement and the number of transmission settings relevant in applications is rapidly increasing. Hence, although Shannon's limit seems finally close at hand, new challenges are just around the corner. In this thesis, we trace a road that goes from polar to Reed-Muller codes and, by doing so, we investigate three main topics: unified scaling, non-standard channels, and capacity via symmetry. First, we consider unified scaling. A coding scheme is capacity-achieving when, for any rate smaller than capacity, the error probability tends to 0 as the block length becomes increasingly larger. However, the practitioner is often interested in more specific questions such as, "How much do we need to increase the block length in order to halve the gap between rate and capacity?". We focus our analysis on polar codes and develop a unified framework to rigorously analyze the scaling of the main parameters, i.e., block length, rate, error probability, and channel quality. Furthermore, in light of the recent success of a list decoding algorithm for polar codes, we provide scaling results on the performance of list decoders. Next, we deal with non-standard channels. 
When we say that a coding scheme achieves capacity, we typically consider binary memoryless symmetric channels. However, practical transmission scenarios often involve more complicated settings. For example, the downlink of a cellular system is modeled as a broadcast channel, and the communication on fiber links is inherently asymmetric. We propose provably optimal low-complexity solutions for these settings. In particular, we present a polar coding scheme that achieves the best known rate region for the broadcast channel, and we describe three paradigms to achieve the capacity of asymmetric channels. To do so, we develop general coding "primitives", such as the chaining construction that has already proved to be useful in a variety of communication problems. Finally, we show how to achieve capacity via symmetry. In the early days of coding theory, a popular paradigm consisted in exploiting the structure of algebraic codes to devise practical decoding algorithms. However, proving the optimality of such coding schemes remained an elusive goal. In particular, the conjecture that Reed-Muller codes achieve capacity dates back to the 1960s. We solve this open problem by showing that Reed-Muller codes and, in general, codes with sufficient symmetry are capacity-achieving over erasure channels under optimal MAP decoding. As the proof does not rely on the precise structure of the codes, we are able to show that symmetry alone guarantees optimal performance.
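The road from polar to Reed-Muller codes traced above rests on a shared ingredient: both code families use rows of the n-fold Kronecker power of the 2x2 kernel [[1,0],[1,1]], differing only in which rows carry information bits. A minimal sketch of the encoding transform, via the usual butterfly recursion:

```python
import numpy as np

def polar_transform(u):
    """Apply x = u G_N over GF(2), where G_N = [[1,0],[1,1]]^{kron n},
    N = 2^n, using the O(N log N) butterfly recursion."""
    x = np.array(u, dtype=np.uint8) % 2
    n = len(x)
    assert n & (n - 1) == 0, "length must be a power of two"
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            # upper half of each butterfly gets the XOR, lower half passes through
            x[i:i+step] ^= x[i+step:i+2*step]
        step *= 2
    return x
```

Since the kernel squares to the identity over GF(2), the transform is its own inverse, which gives a quick correctness check.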
Proceedings of the 26th International Symposium on Theoretical Aspects of Computer Science (STACS'09)
The Symposium on Theoretical Aspects of Computer Science (STACS) is held alternately in France and in Germany. The conference of February 26-28, 2009, held in Freiburg, is the 26th in this series. Previous meetings took place in Paris (1984), Saarbrücken (1985), Orsay (1986), Passau (1987), Bordeaux (1988), Paderborn (1989), Rouen (1990), Hamburg (1991), Cachan (1992), Würzburg (1993), Caen (1994), München (1995), Grenoble (1996), Lübeck (1997), Paris (1998), Trier (1999), Lille (2000), Dresden (2001), Antibes (2002), Berlin (2003), Montpellier (2004), Stuttgart (2005), Marseille (2006), Aachen (2007), and Bordeaux (2008).