
    Networking - A Statistical Physics Perspective

    Efficient networking has a substantial economic and societal impact in a broad range of areas, including transportation systems, wired and wireless communications, and a range of Internet applications. As transportation and communication networks become increasingly complex, the ever-increasing demands for congestion control, higher traffic capacity, quality of service, robustness and reduced energy consumption require new tools and methods to meet these conflicting requirements. The new methodology should serve both for gaining a better understanding of the properties of networking systems at the macroscopic level and for developing new principled optimization and management algorithms at the microscopic level. Methods of statistical physics seem best placed to provide new approaches, as they have been developed specifically to deal with nonlinear large-scale systems. This paper presents an overview of tools and methods that have been developed within the statistical physics community and that can be readily applied to address emerging problems in networking. These include diffusion processes, methods from disordered systems and polymer physics, and probabilistic inference, which have direct relevance to network routing, file and frequency distribution, the exploration of network structure and vulnerability, and various other practical networking applications.
    Comment: (Review article) 71 pages, 14 figures
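As a small illustration of the first tool mentioned above, the sketch below (a generic example, not taken from the paper) power-iterates the transition operator of an unbiased random walk on a toy graph. The stationary visit frequency is proportional to node degree, a basic diffusion property that network-exploration and routing heuristics exploit.

```python
def stationary_distribution(adj, iters=1000):
    """Power-iterate the random-walk transition operator on an
    adjacency dict {node: [neighbours]} until (near) convergence."""
    nodes = list(adj)
    p = {v: 1.0 / len(nodes) for v in nodes}  # start from the uniform distribution
    for _ in range(iters):
        q = {v: 0.0 for v in nodes}
        for v in nodes:
            share = p[v] / len(adj[v])  # walker leaves v along a uniform neighbour
            for w in adj[v]:
                q[w] += share
        p = q
    return p

# Toy graph: a triangle {0, 1, 2} with a pendant node 3 attached to node 0.
# Degrees are 3, 2, 2, 1, so the stationary distribution is degree/8.
graph = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
pi = stationary_distribution(graph)
```

The odd cycle makes the walk aperiodic, so the iteration converges; on a bipartite graph it would oscillate instead.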

    Multiple Access for Massive Machine Type Communications

    The internet we have known thus far has been an internet of people, as it has connected people with one another. However, these connections are forecasted to occupy only a minuscule fraction of future communications. The internet of tomorrow is, indeed, the internet of things. The Internet of Things (IoT) promises to improve all aspects of life by connecting everything to everything. An enormous amount of effort is being exerted to turn this vision into a reality. Sensors and actuators will communicate and operate in an automated fashion with no or minimal human intervention. In the current literature, these sensors and actuators are referred to as machines, and the communication amongst them is referred to as Machine-to-Machine (M2M) communication or Machine-Type Communication (MTC). As IoT requires a seamless mode of communication that is available anywhere and anytime, wireless communications will be one of the key enabling technologies for IoT. In existing wireless cellular networks, users with data to transmit first need to request channel access. All access requests are processed by a central unit that in turn either grants or denies them. Once granted access, users' data transmissions are non-overlapping and interference-free. However, as the number of IoT devices is forecasted to be on the order of hundreds of millions, if not billions, in the near future, the access channels of existing cellular networks are expected to suffer from severe congestion and, thus, incur unpredictable latencies. In random access, by contrast, users with data to transmit access the channel in an uncoordinated and probabilistic fashion, requiring little or no signalling overhead. However, this reduction in overhead comes at the expense of reliability and efficiency, due to the interference caused by contending users.
In most existing random access schemes, packets are lost when they experience interference from other packets transmitted over the same resources. Moreover, most existing random access schemes are best-effort schemes with almost no Quality of Service (QoS) guarantees. In this thesis, we investigate the performance of different random access schemes in different settings to resolve the problem of massive access by IoT devices with diverse QoS guarantees. First, we take a step towards re-designing existing random access protocols to make them more practical and more efficient. For many years, researchers have adopted the collision channel model in random access schemes: a collision is the event of two or more users transmitting over the same time-frequency resources. In the event of a collision, all the involved data is lost, and users need to retransmit their information. In practice, however, data can be recovered even in the presence of interference, provided that the power of the signal is sufficiently larger than the combined power of the noise and the interference. Based on this, we re-define a collision as the event of the interference power exceeding a pre-determined threshold. We propose a new analytical framework, inspired by error control codes on graphs, to compute the probability of packet recovery failure. We optimize the random access parameters using evolution strategies. Our results show a significant improvement in performance in terms of reliability and efficiency. Next, we focus on supporting heterogeneous IoT applications and accommodating their diverse latency and reliability requirements in a unified access scheme. We propose a multi-stage approach where each group of applications transmits in different stages with different probabilities. We propose a new analytical framework to compute the probability of packet recovery failure for each group in each stage.
We also optimize the random access parameters using evolution strategies. Our results show that our proposed scheme can outperform the coordinated access schemes of existing cellular networks when the number of users is very large. Finally, we investigate random non-orthogonal multiple access schemes, which are known to achieve higher spectral efficiency and to support higher loads. In our proposed scheme, user detection and channel estimation are carried out via pilot sequences that are transmitted simultaneously with the user's data. Here, a collision is defined as the event of two or more users selecting the same pilot sequence. All collisions are regarded as interference to the remaining users. We first study the distribution of the interference power and derive its expression. Then, we use this expression to derive simple yet accurate analytical bounds on the throughput and outage probability of the proposed scheme. We consider both joint decoding and successive interference cancellation. We show that the proposed scheme is especially useful in the case of short packet transmission.
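The threshold-based redefinition of a collision described above can be illustrated with a short Monte Carlo sketch. The parameters and the Rayleigh-fading assumption below are illustrative choices of ours, not the thesis's framework: a packet is recovered whenever its signal-to-interference-plus-noise ratio exceeds a threshold, rather than only when it is the lone transmission in a slot.

```python
import random

def throughput(n_users, p_tx, sinr_thresh, noise, model, slots=20000, seed=1):
    """Average packets recovered per slot under the given channel model."""
    rng = random.Random(seed)
    recovered = 0
    for _ in range(slots):
        # Received powers of this slot's transmitters (unit-mean Rayleigh fading).
        powers = [rng.expovariate(1.0) for _ in range(n_users) if rng.random() < p_tx]
        if model == "collision":
            # Classic collision channel: success only if exactly one user transmits.
            recovered += (len(powers) == 1)
        else:
            # Capture model: a packet survives if its SINR exceeds the threshold.
            total = sum(powers)
            recovered += sum(1 for s in powers if s / (total - s + noise) >= sinr_thresh)
    return recovered / slots

t_collision = throughput(50, 0.02, sinr_thresh=1.0, noise=0.1, model="collision")
t_capture = throughput(50, 0.02, sinr_thresh=1.0, noise=0.1, model="capture")
```

Under fading, the strongest of several overlapping packets is often still decodable, so the capture model yields a noticeably higher throughput than the all-or-nothing collision model at the same load.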

    Coping with spectrum and energy scarcity in Wireless Networks: a Stochastic Optimization approach to Cognitive Radio and Energy Harvesting

    In the last decades, we have witnessed an explosion of wireless communications and networking, spurring great interest in the research community. The design of wireless networks is challenged by the scarcity of resources, especially spectrum and energy. In this thesis, we explore the potential offered by two novel technologies to cope with spectrum and energy scarcity: Cognitive Radio (CR) and Energy Harvesting (EH). CR is a novel paradigm for improving the spectral efficiency of wireless networks by enabling the coexistence of an incumbent legacy system and an opportunistic system with CR capability. We investigate a technique where the CR system exploits the temporal redundancy introduced by the Hybrid Automatic Repeat reQuest (HARQ) protocol implemented by the legacy system to perform interference cancellation, thus enhancing its own throughput. Recently, EH has been proposed to cope with energy scarcity in Wireless Sensor Networks (WSNs). Devices with EH capability harvest energy from the environment, e.g., from solar, wind, thermal or piezoelectric sources, to power their circuitry and to perform data sensing, processing and communication tasks. Due to the random energy supply, how best to manage the available energy is an open research issue. In the second part of this thesis, we design control policies for EH devices and investigate the impact of factors such as finite battery storage, time correlation in the EH process and battery degradation on the performance of such systems. We cast both paradigms in a stochastic optimization framework, and investigate techniques that cope with spectrum and energy scarcity by opportunistically leveraging interference and ambient energy, respectively; their benefits are demonstrated both analytically and numerically. As an additional topic, we investigate the issue of channel estimation in Ultra-Wideband (UWB) systems.
Due to the large transmission bandwidth, the channel has typically been modeled as sparse. However, some propagation phenomena, e.g., scattering from rough surfaces and frequency distortion, are better modeled by a diffuse channel. We propose a novel Hybrid Sparse/Diffuse (HSD) channel model that captures both components, and design channel estimators based on it.
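The impact of finite battery storage mentioned above can be sketched with a toy simulation (all parameters are illustrative assumptions, not the thesis's model): when the battery is small, harvested energy is lost to overflow, and the fraction of slots in which the device can transmit drops accordingly.

```python
import random

def transmit_rate(battery_cap, slots=100_000, seed=7):
    """Fraction of slots in which a greedy EH sensor can transmit."""
    rng = random.Random(seed)
    level, transmissions = 0, 0
    for _ in range(slots):
        # Harvest 0-2 energy units; anything beyond the battery cap is lost.
        level = min(battery_cap, level + rng.choice([0, 1, 2]))
        if level >= 1:  # greedy policy: spend 1 unit and transmit whenever possible
            level -= 1
            transmissions += 1
    return transmissions / slots

small = transmit_rate(battery_cap=2)
large = transmit_rate(battery_cap=50)
```

Even though the average harvest rate (1 unit per slot) matches the consumption rate, the 2-unit battery wastes enough energy to overflow that its transmission rate falls well below that of the 50-unit battery, which is why battery size enters the control-policy design.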

    Stochastic models for cloud data backups and video streaming on virtual reality headsets


    Caching and Distributed Storage:Models, Limits and Designs

    The simple task of storing a database, or transferring it to a different point via a communication channel, becomes far more complex as the size of the database grows large. The limited bandwidth available for transmission plays a central role in this predicament. In two broad contexts, Content Distribution Networks (CDNs) and Distributed Storage Systems (DSSs), the adverse effect of the growing size of the database on the transmission bandwidth can be mitigated by exploiting additional storage units. Characterizing the optimal tradeoff between the transmission bandwidth and the storage size is the central quest of numerous works in the recent literature, including this thesis. In a DSS, individual servers fail routinely and must be restored by downloading data from the remaining servers, a task referred to as the repair process. To render the process of repairing failed servers more straightforward and efficient, various forms of redundancy can be introduced into the system. One of the benchmarks by which the reliability of a DSS is measured is availability, which refers to the number of disjoint sets of servers that can help to repair any failed server. We study the interaction of this parameter with the amount of traffic generated during the repair process (the repair bandwidth) and the storage size. In particular, we propose a novel DSS architecture which achieves a much smaller repair bandwidth for the same availability, compared to the state of the art. In the context of CDNs, the network can be highly congested during certain hours of the day and almost idle at other times. This variability of traffic can be reduced by utilizing local storage units that prefetch data while the network is idle. This approach is referred to as caching. In this thesis, we analyze a CDN that has access to independent data from various content providers.
We characterize the best caching strategy in terms of the aggregate peak traffic, under the constraint that coding across contents from different libraries is prohibited. Furthermore, we prove that under a certain set of conditions this restriction incurs no loss of optimality.
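A toy numerical example (with made-up numbers, not taken from the thesis) of the caching idea described above: prefetching during idle hours shifts traffic away from the peak, lowering the aggregate peak load even after accounting for the prefetch traffic itself.

```python
# Hourly demand (in traffic units) over one day; an assumed, illustrative profile.
demand = [2, 1, 1, 3, 9, 10, 8, 4]
cache_hit_fraction = 0.4   # assumed share of requests the local cache can serve
prefetch_load = 1          # extra traffic per idle hour to refill the cache

peak_without = max(demand)
# With caching, only misses traverse the network in busy hours, while the
# idle hours (demand <= 2 here) absorb the extra prefetch traffic.
with_cache = [d * (1 - cache_hit_fraction) + (prefetch_load if d <= 2 else 0)
              for d in demand]
peak_with = max(with_cache)
```

Here the peak drops from 10 to 6 units: the prefetch traffic raises the idle hours slightly, but they remain far below the (reduced) peak, which is exactly the traffic-smoothing effect caching targets.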