Spectrum Allocation in Networks with Finite Sources and Data-Driven Characterization of Users' Stochastic Dynamics
During emergency situations, public safety communication systems (PSCSs) become overloaded with high traffic loads; PSCSs are finite source networks. The goal of our study is to propose techniques for efficient spectrum allocation in finite source networks that help alleviate this overloading. A PSCS has two system segments, one for system-access control and the other for communications, each with dedicated frequency channels. The first part of our research, consisting of three projects, is based on modeling and analysis of finite source systems for optimal spectrum allocation, for both access control and communications. In the first project (Chapter 2), we study spectrum allocation based on the concept of cognitive radio systems. In the second project (Chapter 3), we study optimal communication channel allocation by call admission and preemption control. In the third project (Chapter 4), we study the optimal joint allocation of frequency channels for access control and communications. These spectrum allocation techniques require knowledge of the call traffic parameters and of the users' priority levels, which, in practical systems, are extracted from call-record metadata. A key consideration when analyzing call records is that the call arrival traffic and the users' priority levels change with events on the ground: a change in events affects the communication behavior of the users, which in turn affects the call arrival traffic and the priority levels. Thus, the first and foremost step in analyzing the call records of a given user, in order to extract call traffic information, is to segment the data into time intervals over which the user's communication behavior is homogeneous, i.e., stationary. Such segmentation of the data of a practical PSCS is the goal of our fourth project (Chapter 5), which constitutes the second part of our study.
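As an illustration of the segmentation step described above, the following is a minimal sketch, not the method of Chapter 5: it assumes a user's call records reduce to a list of arrival timestamps and flags a change point whenever the empirical arrival rates of two adjacent windows differ by more than a threshold. The window length and threshold are arbitrary assumptions.

```python
import numpy as np

def segment_stationary_intervals(arrival_times, window=50, z_thresh=3.0):
    """Split call arrival timestamps into intervals of roughly
    stationary arrival rate (simple sliding-window heuristic)."""
    arrival_times = np.asarray(arrival_times, dtype=float)
    boundaries = [0]
    i = window
    while i + window <= len(arrival_times):
        left = arrival_times[i - window:i]
        right = arrival_times[i:i + window]
        # Rate estimate in each window: calls per unit time.
        rate_l = window / (left[-1] - left[0])
        rate_r = window / (right[-1] - right[0])
        # Approximate std. error of the rate difference for Poisson arrivals.
        se = np.hypot(rate_l, rate_r) / np.sqrt(window)
        if abs(rate_l - rate_r) > z_thresh * se:
            boundaries.append(i)   # behavior changed: start a new segment
            i += window
        else:
            i += 1
    boundaries.append(len(arrival_times))
    return list(zip(boundaries[:-1], boundaries[1:]))

# Synthetic example: the arrival rate jumps from 1 to 4 calls per unit time.
rng = np.random.default_rng(0)
t1 = np.cumsum(rng.exponential(1.0, 500))
t2 = t1[-1] + np.cumsum(rng.exponential(0.25, 500))
print(segment_stationary_intervals(np.concatenate([t1, t2])))
```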
An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these data and to make decisions pertaining to the proper
functioning of the networks. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth of network complexity faced by optical networks in the
last few years. This increase in complexity is due to the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation format, symbol rate, coding schemes, etc.) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy: to stimulate further work in this
area, we conclude the paper by proposing possible new research directions.
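To make the flavor of such network-data analysis concrete, here is a minimal, hypothetical sketch of one task the survey covers, fault detection from signal-quality indicators. The feature names, thresholds, and synthetic data are illustrative assumptions, not taken from the paper; scikit-learn is used for brevity.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 2000
osnr = rng.normal(20, 3, n)               # optical SNR in dB (synthetic)
pre_fec_ber = 10 ** rng.normal(-4, 1, n)  # pre-FEC bit error rate
residual_disp = rng.normal(0, 50, n)      # residual dispersion, ps/nm
X = np.column_stack([osnr, np.log10(pre_fec_ber), residual_disp])

# Synthetic ground truth: a lightpath is "degraded" when OSNR is low
# or the pre-FEC BER is high (arbitrary illustrative thresholds).
y = ((osnr < 17) | (pre_fec_ber > 1e-3)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```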
Cramér-Rao Bounds for Polynomial Signal Estimation using Sensors with AR(1) Drift
We seek to characterize the estimation performance of a sensor network where
the individual sensors exhibit the phenomenon of drift, i.e., a gradual change
of the bias. Though estimation in the presence of random errors has been
extensively studied in the literature, the loss of estimation performance due
to systematic errors like drift has rarely been examined. In this paper, we
derive the closed-form Fisher information matrix and, subsequently, Cramér-Rao
bounds (up to a reasonable approximation) for the estimation accuracy of
drift-corrupted signals. We assume a polynomial time-series as the
representative signal and an autoregressive process model for the drift. When
the Markov parameter of the drift satisfies ρ < 1, we show that the first-order effect of
drift is asymptotically equivalent to scaling the measurement noise by an
appropriate factor. For ρ = 1, i.e., when the drift is non-stationary, we show
that the constant part of a signal can only be estimated inconsistently
(non-zero asymptotic variance). Practical usage of the results is demonstrated
through the analysis of 1) networks with multiple sensors and 2)
bandwidth-limited networks communicating only quantized observations.
Comment: 14 pages, 6 figures. This paper will appear in the Oct/Nov 2012 issue
of IEEE Transactions on Signal Processing.
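The bound can be evaluated numerically under the model sketched in the abstract. The following is a minimal sketch with assumed parameter values, not the paper's derivation: measurements are a polynomial in time plus stationary AR(1) drift plus white noise, and for Gaussian noise with known covariance C the Fisher information matrix is HᵀC⁻¹H, with H the Vandermonde design matrix.

```python
import numpy as np

def crb_polynomial_ar1(n=200, degree=2, rho=0.9, sigma_u=0.1, sigma_w=0.5):
    """CRB for polynomial coefficients observed through AR(1) drift plus
    white noise: y_t = sum_k theta_k t^k + d_t + w_t, d_t = rho*d_{t-1} + u_t.
    Gaussian case with known covariance."""
    t = np.arange(n, dtype=float) / n              # normalized time axis
    H = np.vander(t, degree + 1, increasing=True)  # design matrix

    # Stationary AR(1) covariance: sigma_u^2 * rho^|i-j| / (1 - rho^2).
    idx = np.arange(n)
    C_drift = (sigma_u**2 / (1 - rho**2)) * rho ** np.abs(idx[:, None] - idx[None, :])
    C = C_drift + sigma_w**2 * np.eye(n)

    # For Gaussian noise with known covariance C, FIM = H^T C^{-1} H.
    fim = H.T @ np.linalg.solve(C, H)
    return np.diag(np.linalg.inv(fim))             # CRB per coefficient

print("CRB, rho=0.90:", crb_polynomial_ar1(rho=0.90))
print("CRB, rho=0.99:", crb_polynomial_ar1(rho=0.99))
```

Pushing ρ toward 1 inflates the stationary drift variance and, with it, the bound on the constant coefficient, in line with the inconsistency result quoted above.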
Federated Learning-Based Interference Modeling for Vehicular Dynamic Spectrum Access
Platoon-based driving is a technology that allows vehicles to follow each
other at close distances to, e.g., save fuel. However, it requires reliable
wireless communications so that the vehicles can adjust their speeds. Recent
studies have shown that the frequency band dedicated to vehicle-to-vehicle
communications can be too busy for intra-platoon communications. Thus it is
reasonable to use additional spectrum resources of low occupancy, i.e.,
secondary spectrum channels. The
challenge is to model the interference in those channels to enable proper
channel selection. In this paper, we propose a two-layered Radio Environment
Map (REM) that aims at providing platoons with accurate location-dependent
interference models by using the Federated Learning approach. Each platoon is
equipped with a Local REM that is updated on the basis of raw interference
samples and the previous interference model stored in the Global REM. The
model in the Global REM is obtained by merging the models reported by the
platoons. The nodes
exchange only parameters of interference models, reducing the required control
channel capacity. Moreover, in the proposed architecture a platoon can utilize
its Local REM to predict channel occupancy even when the connection to the
Global REM is temporarily unavailable. The proposed system is validated via
computer simulations considering non-trivial interference patterns.
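The merging step in the Global REM can be sketched with a federated-averaging style update. The following is an illustrative assumption, not the paper's exact model: each platoon summarizes interference power on a channel with a Gaussian (mean, variance) fitted from its own samples, and the global model is a sample-count-weighted combination of the local parameters, so only model parameters ever leave the platoon.

```python
import numpy as np

# Hypothetical Local REM entry: per channel, a Gaussian interference-power
# model fitted from raw samples that stay on the platoon.
def fit_local_model(samples):
    return {"n": len(samples),
            "mean": float(np.mean(samples)),
            "var": float(np.var(samples))}

def merge_global_rem(local_models):
    """Federated-averaging style merge: weight each platoon's parameters
    by its sample count; only parameters, never raw samples, are exchanged."""
    n_total = sum(m["n"] for m in local_models)
    mean = sum(m["n"] * m["mean"] for m in local_models) / n_total
    # Law of total variance across the platoons' local models.
    var = sum(m["n"] * (m["var"] + (m["mean"] - mean) ** 2)
              for m in local_models) / n_total
    return {"n": n_total, "mean": mean, "var": var}

rng = np.random.default_rng(1)
platoon_a = fit_local_model(rng.normal(-90, 3, 400))  # dBm samples (synthetic)
platoon_b = fit_local_model(rng.normal(-85, 5, 150))
print(merge_global_rem([platoon_a, platoon_b]))
```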
Massive MIMO for Internet of Things (IoT) Connectivity
Massive MIMO is considered to be one of the key technologies in the emerging
5G systems, but also a concept applicable to other wireless systems. Exploiting
the large number of degrees of freedom (DoFs) of massive MIMO is essential for
achieving high spectral efficiency, high data rates and extreme spatial
multiplexing of densely distributed users. On the one hand, the benefits of
applying massive MIMO for broadband communication are well known and there has
been a large body of research on designing communication schemes to support
high rates. On the other hand, using massive MIMO for Internet-of-Things (IoT)
is still a developing topic, as IoT connectivity has requirements and
constraints that are significantly different from those of broadband
connections. In
this paper we investigate the applicability of massive MIMO to IoT
connectivity. Specifically, we treat the two generic types of IoT connections
envisioned in 5G: massive machine-type communication (mMTC) and ultra-reliable
low-latency communication (URLLC). This paper fills this important gap by
identifying the opportunities and challenges in exploiting massive MIMO for IoT
connectivity. We provide insights into the trade-offs that emerge when massive
MIMO is applied to mMTC or URLLC and present a number of suitable communication
schemes. The discussion continues to the questions of network slicing of the
wireless resources and the use of massive MIMO to simultaneously support IoT
connections with very heterogeneous requirements. The main conclusion is that
massive MIMO can bring benefits to the scenarios with IoT connectivity, but it
requires tight integration of the physical-layer techniques with the protocol
design.
Comment: Submitted for publication.
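One way to see the role of the DoFs mentioned above is channel hardening under maximum-ratio combining (MRC). The toy simulation below uses an i.i.d. Rayleigh-fading assumption rather than anything from the paper: it shows the average post-combining SNR of a single IoT device growing linearly with the number of base-station antennas M.

```python
import numpy as np

def mrc_snr(M, snr_lin=1.0, trials=2000, rng=None):
    """Average post-MRC SNR for one single-antenna device served by an
    M-antenna array over i.i.d. Rayleigh fading (toy model)."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Channel h ~ CN(0, I_M); MRC yields an effective SNR of snr * ||h||^2.
    h = (rng.standard_normal((trials, M)) +
         1j * rng.standard_normal((trials, M))) / np.sqrt(2)
    return snr_lin * np.mean(np.sum(np.abs(h) ** 2, axis=1))

for M in (1, 10, 100):
    print(f"M={M:4d}  mean post-MRC SNR ~ {mrc_snr(M):6.1f}  (theory: {M})")
```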
Interference Mitigation in Large Random Wireless Networks
A central problem in the operation of large wireless networks is how to deal
with interference -- the unwanted signals being sent by transmitters that a
receiver is not interested in. This thesis looks at ways of combating such
interference.
In Chapters 1 and 2, we outline the necessary information and communication
theory background, including the concept of capacity. We also include an
overview of a new set of schemes for dealing with interference known as
interference alignment, paying special attention to a channel-state-based
strategy called ergodic interference alignment.
In Chapter 3, we consider the operation of large regular and random networks
by treating interference as background noise. We consider the local performance
of a single node, and the global performance of a very large network.
In Chapter 4, we use ergodic interference alignment to derive the asymptotic
sum-capacity of large random dense networks. These networks are derived from a
physical model of node placement where signal strength decays with the distance
between transmitters and receivers. (See also arXiv:1002.0235 and
arXiv:0907.5165.)
In Chapter 5, we look at methods of reducing the long time delays incurred by
ergodic interference alignment. We analyse the tradeoff between reducing delay
and lowering the communication rate. (See also arXiv:1004.0208.)
In Chapter 6, we outline a problem that is equivalent to the problem of
pooled group testing for defective items. We then present some new work that
uses information theoretic techniques to attack group testing. We introduce for
the first time the concept of the group testing channel, which allows for
modelling of a wide range of statistical error models for testing. We derive
new results on the number of tests required to accurately detect defective
items, including when using sequential 'adaptive' tests.
Comment: PhD thesis, University of Bristol, 201
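The group-testing setting of Chapter 6 is straightforward to simulate. Below is a minimal sketch assuming noiseless tests and a Bernoulli random pooling design, decoded with the standard COMP rule (any item appearing in a negative test is non-defective; everything else is declared defective); this illustrates the problem, not the thesis's own algorithms or bounds.

```python
import numpy as np

def comp_group_testing(n_items=200, n_defective=5, n_tests=150, p=0.2,
                       seed=0):
    """Noiseless non-adaptive group testing with a Bernoulli(p) pooling
    design, decoded with COMP. Returns True on exact recovery."""
    rng = np.random.default_rng(seed)
    defective = np.zeros(n_items, dtype=bool)
    defective[rng.choice(n_items, n_defective, replace=False)] = True

    # Pooling design: A[t, i] = True iff item i is included in test t.
    A = rng.random((n_tests, n_items)) < p
    # A test is positive iff its pool contains at least one defective item.
    outcomes = (A & defective).any(axis=1)

    # COMP: any item appearing in a negative test is surely non-defective.
    in_negative_test = A[~outcomes].any(axis=0)
    declared = ~in_negative_test
    return np.array_equal(declared, defective)

rate = np.mean([comp_group_testing(seed=s) for s in range(200)])
print(f"exact recovery rate over 200 trials: {rate:.2f}")
```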