Principles of Physical Layer Security in Multiuser Wireless Networks: A Survey
This paper provides a comprehensive review of the domain of physical layer
security in multiuser wireless networks. The essential premise of
physical-layer security is to enable the exchange of confidential messages over
a wireless medium in the presence of unauthorized eavesdroppers without relying
on higher-layer encryption. This can be achieved primarily in two ways: without
the need for a secret key by intelligently designing transmit coding
strategies, or by exploiting the wireless communication medium to develop
secret keys over public channels. The survey begins with an overview of the
foundations dating back to the pioneering work of Shannon and Wyner on
information-theoretic security. We then describe the evolution of secure
transmission strategies from point-to-point channels to multiple-antenna
systems, followed by generalizations to multiuser broadcast, multiple-access,
interference, and relay networks. Secret-key generation and establishment
protocols based on physical layer mechanisms are subsequently covered.
Approaches for secrecy based on channel coding design are then examined, along
with a description of inter-disciplinary approaches based on game theory and
stochastic geometry. The associated problem of physical-layer message
authentication is also introduced briefly. The survey concludes with
observations on potential research directions in this area.
Comment: 23 pages, 10 figures, 303 refs. arXiv admin note: text overlap with arXiv:1303.1609 by other authors. IEEE Communications Surveys and Tutorials, 201
Frequency planning for clustered jointly processed cellular multiple access channel
Owing to limited resources, it is hard to guarantee minimum service levels to all users in conventional cellular systems. Although global cooperation of access points (APs) is considered promising, a practical means of enhancing the efficiency of cellular systems is to consider distributed or clustered jointly processed APs. The authors present a novel `quality of service (QoS) balancing scheme' to maximise the sum rate as well as achieve cell-based fairness for the clustered jointly processed cellular multiple access channel (referred to as CC-CMAC). A closed-form cell-level QoS balancing function is derived, and its maximisation is proved to be an NP-hard problem. Hence, exploiting power-frequency granularity, a modified genetic algorithm (GA) is proposed. For an inter-site distance (ISD) < 500 m, results show that with no fairness considered, the upper bound of the capacity region is achievable. Applying hard fairness constraints on users transmitting in a moderately dense AP system, a 20% reduction in sum-rate contribution increases fairness by up to 10%. The flexible QoS scheme can be applied in a GA-based centralised dynamic frequency planner architecture.
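To make the GA step above concrete, here is a minimal sketch of genetic-algorithm frequency assignment. The interference model, fairness penalty, and all parameter values (N_USERS, N_FREQS, GAIN, lam) are hypothetical illustrations, not the paper's CC-CMAC formulation or its QoS balancing function.

```python
# Toy GA-based frequency planner: each user picks one of F sub-bands,
# co-channel users interfere, and fitness trades sum rate against a
# variance-based fairness penalty. Illustrative model only.
import math, random

random.seed(0)
N_USERS, N_FREQS, POP, GENS = 12, 4, 40, 60
GAIN = [random.uniform(0.2, 1.0) for _ in range(N_USERS)]  # toy channel gains

def rates(assign):
    out = []
    for u in range(N_USERS):
        interf = sum(GAIN[v] for v in range(N_USERS)
                     if v != u and assign[v] == assign[u])
        out.append(math.log2(1 + GAIN[u] / (0.1 + interf)))
    return out

def fitness(assign, lam=0.5):
    r = rates(assign)
    mean = sum(r) / len(r)
    var = sum((x - mean) ** 2 for x in r) / len(r)
    return sum(r) - lam * var          # sum rate minus fairness penalty

def evolve():
    pop = [[random.randrange(N_FREQS) for _ in range(N_USERS)]
           for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:POP // 2]          # keep the fitter half
        children = []
        while len(children) < POP - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_USERS)
            child = a[:cut] + b[cut:]   # one-point crossover
            if random.random() < 0.2:   # mutation
                child[random.randrange(N_USERS)] = random.randrange(N_FREQS)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(round(fitness(best), 2))
```

The penalty weight lam plays the role of a tunable fairness knob: lam = 0 recovers pure sum-rate maximisation, while larger values push the rate vector toward equality.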
Optimality of Orthogonal Access for One-dimensional Convex Cellular Networks
It is shown that a greedy orthogonal access scheme achieves the sum degrees
of freedom of all one-dimensional (all nodes placed along a straight line)
convex cellular networks (where cells are convex regions) when no channel
knowledge is available at the transmitters except the knowledge of the network
topology. In general, optimality of orthogonal access holds neither for
two-dimensional convex cellular networks nor for one-dimensional non-convex
cellular networks, thus revealing a fundamental limitation that exists only
when both one-dimensional and convex properties are simultaneously enforced, as
is common in canonical information theoretic models for studying cellular
networks. The result also establishes the capacity of the corresponding class
of index coding problems.
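A small sketch of why the one-dimensional convex case is tractable: convex cells on a line are intervals, so choosing a largest set of mutually non-overlapping (hence non-interfering) cells reduces to classic earliest-finish-first interval scheduling. This illustrates the structure the result exploits; it is not the paper's degrees-of-freedom proof.

```python
# Greedy scheduling over 1-D convex cells (intervals on a line):
# always pick the cell that ends earliest among those compatible with
# what is already scheduled. For intervals this greedy rule is optimal.
def greedy_orthogonal(cells):
    """cells: list of (left, right) intervals; returns a maximum
    subset of pairwise non-overlapping cells."""
    chosen, frontier = [], float("-inf")
    for left, right in sorted(cells, key=lambda c: c[1]):  # earliest finish first
        if left >= frontier:          # no overlap with scheduled cells
            chosen.append((left, right))
            frontier = right
    return chosen

cells = [(0, 3), (2, 5), (4, 7), (6, 9), (1, 8)]
print(greedy_orthogonal(cells))  # → [(0, 3), (4, 7)]
```

The cells left out of one slot are served orthogonally in later slots; in two dimensions, or with non-convex cells, the conflict graph is no longer an interval graph and this greedy optimality breaks down, matching the paper's negative results.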
Low-Density Code-Domain NOMA: Better Be Regular
A closed-form analytical expression is derived for the limiting empirical
squared singular value density of a spreading (signature) matrix corresponding
to sparse low-density code-domain (LDCD) non-orthogonal multiple-access (NOMA)
with regular random user-resource allocation. The derivation relies on
associating the spreading matrix with the adjacency matrix of a large
semiregular bipartite graph. For a simple repetition-based sparse spreading
scheme, the result directly follows from a rigorous analysis of spectral
measures of infinite graphs. Turning to random (sparse) binary spreading, we
harness the cavity method from statistical physics, and show that the limiting
spectral density coincides in both cases. Next, we use this density to compute
the normalized input-output mutual information of the underlying vector channel
in the large-system limit. The latter may be interpreted as the achievable
total throughput per dimension with optimum processing in a corresponding
multiple-access channel setting or, alternatively, in a fully-symmetric
broadcast channel setting with full decoding capabilities at each receiver.
Surprisingly, the total throughput of regular LDCD-NOMA is found to be not only
superior to that achieved with irregular user-resource allocation, but also to
the total throughput of dense randomly-spread NOMA, for which optimum
processing is computationally intractable. In contrast, the superior
performance of regular LDCD-NOMA can be potentially achieved with a feasible
message-passing algorithm. This observation may advocate employing regular,
rather than irregular, LDCD-NOMA in 5G cellular physical layer design.
Comment: Accepted for publication in the IEEE International Symposium on Information Theory (ISIT), June 201
Fundamentals of Large Sensor Networks: Connectivity, Capacity, Clocks and Computation
Sensor networks potentially feature large numbers of nodes that can sense
their environment over time, communicate with each other over a wireless
network, and process information. They differ from data networks in that the
network as a whole may be designed for a specific application. We study the
theoretical foundations of such large scale sensor networks, addressing four
fundamental issues: connectivity, capacity, clocks, and function computation.
To begin with, a sensor network must be connected so that information can
indeed be exchanged between nodes. The connectivity graph of an ad-hoc network
is modeled as a random graph and the critical range for asymptotic connectivity
is determined, as well as the critical number of neighbors that a node needs to
connect to. Next, given connectivity, we address the issue of how much data can
be transported over the sensor network. We present fundamental bounds on
capacity under several models, as well as architectural implications for how
wireless communication should be organized.
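The critical-range result for asymptotic connectivity can be checked numerically. The sketch below drops n nodes uniformly in the unit square and tests, via union-find, whether radio range r yields a connected graph, comparing ranges below and above the familiar sqrt(log n / (pi n)) scaling. This is an illustration of the threshold phenomenon, not the survey's derivation, and the specific n and multipliers are arbitrary.

```python
# Connectivity of a random geometric graph: n uniform points in the unit
# square, edge between points within distance r, connectivity tested with
# union-find. The range sqrt(log n / (pi n)) marks the threshold scaling.
import math, random

def connected(n, r, seed=0):
    rnd = random.Random(seed)
    pts = [(rnd.random(), rnd.random()) for _ in range(n)]
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= r:
                parent[find(i)] = find(j)   # union the two components
    return len({find(i) for i in range(n)}) == 1

n = 500
r_crit = math.sqrt(math.log(n) / (math.pi * n))
print(connected(n, 0.5 * r_crit), connected(n, 2.0 * r_crit))
```

Well below the critical range isolated nodes appear with high probability and the graph fragments; well above it, the graph is connected, mirroring the sharp threshold the text describes.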
Temporal information is important both for the applications of sensor networks and for their operation. We present fundamental bounds on the synchronizability of clocks in networks, and also present and analyze algorithms for clock synchronization. Finally, we turn to the task that sensor networks are designed for: gathering relevant information. One needs to study optimal strategies for in-network aggregation of data in order to reliably compute a composite function of sensor measurements, as well as the complexity of doing so. We address how such computation can be performed efficiently in a sensor network, and the algorithms for doing so, for some classes of functions.
Comment: 10 pages, 3 figures, Submitted to the Proceedings of the IEE
Lecture Notes on Network Information Theory
These lecture notes have been converted to a book titled Network Information
Theory published recently by Cambridge University Press. This book provides a
significantly expanded exposition of the material in the lecture notes as well
as problems and bibliographic notes at the end of each chapter. The authors are
currently preparing a set of slides based on the book that will be posted in
the second half of 2012. More information about the book can be found at
http://www.cambridge.org/9781107008731/. The previous (and obsolete) version of
the lecture notes can be found at http://arxiv.org/abs/1001.3404v4/
Distributed video coding for wireless video sensor networks: a review of the state-of-the-art architectures
Distributed video coding (DVC) is a relatively new video coding architecture that originated from two fundamental theorems, namely the Slepian–Wolf and Wyner–Ziv theorems. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews the state-of-the-art DVC architectures with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.
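The Slepian–Wolf idea underlying DVC can be shown in miniature with syndrome-based binning. In the sketch below, a hypothetical encoder compresses a 7-bit block x to its 3-bit Hamming(7,4) syndrome, and the decoder, holding side information y that differs from x in at most one position, recovers x exactly. This is a textbook-style toy, not an actual DVC codec.

```python
# Slepian-Wolf binning via Hamming(7,4) syndromes: the encoder sends only
# the 3-bit syndrome (the "bin" index); the decoder uses correlated side
# information to pick the right member of the bin.
H = [[0, 0, 0, 1, 1, 1, 1],    # parity-check matrix of Hamming(7,4);
     [0, 1, 1, 0, 0, 1, 1],    # column j holds the binary expansion of j+1
     [1, 0, 1, 0, 1, 0, 1]]

def syndrome(bits):
    return [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]

def encode(x):                  # 7 bits -> 3-bit syndrome
    return syndrome(x)

def decode(y, s):               # move y into the coset indexed by s
    diff = [a ^ b for a, b in zip(syndrome(y), s)]
    pos = diff[0] * 4 + diff[1] * 2 + diff[2]  # nonzero diff names the flipped bit
    if pos:
        y = y[:]
        y[pos - 1] ^= 1
    return y

x = [1, 0, 1, 1, 0, 0, 1]
y = x[:]; y[4] ^= 1             # side information: x with one bit flipped
assert decode(y, encode(x)) == x
print(encode(x))                # → [0, 0, 1]
```

Sending 3 bits instead of 7 is exactly the rate saving the Slepian–Wolf theorem promises when the decoder, but not the encoder, has access to the correlated side information; Wyner–Ziv extends this to lossy coding.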