Principles of Physical Layer Security in Multiuser Wireless Networks: A Survey
This paper provides a comprehensive review of the domain of physical layer
security in multiuser wireless networks. The essential premise of
physical-layer security is to enable the exchange of confidential messages over
a wireless medium in the presence of unauthorized eavesdroppers without relying
on higher-layer encryption. This can be achieved primarily in two ways: by
intelligently designing transmit coding strategies that require no secret key,
or by exploiting the wireless medium itself to generate secret keys over public
channels. The survey begins with an overview of the
foundations dating back to the pioneering work of Shannon and Wyner on
information-theoretic security. We then describe the evolution of secure
transmission strategies from point-to-point channels to multiple-antenna
systems, followed by generalizations to multiuser broadcast, multiple-access,
interference, and relay networks. Secret-key generation and establishment
protocols based on physical layer mechanisms are subsequently covered.
Approaches for secrecy based on channel coding design are then examined, along
with a description of inter-disciplinary approaches based on game theory and
stochastic geometry. The associated problem of physical-layer message
authentication is also introduced briefly. The survey concludes with
observations on potential research directions in this area.
Comment: 23 pages, 10 figures, 303 refs. arXiv admin note: text overlap with
arXiv:1303.1609 by other authors. IEEE Communications Surveys and Tutorials, 201
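As a reminder of the Wyner-style information-theoretic foundation the survey builds on, the standard textbook secrecy-capacity expression for the degraded Gaussian wiretap channel is shown below; the notation is ours, used only as an illustration, not the survey's own.

\[
  C_s \;=\; \bigl[\,C_M - C_E\,\bigr]^{+}
      \;=\; \Bigl[\log_2\!\bigl(1+\mathrm{SNR}_M\bigr) - \log_2\!\bigl(1+\mathrm{SNR}_E\bigr)\Bigr]^{+},
  \qquad [x]^{+} = \max(x,0),
\]

where \(\mathrm{SNR}_M\) and \(\mathrm{SNR}_E\) are the signal-to-noise ratios of the legitimate (main) and eavesdropper channels. A positive secrecy rate is achievable only when the legitimate receiver's channel is better than the eavesdropper's, which is precisely the advantage that the multi-antenna and multiuser techniques surveyed here aim to engineer.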
GPUs as Storage System Accelerators
Massively multicore processors, such as Graphics Processing Units (GPUs),
provide, at a comparable price, roughly an order of magnitude higher peak
performance than traditional CPUs. This drop in the cost of computation, like
any order-of-magnitude drop in the cost per unit of performance for a class of
system components, creates an opportunity to redesign systems and to explore
new ways of engineering them to recalibrate the cost-to-performance relation. This
project explores the feasibility of harnessing GPUs' computational power to
improve the performance, reliability, or security of distributed storage
systems. In this context, we present the design of a storage system prototype
that uses GPU offloading to accelerate a number of computationally intensive
primitives based on hashing, and introduce techniques to efficiently leverage
the processing power of GPUs. We evaluate the performance of this prototype
under two configurations: as a content addressable storage system that
facilitates online similarity detection between successive versions of the same
file and as a traditional system that uses hashing to preserve data integrity.
Further, we evaluate the impact of offloading to the GPU on competing
applications' performance. Our results show that this technique can bring
tangible performance gains without negatively impacting the performance of
concurrently running applications.
Comment: IEEE Transactions on Parallel and Distributed Systems, 201
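To make the hashing-based similarity-detection idea concrete, here is a minimal host-side Python sketch, not the paper's code: it assumes fixed-size chunking and SHA-1, and it computes the digests on the CPU, whereas the prototype described above offloads exactly that hash computation to the GPU.

# Minimal sketch of hash-based similarity detection between two file
# versions, as in a content-addressable store. Chunk size, hash choice,
# and function names are illustrative assumptions.
import hashlib

CHUNK_SIZE = 64 * 1024  # 64 KiB fixed-size chunks (assumed)

def chunk_hashes(path):
    """Return the list of chunk digests for a file."""
    digests = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            digests.append(hashlib.sha1(chunk).hexdigest())
    return digests

def new_chunks(old_path, new_path):
    """Chunks of the new version whose digests are not already stored for the old one."""
    stored = set(chunk_hashes(old_path))
    return [h for h in chunk_hashes(new_path) if h not in stored]

Only chunks whose digests are not already present need to be transferred or stored; the expensive step, computing the digests over large amounts of data, is the primitive the prototype accelerates on the GPU.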
Self-stabilization Overhead: an Experimental Case Study on Coded Atomic Storage
Shared memory emulation can be used as a fault-tolerant and highly available
distributed storage solution or as a low-level synchronization primitive.
Attiya, Bar-Noy, and Dolev were the first to propose a single-writer,
multi-reader linearizable register emulation where the register is replicated
to all servers. Recently, Cadambe et al. proposed the Coded Atomic Storage
(CAS) algorithm, which uses erasure coding for achieving data redundancy with
much lower communication cost than previous algorithmic solutions.
Although CAS can tolerate server crashes, it was not designed to recover from
unexpected transient faults without external (human) intervention. In this
respect, Dolev, Petig, and Schiller have recently
developed a self-stabilizing version of CAS, which we call CASSS. As one would
expect, self-stabilization does not come for free; it mainly introduces
communication overhead for detecting inconsistencies and stale information. One
might therefore wonder whether the overhead introduced by self-stabilization
nullifies the gain of erasure coding.
To answer this question, we have implemented and experimentally evaluated the
CASSS algorithm on PlanetLab, a planetary-scale distributed infrastructure. The
evaluation shows that our implementation of CASSS scales very well in terms of
the number of servers, the number of concurrent clients, as well as the size of
the replicated object. More importantly, it shows that (a) CASSS incurs only a
constant overhead compared to the traditional CAS algorithm (which we also
implemented) and (b) the recovery period after the last occurrence of a
transient fault is as short as a few client (read/write) operations. Our
results suggest that CASSS provides automatic recovery from transient faults,
with bounded resource requirements, without significantly impacting efficiency.
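To make the communication-cost argument behind erasure coding concrete, here is a toy Python sketch, not the paper's code: it uses k data fragments plus a single XOR parity fragment (tolerating one erasure), whereas CAS uses a full (n, k) MDS code inside a multi-phase protocol; the names and parameters are assumptions.

# Toy illustration of why erasure coding lowers per-server communication
# cost: with full replication each of n servers receives the whole value,
# while with coding each server receives a fragment of roughly size/k.
import functools

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode_xor(value, k):
    """Split value into k equal data fragments plus one XOR parity fragment."""
    frag_len = -(-len(value) // k)                 # ceil(len/k)
    padded = value.ljust(k * frag_len, b"\0")      # pad so fragments align
    frags = [padded[i * frag_len:(i + 1) * frag_len] for i in range(k)]
    frags.append(functools.reduce(xor_bytes, frags))  # single parity fragment
    return frags

value = b"x" * 1_000_000            # a 1 MB object
frags = encode_xor(value, k=4)      # 5 fragments of ~250 KB each
print(f"replication sends {len(value)} B per server, "
      f"coding sends {len(frags[0])} B per server")

The self-stabilization overhead studied in the paper is additional communication for detecting inconsistencies and stale information on top of this per-fragment cost, which is why measuring only a constant overhead relative to plain CAS is the key finding.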