The Distributed MIMO Scenario: Can Ideal ADCs Be Replaced by Low-resolution ADCs?
This letter considers a distributed antenna system architecture made up of a massive number of single-antenna remote radio heads (RRHs), some equipped with full-resolution analog-to-digital converter (ADC) receivers and others with low-resolution ones. The architecture is motivated by its high energy efficiency and low-cost implementation. We derive the worst-case uplink spectral efficiency (SE) of the system under a frequency-flat channel with maximum-ratio combining (MRC), and show that the SE increases with the number of quantization bits of the low-resolution ADCs and converges as the number of RRHs with low-resolution ADCs grows. Our results further demonstrate that a substantial improvement can be obtained by adding a majority of RRHs with low-resolution ADC receivers, provided the quantization precision is sufficient and the proportion of high- to low-resolution RRHs is acceptable.
Comment: 4 pages, to be published in IEEE Wireless Communications Letters
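The letter derives a closed-form worst-case bound; purely as a rough illustration of the reported trends (SE grows with quantization bits and saturates in the number of low-resolution RRHs), the following Monte-Carlo sketch uses the standard additive quantization noise model (AQNM) for a single-user uplink with MRC. The distortion factors in RHO, the RRH counts, and the SNR below are assumptions for the demo, not values from the letter.

```python
import numpy as np

# AQNM: a b-bit ADC output is alpha*y + q, with quantization-noise
# variance alpha*(1-alpha)*E|y|^2, where alpha = 1 - rho_b.
# rho_b values below are the commonly tabulated AQNM distortion factors.
RHO = {1: 0.3634, 2: 0.1175, 3: 0.03454, 4: 0.009497, 5: 0.002499}

def uplink_se(n_full, n_low, bits, snr_db, trials=2000, seed=0):
    """Average SE (bits/s/Hz) of a single-user uplink with MRC over
    Rayleigh fading: n_full ideal-ADC RRHs plus n_low b-bit-ADC RRHs."""
    rng = np.random.default_rng(seed)
    p = 10 ** (snr_db / 10)          # transmit SNR (noise power = 1)
    a = 1 - RHO[bits]                # AQNM quantization gain
    n = n_full + n_low
    se = 0.0
    for _ in range(trials):
        h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
        g = np.abs(h) ** 2
        # Effective channel gain |h_eff|^2 and per-branch noise power
        # (AWGN through the quantizer gain, plus quantization noise).
        gain2 = np.r_[np.ones(n_full), a**2 * np.ones(n_low)]
        noise = np.r_[np.ones(n_full),
                      a**2 + a * (1 - a) * (p * g[n_full:] + 1)]
        g_eff = gain2 * g
        # Plain MRC with the effective channel as combining weights.
        sinr = p * g_eff.sum() ** 2 / (g_eff * noise).sum()
        se += np.log2(1 + sinr)
    return se / trials

for b in (1, 2, 3, 4):   # SE should increase with the bit width b
    print(f"{b}-bit low-res ADCs: SE ~ {uplink_se(8, 64, b, snr_db=0):.2f}")
```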
Blind Demixing for Low-Latency Communication
In next-generation wireless networks, low-latency communication is critical to support emerging diversified applications, e.g., the Tactile Internet and Virtual Reality. In this paper, a novel blind demixing approach is developed to reduce the channel signaling overhead, thereby supporting low-latency communication. Specifically, we develop a low-rank approach to recover the original information based only on a single observed vector, without any channel estimation. Unfortunately, this problem turns out to be a highly intractable non-convex optimization problem due to the multiple non-convex rank-one constraints. To address these unique challenges, the quotient manifold geometry of the product of complex asymmetric rank-one matrices is exploited by equivalently reformulating the original complex asymmetric matrices as Hermitian positive semidefinite matrices. We further generalize the geometric concepts of the complex product manifolds via element-wise extension of the geometric concepts of the individual manifolds. A scalable Riemannian trust-region algorithm is then developed to solve the blind demixing problem efficiently, with fast convergence rates and low iteration cost. Numerical results demonstrate the algorithmic advantages and admirable performance of the proposed algorithm compared with state-of-the-art methods.
Comment: 14 pages, accepted by IEEE Transactions on Wireless Communications
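The key reformulation mentioned above embeds an asymmetric rank-one matrix as a block of a Hermitian positive semidefinite one. A minimal numerical sketch of that lifting (illustrative only, not the paper's algorithm; all dimensions below are arbitrary):

```python
import numpy as np

# The asymmetric rank-one matrix h x^H appears as the off-diagonal block
# of the Hermitian PSD rank-one matrix z z^H with z = [h; x], which is
# what lets machinery for Hermitian PSD factorizations be reused.
rng = np.random.default_rng(0)
K, N = 4, 6
h = rng.standard_normal(K) + 1j * rng.standard_normal(K)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

M = np.outer(h, x.conj())          # complex asymmetric rank-one matrix
z = np.concatenate([h, x])
W = np.outer(z, z.conj())          # Hermitian PSD, rank one

assert np.allclose(W, W.conj().T)  # Hermitian
assert np.allclose(W[:K, K:], M)   # off-diagonal block recovers h x^H
print("PSD:", bool(np.all(np.linalg.eigvalsh(W) > -1e-9)))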
MPC for MPC: Secure Computation on a Massively Parallel Computing Architecture
Massively Parallel Computation (MPC) is a model of computation widely believed to best capture realistic parallel computing architectures such as large-scale MapReduce and Hadoop clusters. Motivated by the fact that many data analytics tasks performed on these platforms involve sensitive user data, we initiate the theoretical exploration of how to leverage MPC architectures to enable efficient, privacy-preserving computation over massive data. Clearly if a computation task does not lend itself to an efficient implementation on MPC even without security, then we cannot hope to compute it efficiently on MPC with security. We show, on the other hand, that any task that can be efficiently computed on MPC can also be securely computed with comparable efficiency. Specifically, we show the following results:
- any MPC algorithm can be compiled to a communication-oblivious counterpart while asymptotically preserving its round and space complexity, where communication-obliviousness ensures that any network intermediary observing the communication patterns learns no information about the secret inputs (illustrated in the toy sketch after this abstract);
- assuming the existence of Fully Homomorphic Encryption with a suitable notion of compactness and other standard cryptographic assumptions, any MPC algorithm can be compiled to a secure counterpart that defends against an adversary who controls not only intermediate network routers but additionally up to a 1/3 − η fraction of machines (for an arbitrarily small constant η); moreover, this compilation preserves the round complexity tightly, and preserves the space complexity up to a multiplicative security-parameter-related blowup.
As an initial exploration of this important direction, our work suggests new definitions and proposes novel protocols that blend algorithmic and cryptographic techniques.
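As a toy illustration of communication-obliviousness (not the paper's compiler): if every machine sends a fixed-size message to a fixed partner in every round, the trace an intermediary observes is independent of the inputs. The pattern, message size, and padding scheme below are assumptions for the demo.

```python
import secrets

MSG_BYTES = 32  # fixed wire-message size, an assumption for this demo

def round_robin_pattern(n_machines, n_rounds):
    """Data-independent pattern: in round r, machine i sends to (i+r+1) % n."""
    return [[(i, (i + r + 1) % n_machines) for i in range(n_machines)]
            for r in range(n_rounds)]

def pad(payload: bytes) -> bytes:
    """Pad so every wire message has identical length, hiding payload size."""
    assert len(payload) <= MSG_BYTES
    return payload + secrets.token_bytes(MSG_BYTES - len(payload))

# The observable trace is the same for ANY inputs: same edges, same sizes.
for r, edges in enumerate(round_robin_pattern(n_machines=4, n_rounds=3)):
    print(f"round {r}: {edges}  ({MSG_BYTES} bytes each)")
print(len(pad(b"secret!")))  # any payload leaves the machine as 32 bytes
```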
Physical Randomness Extractors: Generating Random Numbers with Minimal Assumptions
How can one generate provably true randomness with minimal assumptions? This question is important not only for the efficiency and security of information processing, but also for understanding how extremely unpredictable events are possible in Nature. All current solutions require special structure in the initial source of randomness, or a certain independence relation among two or more sources. Both types of assumptions are impossible to test and difficult to guarantee in practice. Here we show how this fundamental limit can be circumvented by extractors that base their security on the validity of physical laws and extract randomness from untrusted quantum devices. In conjunction with the recent work of Miller and Shi (arXiv:1402.0489), our physical randomness extractor uses just a single, general weak source, produces an arbitrarily long and near-uniform output with close-to-optimal error, is secure against all-powerful quantum adversaries, and tolerates a constant level of implementation imprecision. The source necessarily needs to be unpredictable to the devices, but otherwise can even be known to the adversary.
Our central technical contribution, the Equivalence Lemma, provides a general principle for proving the composition security of untrusted-device protocols. It implies that unbounded randomness expansion can be achieved simply by cross-feeding any two expansion protocols. In particular, such unbounded expansion can be made robust, which is established here for the first time. Another significant implication is that it enables secure randomness generation and key distribution using public randomness, such as that broadcast by NIST's Randomness Beacon. Our protocol also provides a method for refuting local hidden variable theories under a weak assumption on the randomness available for choosing the measurement settings.
Comment: A substantial re-writing of V2, especially on model definitions. An abstract model of robustness is added and the robustness claim in V2 is made rigorous. Focuses on quantum security. A future update is planned to address non-signaling security.
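A schematic toy of the cross-feeding idea (purely illustrative: real expansion protocols run untrusted quantum devices, which the hypothetical `expand` stub below replaces with a hash stream; the "A"/"B" tags merely stand in for two distinct protocols):

```python
import hashlib

def expand(seed: bytes, out_len: int) -> bytes:
    """Stand-in for one randomness-expansion protocol: maps a short seed
    to a longer output. A hash stream here, NOT a secure expander."""
    out, counter = b"", 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:out_len]

def cross_feed(seed: bytes, rounds: int, growth: int = 4) -> bytes:
    """Alternate two expansion protocols A and B, feeding each one's
    output to the other as its seed, so the initial seed length never
    caps the total output; the Equivalence Lemma is what justifies
    composing untrusted-device protocols this way."""
    for r in range(rounds):
        tag = b"A" if r % 2 == 0 else b"B"
        seed = expand(tag + seed, len(seed) * growth)
    return seed

print(len(cross_feed(b"shortseed", rounds=3)))  # 9 * 4**3 = 576 bytes
```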