Wireless Data Acquisition for Edge Learning: Data-Importance Aware Retransmission
By deploying machine-learning algorithms at the network edge, edge learning
can leverage the enormous real-time data generated by billions of mobile
devices to train AI models, which enable intelligent mobile applications. In
this emerging research area, one key direction is to efficiently utilize radio
resources for wireless data acquisition to minimize the latency of executing a
learning task at an edge server. Along this direction, we consider the specific
problem of deciding on retransmissions in each communication round so as to
ensure both the reliability and the quantity of training data, thereby
accelerating model convergence. To solve this problem, we propose a new
retransmission protocol called data-importance aware automatic-repeat-request
(importance ARQ).
Unlike classic ARQ, which focuses solely on reliability, importance ARQ
selectively retransmits a data sample based on its uncertainty, which measures
how much the sample helps learning and can be computed using the model under
training. Underpinning the proposed protocol is an elegant
communication-learning relation derived between the two corresponding metrics,
namely the signal-to-noise ratio (SNR) and data uncertainty. This relation
facilitates the design of a simple threshold-based
policy for importance ARQ. The policy is first derived based on the classic
classifier model of support vector machine (SVM), where the uncertainty of a
data sample is measured by its distance to the decision boundary. The policy is
then extended to the more complex model of convolutional neural networks (CNN)
where data uncertainty is measured by entropy. Extensive experiments have been
conducted for both the SVM and CNN using real datasets with balanced and
imbalanced distributions. Experimental results demonstrate that importance ARQ
effectively copes with channel fading and noise in wireless data acquisition to
achieve faster model convergence than the conventional channel-aware ARQ.Comment: This is an updated version: 1) extension to general classifiers; 2)
consideration of imbalanced classification in the experiments. Submitted to
IEEE Journal for possible publicatio
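The threshold policy described above can be illustrated with a minimal Python sketch. The function names (`entropy_uncertainty`, `svm_uncertainty`, `importance_arq_retransmit`) and the linear dependence of the SNR threshold on uncertainty are assumptions for illustration, not the paper's exact rule; the abstract only states that the policy compares the received SNR against a threshold that grows with data uncertainty.

```python
import numpy as np

def entropy_uncertainty(probs):
    """Uncertainty of a sample as the entropy of the model's predicted
    class distribution (the CNN case described in the abstract)."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    p = p / p.sum()
    return float(-(p * np.log(p)).sum())

def svm_uncertainty(margin):
    """Uncertainty of a sample for a binary SVM: samples close to the
    decision boundary (small |margin|) are most uncertain."""
    return 1.0 / (1e-12 + abs(margin))

def importance_arq_retransmit(snr, uncertainty, snr_target, scale=1.0):
    """Hypothetical threshold policy: request a retransmission while the
    received SNR is below a threshold that increases with the sample's
    uncertainty, so informative samples are received more reliably."""
    threshold = snr_target * (1.0 + scale * uncertainty)
    return snr < threshold
```

An uncertain sample (e.g. near-uniform predicted probabilities) raises the SNR threshold and therefore triggers more retransmissions than a confidently classified one, which is the qualitative behavior the protocol exploits.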
Lifting Theorems Meet Information Complexity: Known and New Lower Bounds of Set-disjointness
Set-disjointness problems are one of the most fundamental problems in
communication complexity and have been extensively studied in past decades.
Given its importance, many lower bound techniques were introduced to prove
communication lower bounds of set-disjointness. Combining ideas from
information complexity and query-to-communication lifting theorems, we
introduce a density increment argument to prove communication lower bounds for
set-disjointness:
- We give a simple proof showing that a large rectangle cannot be
-monochromatic for multi-party unique-disjointness.
- We interpret the direct-sum argument as a density increment process and give
an alternative proof of randomized communication lower bounds for multi-party
unique-disjointness.
- Avoiding full simulations in lifting theorems, we simplify and improve
communication lower bounds for sparse unique-disjointness.
Potential applications to be unified and improved by our density increment
argument are also discussed.
Comment: Working Paper.
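To fix notation, the predicate whose communication complexity the lower bounds above concern can be written down directly; this two-party special case is a minimal executable definition, with the promise of at most one intersection element enforced explicitly.

```python
def unique_disjointness(x, y):
    """Two-party unique-disjointness: Alice holds set x, Bob holds set y,
    with the promise that |x ∩ y| <= 1. The players must decide whether
    the sets are disjoint. The lower bounds bound how many bits they
    must communicate to do so."""
    intersection = set(x) & set(y)
    if len(intersection) > 1:
        raise ValueError("promise violated: sets share more than one element")
    return len(intersection) == 0
```

The function itself is trivial to compute centrally; the hardness lies entirely in the distributed setting, where the inputs are split between the parties.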
Hybrid Beamforming via the Kronecker Decomposition for the Millimeter-Wave Massive MIMO Systems
Despite its promising performance gain, the realization of mmWave massive
MIMO still faces several practical challenges. In particular, implementing
massive MIMO in the digital domain requires hundreds of RF chains matching the
number of antennas. Furthermore, designing these components to operate at the
mmWave frequencies is challenging and costly. These challenges motivated the
recent development of hybrid beamforming, in which MIMO processing is divided
for separate implementation in the analog and digital domains, called analog
and digital beamforming, respectively. Analog beamforming using a phased array
imposes uni-modulus constraints on the beamforming coefficients, rendering
conventional MIMO techniques unsuitable and calling for new designs. In this
paper, we present a systematic design framework for hybrid beamforming for
multi-cell multiuser massive MIMO systems over mmWave channels characterized by
sparse propagation paths. The framework relies on the decomposition of analog
beamforming vectors and path observation vectors into Kronecker products of
factors being uni-modulus vectors. Exploiting properties of Kronecker mixed
products, different factors of the analog beamformer are designed for either
nulling interference paths or coherently combining data paths. Furthermore, a
channel estimation scheme is designed for enabling the proposed hybrid
beamforming. The scheme estimates the AoA of data and interference paths by
analog beam scanning and data-path gains by analog beam steering. The
performance of the channel estimation scheme is analyzed. In particular, the
AoA spectrum resulting from beam scanning, which displays the magnitude
distribution of paths over the AoA range, is derived in closed-form. It is
shown that the inter-cell interference level diminishes inversely with the
array size, the square root of pilot sequence length and the spatial separation
between paths.
Comment: Submitted to IEEE JSAC Special Issue on Millimeter Wave
Communications for Future Mobile Networks; minor revision.
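The Kronecker decomposition at the heart of the framework can be sketched in a few lines of Python. The helper names are hypothetical, and the steering phases are arbitrary placeholders; the sketch only demonstrates the structural fact the design exploits, namely that a Kronecker product of uni-modulus factors is itself uni-modulus and hence a valid analog (phase-only) beamformer.

```python
import numpy as np

def unimodulus(phases):
    """A uni-modulus vector: every entry has magnitude 1 (phase-only),
    matching the constraint imposed by a phased array."""
    return np.exp(1j * np.asarray(phases, dtype=float))

def kron_beamformer(*factor_phases):
    """Build an analog beamforming vector for an N1 x N2 x ... element
    array as the Kronecker product of small uni-modulus factors of
    sizes N1, N2, ... Each factor can then be designed separately,
    e.g. one for nulling interference paths and another for
    coherently combining data paths, as described in the abstract."""
    f = np.array([1.0 + 0j])
    for phases in factor_phases:
        f = np.kron(f, unimodulus(phases))
    return f

# A 4-element beamformer from two 2-element uni-modulus factors;
# every entry of the result still has unit magnitude.
w = kron_beamformer([0.0, np.pi / 2], [0.0, np.pi])
```

Because each factor satisfies the uni-modulus constraint independently, the per-factor designs never have to be re-projected onto the feasible set after being combined.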
Bayesian Over-the-Air FedAvg via Channel Driven Stochastic Gradient Langevin Dynamics
The recent development of scalable Bayesian inference methods has renewed
interest in the adoption of Bayesian learning as an alternative to conventional
frequentist learning that offers improved model calibration via uncertainty
quantification. Recently, federated averaging Langevin dynamics (FALD) was
introduced as a variant of federated averaging that can efficiently implement
distributed Bayesian learning in the presence of noiseless communications. In
this paper, we propose wireless FALD (WFALD), a novel protocol that realizes
FALD in wireless systems by integrating over-the-air computation and
channel-driven sampling for Monte Carlo updates. Unlike prior work on wireless
Bayesian learning, WFALD enables (i) multiple local updates between
communication rounds; and (ii) stochastic gradients computed on mini-batches. A
convergence analysis is presented in terms of the 2-Wasserstein
distance between the samples produced by WFALD and the targeted global
posterior distribution. Analysis and experiments show that, when the
signal-to-noise ratio is sufficiently large, channel noise can be fully
repurposed for Monte Carlo sampling, thus entailing no loss in performance.
Comment: 6 pages, 4 figures, 26 references, submitted.
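The idea of repurposing channel noise for Langevin sampling can be sketched as a single server-side update. The function name and the noise "top-up" rule are assumptions for illustration: a stochastic gradient Langevin dynamics step needs injected Gaussian noise with standard deviation sqrt(2*lr/temperature), and here the noise already contributed by the additive channel during over-the-air aggregation counts toward that budget, with the server adding only the remainder.

```python
import numpy as np

rng = np.random.default_rng(0)

def wfald_step(theta, grads, lr=0.05, temperature=1.0, channel_noise_std=0.1):
    """Hypothetical sketch of one channel-driven SGLD round.
    `grads` holds the devices' stochastic gradients; their sum is
    received over the air together with additive channel noise."""
    k = len(grads)
    # Over-the-air aggregation: superposition of gradients + channel noise.
    aggregate = (np.sum(grads, axis=0)
                 + channel_noise_std * rng.standard_normal(np.shape(theta)))
    avg_grad = aggregate / k
    # Std of the channel-induced noise inside the update term lr * avg_grad.
    channel_in_update = lr * channel_noise_std / k
    # Total Langevin noise the SGLD schedule requires per coordinate.
    target_std = np.sqrt(2.0 * lr / temperature)
    # Top up only the missing part; with strong channel noise (low SNR in
    # the abstract's sense is high SNR for the signal, so little top-up
    # is needed when the channel already supplies enough noise).
    topup = np.sqrt(max(0.0, target_std**2 - channel_in_update**2))
    return theta - lr * avg_grad + topup * rng.standard_normal(np.shape(theta))
```

This matching of channel noise against the sampler's noise budget is what lets the protocol claim that, in the right SNR regime, the channel noise is "fully repurposed" rather than being a pure impairment.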