Joint Activity-Delay Detection and Channel Estimation for Asynchronous Massive Random Access: A Free Probability Theory Approach
Grant-free random access (RA) has been recognized as a promising solution to
support massive connectivity due to the removal of the uplink grant request
procedures. While most endeavours assume perfect synchronization between the
users and the base station, this paper investigates asynchronous grant-free
massive RA and develops efficient algorithms for joint user activity detection,
synchronization delay detection, and channel estimation. Considering the
sparsity of user activity, we formulate a sparse signal recovery problem and
propose to utilize the framework of orthogonal approximate message passing
(OAMP) to deal with the non-independent and identically distributed (i.i.d.)
Gaussian pilot matrices caused by the synchronization delays. In particular, an
OAMP-based algorithm is developed to fully harness the common sparsity among
received pilot signals from multiple base station antennas. To reduce the
computational complexity, we further propose a free probability AMP
(FPAMP)-based algorithm, which exploits the rectangular free cumulants to make
the cost-effective AMP framework compatible with general pilot matrices.
Simulation results demonstrate that the two proposed algorithms outperform
various baselines, and the FPAMP-based algorithm reduces the computational cost
by 40% while maintaining detection/estimation accuracy comparable to that of
the OAMP-based algorithm.
Comment: arXiv admin note: text overlap with arXiv:2305.1237
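For context, the snippet below sketches the plain AMP recursion (for an i.i.d. Gaussian measurement matrix and a single measurement vector) that such receivers build on and generalize; it is not the paper's OAMP/FPAMP algorithm, which extends this recursion to delay-induced non-i.i.d. pilot matrices and multiple antennas. The sparsity level, noise level, and problem sizes are toy values.
```python
# Plain AMP for y = A @ x + n with a Bernoulli-Gaussian prior on x (toy sketch).
import numpy as np

def bg_denoiser(r, tau2, rho, var_x=1.0):
    """MMSE denoiser (and its derivative) for a Bernoulli-Gaussian prior,
    given the pseudo-observation r = x + N(0, tau2)."""
    a = 1.0 / tau2 - 1.0 / (var_x + tau2)
    q = (1 - rho) / rho * np.sqrt((var_x + tau2) / tau2) * np.exp(-0.5 * a * r**2)
    pi = 1.0 / (1.0 + q)                          # posterior activity probability
    c = var_x / (var_x + tau2)
    f = pi * c * r                                # posterior mean estimate
    df = c * pi * (1.0 + (1.0 - pi) * a * r**2)   # derivative, used in the Onsager term
    return f, df

def amp_sketch(y, A, rho=0.1, n_iter=25):
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iter):
        tau2 = np.sum(z**2) / m                   # effective noise variance estimate
        r = x + A.T @ z                           # pseudo-observation
        x, dx = bg_denoiser(r, tau2, rho)
        z = y - A @ x + (n / m) * np.mean(dx) * z  # residual with Onsager correction
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, rho = 400, 200, 0.1
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = (rng.random(n) < rho) * rng.standard_normal(n)
    y = A @ x_true + 0.05 * rng.standard_normal(m)
    x_hat = amp_sketch(y, A, rho=rho)
    print("NMSE:", np.sum((x_hat - x_true)**2) / np.sum(x_true**2))
```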
Joint Activity Detection, Channel Estimation, and Data Decoding for Grant-free Massive Random Access
In the massive machine-type communication (mMTC) scenario, a large number of
devices with sporadic traffic need to access the network on limited radio
resources. While grant-free random access has emerged as a promising mechanism
for massive access, its potential has not been fully unleashed. In particular,
the common sparsity pattern in the received pilot and data signal has been
ignored in most existing studies, and the auxiliary information from channel decoding
has not been utilized for user activity detection. This paper endeavors to
develop advanced receivers in a holistic manner for joint activity detection,
channel estimation, and data decoding. In particular, a turbo receiver based on
the bilinear generalized approximate message passing (BiG-AMP) algorithm is
developed. In this receiver, all the received symbols will be utilized to
jointly estimate the channel state, user activity, and soft data symbols, which
effectively exploits the common sparsity pattern. Meanwhile, the extrinsic
information from the channel decoder will assist the joint channel estimation
and data detection. To reduce the complexity, a low-cost side information-aided
receiver is also proposed, where the channel decoder provides side information
to update the estimates on whether a user is active or not. Simulation results
show that the turbo receiver is able to reduce the activity detection, channel
estimation, and data decoding errors effectively, while the side
information-aided receiver notably outperforms the conventional method with a
relatively low complexity.
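One piece of this pipeline that is easy to isolate is the side-information-aided activity refinement: reliability feedback from the channel decoder is used to confirm or reject pilot-based activity decisions. The sketch below is an illustrative stand-in, not the paper's receiver; the LLR-confidence test and all thresholds are assumptions.
```python
# Refine user-activity decisions with channel-decoder side information (toy sketch).
import numpy as np

def refine_activity(pilot_activity_prob, decoder_llrs,
                    llr_threshold=2.0, confident_fraction=0.9):
    """pilot_activity_prob: (K,) activity probabilities from pilot-based detection.
    decoder_llrs: (K, L) per-bit LLRs returned by the channel decoder.
    Returns a boolean (K,) vector of refined activity decisions."""
    # Fraction of bits the decoder is confident about, per user.
    confident = np.mean(np.abs(decoder_llrs) > llr_threshold, axis=1)
    decoder_says_active = confident >= confident_fraction
    pilot_says_active = pilot_activity_prob > 0.5
    # Keep a user only if pilot detection and decoder feedback agree.
    return pilot_says_active & decoder_says_active

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K, L = 6, 128
    truth = np.array([1, 1, 0, 0, 1, 0], dtype=bool)
    # Active users decode confidently; inactive users produce noise-like LLRs.
    llrs = np.where(truth[:, None], 6.0 * rng.choice([-1, 1], (K, L)),
                    0.5 * rng.standard_normal((K, L)))
    pilot_prob = np.clip(truth + 0.2 * rng.standard_normal(K), 0, 1)
    print(refine_activity(pilot_prob, llrs))
```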
Task-Oriented Communication for Multi-Device Cooperative Edge Inference
This paper investigates task-oriented communication for multi-device
cooperative edge inference, where a group of distributed low-end edge devices
transmit the extracted features of local samples to a powerful edge server for
inference. While cooperative edge inference can overcome the limited sensing
capability of a single device, it substantially increases the communication
overhead and may incur excessive latency. To enable low-latency cooperative
inference, we propose a learning-based communication scheme that optimizes
local feature extraction and distributed feature encoding in a task-oriented
manner, i.e., to remove data redundancy and transmit information that is
essential for the downstream inference task rather than reconstructing the data
samples at the edge server. Specifically, we leverage an information bottleneck
(IB) principle to extract the task-relevant feature at each edge device and
adopt a distributed information bottleneck (DIB) framework to formalize a
single-letter characterization of the optimal rate-relevance tradeoff for
distributed feature encoding. To admit flexible control of the communication
overhead, we extend the DIB framework to a distributed deterministic
information bottleneck (DDIB) objective that explicitly incorporates the
representational costs of the encoded features. As the IB-based objectives are
computationally prohibitive for high-dimensional data, we adopt variational
approximations to make the optimization problems tractable. To compensate for the
potential performance loss due to the variational approximations, we also
develop a selective retransmission (SR) mechanism to identify the redundancy in
the encoded features of multiple edge devices to attain additional
communication overhead reduction. Extensive experiments evidence that the
proposed task-oriented communication scheme achieves a better rate-relevance
tradeoff than baseline methods.
Comment: This paper was accepted to IEEE Transactions on Wireless Communications
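To make the rate-relevance trade-off concrete, the following is a minimal single-device variational information bottleneck (VIB) loss in PyTorch, a standard construction rather than the paper's distributed (D)DIB objective: a cross-entropy term measures task relevance, while a KL regularizer weighted by beta penalizes the representation cost. Network sizes and beta are illustrative.
```python
# Single-device variational information bottleneck (VIB) sketch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBEncoder(nn.Module):
    def __init__(self, in_dim=784, feat_dim=16):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, feat_dim)        # mean of q(z|x)
        self.logvar = nn.Linear(256, feat_dim)    # log-variance of q(z|x)

    def forward(self, x):
        h = self.backbone(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return z, mu, logvar

def vib_loss(logits, labels, mu, logvar, beta=1e-3):
    task = F.cross_entropy(logits, labels)                               # relevance term
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(1).mean()   # rate term
    return task + beta * kl

# Usage sketch: device-side encoder feeding a linear classifier at the server.
encoder, classifier = VIBEncoder(), nn.Linear(16, 10)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
z, mu, logvar = encoder(x)
loss = vib_loss(classifier(z), y, mu, logvar)
loss.backward()
```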
MimiC: Combating Client Dropouts in Federated Learning by Mimicking Central Updates
Federated learning (FL) is a promising framework for privacy-preserving
collaborative learning, where model training tasks are distributed to clients
and only the model updates need to be collected at a server. However, when
being deployed at mobile edge networks, clients may have unpredictable
availability and drop out of the training process, which hinders the
convergence of FL. This paper tackles such a critical challenge. Specifically,
we first investigate the convergence of the classical FedAvg algorithm with
arbitrary client dropouts. We find that with the common choice of a decaying
learning rate, FedAvg oscillates around a stationary point of the global loss
function, which is caused by the divergence between the aggregated and desired
central update. Motivated by this new observation, we then design a novel
training algorithm named MimiC, where the server modifies each received model
update based on the previous ones. The proposed modification of the received
model updates mimics the imaginary central update irrespective of dropout
clients. The theoretical analysis of MimiC shows that divergence between the
aggregated and central update diminishes with proper learning rates, leading to
its convergence. Simulation results further demonstrate that MimiC maintains
stable convergence performance and learns better models than the baseline
methods.
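A heavily simplified way to picture the server-side idea (an assumption on our part, not necessarily MimiC's exact correction rule) is a server that caches each client's most recent update and substitutes the cached update whenever that client drops out, so the aggregate keeps approximating the full central update:
```python
# Dropout-tolerant aggregation sketch: stale cached updates stand in for dropped clients.
import numpy as np

class DropoutTolerantServer:
    def __init__(self, model_dim, num_clients, lr=0.1):
        self.global_model = np.zeros(model_dim)
        self.cached_updates = np.zeros((num_clients, model_dim))  # last seen update per client
        self.lr = lr

    def aggregate(self, round_updates):
        """round_updates: dict {client_id: update vector} from clients that
        did not drop out in the current round."""
        for cid, upd in round_updates.items():
            self.cached_updates[cid] = upd        # refresh cache for active clients
        # Dropped clients contribute their cached (stale) updates.
        approx_central_update = self.cached_updates.mean(axis=0)
        self.global_model -= self.lr * approx_central_update
        return self.global_model
```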
Branchy-GNN: a Device-Edge Co-Inference Framework for Efficient Point Cloud Processing
The recent advancements of three-dimensional (3D) data acquisition devices
have spurred a new breed of applications that rely on point cloud data
processing. However, processing a large volume of point cloud data brings a
significant workload on resource-constrained mobile devices, preventing them from
unleashing their full potential. Built upon the emerging paradigm of
device-edge co-inference, where an edge device extracts and transmits the
intermediate feature to an edge server for further processing, we propose
Branchy-GNN for efficient graph neural network (GNN) based point cloud
processing by leveraging edge computing platforms. In order to reduce the
on-device computational cost, the Branchy-GNN adds branch networks for early
exiting. Besides, it employs learning-based joint source-channel coding (JSCC)
for the intermediate feature compression to reduce the communication overhead.
Our experimental results demonstrate that the proposed Branchy-GNN secures a
significant latency reduction compared with several benchmark methods.
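The device-side control flow of such a co-inference system can be sketched as follows; this is an illustrative toy with linear layers standing in for the GNN backbone and the learned JSCC encoder, not the released Branchy-GNN implementation, and all module sizes and the confidence threshold are made up.
```python
# Early-exit device-edge co-inference control flow (toy sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeviceSide(nn.Module):
    def __init__(self, in_dim=64, feat_dim=128, code_dim=16, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.branch = nn.Linear(feat_dim, num_classes)       # early-exit classifier
        self.jscc_encoder = nn.Linear(feat_dim, code_dim)    # learned feature compression

    def forward(self, x, conf_threshold=0.9):
        feat = self.backbone(x)
        probs = F.softmax(self.branch(feat), dim=-1)
        confidence, pred = probs.max(dim=-1)
        if confidence.item() >= conf_threshold:
            return {"exit": "device", "prediction": pred.item()}
        # Not confident: compress the intermediate feature and offload to the edge server.
        return {"exit": "edge", "code": self.jscc_encoder(feat)}

# Usage sketch with a single sample.
device_model = DeviceSide()
out = device_model(torch.randn(1, 64))
print(out["exit"])
```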
How Robust is Federated Learning to Communication Error? A Comparison Study Between Uplink and Downlink Channels
Because of its privacy-preserving capability, federated learning (FL) has
attracted significant attention from both academia and industry. However, when
being implemented over wireless networks, it is not clear how much
communication error can be tolerated by FL. This paper investigates the
robustness of FL to the uplink and downlink communication error. Our
theoretical analysis reveals that the robustness depends on two critical
parameters, namely the number of clients and the numerical range of model
parameters. It is also shown that the uplink communication in FL can tolerate a
higher bit error rate (BER) than downlink communication, and this difference is
quantified by a proposed formula. The findings and theoretical analyses are
further validated by extensive experiments.
Comment: Submitted to IEEE for possible publication
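A simple way to reproduce this kind of study is to inject independent bit flips into the float32 payload of a model update at a prescribed BER, as sketched below; the BER value and the small test at the bottom are illustrative.
```python
# Inject random bit errors into a float32 model-update vector at a given BER.
import numpy as np

def inject_bit_errors(weights, ber, rng):
    """Flip each bit of the float32 payload independently with probability ber."""
    as_uint = weights.astype(np.float32).view(np.uint32)
    bits = np.unpackbits(as_uint.view(np.uint8))            # 32 bits per weight
    flips = rng.random(bits.size) < ber
    corrupted = np.packbits(bits ^ flips.astype(np.uint8))
    return corrupted.view(np.uint32).view(np.float32)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    update = rng.standard_normal(1000).astype(np.float32)
    noisy = inject_bit_errors(update, ber=1e-4, rng=rng)
    print("Relative distortion:",
          np.linalg.norm(noisy - update) / np.linalg.norm(update))
```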