Deep Signal Recovery with One-Bit Quantization
Machine learning, and more specifically deep learning, has shown remarkable
performance in sensing, communications, and inference. In this paper, we
consider the application of the deep unfolding technique to the problem of
signal reconstruction from its one-bit noisy measurements. Namely, we propose a
model-based machine learning method and unfold the iterations of an inference
optimization algorithm into the layers of a deep neural network for one-bit
signal recovery. The resulting network, which we refer to as DeepRec, can
efficiently handle the recovery of high-dimensional signals from acquired
one-bit noisy measurements. The proposed method results in an improvement in
accuracy and computational efficiency with respect to the original framework as
shown through numerical analysis.
Comment: This paper has been submitted to the 44th International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2019).
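A minimal sketch of the deep unfolding idea described above, not the paper's exact DeepRec architecture: it unfolds a fixed number of gradient-ascent steps on the Gaussian maximum-likelihood objective for y = sign(Ax + n) into layers, each with its own learnable step size. The class name, measurement matrix A, noise level sigma, and layer count are illustrative assumptions.

import math
import torch
import torch.nn as nn

class UnfoldedOneBitRecovery(nn.Module):
    # Unfolds num_layers gradient-ascent iterations of one-bit maximum-likelihood
    # recovery into network layers with learnable per-layer step sizes (a sketch,
    # not the DeepRec architecture from the paper).
    def __init__(self, A, sigma=0.1, num_layers=10):
        super().__init__()
        self.register_buffer("A", A)      # m x n measurement matrix (assumed known)
        self.sigma = sigma                # assumed noise standard deviation
        self.steps = nn.Parameter(0.01 * torch.ones(num_layers))

    def forward(self, y):
        # y: batch x m tensor of +/-1 one-bit measurements.
        x = torch.zeros(y.shape[0], self.A.shape[1], device=y.device)
        for step in self.steps:
            z = y * (x @ self.A.T) / self.sigma                  # batch x m
            pdf = torch.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
            cdf = 0.5 * torch.erfc(-z / math.sqrt(2.0))          # Gaussian CDF Phi(z)
            # Gradient of sum_i log Phi(y_i * a_i^T x / sigma) with respect to x.
            grad = (y * pdf / (cdf + 1e-12) / self.sigma) @ self.A
            x = x + step * grad                                  # one unfolded ascent step
        return x

# Training would fit the step sizes end-to-end, e.g. by minimizing the MSE between
# the network output and the true signals over a dataset of (y, x) pairs.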
Accelerating and Compressing Deep Neural Networks for Massive MIMO CSI Feedback
The recent advances in machine learning and deep neural networks have made
them attractive candidates for wireless communications functions such as
channel estimation, decoding, and downlink channel state information (CSI)
compression. However, most of these neural networks are large and inefficient, which poses a barrier to deployment in practical wireless systems that require low latency and a low memory footprint for individual network functions. To
mitigate these limitations, we propose efficient, accelerated, and compressed neural networks for massive MIMO CSI feedback. Specifically, we have thoroughly
investigated the adoption of network pruning, post-training dynamic range
quantization, and weight clustering to optimize CSI feedback compression for
massive MIMO systems. Furthermore, we have deployed the proposed model
compression techniques on commodity hardware and demonstrated that in order to
achieve inference gains, specialized libraries that accelerate computations for
sparse neural networks are required. Our findings indicate that there is
remarkable value in applying these model compression techniques: the proposed joint pruning and quantization approach reduced model size by 86.5% and inference time by 76.2% with minimal impact on model accuracy. These
compression methods are crucial to pave the way for the practical adoption and deployment of deep learning-based techniques in commercial wireless systems.
Comment: IEEE ICC 2023 Conference
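A hedged sketch of the compression pipeline described above, not the authors' code: it applies magnitude pruning, weight clustering, and post-training dynamic range quantization to a small stand-in Keras encoder using the TensorFlow Model Optimization Toolkit and the TFLite converter. The layer sizes, sparsity target, cluster count, and output file name are illustrative assumptions.

import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Stand-in encoder; a real CSI feedback network would replace this.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation="relu", input_shape=(2048,)),
    tf.keras.layers.Dense(32),            # compressed CSI codeword
])

# 1) Magnitude pruning: gradually zero out low-magnitude weights during fine-tuning.
schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.8, begin_step=0, end_step=1000)
pruned = tfmot.sparsity.keras.prune_low_magnitude(model, pruning_schedule=schedule)
pruned.compile(optimizer="adam", loss="mse")
# ... fine-tune `pruned` on CSI data here with the UpdatePruningStep callback ...
pruned = tfmot.sparsity.keras.strip_pruning(pruned)

# 2) Weight clustering: replace each layer's weights with a small shared codebook.
clustered = tfmot.clustering.keras.cluster_weights(
    pruned, number_of_clusters=16,
    cluster_centroids_init=tfmot.clustering.keras.CentroidInitialization.LINEAR)
clustered = tfmot.clustering.keras.strip_clustering(clustered)

# 3) Post-training dynamic range quantization: weights stored as int8, activations
#    kept in float; speedups from pruning additionally require inference kernels
#    that exploit sparse weights, as noted in the abstract.
converter = tf.lite.TFLiteConverter.from_keras_model(clustered)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("csi_encoder_compressed.tflite", "wb") as f:
    f.write(converter.convert())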