Graded quantization for multiple description coding of compressive measurements
Compressed sensing (CS) is an emerging paradigm for acquisition of compressed
representations of a sparse signal. Its low complexity is appealing for
resource-constrained scenarios like sensor networks. However, such scenarios
are often coupled with unreliable communication channels and providing robust
transmission of the acquired data to a receiver is an issue. Multiple
description coding (MDC) effectively combats channel losses for systems without
feedback, thus raising the interest in developing MDC methods explicitly
designed for the CS framework, and exploiting its properties. We propose a
method called Graded Quantization (CS-GQ) that leverages the democratic
property of compressive measurements to effectively implement MDC, and we
provide methods to optimize its performance. A novel decoding algorithm based
on the alternating directions method of multipliers is derived to reconstruct
signals from a limited number of received descriptions. Simulations are
performed to assess the performance of CS-GQ against other methods in the
presence of packet losses. The proposed method is successful at providing robust
coding of CS measurements and outperforms other schemes for the considered test
metrics.
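The splitting idea behind graded quantization can be illustrated with a toy sketch (the bit rates, the uniform quantizer, and the even/odd split are illustrative assumptions, not the exact CS-GQ design): each description quantizes half of the measurements finely and the other half coarsely, so any single received description still covers every measurement, while the central decoder keeps the fine copy of each.

```python
import numpy as np

def uq(y, bits):
    # Uniform scalar quantizer spanning [-max|y|, max|y|] with 2**bits levels.
    step = 2 * np.abs(y).max() / (2 ** bits)
    return step * np.round(y / step)

rng = np.random.default_rng(0)
y = rng.normal(size=100)            # stand-in for compressive measurements
fine, coarse = 6, 2                 # bits per measurement in each grade (assumed)

even = np.arange(len(y)) % 2 == 0
d1 = np.where(even, uq(y, fine), uq(y, coarse))   # description 1
d2 = np.where(even, uq(y, coarse), uq(y, fine))   # description 2

# Central decoder: when both descriptions arrive, keep each fine value.
central = np.where(even, d1, d2)

mse = lambda a: float(np.mean((a - y) ** 2))
mse_d1, mse_d2, mse_central = mse(d1), mse(d2), mse(central)
```

Receiving a single description gives a graceful side distortion, while receiving both recovers the fine rate everywhere; the democratic property of CS measurements is what makes either half equally informative.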
Interference Coordination via Power Domain Channel Estimation
A novel technique is proposed which enables each transmitter to acquire
global channel state information (CSI) from the sole knowledge of individual
received signal power measurements, which makes dedicated feedback or
inter-transmitter signaling channels unnecessary. To make this possible, we
resort to a completely new technique whose key idea is to exploit the transmit
power levels as symbols to embed information and the observed interference as a
communication channel the transmitters can use to exchange coordination
information. Although the used technique allows any kind of {low-rate}
information to be exchanged among the transmitters, the focus here is to
exchange local CSI. The proposed procedure also comprises a phase which allows
local CSI to be estimated. Once an estimate of global CSI is acquired by the
transmitters, it can be used to optimize any utility function which depends on
it. While algorithms which use the same type of measurements such as the
iterative water-filling algorithm (IWFA) implement the sequential best-response
dynamics (BRD) applied to individual utilities, here, thanks to the
availability of global CSI, the BRD can be applied to the sum-utility.
Extensive numerical results show that significant gains can be obtained, and
this without requiring any additional online signaling.
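The power-domain embedding idea, using transmit power levels as information symbols that another node reads off from its observed interference power, can be sketched in a toy single-link setting (the binary power alphabet, the known channel gain, and the threshold detector are assumptions for illustration, not the paper's protocol):

```python
import numpy as np

# Binary power alphabet: a "0" bit is sent at P_LOW, a "1" bit at P_HIGH.
P_LOW, P_HIGH = 1.0, 4.0
g = 0.5                 # cross-channel power gain (assumed known at the observer)

bits = [1, 0, 1, 1, 0]
tx_powers = [P_HIGH if b else P_LOW for b in bits]

# The observer sees only its measured interference power, plus noise.
rng = np.random.default_rng(0)
rx_powers = [g * p + rng.normal(0.0, 0.05) for p in tx_powers]

# Threshold detector halfway between the two received-power hypotheses.
threshold = g * (P_LOW + P_HIGH) / 2
decoded = [1 if r > threshold else 0 for r in rx_powers]
```

The point is that no dedicated feedback channel appears anywhere: the "channel" is the interference itself, and the "symbols" are power levels the transmitter was going to choose anyway.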
Source Coding Optimization for Distributed Average Consensus
Consensus is a common method for computing a function of the data distributed
among the nodes of a network. Of particular interest is distributed average
consensus, whereby the nodes iteratively compute the sample average of the data
stored at all the nodes of the network using only near-neighbor communications.
In real-world scenarios, these communications must undergo quantization, which
introduces distortion to the internode messages. In this thesis, a model for
the evolution of the network state statistics at each iteration is developed
under the assumptions of Gaussian data and additive quantization error. It is
shown that minimization of the communication load in terms of aggregate source
coding rate can be posed as a generalized geometric program, for which an
equivalent convex optimization can efficiently solve for the global minimum.
Optimization procedures are developed for rate-distortion-optimal vector
quantization, uniform entropy-coded scalar quantization, and fixed-rate uniform
quantization. Numerical results demonstrate the performance of these
approaches. For small numbers of iterations, the fixed-rate optimizations are
verified using exhaustive search. Comparison to the prior art suggests
competitive performance under certain circumstances but strongly motivates the
incorporation of more sophisticated coding strategies, such as differential,
predictive, or Wyner-Ziv coding.
Comment: Master's Thesis, Electrical Engineering, North Carolina State University
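A minimal sketch of the setting (a six-node ring, symmetric 1/3 consensus weights, and a uniform scalar quantizer with step delta are assumptions for illustration, not the thesis's optimized quantizers) shows how quantized internode messages drive the nodes toward the sample average while the network-wide mean is preserved:

```python
import numpy as np

def quantize(x, delta):
    # Uniform scalar quantizer with step delta (the source of message distortion).
    return delta * np.round(x / delta)

n = 6
rng = np.random.default_rng(1)
x = rng.normal(size=n)               # initial data held at the nodes
target = x.mean()                    # the average the nodes should agree on

# Doubly stochastic weights on a ring: 1/3 to self and to each neighbor.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3

delta = 0.01
for _ in range(50):
    q = quantize(x, delta)           # each node transmits its quantized state
    x = x + W @ q - q                # update from quantized neighbor messages
```

Because the weight matrix has unit column sums, the additive quantization error cancels in the network sum, so the mean is conserved exactly and the nodes settle within a few quantization steps of the true average; the thesis's optimization then trades the step sizes (and hence rates) per iteration against this residual distortion.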
Training Transformers with 4-bit Integers
Quantizing the activation, weight, and gradient to 4-bit is promising to
accelerate neural network training. However, existing 4-bit training methods
require custom numerical formats which are not supported by contemporary
hardware. In this work, we propose a training method for transformers with all
matrix multiplications implemented with the INT4 arithmetic. Training with an
ultra-low INT4 precision is challenging. To achieve this, we carefully analyze
the specific structures of activation and gradients in transformers to propose
dedicated quantizers for them. For forward propagation, we identify the
challenge of outliers and propose a Hadamard quantizer to suppress the
outliers. For backpropagation, we leverage the structural sparsity of gradients
by proposing bit splitting and leverage score sampling techniques to quantize
gradients accurately. Our algorithm achieves competitive accuracy on a wide
range of tasks including natural language understanding, machine translation,
and image classification. Unlike previous 4-bit training methods, our algorithm
can be implemented on the current generation of GPUs. Our prototypical linear
operator implementation is up to 2.2 times faster than the FP16 counterparts
and speeds up the training by up to 35.1%.
Comment: 9 pages, 8 figures
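The outlier-suppression intuition behind a Hadamard quantizer can be illustrated with a small sketch (the Sylvester construction, per-tensor symmetric INT4 scaling, and the synthetic outlier are assumptions for illustration, not the paper's exact quantizer): rotating by an orthonormal Hadamard matrix spreads a single large activation across all coordinates, shrinking the quantization scale and preserving the small values that direct quantization would crush.

```python
import numpy as np

def hadamard(k):
    # Orthonormal 2**k x 2**k Hadamard matrix (Sylvester construction).
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]]) / np.sqrt(2)
    return H

def quantize_int4(x):
    # Symmetric 4-bit quantization to integers in [-8, 7] with a per-tensor scale.
    scale = max(np.abs(x).max() / 7, 1e-8)
    q = np.clip(np.round(x / scale), -8, 7)
    return q, scale

rng = np.random.default_rng(0)
x = rng.normal(size=64)
x[0] = 50.0                          # a single large outlier

# Direct INT4: the outlier inflates the scale, rounding small values to zero.
q, s = quantize_int4(x)
mse_direct = float(np.mean((q * s - x) ** 2))

# Hadamard-rotated INT4: rotate, quantize, dequantize, rotate back.
H = hadamard(6)                      # 64 = 2**6
q, s = quantize_int4(H @ x)
mse_hadamard = float(np.mean((H.T @ (q * s) - x) ** 2))
```

Since the rotation is orthonormal, the reconstruction error energy equals the quantization error energy in the rotated domain, where the dynamic range is roughly sqrt(n) times smaller.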