Distributed Quantization for Sparse Time Sequences
Analog signals processed in digital hardware are quantized into a discrete
bit-constrained representation. Quantization is typically carried out using
analog-to-digital converters (ADCs), operating in a serial scalar manner. In
some applications, a set of analog signals are acquired individually and
processed jointly. Such setups are referred to as distributed quantization. In
this work, we propose a distributed quantization scheme for representing a set
of sparse time sequences acquired using conventional scalar ADCs. Our approach
utilizes tools from secure group testing theory to exploit the sparse nature of
the acquired analog signals, obtaining a compact and accurate representation
while operating in a distributed fashion. We then show how our technique can be
implemented when the quantized signals are transmitted over a multi-hop
communication network providing a low-complexity network policy for routing and
signal recovery. Our numerical evaluations demonstrate that the proposed scheme
notably outperforms conventional methods based on the combination of
quantization and compressed sensing tools.
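The group-testing idea above can be illustrated with a minimal toy sketch. The code below is not the paper's secure group testing scheme; it uses the standard COMP (combinatorial orthogonal matching pursuit) decoder and a hypothetical 1-bit activity threshold, purely to show how pooled boolean tests recover the support of a sparse sequence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch (assumed parameters, not from the paper):
# recover the support of a sparse quantized sequence from
# group-testing-style pooled (OR-ed) measurements.
n, k, m = 64, 3, 24              # signal length, sparsity, number of pools

x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.uniform(1.0, 2.0, size=k)

# 1-bit scalar "ADC": per-sample activity indicator
b = (np.abs(x) > 0.5).astype(int)

# Random pooling matrix: each pool ORs a random subset of samples
A = (rng.random((m, n)) < 0.1).astype(int)
y = (A @ b > 0).astype(int)      # pooled boolean tests

# COMP decoder: any item appearing in a negative test is inactive
est = np.ones(n, dtype=int)
for t in range(m):
    if y[t] == 0:
        est[A[t] == 1] = 0
```

COMP never produces false negatives (an active sample makes every pool containing it positive), so `est` is always a superset of the true support; more pools shrink the false-positive rate.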
A Distributed Computationally Aware Quantizer Design via Hyper Binning
We design a distributed function-aware quantization scheme for distributed
functional compression. We consider correlated sources and a destination that
seeks the outcome of a continuous function. We develop a compression scheme,
called hyper binning, that quantizes by minimizing the entropy of the joint
source partitioning. Hyper binning is a natural
generalization of Cover's random code construction for the asymptotically
optimal Slepian-Wolf encoding scheme that makes use of orthogonal binning. The
key idea behind this approach is to use linear discriminant analysis in order
to characterize different source feature combinations. This scheme captures the
correlation between the sources and the structure of the function as a means of
dimensionality reduction. We investigate the performance of hyper binning for
different source distributions, and identify which classes of sources entail
more partitioning to achieve better function approximation. Our approach brings
an information theory perspective to the traditional vector quantization
technique from signal processing.
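The gain from function-aware binning can be sketched with a toy example. This is not the paper's hyper binning construction; it simply bins along the direction relevant to an assumed function f(x, y) = x + y of two correlated sources, instead of quantizing each source in isolation, and reconstructs f from bin centroids.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sketch (assumed setup): two correlated sources, destination
# wants f(x, y) = x + y. Bin the projection that determines f.
n_bins = 16
x = rng.normal(size=10_000)
y = 0.8 * x + 0.6 * rng.normal(size=10_000)   # correlated with x
f = x + y

# Level sets of f are lines x + y = const, so bin (x + y) / sqrt(2)
proj = (x + y) / np.sqrt(2)
edges = np.quantile(proj, np.linspace(0, 1, n_bins + 1))
idx = np.clip(np.searchsorted(edges, proj) - 1, 0, n_bins - 1)

# Reconstruct f from per-bin centroids
centroids = np.array([f[idx == b].mean() for b in range(n_bins)])
f_hat = centroids[idx]
mse = np.mean((f - f_hat) ** 2)
```

With 16 bins the centroid reconstruction of f is already far more accurate than the variance of f itself, because all the rate is spent along the one dimension the function actually depends on.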