Sampling versus Random Binning for Multiple Descriptions of a Bandlimited Source
Random binning is an efficient, yet complex, coding technique for the
symmetric L-description source coding problem. We propose an alternative
approach that uses the quantized samples of a bandlimited source as
"descriptions". By the Nyquist condition, the source can be reconstructed if
enough samples are received. We examine a coding scheme that combines sampling
and noise-shaped quantization for a scenario in which only K < L descriptions
or all L descriptions are received. Some of the received K-sets of descriptions
correspond to uniform sampling, while others correspond to non-uniform sampling. This
scheme achieves the optimum rate-distortion performance for uniform-sampling
K-sets, but suffers noise amplification for nonuniform-sampling K-sets. We then
show that by increasing the sampling rate and adding a random-binning stage,
the optimal operating point is achieved for any K-set.
Comment: Presented at the ITW'13. 5 pages, two-column mode, 3 figures.
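The gap between uniform- and non-uniform-sampling K-sets can be seen in a small numerical sketch (our own toy model, not the paper's scheme): a periodic signal confined to B low DFT bins is reconstructed by least squares from K = B of its L samples, and the amplification of i.i.d. quantization noise depends on which K samples arrive.

```python
import numpy as np

# Toy model (our assumption, not the paper's exact setup): a periodic signal
# occupying B low DFT bins, with its L samples per period as "descriptions".
L, B = 8, 4
n = np.arange(L)
freqs = np.arange(-(B // 2), B - B // 2)   # the B occupied DFT bins

def synthesis(kept):
    # maps the B spectral coefficients to the kept time samples
    return np.exp(2j * np.pi * np.outer(n[kept], freqs) / L) / np.sqrt(L)

def noise_gain(kept):
    # output noise power per unit i.i.d. quantization noise, averaged over
    # the K received samples, under least-squares reconstruction
    A = synthesis(kept)
    return np.linalg.norm(np.linalg.pinv(A), 'fro') ** 2 / len(kept)

uniform = [0, 2, 4, 6]   # every other sample: a uniform K-set
bunched = [0, 1, 2, 3]   # consecutive samples: a non-uniform K-set
print(noise_gain(uniform), noise_gain(bunched))
```

In this toy model the uniform K-set attains the minimum possible noise gain (its synthesis matrix has orthogonal columns), while the bunched K-set inverts a poorly conditioned system and amplifies the quantization noise, the effect the added oversampling-plus-random-binning stage is designed to remove.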
Quantization Noise Shaping for Information Maximizing ADCs
ADCs sit at the interface of the analog and digital worlds and fundamentally
determine what information is available in the digital domain for processing.
This paper shows that a configurable ADC can be designed for signals with
non-constant information as a function of frequency, such that within a fixed power
budget the ADC maximizes the information in the converted signal by frequency
shaping the quantization noise. Quantization noise shaping can be realized via
loop-filter design for a single-channel delta-sigma ADC and extended to common
time- and frequency-interleaved multichannel structures. Results are presented
for example wireline and wireless-style channels.
Comment: 4 pages, 6 figures.
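As a deliberately minimal sketch of the idea, assuming nothing about the paper's actual loop filters: even a first-order delta-sigma modulator frequency-shapes its quantization error away from the low band where, for this example signal, the information lives.

```python
import numpy as np

def delta_sigma_1bit(x):
    """1-bit first-order delta-sigma modulation of a signal within [-1, 1]."""
    integ, prev = 0.0, 0.0
    y = np.empty_like(x)
    for i, s in enumerate(x):
        integ += s - prev               # integrate input minus fed-back output
        y[i] = 1.0 if integ >= 0.0 else -1.0
        prev = y[i]
    return y

n = 4096
t = np.arange(n)
x = 0.5 * np.sin(2 * np.pi * 0.01 * t)     # low-frequency input
y = delta_sigma_1bit(x)
spec = np.abs(np.fft.rfft(y - x)) ** 2     # quantization-error spectrum
low = spec[1 : n // 16].mean()             # in-band error power
high = spec[-(n // 16):].mean()            # near-Nyquist error power
print(low < high)                          # error pushed out of band
```

The output is a ±1 bitstream whose error follows a (1 - z^-1) noise transfer function, so in-band error power sits far below the near-Nyquist error power; a configurable loop filter generalizes this shaping to match a non-flat information profile.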
High-resolution distributed sampling of bandlimited fields with low-precision sensors
The problem of sampling a discrete-time sequence of spatially bandlimited
fields with a bounded dynamic range, in a distributed,
communication-constrained, processing environment is addressed. A central unit,
having access to the data gathered by a dense network of fixed-precision
sensors, operating under stringent inter-node communication constraints, is
required to reconstruct the field snapshots to maximum accuracy. Both
deterministic and stochastic field models are considered. For stochastic
fields, results are established in the almost-sure sense. The feasibility of
having a flexible tradeoff between the oversampling rate (sensor density) and
the analog-to-digital converter (ADC) precision, while achieving an exponential
accuracy in the number of bits per Nyquist-interval per snapshot is
demonstrated. This exposes an underlying ``conservation of bits'' principle:
the bit-budget per Nyquist-interval per snapshot (the rate) can be distributed
along the amplitude axis (sensor-precision) and space (sensor density) in an
almost arbitrary discrete-valued manner, while retaining the same (exponential)
distortion-rate characteristics. Achievable information scaling laws for field
reconstruction over a bounded region are also derived: with N one-bit sensors
per Nyquist-interval, the maximum pointwise distortion goes to zero as the
number of Nyquist-intervals and the total network bitrate grow. This is shown
to be possible with only nearest-neighbor communication, distributed coding,
and appropriate interpolation algorithms. For a fixed, nonzero target
distortion, the number of fixed-precision sensors and the network rate needed
are always finite.
Comment: 17 pages, 6 figures; paper withdrawn from IEEE Transactions on Signal
Processing and re-submitted to the IEEE Transactions on Information Theory.
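A stripped-down sketch of the amplitude-versus-space tradeoff (our own illustration; it omits the paper's distributed coding, so the spatial option here spends N raw bits rather than an equivalent coded budget): N one-bit sensors with staggered known thresholds resolve a bounded value as finely as a single log2(N)-bit ADC.

```python
import numpy as np

N = 16                                      # 16 one-bit sensors vs one 4-bit ADC
rng = np.random.default_rng(1)

def one_bit_array(x):
    # N staggered comparator thresholds, one bit per sensor
    th = (np.arange(N) + 0.5) / N
    level = (x[:, None] >= th).sum(axis=1)  # fuse the N single-bit readings
    return level / N                        # central estimate of x

def four_bit_adc(x):
    # one sensor spending log2(N) bits along the amplitude axis
    return (np.floor(x * N) + 0.5) / N

x = rng.random(10_000)                      # field values in [0, 1)
err_space = np.abs(one_bit_array(x) - x).max()
err_amp = np.abs(four_bit_adc(x) - x).max()
print(err_space, err_amp)                   # both bounded by 1 / (2 * N)
```

Both layouts reach the same worst-case resolution of 1/(2N); the "conservation of bits" principle above is the stronger coded statement that the bit-budget can be split between these axes while retaining exponential distortion-rate behavior.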
ITOS VHRR on-board data compression study
Data compression methods for ITOS VHRR data were studied for a tape-recorder record-and-playback application. A playback period of 9 minutes was assumed with a nominal 18-minute record period, for a 2-to-1 compression ratio. Both analog and digital methods were considered, with the conclusion that digital methods should be used. Two system designs were prepared. One is a PCM system and the other is an entropy-coded predictive-quantization system, sometimes called entropy-coded DPCM or simply DPCM. Both systems use data-management principles to transmit only the necessary data. Both systems use a medium-capacity standard tape recorder, per specifications provided by the technical officer. The 10^9-bit capacity of the recorder is the basic limitation on the compression ratio. Both systems achieve the minimum desired 2-to-1 compression ratio. A slower playback rate can be used with the DPCM system, due to its higher compression factor, for better link performance at a given CNR in terms of bandwidth utilization and error rate. The report is divided into two parts. The first part summarizes the theoretical conclusions of the second part and presents the system diagrams. The second part is a detailed analysis based upon an empirically derived random-process model, arrived at from specifications and measured data provided by the technical officer.
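A minimal sketch of the entropy-coded DPCM idea (assumed first-order predictor and uniform quantizer; the study's actual predictor, quantizer, and entropy coder are not reproduced here): for correlated data the quantized prediction residuals have markedly lower entropy than the raw quantized samples, which is where the compression gain beyond plain PCM comes from.

```python
import numpy as np

def dpcm_encode(x, step):
    pred, res = 0.0, []
    for s in x:
        r = int(np.round((s - pred) / step))   # quantized prediction residual
        res.append(r)
        pred += r * step                       # track the decoder's reconstruction
    return res

def dpcm_decode(res, step):
    pred, out = 0.0, []
    for r in res:
        pred += r * step
        out.append(pred)
    return out

def entropy(symbols):
    # empirical first-order entropy in bits per symbol
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
# correlated stand-in for scan-line data: heavily smoothed white noise
x = np.convolve(rng.standard_normal(20_000), np.ones(20) / 20, mode='same')
step = 0.05
res = dpcm_encode(x, step)
xr = dpcm_decode(res, step)
raw = np.round(x / step).astype(int)           # plain PCM at the same step size
print(entropy(res), '<', entropy(raw))         # residuals are cheaper to code
```

The reconstruction error stays within half a quantizer step, yet the residual alphabet concentrates near zero, so an entropy coder spends fewer bits per sample than PCM at the same fidelity.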
Time-encoding analog-to-digital converters : bridging the analog gap to advanced digital CMOS? Part 2: architectures and circuits
The scaling of CMOS technology deep into the nanometer range has created challenges for the design of high-performance analog ICs: they remain large in area and power consumption in spite of process scaling. Analog circuits based on time encoding [1], [2], where the signal information is encoded in the waveform transitions instead of its amplitude, have been developed to overcome these issues. While part one of this overview article [3] presented the basic principles of time encoding, this follow-up article describes and compares the main time-encoding architectures for analog-to-digital converters (ADCs) and discusses the corresponding design challenges of the circuit blocks. The focus is on structures that avoid, as much as possible, the use of traditional analog blocks like operational amplifiers (opamps) or comparators and instead use digital circuitry, ring oscillators, flip-flops, counters, and so on. Our overview of the state of the art will show that these circuits can achieve excellent performance. The obvious benefit of this highly digital approach to realizing analog functionality is that the resulting circuits are small in area and more compatible with CMOS process scaling. The approach also allows for the easy integration of these analog functions in systems on chip operating at "digital" supply voltages of 1 V and lower. A large part of the design process can also be embedded in a standard digital synthesis flow.
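One of the highly digital structures alluded to above can be caricatured in a few lines (a VCO-plus-counter quantizer with made-up gain and frequency values; real designs add calibration and multi-phase readout): the input modulates an oscillator's frequency, and a counter's per-clock edge count becomes the digital output.

```python
import numpy as np

def vco_adc(x, f0=64.0, kv=32.0):
    # oscillator runs at f0 + kv * x edges per sample clock (toy numbers)
    phase, prev, counts = 0.0, 0, []
    for s in x:
        phase += f0 + kv * s          # cycles accumulated this clock period
        edges = int(phase)            # integer edge count seen so far
        counts.append(edges - prev)   # counter readout for this period
        prev = edges
    return np.array(counts)

t = np.arange(512)
x = 0.5 * np.sin(2 * np.pi * t / 128)
c = vco_adc(x)
est = (c - 64.0) / 32.0               # invert the nominal gain
print(np.abs(est - x).max())          # bounded by one edge: < 1/32
```

Because the accumulated phase is never reset, differencing the edge counts first-order noise-shapes the phase-quantization error, the property exploited by ring-oscillator-based ADCs.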
Fusing Censored Dependent Data for Distributed Detection
In this paper, we consider a distributed detection problem for a censoring
sensor network where each sensor's communication rate is significantly reduced
by transmitting only "informative" observations to the Fusion Center (FC), and
censoring those deemed "uninformative". While the independence of data from
censoring sensors is often assumed in previous research, we explore spatial
dependence among observations. Our focus is on designing the fusion rule under
the Neyman-Pearson (NP) framework that takes into account the spatial
dependence among observations. Two transmission scenarios are considered: one
in which uncensored observations are transmitted directly to the FC, and a
second in which they are first quantized and then transmitted to further
improve transmission efficiency. A copula-based Generalized Likelihood Ratio
Test (GLRT) for censored data is proposed for both the continuous and the
discrete messages received at the FC under these two transmission strategies.
We address the computational burden of the copula-based GLRTs, which involve
multidimensional integrals, by presenting more efficient fusion rules based on
the key idea of injecting controlled noise at the FC before fusion. Although
the controlled noise reduces the signal-to-noise ratio (SNR) at the receiver,
simulation results demonstrate that the resulting noise-aided fusion approach
performs nearly as well as the exact copula-based GLRTs. By exploiting spatial
dependence, the copula-based GLRTs and their noise-aided counterparts greatly
improve detection performance compared with the fusion rule derived under the
independence assumption.
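The benefit of modeling dependence can be reproduced in a toy Gaussian setting (our own example: the Gaussian case is the copula family whose likelihood ratio is available in closed form, and censoring is omitted for brevity). Two sensors share correlated noise, only the first carries the signal under H1, and the correlation-aware likelihood ratio test dominates the independence-based one.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.8, 20_000
Lc = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
m = np.array([1.0, 0.0])                       # only sensor 1 sees the signal
z0 = rng.standard_normal((n, 2)) @ Lc.T        # H0: correlated noise
z1 = rng.standard_normal((n, 2)) @ Lc.T + m    # H1: signal plus correlated noise

def llr(z, rho_model):
    # Gaussian log-likelihood ratio for mean shift m under assumed correlation
    ci = np.linalg.inv(np.array([[1.0, rho_model], [rho_model, 1.0]]))
    return z @ (ci @ m) - 0.5 * m @ ci @ m

def auc(s0, s1):
    # Mann-Whitney estimate of the area under the ROC curve
    ranks = np.concatenate([s0, s1]).argsort().argsort() + 1
    u1 = ranks[len(s0):].sum() - len(s1) * (len(s1) + 1) / 2
    return u1 / (len(s0) * len(s1))

auc_dep = auc(llr(z0, rho), llr(z1, rho))      # correlation-aware fusion
auc_ind = auc(llr(z0, 0.0), llr(z1, 0.0))      # independence assumption
print(auc_dep > auc_ind)
```

The correlation-aware statistic weights the no-signal sensor as a noise reference (z1 - rho * z2, up to scale), yielding a clearly larger AUC, the same qualitative effect the abstract reports for the copula-based GLRTs over the independence-assumption fusion rule.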