Lossy Compression with Near-uniform Encoder Outputs
It is well known that lossless compression of a discrete memoryless source
with near-uniform encoder output is possible at a rate above its entropy if and
only if the encoder is randomized. This work focuses on deriving conditions for
near-uniform encoder output(s) in the Wyner-Ziv and the distributed lossy
compression problems. We show that in the Wyner-Ziv problem, near-uniform encoder output and operation close to the WZ-rate limit are simultaneously possible, whereas in the distributed lossy compression problem, jointly near-uniform outputs are achievable in the interior of the distributed lossy compression rate region if the sources share non-trivial Gács-Körner common information.
Comment: Submitted to the 2016 IEEE International Symposium on Information Theory (11 pages, 3 figures).
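Near-uniformity of an encoder output can be checked empirically by comparing its output distribution to the uniform one. The following is a minimal illustrative sketch (not the paper's construction), assuming a finite index alphabet and using empirical entropy and total-variation distance as the gauges:

import math
import random
from collections import Counter

def output_statistics(encoder_outputs):
    # Empirical entropy (bits) and total-variation distance from the uniform
    # distribution over the observed output alphabet.
    counts = Counter(encoder_outputs)
    n = len(encoder_outputs)
    k = len(counts)
    probs = [c / n for c in counts.values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    tv_dist = 0.5 * sum(abs(p - 1.0 / k) for p in probs)
    return entropy, tv_dist

# Toy check: a randomized encoder emitting indices uniformly at random should
# show entropy close to log2(k) and a small total-variation distance.
outputs = [random.randrange(16) for _ in range(100000)]
H, tv = output_statistics(outputs)
print(f"entropy = {H:.3f} bits (max {math.log2(16):.3f}), TV distance = {tv:.4f}")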
Coding Schemes for Achieving Strong Secrecy at Negligible Cost
We study the problem of achieving strong secrecy over wiretap channels at
negligible cost, in the sense of maintaining the overall communication rate of
the same channel without secrecy constraints. Specifically, we propose and
analyze two source-channel coding architectures, in which secrecy is achieved
by multiplexing public and confidential messages. In both cases, our main
contribution is to show that secrecy can be achieved without compromising
communication rate and by requiring only randomness of asymptotically vanishing
rate. Our first source-channel coding architecture relies on a modified wiretap
channel code, in which randomization is performed using the output of a source
code. In contrast, our second architecture relies on a standard wiretap code
combined with a modified source code termed uniform compression code, in which
a small shared secret seed is used to enhance the uniformity of the source code
output. We carry out a detailed analysis of uniform compression codes and
characterize the optimal size of the shared seed.
Comment: 15 pages, two-column, 5 figures, accepted to IEEE Transactions on Information Theory.
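The smoothing role of a small shared seed can be illustrated with a seeded 2-universal (Toeplitz) hash, in the spirit of the leftover hash lemma. This is only a hedged sketch of the general idea, not the uniform compression code defined in the paper, and the lengths below are arbitrary choices:

import random

def toeplitz_hash(message_bits, seed_bits, out_len):
    # Seeded 2-universal hash: the seed defines a binary Toeplitz matrix that
    # maps message_bits (a 0/1 list) to out_len bits whose distribution is
    # close to uniform whenever the input carries enough entropy.
    n = len(message_bits)
    assert len(seed_bits) == out_len + n - 1
    out = []
    for i in range(out_len):
        acc = 0
        for j in range(n):
            acc ^= seed_bits[i - j + n - 1] & message_bits[j]
        out.append(acc)
    return out

# Toy usage: a biased source-code output word is smoothed into a shorter,
# nearly uniform index using the shared seed (lengths are illustrative only).
n, m = 64, 16
seed = [random.randint(0, 1) for _ in range(m + n - 1)]
word = [1 if random.random() < 0.8 else 0 for _ in range(n)]
print(toeplitz_hash(word, seed, m))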
Numerical Analysis of Boosting Scheme for Scalable NMR Quantum Computation
Among initialization schemes for ensemble quantum computation beginning at
thermal equilibrium, the scheme proposed by Schulman and Vazirani [L. J.
Schulman and U. V. Vazirani, in Proceedings of the 31st ACM Symposium on Theory
of Computing (STOC'99) (ACM Press, New York, 1999), pp. 322-329] is known for
the simple quantum circuit to redistribute the biases (polarizations) of qubits
and small time complexity. However, our numerical simulation shows that the
number of qubits initialized by the scheme is rather smaller than expected from
the von Neumann entropy because of an increase in the sum of the binary
entropies of individual qubits, which indicates a growth in the total classical
correlation. This result, namely that there is such a significant growth in the total binary entropy, disagrees with their analysis.
Comment: 14 pages, 18 figures, RevTeX4, v2, v3: typos corrected, v4: minor changes in PROGRAM 1, conforming it to the actual programs used in the simulation, v5: correction of a typographical error in the inequality sign in PROGRAM 1, v6: this version contains a new section on classical correlations, v7: correction of a wrong use of terminology, v8: Appendix A has been added, v9: published in PR
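The entropy bookkeeping referred to above can be reproduced in a few lines. A hedged sketch (not the paper's numerical simulation), assuming each qubit starts at a thermal bias eps, computing the binary entropy h((1+eps)/2) and the resulting entropy-conservation bound on how many of n qubits could in principle be driven to a nearly pure state:

import math

def binary_entropy(p):
    # h(p) in bits, with h(0) = h(1) = 0.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def entropy_bound_on_initialized_qubits(n, eps):
    # Entropy-conservation bound: n qubits of bias eps carry a total entropy of
    # n * h((1 + eps) / 2), so at most roughly n * (1 - h((1 + eps) / 2)) of
    # them can end up (nearly) initialized by any unitary redistribution.
    return n * (1.0 - binary_entropy((1.0 + eps) / 2.0))

# Example: 1000 qubits at a bias of 0.1 give a bound of about 7.2 qubits.
print(entropy_bound_on_initialized_qubits(1000, 0.1))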
On Macroscopic Complexity and Perceptual Coding
The theoretical limits of 'lossy' data compression algorithms are considered.
The complexity of an object as seen by a macroscopic observer is the size of
the perceptual code which discards all information that can be lost without
altering the perception of the specified observer. The complexity of this
macroscopically observed state is the simplest description of any microstate
comprising that macrostate. Inference and pattern recognition based on
macrostate rather than microstate complexities will take advantage of the
complexity of the macroscopic observer to ignore irrelevant noise.
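One way to write the definition above more formally (the notation is ours, not the paper's): if P denotes the observer's perception map and K(.) algorithmic (Kolmogorov) complexity, the complexity of a macroscopically observed state m is C(m) = min { K(x) : P(x) = m }, the length of the shortest description of any microstate x that the observer perceives as m.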
Optimization of Planck/LFI on-board data handling
To assess stability against 1/f noise, the Low Frequency Instrument (LFI) onboard the Planck mission will acquire data at a rate much higher than the rate allowed by its telemetry bandwidth of 35.5 kbps. The data are therefore processed by an onboard pipeline, followed on ground by a reversing step. This paper illustrates the LFI scientific onboard processing used to fit within the allowed data rate. This is a lossy process tuned by a set of five parameters (Naver, r1, r2, q, O) for each of the 44 LFI detectors. The paper quantifies the level of distortion introduced by the onboard processing, EpsilonQ, as a function of these parameters, and describes the method used to optimize the onboard processing chain. The tuning procedure is based on an optimization algorithm applied to unprocessed and uncompressed raw data provided by simulations, pre-launch tests, or data taken from LFI operating in diagnostic mode. All the needed
optimization steps are performed by an automated tool, OCA2, which outputs the optimized parameters and produces a set of statistical indicators, among them the compression rate Cr and EpsilonQ. For Planck/LFI the requirements are Cr = 2.4 and EpsilonQ <= 10% of the rms of the instrumental white noise. To speed up the process, an analytical model is developed that extracts most of the relevant information on EpsilonQ and Cr as a function of the signal statistics and the processing parameters. This model will also be of interest for the instrument data analysis. The method was applied during ground tests, when the instrument was operating in conditions representative of flight. Optimized parameters were obtained and the performance was verified: the required data rate of 35.5 kbps was achieved while keeping EpsilonQ at 3.8% of the white-noise rms, well within the requirements.
Comment: 51 pages, 13 figures, 3 tables, pdflatex, needs JINST.csl, graphicx, txfonts, rotating; Issue 1.0, 10 Nov 2009; submitted to JINST 23 Jun 2009, accepted 10 Nov 2009, published 29 Dec 2009; this is a preprint, not the final version.
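The interplay between the quantization step and the distortion budget quoted above can be illustrated numerically. The following is a simplified, hedged stand-in for the onboard processing (it is not the OCA2 tool and ignores the averaging and mixing stages), applying a requantization step q with offset O to white-noise-like samples and reporting EpsilonQ as a fraction of the white-noise rms together with an entropy-based proxy for the achievable compression:

import math
import random
from collections import Counter

def requantize(samples, q, offset):
    # Lossy requantization: integer code round((x - offset) / q) and its reconstruction.
    codes = [round((x - offset) / q) for x in samples]
    recon = [c * q + offset for c in codes]
    return codes, recon

def epsilon_q(samples, recon, sigma_white):
    # Processing distortion expressed as a fraction of the instrumental white-noise rms.
    mse = sum((a - b) ** 2 for a, b in zip(samples, recon)) / len(samples)
    return math.sqrt(mse) / sigma_white

def entropy_bits_per_sample(codes):
    # Empirical entropy of the quantized codes: a proxy for the lossless rate
    # a downstream entropy coder could approach.
    counts = Counter(codes)
    n = len(codes)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy run on pure white noise with sigma = 1 and an arbitrary step q = 0.3;
# the expected distortion is roughly q / (sqrt(12) * sigma), i.e. about 8.7%.
sigma = 1.0
data = [random.gauss(0.0, sigma) for _ in range(100000)]
codes, recon = requantize(data, q=0.3, offset=0.0)
print("EpsilonQ =", epsilon_q(data, recon, sigma))
print("entropy  =", entropy_bits_per_sample(codes), "bits per sample")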