Trading quantum for classical resources in quantum data compression
We study the visible compression of a source E of pure quantum signal states,
or, more formally, the minimal resources per signal required to represent
arbitrarily long strings of signals with arbitrarily high fidelity, when the
compressor is given the identity of the input state sequence as classical
information. According to the quantum source coding theorem, the optimal
quantum rate is the von Neumann entropy S(E) qubits per signal.
We develop a refinement of this theorem in order to analyze the situation in
which the states are coded into classical and quantum bits that are quantified
separately. This leads to a trade-off curve Q(R), where Q(R) qubits per signal
is the optimal quantum rate for a given classical rate of R bits per signal.
Our main result is an explicit characterization of this trade-off function
by a simple formula in terms of only single-signal, perfect-fidelity encodings
of the source. We give a thorough discussion of many further mathematical
properties of our formula, including an analysis of its behavior for group
covariant sources and a generalization to sources with continuously
parameterized states. We also show that our result leads to a number of
corollaries characterizing the trade-off between information gain and state
disturbance for quantum sources. In addition, we indicate how our techniques
also provide a solution to the so-called remote state preparation problem.
Finally, we develop a probability-free version of our main result which may be
interpreted as an answer to the question: "How many classical bits does a
qubit cost?" This theorem provides a type of dual to Holevo's theorem, insofar
as the latter characterizes the cost of coding classical bits into qubits.
Comment: 51 pages, 7 figures
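For reference, the central quantity can be written out explicitly (these are the standard definitions; the trade-off formula itself is the paper's contribution). For an ensemble \(\mathcal{E} = \{p_i, |\varphi_i\rangle\}\), the von Neumann entropy of the average state is

\[ S(\mathcal{E}) = -\operatorname{Tr}\,\rho \log_2 \rho, \qquad \rho = \sum_i p_i\, |\varphi_i\rangle\langle\varphi_i|, \]

so the endpoints of the trade-off curve are Q(0) = S(E), recovering the quantum source coding theorem, and Q(R) = 0 once the classical rate R alone suffices to reproduce the signals with high fidelity.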
Communication over an Arbitrarily Varying Channel under a State-Myopic Encoder
We study the problem of communication over a discrete arbitrarily varying
channel (AVC) when a noisy version of the state is known non-causally at the
encoder. The state is chosen by an adversary which knows the coding scheme. A
state-myopic encoder observes this state non-causally, though imperfectly,
through a noisy discrete memoryless channel (DMC). We first characterize the
capacity of this state-dependent channel when the encoder-decoder share
randomness unknown to the adversary, i.e., the randomized coding capacity.
Next, we show that when only the encoder is allowed to randomize, the capacity
remains unchanged when positive. Interesting and well-known special cases of
the state-myopic encoder model are also presented.
Comment: 16 pages
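Schematically (our notation, not a statement of the paper's theorem): the adversary fixes a state sequence s^n knowing the code, the encoder maps the message together with its noisy state observation z^n to the channel input x^n, and the decoder observes y^n, where

\[ z_t \sim V(\cdot \mid s_t), \qquad y_t \sim W(\cdot \mid x_t, s_t), \qquad t = 1, \dots, n, \]

with V the DMC through which the encoder (non-causally) sees the state and W the state-dependent channel; the capacity question is for which rates reliable communication is possible against every adversarial choice of s^n.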
DRASIC: Distributed Recurrent Autoencoder for Scalable Image Compression
We propose a new architecture for distributed image compression from a group
of distributed data sources. The work is motivated by practical needs of
data-driven codec design, low power consumption, robustness, and data privacy.
The proposed architecture, which we refer to as Distributed Recurrent
Autoencoder for Scalable Image Compression (DRASIC), is able to train
distributed encoders and one joint decoder on correlated data sources. Its
compression performance is significantly better than that of codecs trained
separately on each source. Meanwhile, our distributed system with 10
distributed sources performs within 2 dB peak signal-to-noise ratio (PSNR) of
a single codec trained on all data sources. We experiment with distributed
sources of varying correlation and show how well our data-driven methodology
matches the Slepian-Wolf theorem of distributed source coding (DSC). To the
best of our knowledge, this is the first data-driven DSC framework for general
distributed code design with deep learning.
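As a rough illustration of the training setup (a minimal feedforward sketch in PyTorch; DRASIC itself is recurrent and codes residuals progressively, and every name below is ours, not the paper's):

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """One encoder per data source; weights are not shared across sources."""
    def __init__(self, code_ch=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, code_ch, 3, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)  # DRASIC binarizes this code (straight-through estimator)

class JointDecoder(nn.Module):
    """A single decoder trained jointly on the codes of all sources."""
    def __init__(self, code_ch=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(code_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoders = [Encoder() for _ in range(10)]   # 10 distributed sources
decoder = JointDecoder()                    # one joint decoder
params = [p for e in encoders for p in e.parameters()] + list(decoder.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

# One training step: each source contributes a batch of its own data.
batches = [torch.rand(4, 1, 32, 32) for _ in range(10)]  # stand-in for correlated data
opt.zero_grad()
loss = sum(loss_fn(decoder(enc(x)), x) for enc, x in zip(encoders, batches))
loss.backward()
opt.step()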
Suboptimality of the Karhunen-Loève transform for transform coding
We examine the performance of the Karhunen-Loève transform (KLT) for transform coding applications. The KLT has long been viewed as the best available block transform for a system that orthogonally transforms a vector source, scalar quantizes the components of the transformed vector using optimal bit allocation, and then inverse transforms the vector. This paper treats fixed-rate and variable-rate transform codes of non-Gaussian sources. The fixed-rate approach uses an optimal fixed-rate scalar quantizer to describe the transform coefficients; the variable-rate approach uses a uniform scalar quantizer followed by an optimal entropy code, and each quantized component is encoded separately. Earlier work shows that for the variable-rate case there exist sources on which the KLT is not unique, and the optimal quantization and coding stage matched to a "worst" KLT yields performance as much as 1.5 dB worse than the optimal quantization and coding stage matched to a "best" KLT. In this paper, we strengthen that result to show that in both the fixed-rate and the variable-rate coding frameworks there exist sources for which the performance penalty for using a "worst" KLT can be made arbitrarily large. Further, we demonstrate in both frameworks that there exist sources for which even a "best" KLT gives suboptimal performance. Finally, we show that even for vector sources where the KLT yields independent coefficients, the KLT can be suboptimal for fixed-rate coding.
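A minimal numpy sketch of the object under study (toy data, our construction): the KLT is the eigenbasis of the source covariance, and the example also shows the caveat at the heart of the paper, that decorrelated is not independent for non-Gaussian sources:

import numpy as np

rng = np.random.default_rng(0)
# Toy non-Gaussian vector source: uniform marginals mixed by a fixed matrix.
A = np.array([[1.0, 0.6], [0.2, 1.0]])
x = rng.uniform(-1.0, 1.0, size=(100_000, 2)) @ A.T

C = np.cov(x, rowvar=False)               # sample covariance of the source
_, U = np.linalg.eigh(C)                  # KLT basis = eigenvectors of C
y = x @ U                                 # transform coefficients
print(np.cov(y, rowvar=False).round(4))   # ~diagonal: decorrelated coefficients
# Decorrelated does not mean independent for non-Gaussian x, which is one
# reason the KLT's optimality intuition can fail in the coding frameworks above.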
Programmable Spectrometry: Per-pixel Classification of Materials using Learned Spectral Filters
Many materials have distinct spectral profiles. This facilitates estimation
of the material composition of a scene at each pixel by first acquiring its
hyperspectral image, and subsequently filtering it using a bank of spectral
profiles. This process is inherently wasteful, since only a small set of linear
projections of the acquired measurements contributes to the classification task.
We propose a novel programmable camera that is capable of producing images of a
scene with an arbitrary spectral filter. We use this camera to optically
implement the spectral filtering of the scene's hyperspectral image with the
bank of spectral profiles needed to perform per-pixel material classification.
This provides gains both in acquisition speed, since only the relevant
measurements are acquired, and in signal-to-noise ratio, since we avoid
narrowband filters that are light-inefficient. Given
training data, we use a range of classical and modern techniques including SVMs
and neural networks to identify the bank of spectral profiles that facilitate
material classification. We verify the method in simulations on standard
datasets, as well as on real data captured with a lab prototype of the camera.
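Computationally, the pipeline is a per-pixel matrix product that the proposed camera evaluates optically. A minimal numpy sketch (all names and shapes are ours; the filter bank F is random here, whereas the paper learns it, e.g., with SVMs or neural networks):

import numpy as np

rng = np.random.default_rng(1)
H, W, B, M, K = 64, 64, 31, 6, 4          # image size, spectral bands, filters, classes
cube = rng.random((H, W, B))              # stand-in hyperspectral image
F = rng.standard_normal((B, M))           # spectral filter bank (learned in the paper)
meas = cube.reshape(-1, B) @ F            # optical filtering: only M << B measurements
Wc = rng.standard_normal((M, K))          # linear per-pixel classifier on measurements
labels = (meas @ Wc).argmax(axis=1).reshape(H, W)
print(labels.shape)                       # (64, 64): one material label per pixel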
Remote preparation of quantum states
Remote state preparation is the variant of quantum state teleportation in
which the sender knows the quantum state to be communicated. The original paper
introducing teleportation established minimal requirements for classical
communication and entanglement but the corresponding limits for remote state
preparation have remained unknown until now. Previous work has shown, however,
that remote state preparation not only requires less classical communication
than teleportation but also gives rise to a trade-off between these two
resources in the appropriate setting. We discuss
this problem from first principles, including the various choices one may
follow in the definitions of the actual resources. Our main result is a general
method of remote state preparation for arbitrary states of many qubits, at a
cost of 1 bit of classical communication and 1 bit of entanglement per qubit
sent. In this "universal" formulation, these ebit and cbit requirements are
shown to be simultaneously optimal by exhibiting a dichotomy. Our protocol then
yields the exact trade-off curve for arbitrary ensembles of pure states and
pure entangled states (including the case of incomplete knowledge of the
ensemble probabilities), based on the recently established quantum-classical
trade-off for quantum data compression. The paper includes an extensive
discussion of our results, including the impact of the choice of model on the
resources, the topic of obliviousness, and an application to private quantum
channels and quantum data hiding.
Comment: 21 pages plus 2 figures (eps), revtex4. v2 corrects some errors and adds an obliviousness discussion. v3 has section VI C deleted and various minor oversights corrected.
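In resource-inequality shorthand (standard notation; the counts are exactly those stated above), the universal protocol achieves, asymptotically per qubit sent,

\[ 1~\text{cbit} + 1~\text{ebit} \;\geq\; 1~\text{remotely prepared qubit}, \]

to be compared with teleportation's 2 cbits + 1 ebit per qubit; the saving is possible precisely because the sender knows the state being communicated.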
Software Defined Media: Virtualization of Audio-Visual Services
Internet-native audio-visual services are developing rapidly. Among
these services, object-based audio-visual services are gaining importance. In
2014, we established the Software Defined Media (SDM) consortium to target new
research areas and markets involving object-based digital media and
Internet-by-design audio-visual environments. In this paper, we introduce the
SDM architecture that virtualizes networked audio-visual services along with
the development of smart buildings and smart cities using Internet of Things
(IoT) devices and smart building facilities. Moreover, we design the SDM
architecture as a layered architecture to promote the development of innovative
applications on the basis of rapid advancements in software-defined networking
(SDN). Then, we implement a prototype system based on the architecture, present
the system at an exhibition, and provide it as an SDM API to application
developers at hackathons. Various types of applications were developed using
the API at these events. An evaluation of SDM API access shows that the
prototype SDM platform effectively provides 3D audio reproducibility and
interactivity for SDM applications.
Comment: IEEE International Conference on Communications (ICC2017), Paris, France, 21-25 May 2017
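To make "object-based" concrete: audio is delivered as individually positioned objects that a renderer mixes for the local speaker layout or headphones. The sketch below is a purely hypothetical scene description written for illustration; it is not the actual SDM API:

import json

# Hypothetical object-based audio scene; none of these fields are the SDM schema.
scene = {
    "room": {"size_m": [10.0, 6.0, 3.0]},            # playback space for rendering
    "objects": [
        {"id": "vocal", "position_m": [2.0, 1.5, 1.7], "gain_db": 0.0,
         "stream": "rtp://example.local/vocal"},      # hypothetical stream URL
        {"id": "piano", "position_m": [5.0, 3.0, 1.2], "gain_db": -3.0,
         "stream": "rtp://example.local/piano"},
    ],
}
print(json.dumps(scene, indent=2))  # a renderer maps objects to speakers or HRTFs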