LT Code Design for Inactivation Decoding
We present a simple model of inactivation decoding for LT codes which can be
used to estimate the decoding complexity as a function of the LT code degree
distribution. The model is shown to be accurate in a variety of settings of
practical importance. The proposed method makes it possible to numerically
optimize the degree distribution of an LT code so as to minimize the number
of inactivations required for decoding.
Comment: 6 pages, 7 figures
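As a concrete reference point for the kind of degree distribution being
optimized, here is a minimal Python sketch of Luby's robust soliton
distribution, the classical baseline for LT code design (the parameter
values c and delta are illustrative choices, not taken from the paper):

```python
import math

def robust_soliton(k, c=0.05, delta=0.5):
    """Robust soliton degree distribution mu(d), d = 1..k, for an LT code."""
    R = c * math.log(k / delta) * math.sqrt(k)
    # Ideal soliton component rho(d).
    rho = [0.0] * (k + 1)
    rho[1] = 1.0 / k
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    # Spike-and-tail component tau(d), with the spike at d = k/R.
    tau = [0.0] * (k + 1)
    spike = int(round(k / R))
    for d in range(1, min(spike, k + 1)):
        tau[d] = R / (d * k)
    if 1 <= spike <= k:
        tau[spike] = R * math.log(R / delta) / k
    # Normalize the sum of the two components into a distribution.
    beta = sum(rho[d] + tau[d] for d in range(1, k + 1))
    return [(rho[d] + tau[d]) / beta for d in range(1, k + 1)]

mu = robust_soliton(1000)
print(f"mean output degree: {sum((d + 1) * p for d, p in enumerate(mu)):.2f}")
```

The point of the paper is that a distribution like this need not be fixed a
priori: it can be numerically re-optimized to minimize inactivations.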
Topological Order and Memory Time in Marginally Self-Correcting Quantum Memory
We examine two proposals for marginally self-correcting quantum memory, the
cubic code by Haah and the welded code by Michnicki. In particular, we prove
explicitly that they lack topological order above zero temperature, as their
Gibbs ensembles can be prepared from classical ensembles via a short-depth
quantum circuit. Our proof technique naturally gives rise to the notion of
free energy associated with excitations. Further, we develop a framework for
an ergodic decomposition of Davies generators in CSS codes, which enables a
formal reduction to simpler classical memory problems. We then show, via the
Peierls argument, that the memory time of the welded code is doubly
exponential in inverse temperature. These results establish further
connections between thermal topological order and self-correction from the
viewpoint of free energy and quantum circuit depth.
Comment: 19 pages, 18 figures
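As a schematic rendering of the claimed scaling (the constants c_1, c_2 > 0
are placeholders; the paper derives the precise bound), a memory time that is
doubly exponential in the inverse temperature takes the form:

```latex
% Schematic only: c_1, c_2 > 0 are unspecified constants.
t_{\mathrm{mem}} \sim \exp\!\left( c_1 \, e^{c_2 \beta} \right),
\qquad \beta = 1/T .
```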
Design and Analysis of LT Codes with Decreasing Ripple Size
In this paper we propose a new design of LT codes, which decreases the amount
of necessary overhead in comparison to existing designs. The design focuses on
a parameter of the LT decoding process called the ripple size. This parameter
was also a key element in the design proposed in the original work by Luby.
Specifically, Luby argued that an LT code should provide a constant ripple size
during decoding. In this work we show that the ripple size should instead
decrease during decoding in order to reduce the necessary overhead. We first
motivate this claim with analytical results related to the redundancy within
an LT code. We then propose a new design procedure, which can provide any
desired, achievable decreasing ripple size. The new design procedure is
evaluated and compared to the current state of the art through simulations.
This reveals a significant increase in performance with respect to both
average overhead and error probability at any fixed overhead.
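To make the ripple concrete, the following minimal Python sketch (an
illustration of the peeling decoder, not the paper's design procedure)
generates a random LT code and records the ripple size, i.e. the number of
output symbols with exactly one unresolved neighbor, after each decoding
step:

```python
import random

def lt_ripple_trace(k, overhead, degree_dist, seed=0):
    """Peel-decode a random LT code; return (decoded count, ripple sizes)."""
    rng = random.Random(seed)
    n = int(k * (1 + overhead))          # number of received output symbols
    degrees, probs = zip(*degree_dist.items())
    # Only the neighbor sets matter for tracking the ripple.
    outputs = [set(rng.sample(range(k), min(d, k)))
               for d in rng.choices(degrees, probs, k=n)]
    decoded, trace = set(), []
    while True:
        ripple = [s for s in outputs if len(s - decoded) == 1]
        trace.append(len(ripple))
        if not ripple:
            break                         # decoder finished or stalled
        (sym,) = ripple[0] - decoded      # sole unresolved neighbor
        decoded.add(sym)
    return len(decoded), trace

# Illustrative degree distribution, not one from the paper.
done, trace = lt_ripple_trace(500, 0.1, {1: 0.1, 2: 0.5, 3: 0.2, 4: 0.2})
print(f"decoded {done}/500; ripple trace starts {trace[:10]}")
```

Plotting such traces for different degree distributions shows directly
whether the ripple stays constant, as Luby advocated, or decreases, as
proposed here.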
Energy Requirements for Quantum Data Compression and 1-1 Coding
By looking at quantum data compression in the second quantisation, we present
a new model for the efficient generation and use of variable-length codes. In
this picture, lossless data compression can be seen as the minimum energy
required to faithfully represent or transmit the classical information
contained within a quantum state.

In order to represent information we create quanta in some predefined modes
(i.e. frequencies), prepared in one of two possible internal states (the
information-carrying degrees of freedom). Data compression is then seen as
the selective annihilation of these quanta, whose energy is effectively
dissipated into the environment. Since any increase in the energy of the
environment is intricately linked to information loss and is subject to
Landauer's erasure principle, we use this principle to distinguish lossless
from lossy schemes and to suggest bounds on the efficiency of our lossless
compression protocol.

In line with the work of Boström and Felbinger, we also show that, given the
structure of quantum mechanics, the classical notions of prefix or uniquely
decipherable codes are unnecessarily restrictive for variable-length codes,
and that a 1-1 mapping is sufficient. In the absence of this restraint we
translate existing classical results on 1-1 coding to the quantum domain to
derive a new upper bound on the compression of quantum information. Finally,
we present a simple quantum circuit to implement our scheme.
Comment: 10 pages, 5 figures
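For the classical ingredient being carried over, here is a minimal Python
sketch (an illustration, not the paper's protocol) of 1-1 coding: the i-th
most probable symbol receives the i-th binary string in length order, so the
code is merely injective rather than prefix-free, and its expected length can
fall below the Shannon entropy:

```python
import math

def one_to_one_expected_length(probs):
    """Expected length of the optimal binary 1-1 (injective) code.

    Symbols sorted by decreasing probability get the binary strings
    0, 1, 00, 01, 10, 11, 000, ...; the i-th symbol (i = 1, 2, ...)
    therefore gets a codeword of length floor(log2(i + 1)).
    """
    return sum(p * math.floor(math.log2(i + 1))
               for i, p in enumerate(sorted(probs, reverse=True), start=1))

def entropy(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Dyadic example: the 1-1 expected length drops below the entropy,
# which no uniquely decipherable code can achieve.
probs = [2.0 ** -i for i in range(1, 16)]
probs[-1] *= 2  # adjust the tail so the probabilities sum to 1
print(f"H = {entropy(probs):.3f} bits, "
      f"E[L] = {one_to_one_expected_length(probs):.3f} bits")
```

Landauer's principle then prices each bit dissipated into the environment at
no less than kT ln 2, which is the bridge from code lengths to the energy
bounds discussed above.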
Inactivation Decoding of LT and Raptor Codes: Analysis and Code Design
In this paper we analyze LT and Raptor codes under inactivation decoding. A
first-order analysis is introduced, which provides the expected number of
inactivations for an LT code as a function of the output distribution, the
number of input symbols and the decoding overhead. The analysis is then
extended to the calculation of the distribution of the number of
inactivations. In both cases, random inactivation is assumed. The developed
analytical tools are then exploited to design LT and Raptor codes, enabling
tight control of the trade-off between decoding complexity and failure
probability. The accuracy of the approach is confirmed by numerical
simulations.
Comment: Accepted for publication in IEEE Transactions on Communications
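A crude Monte Carlo counterpart to that first-order analysis (a sketch under
the same random-inactivation assumption, not the paper's analytical method)
is to run the peeling decoder and, whenever the ripple empties before
decoding completes, inactivate a uniformly random unresolved input symbol:

```python
import random

def count_inactivations(k, overhead, degree_dist, seed=0):
    """Count the random inactivations needed to finish peeling an LT code."""
    rng = random.Random(seed)
    n = int(k * (1 + overhead))
    degrees, probs = zip(*degree_dist.items())
    outputs = [set(rng.sample(range(k), min(d, k)))
               for d in rng.choices(degrees, probs, k=n)]
    resolved, inactivations = set(), 0
    while len(resolved) < k:
        ripple = [s - resolved for s in outputs if len(s - resolved) == 1]
        if ripple:
            resolved.add(ripple[0].pop())   # peel one degree-1 output symbol
        else:
            # Ripple empty: inactivate a random unresolved input symbol; it
            # would later be recovered by Gaussian elimination (not simulated).
            inactivations += 1
            resolved.add(rng.choice(sorted(set(range(k)) - resolved)))
    return inactivations

dist = {1: 0.05, 2: 0.5, 3: 0.15, 4: 0.3}   # illustrative distribution only
runs = [count_inactivations(2000, 0.02, dist, seed=s) for s in range(10)]
print(f"mean inactivations over 10 runs: {sum(runs) / len(runs):.1f}")
```

Averaging such runs over many seeds gives an empirical estimate to compare
against the expected number of inactivations predicted by the analysis.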
FoCaLiZe: Inside an F-IDE
For years, Integrated Development Environments have demonstrated their
usefulness in easing the development of software. High-security or
safety-critical systems require proofs of compliance with standards, based on
analyses such as code review and, increasingly, formal proofs of conformance
to specifications. This implies mixing computational and logical aspects
throughout the development, which naturally raises the need for a notion of
Formal IDE. This paper examines the FoCaLiZe environment and explores the
implementation issues raised by the decision to provide a single language for
expressing specification properties, source code and machine-checked proofs,
while allowing incremental development and code reusability. Such features
create strong dependencies between functions, properties and proofs, and
impose a particular compilation scheme, which is described here. The
compilation results are runnable OCaml code and a checkable Coq term. All
these points are illustrated through a running example.
Comment: In Proceedings F-IDE 2014, arXiv:1404.578