Diagnosis of weaknesses in modern error correction codes: a physics approach
One of the main obstacles to the wider use of modern error-correction
codes is that, due to the complex behavior of their decoding algorithms, no
systematic method is known for characterizing the Bit-Error-Rate (BER). This
is especially true in the weak-noise regime where many systems operate and
where coding performance is difficult to estimate because errors become
vanishingly rare. We show how the instanton method of physics allows one to
solve the problem of BER analysis in the weak-noise range
by recasting it as a computationally tractable minimization problem.
Comment: 9 pages, 8 figures
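The minimization the abstract alludes to can be illustrated on a toy case. The sketch below is an assumption-laden illustration, not the paper's algorithm: for a BPSK-modulated repetition code under AWGN, the most likely noise configuration causing a decoding error (the "instanton") has a closed form, namely the projection of the transmitted point onto the pairwise decision boundary. For realistic codes with iterative decoders this projection becomes a nonconvex numerical optimization; the helper names here are hypothetical.

```python
import math

def instanton_noise(x, c):
    """Minimum-energy AWGN noise vector that moves transmitted codeword x
    onto the ML decision boundary with a competing codeword c.
    Closed form: move halfway along the difference direction (x - c)."""
    d = [xi - ci for xi, ci in zip(x, c)]
    z = [-0.5 * di for di in d]            # instanton noise realization
    energy = sum(zi * zi for zi in z)       # ||z||^2, the instanton "action"
    return z, energy

def ber_estimate(energy, sigma):
    """Saddle-point (instanton) scaling of the error probability:
    BER ~ exp(-E / (2 sigma^2)) up to a prefactor."""
    return math.exp(-energy / (2 * sigma ** 2))

n = 3
x = [1.0] * n      # BPSK image of the all-zeros repetition codeword
c = [-1.0] * n     # the only competing codeword of the repetition code
z, E = instanton_noise(x, c)
print(z, E)        # [-1.0, -1.0, -1.0] 3.0
```

The instanton energy (here 3.0, i.e. the blocklength) controls the exponential decay of the BER at weak noise, which is exactly the regime where Monte Carlo counting of errors becomes infeasible.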
Probabilistic Shaping for Finite Blocklengths: Distribution Matching and Sphere Shaping
In this paper, we provide for the first time a systematic comparison of
distribution matching (DM) and sphere shaping (SpSh) algorithms for short
blocklength probabilistic amplitude shaping. For asymptotically large
blocklengths, constant composition distribution matching (CCDM) is known to
generate the target capacity-achieving distribution. As the blocklength
decreases, however, the resulting rate loss diminishes the efficiency of CCDM.
We claim that for such short blocklengths and over the additive white Gaussian
noise (AWGN) channel, the objective of shaping should be reformulated as obtaining
the most energy-efficient signal space for a given rate (rather than matching
distributions). In light of this interpretation, multiset-partition DM (MPDM),
enumerative sphere shaping (ESS), and shell mapping (SM) are reviewed as
energy-efficient shaping techniques. Numerical results show that MPDM and SpSh
have smaller rate losses than CCDM. SpSh, whose sole objective is to maximize
energy efficiency, is shown to have the minimum rate loss of all. We
provide simulation results of the end-to-end decoding performance showing that
up to 1 dB improvement in power efficiency over uniform signaling can be
obtained with MPDM and SpSh at blocklengths around 200. Finally, we present a
discussion on the complexity of these algorithms from the perspective of
latency, storage, and computation.
Comment: 18 pages, 10 figures
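The rate loss the abstract refers to is concrete enough to compute directly. As a hedged sketch (the function name and the example compositions are illustrative, not taken from the paper): a CCDM with a fixed composition can address only the sequences of that exact composition, so its rate is the per-symbol logarithm of the multinomial coefficient, and the rate loss is the gap to the entropy of the target distribution. The gap shrinks as the blocklength grows, which is why CCDM suffers at short blocklengths.

```python
import math

def ccdm_rate_loss(counts):
    """Rate loss (bits/symbol) of constant-composition distribution
    matching for a composition given as a list of symbol counts."""
    n = sum(counts)
    # log2 of the number of length-n sequences with exactly this composition
    log2_seqs = math.log2(math.factorial(n)) - sum(
        math.log2(math.factorial(c)) for c in counts)
    rate = log2_seqs / n                                    # achievable rate
    entropy = -sum((c / n) * math.log2(c / n) for c in counts)  # target H(A)
    return entropy - rate                                   # always >= 0

# same target distribution (1/2, 1/4, 1/4) at two blocklengths
short = ccdm_rate_loss([8, 4, 4])        # n = 16
long_ = ccdm_rate_loss([800, 400, 400])  # n = 1600
print(short > long_ > 0)                  # True: loss vanishes as n grows
```

At n = 16 the loss is roughly a quarter of a bit per amplitude, while at n = 1600 it is well below a hundredth of a bit, matching the abstract's point that CCDM's efficiency degrades precisely in the short-blocklength regime where MPDM and SpSh are attractive.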
Distributed video coding for wireless video sensor networks: a review of the state-of-the-art architectures
Distributed video coding (DVC) is a relatively new video coding architecture that originated from two fundamental theorems, namely the Slepian–Wolf and Wyner–Ziv theorems. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews state-of-the-art DVC architectures with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.
Energy-Efficient Soft-Assisted Product Decoders
We implement a 1-Tb/s 0.63-pJ/bit soft-assisted product decoder in a 28-nm
technology. The decoder uses one bit of soft information to improve its net
coding gain by 0.2 dB, reaching 10.3-10.4 dB, which is similar to that of more
complex hard-decision staircase decoders.