Decoding of Non-Binary LDPC Codes Using the Information Bottleneck Method
Recently, a novel lookup table based decoding method for binary low-density
parity-check codes has attracted considerable attention. In this approach,
mutual-information maximizing lookup tables replace the conventional operations
of the variable nodes and the check nodes in message passing decoding.
Moreover, the exchanged messages are represented by integers with very small
bit width. A machine learning framework termed the information bottleneck
method is used to design the corresponding lookup tables. In this paper, we
extend this decoding principle from binary to non-binary codes. This is not a
straightforward extension, but requires a more sophisticated lookup table
design to cope with the arithmetic in higher-order Galois fields. Bit error
rate simulations show that our proposed scheme outperforms the log-max
decoding algorithm and operates close to sum-product decoding.

Comment: This paper was presented at the IEEE International Conference on
Communications (ICC'19) in Shanghai.
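As a toy illustration of the design principle, and not the decoder construction in the paper, the sketch below builds a coarse, mutual-information-preserving quantizer for a finely discretized BPSK/AWGN channel by greedily merging adjacent outputs; the channel parameters, grid, and greedy strategy are all illustrative assumptions.

```python
import numpy as np

def mutual_information(p_xt):
    # I(X;T) in bits from a joint distribution table p(x, t)
    px = p_xt.sum(axis=1, keepdims=True)
    pt = p_xt.sum(axis=0, keepdims=True)
    mask = p_xt > 0
    return np.sum(p_xt[mask] * np.log2(p_xt[mask] / (px @ pt)[mask]))

def greedy_ib_quantizer(p_xy, n_levels):
    """Merge adjacent channel outputs into n_levels clusters, each step
    choosing the merge that loses the least mutual information."""
    clusters = [p_xy[:, [j]] for j in range(p_xy.shape[1])]
    while len(clusters) > n_levels:
        best_mi, best_i = None, None
        for i in range(len(clusters) - 1):
            trial = clusters[:i] + [clusters[i] + clusters[i + 1]] + clusters[i + 2:]
            mi = mutual_information(np.hstack(trial))
            if best_mi is None or mi > best_mi:
                best_mi, best_i = mi, i
        clusters = (clusters[:best_i]
                    + [clusters[best_i] + clusters[best_i + 1]]
                    + clusters[best_i + 2:])
    return np.hstack(clusters)

# Toy BPSK-over-AWGN channel with a finely discretized output
y = np.linspace(-4, 4, 64)
sigma = 0.8
lik = np.stack([np.exp(-(y - s) ** 2 / (2 * sigma ** 2)) for s in (+1, -1)])
p_xy = 0.5 * lik / lik.sum(axis=1, keepdims=True)   # uniform input prior

q = greedy_ib_quantizer(p_xy, 8)   # compress to 3-bit messages
print(mutual_information(p_xy), mutual_information(q))
```

The point of the exercise is that 3-bit messages retain nearly all of the mutual information of the finely quantized channel, which is what makes small-bit-width lookup table decoding viable.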
The Study of Properties of n-D Analytic Signals and Their Spectra in Complex and Hypercomplex Domains
In the paper, two different representations of an n-dimensional (n-D) real signal u(x1,x2,…,xn) are investigated. The first one is the n-D complex analytic signal with a single-orthant spectrum, defined by Hahn in 1992 as the extension of the 1-D Gabor analytic signal. It is compared with two hypercomplex approaches: the known n-D Clifford analytic signal and the Cayley-Dickson analytic signal defined by the author in 2009. The signal-domain and frequency-domain definitions of these signals are presented and compared in 2-D and 3-D. Some new relations between the spectra in 2-D and 3-D hypercomplex domains are presented. The paper is illustrated with the example of a 2-D separable Cauchy pulse.
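The 1-D Gabor analytic signal that the paper generalizes is easy to sketch numerically: zero out the negative frequencies of the FFT and double the positive ones (this is the same construction scipy.signal.hilbert uses; the test signal here is an arbitrary choice).

```python
import numpy as np

def analytic_signal(u):
    """1-D analytic signal: suppress negative frequencies,
    double the positive ones (DC and Nyquist bins kept as-is)."""
    n = len(u)
    U = np.fft.fft(u)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(U * h)

t = np.linspace(0, 1, 256, endpoint=False)
u = np.cos(2 * np.pi * 8 * t)
z = analytic_signal(u)
print(np.allclose(z.real, u))       # real part recovers the original signal
print(np.allclose(np.abs(z), 1.0))  # envelope of a unit cosine is 1
```

For a cosine with a whole number of cycles the result is exactly exp(i2πft), i.e. a one-sided spectrum; the hypercomplex constructions in the paper extend this single-orthant idea to several dimensions.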
"Rewiring" Filterbanks for Local Fourier Analysis: Theory and Practice
This article describes a series of new results outlining equivalences between
certain "rewirings" of filterbank system block diagrams, and the corresponding
actions of convolution, modulation, and downsampling operators. This gives rise
to a general framework of reverse-order and convolution subband structures in
filterbank transforms, which we show to be well suited to the analysis of
filterbank coefficients arising from subsampled or multiplexed signals. These
results thus provide a means to understand time-localized aliasing and
modulation properties of such signals and their subband
representations, notions that are notably absent from the global viewpoint
afforded by Fourier analysis. The utility of filterbank rewirings is
demonstrated by the closed-form analysis of signals subject to degradations
such as missing data, spatially or temporally multiplexed data acquisition, or
signal-dependent noise, as often encountered in practical signal processing
applications.
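One of the classical block-diagram equivalences that such rewirings build on is the noble identity: filtering with H(z^M) followed by downsampling by M equals downsampling by M followed by filtering with H(z). A quick numerical check for M = 2 (the filter and signal are arbitrary toy choices, not taken from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
h = np.array([1.0, 2.0, 1.0]) / 4.0   # a short lowpass filter

# Upsampled filter H(z^2): insert a zero between taps
h_up = np.zeros(2 * len(h) - 1)
h_up[::2] = h

# Filtering by H(z^2), then downsampling by 2 ...
left = np.convolve(x, h_up)[::2]
# ... equals downsampling by 2, then filtering by H(z)
right = np.convolve(x[::2], h)

print(np.allclose(left, right))   # True
```

Writing (x * h_up)[2n] = Σ_m h[m] x[2n − 2m] makes the equality immediate; equivalences of this kind let aliasing and modulation effects be tracked locally through a filterbank diagram.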
Bayesian photon counting with electron-multiplying charge coupled devices (EMCCDs)
The EMCCD is a CCD type that delivers fast readout and negligible detector
noise, making it an ideal detector for high frame rate applications. Because of
the very low detector noise, this detector can potentially count single
photons. Considering that an EMCCD has a limited dynamic range and negligible
detector noise, one would typically apply an EMCCD in such a way that multiple
images of the same object are available, for instance, in so-called lucky
imaging. The problem of counting photons can then conveniently be viewed as
statistical inference of flux or photon rates, based on a stack of images. A
simple probabilistic model for the output of an EMCCD is developed. Based on
this model and the prior knowledge that photons are Poisson distributed, we
derive two methods for estimating the most probable flux per pixel, one based
on thresholding, and another based on full Bayesian inference. We find that it
is indeed possible to derive such expressions, and tests of these methods show
that estimating fluxes with only shot noise is possible, up to fluxes of about
one photon per pixel per readout.

Comment: Fixed a few typos compared to the published version.
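A minimal Monte Carlo sketch of the thresholding idea, not the paper's derivation: model the amplified output of a single photon as roughly exponential with the EM gain as its mean, threshold at 5 sigma of the readout noise, and invert the Poisson zero-frame fraction with a correction for photons whose amplified signal falls below the threshold. The gain, noise level, flux, and frame count below are assumed toy values.

```python
import numpy as np

rng = np.random.default_rng(1)
gain, sigma_read = 300.0, 10.0   # assumed EM gain and readout noise (ADU)
lam, n_frames = 0.3, 20000       # true photon rate per pixel per frame

# Simulate a stack of frames for one pixel: Poisson photon counts,
# each photon amplified by a roughly exponential EM gain, plus readout noise
k = rng.poisson(lam, n_frames)
signal = np.array([rng.exponential(gain, n).sum() for n in k])
frames = signal + rng.normal(0.0, sigma_read, n_frames)

# Threshold method: declare "photon(s) present" above 5 sigma of readout noise
T = 5.0 * sigma_read
p_detect = np.exp(-T / gain)      # chance one amplified photon clears T
frac_dark = np.mean(frames < T)
lam_hat = -np.log(frac_dark) / p_detect

print(lam, lam_hat)
```

The correction assumes low flux (frames rarely hold more than one photon), which is exactly the regime the abstract identifies as the method's limit of about one photon per pixel per readout.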
Super-Resolution in Phase Space
This work considers the problem of super-resolution. The goal is to resolve a
Dirac distribution from knowledge of its discrete, low-pass, Fourier
measurements. Classically, such problems have been addressed with parameter
estimation methods. Recently, it has been shown that convex-optimization based
formulations facilitate a continuous time solution to the super-resolution
problem. Here we treat super-resolution from low-pass measurements in Phase
Space. The Phase Space transformation parametrically generalizes a number of
well known unitary mappings such as the Fractional Fourier, Fresnel, Laplace
and Fourier transforms. Consequently, our work provides a general super-
resolution strategy which is backward compatible with the usual Fourier domain
result. We consider low-pass measurements of Dirac distributions in Phase Space
and show that the super-resolution problem can be cast as Total Variation
minimization. Remarkably, even though our setting is quite general, the bounds
on the minimum separation distance of Dirac distributions are comparable to
those of existing methods.

Comment: 10 pages; short paper in part accepted to ICASSP 201
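The continuous-time TV-minimization result has a familiar discrete analogue: l1 minimization recovers well-separated spikes from their low-frequency DFT coefficients. A toy sketch using plain ISTA on a grid (a discretization for illustration, not the paper's continuous formulation; grid size, cutoff, spike locations, and regularization weight are all assumed):

```python
import numpy as np

n, fc = 128, 10                        # grid size and low-pass cutoff (toy)
freqs = np.arange(-fc, fc + 1)
A = np.exp(-2j * np.pi * np.outer(freqs, np.arange(n)) / n)

# Ground truth: three well-separated on-grid spikes
x_true = np.zeros(n)
x_true[[10, 50, 100]] = [1.0, 0.8, 1.2]
y = A @ x_true                         # noiseless low-pass Fourier data

# ISTA for min 0.5*||Ax - y||^2 + lam*||x||_1  (step = 1/||A||^2 = 1/n)
lam, step = 0.1, 1.0 / n
x = np.zeros(n)
for _ in range(20000):
    g = x - step * (A.conj().T @ (A @ x - y)).real
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

support = np.sort(np.argsort(np.abs(x))[-3:])
print(support.tolist())                # recovered spike locations
```

Because the spikes are separated by well over 2/fc in normalized units, the comparable continuous-time guarantees suggest exact recovery, and the discrete solver indeed concentrates on the true support.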
Linear stochastic systems: a white noise approach
Using the white noise setting, in particular the Wick product, the Hermite
transform, and the Kondratiev space, we present a new approach to study linear
stochastic systems, where randomness is also included in the transfer function.
We prove BIBO type stability theorems for these systems, both in the discrete
and continuous time cases. We also consider the case of dissipative systems for
both discrete and continuous time systems. We further study -
stability in the discrete time case, and -
stability in the continuous time case.
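A key computational fact behind this approach is that the Hermite transform turns the Wick product into ordinary polynomial multiplication. A minimal sketch for a single Gaussian variable (the paper works in the full Kondratiev space; this one-variable coefficient representation is a simplifying assumption): write an element Σ c_n H_n(W) of the chaos expansion as its coefficient vector, and the Wick product becomes a convolution.

```python
import numpy as np

def wick(c1, c2):
    """Wick product of two one-variable chaos elements given by their
    Hermite coefficient vectors: the Hermite transform maps sum c_n H_n(W)
    to the polynomial sum c_n z^n, so Wick multiplication is convolution."""
    return np.convolve(c1, c2)

X = np.array([1.0, 2.0])    # 1 + 2*W        (W standard Gaussian, H1(W) = W)
Y = np.array([3.0, -1.0])   # 3 - W
Z = wick(X, Y)
print(Z)                    # coefficients of 3 + 5*H1(W) - 2*H2(W)
```

Note that the zeroth coefficient of the product is the product of the zeroth coefficients, reflecting that the Wick product preserves expectations, one reason it is the natural multiplication for randomness entering a transfer function.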