Sampling curves with finite rate of innovation
In this paper, we extend the theory of sampling signals with finite rate of innovation (FRI) to a specific class of two-dimensional curves, which are defined implicitly as the zeros of a mask function. Here the mask function has a parametric representation as a weighted sum of a finite number of complex exponentials and, therefore, has finite rate of innovation. An associated edge image, which is discontinuous on the predefined parametric curve, is proved to satisfy a set of linear annihilation equations. We show that it is possible to reconstruct the parameters of the curve (i.e., to detect the exact edge positions in the continuous domain) from the annihilation equations. Robust reconstruction algorithms are also developed to cope with model mismatch. Moreover, the annihilation equations that characterize the curve are linear constraints that can easily be exploited in optimization problems for further image processing (e.g., image up-sampling). We demonstrate one potential application of the annihilation algorithm with examples in edge-preserving interpolation. Experimental results with both synthetic curves and edges of natural images clearly show the effectiveness of the annihilation constraint in preserving sharp edges and improving SNRs.
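The annihilating-filter step at the heart of FRI recovery can be sketched in one dimension. This is a hedged illustration of the classical Prony-type system, not the paper's code; the function names and problem sizes are ours.

```python
import numpy as np

def annihilating_filter_roots(s, K):
    """Recover the K exponential parameters u_m from moments
    s[k] = sum_m a_m * u_m**k via the annihilation (Prony) system."""
    N = len(s)
    # Toeplitz system: each row encodes sum_j h[j] * s[k + K - j] = 0
    T = np.array([[s[k + K - j] for j in range(K + 1)] for k in range(N - K)])
    # The annihilating filter h spans the null space of T
    _, _, Vh = np.linalg.svd(T)
    h = Vh[-1]
    return np.roots(h)  # the roots of h are the parameters u_m

# Example: two exponentials with parameters 0.9 and -0.5
u_true = np.array([0.9, -0.5])
a = np.array([1.0, 2.0])
s = np.array([np.sum(a * u_true**k) for k in range(8)])
roots = np.sort(annihilating_filter_roots(s, 2).real)
# roots ≈ [-0.5, 0.9]
```

In the paper this idea is lifted to two dimensions, where the annihilation equations constrain the exponential parametrization of the curve rather than a 1-D pulse stream.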
Sampling and Reconstruction of Shapes with Algebraic Boundaries
We present a sampling theory for a class of binary images with finite rate of
innovation (FRI). Every image in our model is the restriction of
\mathds{1}_{\{p\leq0\}} to the image plane, where \mathds{1} denotes the
indicator function and p is some real bivariate polynomial. This means
that the boundaries in the image form a subset of an algebraic curve with
the implicit polynomial p. We show that the image parameters --i.e., the
polynomial coefficients-- satisfy a set of linear annihilation equations with
the coefficients being the image moments. The inherent sensitivity of the
moments to noise makes the reconstruction process numerically unstable and
narrows the choice of the sampling kernels to polynomial reproducing kernels.
As a remedy to these problems, we replace conventional moments with more stable
\emph{generalized moments} that are adjusted to the given sampling kernel. The
benefits are threefold: (1) relaxed requirements on the sampling kernels,
(2) annihilation equations that remain numerically robust, and (3) an
extension of the results to images with unbounded boundaries. We
further reduce the sensitivity of the reconstruction process to noise by taking
into account the sign of the polynomial at certain points, and sequentially
enforcing measurement consistency. We consider various numerical experiments to
demonstrate the performance of our algorithm in reconstructing binary images,
including low to moderate noise levels and a range of realistic sampling
kernels.
Comment: 12 pages, 14 figures
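The linear annihilation system for the polynomial coefficients can be illustrated with a simplified analogue that builds the system directly from boundary samples rather than from (generalized) image moments. All names here are illustrative, not the paper's.

```python
import numpy as np

def fit_implicit_polynomial(pts, degree):
    """Recover coefficients of a bivariate polynomial p with p(x_i, y_i) = 0
    as the null-space vector of a linear (annihilation-style) system."""
    # Monomial basis x^a * y^b with a + b <= degree
    exps = [(a, b) for a in range(degree + 1)
                   for b in range(degree + 1 - a)]
    M = np.array([[x**a * y**b for (a, b) in exps] for (x, y) in pts])
    _, _, Vh = np.linalg.svd(M)
    return exps, Vh[-1]  # null-space vector holds the coefficients of p

# Example: points on the unit circle, so p(x, y) ∝ x^2 + y^2 - 1
t = np.linspace(0, 2 * np.pi, 20, endpoint=False)
pts = np.column_stack([np.cos(t), np.sin(t)])
exps, c = fit_implicit_polynomial(pts, 2)
p = lambda x, y: sum(ci * x**a * y**b for ci, (a, b) in zip(c, exps))
# p vanishes (up to scale) on the circle, e.g. p(1, 0) ≈ 0
```

The paper's contribution is to obtain an equivalent linear system from samples of the image itself, via generalized moments matched to the sampling kernel, rather than from explicit boundary points.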
Shapes From Pixels
Continuous-domain visual signals are usually captured as discrete (digital)
images. This operation is not invertible in general, in the sense that the
continuous-domain signal cannot be exactly reconstructed based on the discrete
image, unless it satisfies certain constraints (\emph{e.g.}, bandlimitedness).
In this paper, we study the problem of recovering shape images with smooth
boundaries from a set of samples. The reconstructed image is constrained
to regenerate the same samples (consistency), as well as to form a shape
(bilevel) image. We initially formulate the reconstruction technique by
minimizing the shape perimeter over the set of consistent binary shapes. Next,
we relax the non-convex shape constraint to transform the problem into
minimizing the total variation over consistent non-negative-valued images. We
also introduce a requirement (called reducibility) that guarantees equivalence
between the two problems. We illustrate that the reducibility property
effectively sets a requirement on the minimum sampling density. One can draw
an analogy between the reducibility property and the so-called restricted
isometry property (RIP) in compressed sensing, which establishes the
equivalence of \ell_0 minimization with the relaxed \ell_1 minimization. We
also evaluate the performance of the relaxed alternative in various numerical
experiments.
Comment: 13 pages, 14 figures
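The relaxed problem, minimizing total variation over consistent images, can be sketched with a simple alternating scheme. The smoothed-TV gradient step and step size below are our illustrative choices, not the paper's algorithm; only the consistency constraint (block averages reproducing the samples) follows the text.

```python
import numpy as np

def tv(u):
    """Anisotropic total variation of a 2-D image."""
    return np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()

def project_consistent(u, samples, B):
    """Orthogonal projection onto {u : mean of each BxB block = sample}."""
    v = u.copy()
    m, n = samples.shape
    for i in range(m):
        for j in range(n):
            blk = v[i*B:(i+1)*B, j*B:(j+1)*B]
            blk += samples[i, j] - blk.mean()  # in-place shift of the block
    return v

def smooth_tv_grad(u, eps=1e-3):
    """Gradient of a smoothed TV surrogate (illustrative)."""
    gx = np.diff(u, axis=0, append=u[-1:])
    gy = np.diff(u, axis=1, append=u[:, -1:])
    nx = gx / np.sqrt(gx**2 + eps)
    ny = gy / np.sqrt(gy**2 + eps)
    # negative divergence of the normalized gradient field
    return -(np.diff(nx, axis=0, prepend=nx[:1]) +
             np.diff(ny, axis=1, prepend=ny[:, :1]))

# Toy shape: a 16x16 disk, measured through 4x4 block averages
Y, X = np.mgrid[:16, :16]
shape = ((X - 8)**2 + (Y - 8)**2 <= 25).astype(float)
samples = shape.reshape(4, 4, 4, 4).mean(axis=(1, 3))

u = project_consistent(np.zeros((16, 16)), samples, 4)
for _ in range(200):
    u = project_consistent(u - 0.05 * smooth_tv_grad(u), samples, 4)
```

Ending each iteration with the projection keeps the iterate exactly consistent with the measured samples, which is the constraint the relaxed formulation enforces.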
Sampling and Super-resolution of Sparse Signals Beyond the Fourier Domain
Recovering a sparse signal from its low-pass projections in the Fourier
domain is a problem of broad interest in science and engineering and is
commonly referred to as super-resolution. In many cases, however, the Fourier
domain may not be the natural choice. For example, in holography, low-pass
projections of sparse signals are obtained in the Fresnel domain. Similarly,
time-varying system identification relies on low-pass projections on the space
of linear frequency modulated signals. In this paper, we study the recovery of
sparse signals from low-pass projections in the Special Affine Fourier
Transform domain (SAFT). The SAFT parametrically generalizes a number of well
known unitary transformations that are used in signal processing and optics. In
analogy with Shannon's sampling framework, we specify sampling theorems for
recovery of sparse signals considering three specific cases: (1) sampling with
arbitrary, bandlimited kernels, (2) sampling with smooth, time-limited kernels
and, (3) recovery from Gabor transform measurements linked with the SAFT
domain. Our work offers a unifying perspective on the sparse sampling problem
which is compatible with the Fourier, Fresnel and Fractional Fourier domain
based results. In deriving our results, we introduce the SAFT series (analogous
to the Fourier series) and the short time SAFT, and study convolution theorems
that establish a convolution--multiplication property in the SAFT domain.
Comment: 42 pages, 3 figures, manuscript under review
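With zero offset parameters, the SAFT reduces to the linear canonical transform (LCT), which already subsumes the Fourier, Fresnel, and Fractional Fourier transforms. A direct-quadrature sketch under one common LCT kernel convention (the paper's SAFT additionally carries offset parameters, which we omit here):

```python
import numpy as np

def lct(f, t, u, a, b, d):
    """Evaluate the LCT of samples f on grid t at points u by Riemann sum."""
    dt = t[1] - t[0]
    scale = np.sqrt(1.0 / (2j * np.pi * b))
    # Quadratic-phase kernel of the linear canonical transform
    kernel = np.exp(1j * (d * u[:, None]**2
                          - 2 * u[:, None] * t[None, :]
                          + a * t[None, :]**2) / (2 * b))
    return scale * (kernel * f[None, :]).sum(axis=1) * dt

t = np.linspace(-10, 10, 4001)
f = np.exp(-t**2 / 2)                    # Gaussian test signal
u = np.array([0.0, 1.0])
F = lct(f, t, u, a=0.0, b=1.0, d=0.0)    # this parameter choice is the Fourier transform
# |F| ≈ [1, exp(-1/2)]: the Gaussian is its own Fourier transform
```

Setting (a, b, d) = (0, 1, 0) recovers the ordinary Fourier transform up to a unimodular constant, which is the sense in which the SAFT results unify the Fourier, Fresnel, and Fractional Fourier cases.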