Polarized wavelets and curvelets on the sphere
The statistics of the temperature anisotropies in the primordial cosmic
microwave background radiation field provide a wealth of information for
cosmology and for estimating cosmological parameters. Even sharper inference
should stem from the study of maps of the polarization state of the
CMB radiation. Measuring the extremely weak CMB polarization signal requires
very sensitive instruments. The full-sky maps of both temperature and
polarization anisotropies of the CMB to be delivered by the upcoming Planck
Surveyor satellite experiment are hence eagerly awaited.
Multiscale methods, such as isotropic wavelets, steerable wavelets, or
curvelets, have been proposed in the past to analyze the CMB temperature map.
In this paper, we contribute to enlarging the set of available transforms for
polarized data on the sphere. We describe a set of new multiscale
decompositions for polarized data on the sphere, including decimated and
undecimated Q-U or E-B wavelet transforms and Q-U or E-B curvelets. The
proposed transforms are invertible and so allow for applications in data
restoration and denoising.
Comment: Accepted. Full paper with figures available at
http://jstarck.free.fr/aa08_pola.pd
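The abstract gives no implementation details; as a loose flat-domain illustration of the undecimated multiscale transforms it refers to, here is a minimal 1-D à trous decomposition with the B3-spline kernel, using numpy only. The spherical, polarized (Q-U/E-B) machinery of the paper is not attempted, and the function names are ours.

```python
import numpy as np

def atrous_decompose(signal, n_scales=3):
    """Undecimated 'a trous' wavelet decomposition with the B3-spline kernel.
    Returns the detail layers plus the final smooth approximation; summing
    all layers reconstructs the input exactly (the transform is invertible)."""
    h = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    c = signal.astype(float)
    layers = []
    for j in range(n_scales):
        # dilate the kernel by inserting 2^j - 1 zeros between its taps
        step = 2 ** j
        k = np.zeros((len(h) - 1) * step + 1)
        k[::step] = h
        smooth = np.convolve(np.pad(c, len(k) // 2, mode="reflect"),
                             k, mode="valid")
        layers.append(c - smooth)   # detail (wavelet) coefficients at scale j
        c = smooth
    layers.append(c)                # coarsest approximation
    return layers

def atrous_reconstruct(layers):
    """Inverse transform: the layers telescope back to the original signal."""
    return np.sum(layers, axis=0)
```

Denoising with such a transform then amounts to thresholding the detail layers before summing them back.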
Incidence Geometries and the Pass Complexity of Semi-Streaming Set Cover
Set cover, over a universe of size , may be modelled as a data-streaming
problem, where the sets that comprise the instance are to be read one by
one. A semi-streaming algorithm is allowed only space to process this stream. For each , we give a very
simple deterministic algorithm that makes passes over the input stream and
returns an appropriately certified -approximation to the
optimum set cover. More importantly, we proceed to show that this approximation
factor is essentially tight, by showing that a factor better than
is unachievable for a -pass semi-streaming
algorithm, even allowing randomisation. In particular, this implies that
achieving a -approximation requires
passes, which is tight up to the factor. These results extend to a
relaxation of the set cover problem where we are allowed to leave an
fraction of the universe uncovered: the tight bounds on the best
approximation factor achievable in passes turn out to be
. Our lower bounds are based
on a construction of a family of high-rank incidence geometries, which may be
thought of as vast generalisations of affine planes. This construction, based
on algebraic techniques, appears flexible enough to find other applications and
is therefore interesting in its own right.
Comment: 20 pages
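The approximation factors and thresholds in this abstract were lost in extraction; as a generic illustration of pass-efficient streaming set cover (not the paper's certified algorithm), here is a threshold-greedy multi-pass sketch: each pass keeps any set whose marginal coverage meets a geometrically decreasing threshold, and the final pass (threshold 1) guarantees a full cover on feasible instances. The schedule `n ** (1 - j / passes)` is purely illustrative.

```python
def streaming_set_cover(stream_sets, universe, passes=3):
    """Threshold-based multi-pass greedy for streaming set cover (a sketch).
    Pass j admits any set covering at least t_j uncovered elements, with
    thresholds falling geometrically from ~n down to 1; the last pass keeps
    every still-useful set, so a feasible instance is fully covered."""
    n = len(universe)
    uncovered = set(universe)
    solution = []
    for j in range(1, passes + 1):
        t = max(1.0, n ** (1 - j / passes))  # threshold for this pass
        for i, s in enumerate(stream_sets):  # one sequential pass over stream
            gain = len(uncovered & s)
            if gain >= t:
                solution.append(i)
                uncovered -= s
        if not uncovered:
            break
    return solution, uncovered
```

Only the indices of chosen sets and the uncovered-element set are stored, keeping the working memory small relative to the stream.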
Wavelet-based denoising for 3D OCT images
Optical coherence tomography (OCT) produces high-resolution medical images based on the spatial and temporal coherence of the optical waves backscattered from the scanned tissue. However, the same coherence also introduces speckle noise, which degrades the quality of the acquired images.
In this paper we propose a technique for noise reduction in 3D OCT images, where the 3D volume is treated as a sequence of 2D images, i.e., 2D slices in the depth-lateral projection plane. In the proposed method we first perform recursive temporal filtering along the estimated motion trajectory between the 2D slices, using a noise-robust motion estimation/compensation scheme previously proposed for video denoising. The temporal filtering scheme reduces the noise level and adapts the motion compensation to it. Subsequently, we apply a spatial filter for speckle reduction to remove the noise remaining in the 2D slices. In this scheme the spatial (2D) speckle nature of the noise in OCT is modeled and used for spatially adaptive denoising. Both the temporal and the spatial filter are wavelet-based techniques, with two resolution scales used in the temporal filter and four in the spatial one.
The evaluation of the proposed denoising approach is done on demodulated 3D OCT images from different sources and of different resolutions. Phantom OCT images were used to optimize the parameters for best denoising performance. The denoising performance of the proposed method was measured in terms of SNR, edge sharpness preservation and contrast-to-noise ratio. A comparison was made to state-of-the-art methods for noise reduction in 2D OCT images, where the proposed approach proved advantageous in terms of both objective and subjective quality measures.
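As a rough sketch of the two-stage scheme described above, assume a simple recursive temporal filter along the slice axis and a one-level 2-D Haar transform with soft thresholding standing in for the paper's motion-compensated, speckle-model-driven filters; all names and parameters here are ours.

```python
import numpy as np

def haar2_denoise(img, thresh):
    """One-level 2-D Haar transform, soft-threshold the detail bands, invert.
    A stand-in for the multi-scale spatial speckle filter in the paper.
    Assumes even image dimensions."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation band
    lh = (a + b - c - d) / 4.0   # detail bands
    hl = (a - b + c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    soft = lambda w: np.sign(w) * np.maximum(np.abs(w) - thresh, 0.0)
    lh, hl, hh = soft(lh), soft(hl), soft(hh)
    out = np.empty_like(img, dtype=float)
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def denoise_volume(volume, alpha=0.5, thresh=0.1):
    """Recursive temporal filtering along the slice axis, then spatial
    wavelet denoising of each slice (motion compensation omitted)."""
    out = []
    prev = volume[0].astype(float)
    for slice_ in volume:
        prev = alpha * prev + (1 - alpha) * slice_  # recursive temporal filter
        out.append(haar2_denoise(prev, thresh))
    return np.stack(out)
```

With a zero threshold the Haar stage is an exact identity, which makes the invertibility of the spatial transform easy to verify.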
Submodular Maximization Meets Streaming: Matchings, Matroids, and More
We study the problem of finding a maximum matching in a graph given by an
input stream listing its edges in some arbitrary order, where the quantity to
be maximized is given by a monotone submodular function on subsets of edges.
This problem, which we call maximum submodular-function matching (MSM), is a
natural generalization of maximum weight matching (MWM), which is in turn a
generalization of maximum cardinality matching (MCM). We give two incomparable
algorithms for this problem with space usage falling in the semi-streaming
range---they store only edges, using working memory---that
achieve approximation ratios of in a single pass and in
passes respectively. The operations of these algorithms
mimic those of Zelke's and McGregor's respective algorithms for MWM; the
novelty lies in the analysis for the MSM setting. In fact we identify a general
framework for MWM algorithms that allows this kind of adaptation to the broader
setting of MSM.
In the sequel, we give generalizations of these results where the
maximization is over "independent sets" in a very general sense. This
generalization captures hypermatchings in hypergraphs as well as independence
in the intersection of multiple matroids.
Comment: 18 pages
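The algorithms themselves are not spelled out in the abstract; as background, here is a hedged sketch of the kind of one-pass preemptive greedy used for maximum *weight* matching (in the spirit of the MWM algorithms the paper adapts), with plain numeric weights. The paper's contribution is extending such schemes and their analyses to monotone submodular valuations, which this sketch does not attempt; the threshold factor `gamma` is illustrative.

```python
def stream_matching(edges, gamma=1.0):
    """One-pass preemptive greedy for linear weighted matching: a new edge
    (u, v, w) evicts its conflicting matched edges only if w exceeds
    (1 + gamma) times their total weight.  It stores at most one edge per
    matched vertex, i.e. O(n) edges: semi-streaming space."""
    matched = {}  # vertex -> its matched edge (u, v, w)
    for u, v, w in edges:
        conflicts = {matched[x] for x in (u, v) if x in matched}
        if w > (1 + gamma) * sum(e[2] for e in conflicts):
            for eu, ev, _ in conflicts:       # evict the conflicting edges
                del matched[eu]; del matched[ev]
            matched[u] = matched[v] = (u, v, w)
    return set(matched.values())              # each edge stored twice; dedupe
```

The eviction threshold trades off greed against regret: a large `gamma` keeps early edges, a small one chases weight aggressively.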
Principled Design and Implementation of Steerable Detectors
We provide a complete pipeline for the detection of patterns of interest in
an image. In our approach, the patterns are assumed to be adequately modeled by
a known template, and are located at unknown position and orientation. We
propose a continuous-domain additive image model, where the analyzed image is
the sum of the template and an isotropic background signal with self-similar
isotropic power spectrum. From a single template-and-background pair, the
method learns a steerable filter that is optimal in the SNR sense: it responds
strongly to the template while optimally decoupling from the background model.
The learned filter then enables a fast detection process, in which the unknown
orientation is estimated by exploiting steerability properties. In practice,
the implementation requires discretizing the continuous-domain formulation on
polar grids, which is done using radial B-splines. We demonstrate the
practical usefulness of our method on a variety of template approximation and
pattern detection experiments.
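The paper's detector is learned from a template; as background on the steerability it exploits, here is the classic first-order example (Freeman-Adelson style, not the paper's method): two basis responses suffice to synthesize the response at any orientation and to find the maximizing angle in closed form. Plain image gradients stand in here for Gaussian-derivative filter responses.

```python
import numpy as np

def steerable_orientation_map(image):
    """First-order steerable detection: from the two basis responses Rx, Ry,
    the response at any angle theta is cos(theta)*Rx + sin(theta)*Ry,
    maximized in closed form at theta* = atan2(Ry, Rx), where the response
    equals hypot(Rx, Ry).  np.gradient returns the row-axis derivative first."""
    ry, rx = np.gradient(image.astype(float))
    theta_star = np.arctan2(ry, rx)   # orientation of maximum response
    max_resp = np.hypot(rx, ry)       # response value at that orientation
    return theta_star, max_resp

def steered_response(image, theta):
    """Response of the filter steered to angle theta, synthesized from the
    same two basis responses (no re-filtering needed)."""
    ry, rx = np.gradient(image.astype(float))
    return np.cos(theta) * rx + np.sin(theta) * ry
```

On a linear ramp the estimated orientation is constant and the steered response at that orientation matches the closed-form maximum everywhere.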
Lorentzian Iterative Hard Thresholding: Robust Compressed Sensing with Prior Information
Commonly employed reconstruction algorithms in compressed sensing (CS) use
the norm as the metric for the residual error. However, it is well-known
that least squares (LS) based estimators are highly sensitive to outliers
present in the measurement vector leading to a poor performance when the noise
no longer follows the Gaussian assumption but, instead, is better characterized
by heavier-than-Gaussian tailed distributions. In this paper, we propose a
robust iterative hard thresholding (IHT) algorithm for reconstructing sparse
signals in the presence of impulsive noise. To address this problem, we use a
Lorentzian cost function instead of the cost function employed by the
traditional IHT algorithm. We also modify the algorithm to incorporate prior
signal information in the recovery process. Specifically, we study the case of
CS with partially known support. The proposed algorithm is a fast method with
computational load comparable to the LS based IHT, whilst having the advantage
of robustness against heavy-tailed impulsive noise. Sufficient conditions for
stability are studied and a reconstruction error bound is derived. We also
derive sufficient conditions for stable sparse signal recovery with partially
known support. Theoretical analysis shows that including prior support
information relaxes the conditions for successful reconstruction. Simulation
results demonstrate that the Lorentzian-based IHT algorithm significantly
outperforms commonly employed sparse reconstruction techniques in impulsive
environments, while providing comparable performance in less demanding,
light-tailed environments. Numerical results also demonstrate that the
partially known support inclusion improves the performance of the proposed
algorithm, thereby requiring fewer samples to yield an approximate
reconstruction.
Comment: 28 pages, 9 figures, accepted in IEEE Transactions on Signal
Processing
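A minimal sketch of a Lorentzian IHT iteration, assuming the standard recipe the abstract describes (a gradient step on a Lorentzian cost followed by hard thresholding); the exact step size, scaling, and the partially-known-support variant of the paper are not reproduced, and all names are ours.

```python
import numpy as np

def lorentzian_iht(y, A, s, gamma=1.0, mu=None, iters=100, x_init=None):
    """Lorentzian IHT sketch: gradient step on the Lorentzian cost
    sum(log(1 + r_i^2 / gamma^2)), then hard-threshold to the s
    largest-magnitude entries.  The influence function r/(gamma^2 + r^2)
    saturates for large residuals, so outliers barely move the iterate,
    unlike the linear influence of the least-squares cost."""
    m, n = A.shape
    if mu is None:
        mu = 0.5 / np.linalg.norm(A, 2) ** 2        # conservative step size
    x = np.zeros(n) if x_init is None else x_init.astype(float)
    for _ in range(iters):
        r = y - A @ x
        psi = 2 * gamma**2 * r / (gamma**2 + r**2)  # ~2r for small residuals
        x = x + mu * (A.T @ psi)                    # robust gradient step
        mask = np.zeros(n, dtype=bool)
        mask[np.argsort(np.abs(x))[-s:]] = True     # keep top-s entries
        x[~mask] = 0.0
    return x
```

Note that an exact s-sparse solution is a fixed point: a zero residual gives a zero influence, and hard thresholding preserves the s nonzeros.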