Self-Similar Anisotropic Texture Analysis: the Hyperbolic Wavelet Transform Contribution
Textures in images can often be well modeled using self-similar processes,
while they may at the same time display anisotropy. The present contribution
thus aims at studying self-similarity and anisotropy jointly, by focusing on a
specific classical class of Gaussian anisotropic self-similar processes. It is
first shown that accurate joint estimates of the anisotropy and
self-similarity parameters are obtained by replacing the standard 2D discrete
wavelet transform with the hyperbolic wavelet transform, which permits the use
of different dilation factors along the horizontal and vertical axes. Defining
anisotropy requires a reference direction, which need not a priori match the
horizontal and vertical axes along which the images are digitized; this
discrepancy defines a rotation angle. Second, we show that this rotation angle
can be jointly estimated. Third, a nonparametric bootstrap-based procedure is
described that provides confidence intervals in addition to the estimates
themselves and enables the construction of an isotropy test that can be
applied to a single texture image. Fourth, the robustness and versatility of
the proposed analysis are illustrated by applying it to a large variety of
isotropic and anisotropic self-similar fields. As an illustration, we show
that anisotropy built into the self-similarity can be disentangled from
isotropic self-similarity onto which an anisotropic trend has been
superimposed.
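As a minimal sketch of the core idea (not the authors' estimator), the hyperbolic transform uses independent dilation factors 2**j1 and 2**j2 along the two axes; for an anisotropic self-similar field, log2 of the per-scale-pair coefficient energy is affine in (j1, j2), and the two slopes separate the self-similarity and anisotropy parameters. A pure-Python tensor-product Haar version, with white noise standing in for a texture:

```python
import random

def hyperbolic_haar_coeffs(img, j1, j2):
    """Tensor-product Haar detail coefficients with independent dilation
    factors 2**j1 (rows) and 2**j2 (columns) -- the hyperbolic idea."""
    s1, s2 = 2 ** j1, 2 ** j2
    n, m = len(img), len(img[0])

    def block_mean(r0, c0):
        total = 0.0
        for i in range(r0, r0 + s1):
            for j in range(c0, c0 + s2):
                total += img[i][j]
        return total / (s1 * s2)

    coeffs = []
    for r in range(0, n - 2 * s1 + 1, 2 * s1):
        for c in range(0, m - 2 * s2 + 1, 2 * s2):
            # detail in both directions: +-/-+ quadrant pattern
            coeffs.append(block_mean(r, c) - block_mean(r, c + s2)
                          - block_mean(r + s1, c) + block_mean(r + s1, c + s2))
    return coeffs

# toy field: white noise stands in for a self-similar texture
random.seed(0)
img = [[random.gauss(0.0, 1.0) for _ in range(64)] for _ in range(64)]

# mean energy per scale pair; regressing log2(energy) on (j1, j2) would give
# the two scaling exponents for a genuinely self-similar field
energies = {}
for j1 in range(3):
    for j2 in range(3):
        cs = hyperbolic_haar_coeffs(img, j1, j2)
        energies[(j1, j2)] = sum(x * x for x in cs) / len(cs)
```

In practice Haar would be replaced by smoother wavelets with more vanishing moments, and the rotation angle would be estimated jointly as in the text.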
Self-similar prior and wavelet bases for hidden incompressible turbulent motion
This work is concerned with the ill-posed inverse problem of estimating
turbulent flows from the observation of an image sequence. From a Bayesian
perspective, a divergence-free isotropic fractional Brownian motion (fBm) is
chosen as a prior model for instantaneous turbulent velocity fields. This
self-similar prior characterizes accurately second-order statistics of velocity
fields in incompressible isotropic turbulence. Nevertheless, the associated
maximum a posteriori involves a fractional Laplacian operator which is delicate
to implement in practice. To deal with this issue, we propose to decompose the
divergence-free fBm on well-chosen wavelet bases. As a first alternative, we
propose to design wavelets as whitening filters. We show that these filters are
fractional Laplacian wavelets composed with the Leray projector. As a second
alternative, we use a divergence-free wavelet basis, which implicitly takes
into account the incompressibility constraint arising from physics. Although
the latter decomposition involves correlated wavelet coefficients, we are able
to handle this dependence in practice. Based on these two wavelet
decompositions, we finally provide effective and efficient algorithms to
approach the maximum a posteriori. An intensive numerical evaluation
demonstrates the relevance of the proposed wavelet-based self-similar priors.
Comment: SIAM Journal on Imaging Sciences, 201
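The computational point of the abstract, that in a well-chosen (whitening) wavelet basis a Gaussian self-similar prior turns the MAP estimate into per-coefficient shrinkage, can be sketched in one dimension. The level-variance law 2**(j*(2*hurst+1)) and the i.i.d. noise model below are illustrative assumptions standing in for the paper's divergence-free construction:

```python
def map_shrinkage(detail_levels, hurst=0.5, noise_var=1.0):
    """Per-coefficient MAP (Wiener) estimate of wavelet details under a
    self-similar Gaussian prior whose variance grows with level j
    (j = 1 finest) as 2**(j*(2*hurst + 1))."""
    estimates = []
    for j, level in enumerate(detail_levels, start=1):
        prior_var = 2.0 ** (j * (2.0 * hurst + 1.0))
        gain = prior_var / (prior_var + noise_var)  # closed-form MAP gain
        estimates.append([gain * d for d in level])
    return estimates

# noisy observed details at two levels: fine scales are shrunk harder,
# because the self-similar prior puts less variance there
observed = [[1.0, -1.0], [1.0]]
denoised = map_shrinkage(observed, hurst=0.5, noise_var=1.0)
```

With a basis that does not whiten the prior (as with the divergence-free basis in the text), the coefficients are correlated and the MAP problem couples them, which is the dependence the authors handle explicitly.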
Wavelet-Based Entropy Measures to Characterize Two-Dimensional Fractional Brownian Fields
The aim of this work was to extend the results of Perez et al. (Physica A (2006), 365 (2), 282–288) to the two-dimensional (2D) fractional Brownian field. In particular, we defined the Shannon entropy using the wavelet spectrum, from which the Hurst exponent is estimated by regressing the logarithm of the squared coefficients over the resolution levels. Using the same methodology, we also defined two other entropies in 2D: the Tsallis and Rényi entropies. A simulation study was performed to show the ability of the method to characterize 2D (in this case, α = 2) self-similar processes.
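A one-dimensional analogue of this procedure (the paper works in 2D; the Haar filter, q = 2, and the level convention below are assumptions for illustration) estimates the Hurst exponent from the slope of the log wavelet spectrum and derives Shannon, Rényi, and Tsallis entropies from the relative wavelet energies:

```python
import math
import random

def haar_detail_levels(signal):
    """Orthonormal Haar detail coefficients; level 1 = finest scale."""
    levels, approx = [], list(signal)
    while len(approx) >= 2:
        half = len(approx) // 2
        levels.append([(approx[2 * i] - approx[2 * i + 1]) / math.sqrt(2)
                       for i in range(half)])
        approx = [(approx[2 * i] + approx[2 * i + 1]) / math.sqrt(2)
                  for i in range(half)]
    return levels

def wavelet_entropies_and_hurst(signal, q=2.0, min_coeffs=8):
    # keep only levels with enough coefficients for a stable energy estimate
    levels = [lv for lv in haar_detail_levels(signal) if len(lv) >= min_coeffs]
    energy = [sum(d * d for d in lv) / len(lv) for lv in levels]
    p = [e / sum(energy) for e in energy]            # relative wavelet energy
    shannon = -sum(pi * math.log(pi) for pi in p if pi > 0)
    renyi = math.log(sum(pi ** q for pi in p)) / (1.0 - q)
    tsallis = (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)
    # slope of log2(energy) vs level j is (2H + 1) for fBm-like signals
    xs = range(1, len(energy) + 1)
    ys = [math.log2(e) for e in energy]
    mx = sum(xs) / len(energy)
    my = sum(ys) / len(energy)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return (slope - 1.0) / 2.0, shannon, renyi, tsallis

# Brownian motion has H = 0.5, so the estimate should land near 0.5
random.seed(42)
walk, x = [], 0.0
for _ in range(4096):
    x += random.gauss(0.0, 1.0)
    walk.append(x)
hurst, shannon, renyi, tsallis = wavelet_entropies_and_hurst(walk)
```

The 2D case of the paper replaces the 1D detail levels with the 2D wavelet spectrum, but the regression and entropy definitions carry over unchanged.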
A Multiresolution Census Algorithm for Calculating Vortex Statistics in Turbulent Flows
The fundamental equations that model turbulent flow do not provide much
insight into the size and shape of observed turbulent structures. We
investigate the efficient and accurate representation of structures in
two-dimensional turbulence by applying statistical models directly to the
simulated vorticity field. Rather than extract the coherent portion of the
image from the background variation, as in the classical signal-plus-noise
model, we present a model for individual vortices using the non-decimated
discrete wavelet transform. A template image, supplied by the user, provides
the features to be extracted from the vorticity field. By transforming the
vortex template into the wavelet domain, specific characteristics present in
the template, such as size and symmetry, are broken down into components
associated with spatial frequencies. Multivariate multiple linear regression is
used to fit the vortex template to the vorticity field in the wavelet domain.
Since all levels of the template decomposition may be used to model each level
in the field decomposition, the resulting model need not be identical to the
template. Application to a vortex census algorithm that records quantities of
interest (such as size, peak amplitude, circulation, etc.) as the vorticity
field evolves is given. The multiresolution census algorithm extracts coherent
structures of all shapes and sizes in simulated vorticity fields and is able to
reproduce known physical scaling laws when processing a set of vorticity
fields that evolve over time.
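A toy version of the fitting step, assuming matched decomposition levels and a single regressor per level (the paper's multivariate multiple regression is richer: every template level may feed every field level, so the fitted model need not equal the template):

```python
def fit_template_levels(field_levels, template_levels):
    """Per-level least-squares amplitude linking template wavelet details
    to field details -- a one-regressor stand-in for the multivariate
    multiple regression described in the text."""
    betas = []
    for f, t in zip(field_levels, template_levels):
        den = sum(ti * ti for ti in t)
        # closed-form least-squares slope <f, t> / <t, t> at this level
        betas.append(sum(fi * ti for fi, ti in zip(f, t)) / den if den else 0.0)
    return betas

# toy decompositions: the 'field' contains the template at twice the amplitude
template = [[1.0, -1.0, 0.5], [2.0]]
field = [[2.0, -2.0, 1.0], [4.0]]
betas = fit_template_levels(field, template)  # -> [2.0, 2.0]
```

In the census setting, the fitted amplitudes per level are what feed the recorded quantities (size, peak amplitude, circulation) as the vorticity field evolves.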
Methods for characterising microphysical processes in plasmas
Advanced spectral and statistical data analysis techniques have greatly
contributed to shaping our understanding of microphysical processes in plasmas.
We review some of the main techniques that allow for characterising fluctuation
phenomena in geospace and in laboratory plasma observations. Special emphasis
is given to the commonalities between different disciplines, which have
witnessed the development of similar tools, often with differing terminologies.
The review is phrased in terms of few important concepts: self-similarity,
deviation from self-similarity (i.e. intermittency and coherent structures),
wave-turbulence, and anomalous transport.
Comment: Space Science Reviews (2013), in press
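The distinction between self-similarity and intermittency that organizes the review can be illustrated with the flatness of increments: constant across scales for exactly self-similar signals, growing toward small scales for intermittent ones. A minimal sketch on a Brownian walk (the scale choices here are illustrative, not taken from any reviewed dataset):

```python
import random

def flatness_by_scale(signal, scales):
    """Flatness (normalized 4th moment) of increments at each scale.
    Exact self-similarity keeps it constant (3 for Gaussian statistics);
    growth toward small scales is the usual signature of intermittency."""
    out = {}
    for s in scales:
        inc = [signal[i + s] - signal[i] for i in range(len(signal) - s)]
        m2 = sum(d * d for d in inc) / len(inc)
        m4 = sum(d ** 4 for d in inc) / len(inc)
        out[s] = m4 / (m2 * m2)
    return out

# Brownian motion is self-similar and non-intermittent: flatness ~ 3 everywhere
random.seed(7)
walk, x = [], 0.0
for _ in range(8192):
    x += random.gauss(0.0, 1.0)
    walk.append(x)
flatness = flatness_by_scale(walk, [1, 4, 16, 64])
```

For intermittent plasma or turbulence data, the same diagnostic would show flatness rising well above 3 as the scale shrinks, motivating the coherent-structure analyses discussed in the review.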
Searching for non-Gaussianity in the VSA data
We have tested Very Small Array (VSA) observations of three regions of sky
for the presence of non-Gaussianity, using high-order cumulants, Minkowski
functionals, a wavelet-based test and a Bayesian joint power
spectrum/non-Gaussianity analysis. We find the data from two regions to be
consistent with Gaussianity. In the third region, we obtain a 96.7% detection
of non-Gaussianity using the wavelet test. We perform simulations to
characterise the tests, and conclude that this is consistent with expected
residual point source contamination. There is therefore no evidence that this
detection is of cosmological origin. Our simulations show that the tests would
be sensitive to any residual point sources above the data's source subtraction
level of 20 mJy. The tests are also sensitive to cosmic string networks at an
rms fluctuation level of (i.e. equivalent to the best-fit observed
value). They are not sensitive to string-induced fluctuations if an equal rms
of Gaussian CDM fluctuations is added, thereby reducing the fluctuations due to
the string network to rms . We especially highlight the usefulness
of non-Gaussianity testing in eliminating systematic effects from our data.
Comment: Minor corrections; accepted for publication in MNRAS
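The cumulant-based part of such a test can be sketched generically (this is an illustrative Monte Carlo percentile, not the VSA pipeline): compare the data's excess kurtosis against Gaussian realizations of the same size, so that a percentile near 1 flags non-Gaussianity, analogous to the quoted 96.7% detection level:

```python
import random

def cumulants(xs):
    """Sample skewness and excess kurtosis (3rd and 4th standardized cumulants)."""
    n = len(xs)
    m = sum(xs) / n
    c2 = sum((x - m) ** 2 for x in xs) / n
    c3 = sum((x - m) ** 3 for x in xs) / n
    c4 = sum((x - m) ** 4 for x in xs) / n - 3 * c2 ** 2
    return c3 / c2 ** 1.5, c4 / c2 ** 2

def gaussianity_percentile(data, n_sims=200, seed=1):
    """Fraction of Gaussian realizations whose |excess kurtosis| falls below
    the data's: values near 1.0 indicate a non-Gaussianity detection."""
    rng = random.Random(seed)
    _, k_obs = cumulants(data)
    count = 0
    for _ in range(n_sims):
        sim = [rng.gauss(0.0, 1.0) for _ in range(len(data))]
        _, k = cumulants(sim)
        if abs(k) < abs(k_obs):
            count += 1
    return count / n_sims

# a heavy-tailed sample (exponential, excess kurtosis ~ 6) is flagged strongly
rng = random.Random(3)
heavy_tailed = [rng.expovariate(1.0) for _ in range(2000)]
pct = gaussianity_percentile(heavy_tailed)
```

A real CMB analysis would simulate Gaussian skies with the observed power spectrum and noise (and likewise for the Minkowski-functional and wavelet statistics) rather than i.i.d. samples.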
Multi-scale Discriminant Saliency with Wavelet-based Hidden Markov Tree Modelling
The bottom-up saliency, an early stage of humans' visual attention, can be
considered as a binary classification problem between centre and surround
classes. Discriminant power of features for the classification is measured as
mutual information between the distributions of image features and the
corresponding classes. As the estimated discrepancy depends strongly on the
scale level considered, multi-scale structure and discriminant power are
integrated by employing discrete wavelet features and a Hidden Markov Tree
(HMT). From the wavelet coefficients and Hidden Markov Tree parameters,
quad-tree-like label structures are constructed and used in maximum a
posteriori (MAP) estimation of the hidden class variables at the corresponding
dyadic sub-squares. Then, a saliency value for each square block at each scale
level is computed with the discriminant power principle. Finally, the saliency
maps across multiple scales are combined into the final saliency map
by an information maximization rule. Both standard quantitative tools such as
NSS, LCC, AUC and qualitative assessments are used for evaluating the proposed
multi-scale discriminant saliency (MDIS) method against the well-known
information based approach AIM on its released image collection with
eye-tracking data. Simulation results are presented and analysed to verify the
validity of MDIS, as well as to point out its limitations as directions for
further research.
Comment: arXiv admin note: substantial text overlap with arXiv:1301.396
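The discriminant-power measure described above, mutual information between a feature's distribution and the centre/surround class, can be estimated from histograms; a minimal sketch (the bin count and shared-range binning are assumptions, and real features would be wavelet coefficients rather than raw scalars):

```python
import math

def mutual_information(center_feats, surround_feats, bins=8):
    """MI (in bits) between a scalar feature and the centre/surround class
    label, estimated from shared-range histograms: the discriminant power
    of that feature for the binary classification."""
    lo = min(min(center_feats), min(surround_feats))
    hi = max(max(center_feats), max(surround_feats))
    width = (hi - lo) / bins or 1.0  # guard against a constant feature

    def hist(xs):
        h = [0] * bins
        for x in xs:
            h[min(bins - 1, int((x - lo) / width))] += 1
        return h

    hc, hs = hist(center_feats), hist(surround_feats)
    n = len(center_feats) + len(surround_feats)
    pc = len(center_feats) / n
    mi = 0.0
    for b in range(bins):
        px = (hc[b] + hs[b]) / n            # marginal over the feature bin
        for cnt, pclass in ((hc[b], pc), (hs[b], 1 - pc)):
            if cnt == 0:
                continue
            pxy = cnt / n                   # joint (bin, class) probability
            mi += pxy * math.log2(pxy / (px * pclass))
    return mi

# identical distributions carry no class information; disjoint ones carry 1 bit
mi_same = mutual_information([float(i % 2) for i in range(100)],
                             [float(i % 2) for i in range(100)])
mi_sep = mutual_information([0.0] * 100, [10.0] * 100)
```

Features with the highest MI are the ones a discriminant-saliency scheme would weight most when forming the per-block saliency values.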