
    Self-similar prior and wavelet bases for hidden incompressible turbulent motion

    This work is concerned with the ill-posed inverse problem of estimating turbulent flows from the observation of an image sequence. From a Bayesian perspective, a divergence-free isotropic fractional Brownian motion (fBm) is chosen as a prior model for instantaneous turbulent velocity fields. This self-similar prior accurately characterizes the second-order statistics of velocity fields in incompressible isotropic turbulence. Nevertheless, the associated maximum a posteriori estimate involves a fractional Laplacian operator which is delicate to implement in practice. To deal with this issue, we propose to decompose the divergence-free fBm on well-chosen wavelet bases. As a first alternative, we propose to design wavelets as whitening filters. We show that these filters are fractional Laplacian wavelets composed with the Leray projector. As a second alternative, we use a divergence-free wavelet basis, which implicitly takes into account the incompressibility constraint arising from physics. Although the latter decomposition involves correlated wavelet coefficients, we are able to handle this dependence in practice. Based on these two wavelet decompositions, we finally provide effective and efficient algorithms to approach the maximum a posteriori estimate. An intensive numerical evaluation demonstrates the relevance of the proposed wavelet-based self-similar priors. Comment: SIAM Journal on Imaging Sciences, 201
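    To make the whitening-filter idea concrete: in Fourier space, a fractional power of the Laplacian is a multiplication by |k|^alpha, and the Leray projector removes the divergent part of the field. The NumPy sketch below applies both operations to a toy 2D velocity field; the function names and the exponent are illustrative assumptions, not the paper's implementation, where these operations are instead absorbed into the wavelet bases.

        import numpy as np

        def leray_project(u_hat, v_hat, kx, ky):
            # Project a 2D velocity field (in Fourier space) onto its
            # divergence-free part: P(u) = u - k (k.u) / |k|^2.
            k2 = kx**2 + ky**2
            k2[0, 0] = 1.0                      # avoid division by zero at the mean mode
            div = (kx * u_hat + ky * v_hat) / k2
            return u_hat - kx * div, v_hat - ky * div

        def fractional_laplacian_whiten(u, v, alpha=5.0 / 6.0):
            # Apply |k|^alpha (a fractional-Laplacian-type whitening filter)
            # together with the Leray projection; the exponent here is purely
            # illustrative and the field is assumed square.
            n = u.shape[0]
            k = np.fft.fftfreq(n) * n
            kx, ky = np.meshgrid(k, k, indexing="ij")
            u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
            u_hat, v_hat = leray_project(u_hat, v_hat, kx, ky)
            mag = (kx**2 + ky**2) ** (alpha / 2.0)
            return np.fft.ifft2(mag * u_hat).real, np.fft.ifft2(mag * v_hat).real

        # Toy usage on a random 64x64 field.
        rng = np.random.default_rng(0)
        u, v = rng.standard_normal((2, 64, 64))
        u_w, v_w = fractional_laplacian_whiten(u, v)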

    Spectrum Analysis of Speech Recognition via Discrete Tchebichef Transform

    Speech recognition is still a growing field, and it carries strong potential as computing power grows. Spectrum analysis is an elementary operation in speech recognition, and the Fast Fourier Transform (FFT) is the traditional technique for analyzing the frequency spectrum of the signal. Speech recognition requires heavy computation because of the large number of samples per window, and the FFT additionally involves complex-valued arithmetic. This paper proposes an approach based on discrete orthonormal Tchebichef polynomials to analyze the frequency spectrum of a vowel and a consonant for speech recognition. The Discrete Tchebichef Transform (DTT) is used instead of the popular FFT. The preliminary experimental results show that the DTT has the potential to be a simpler and faster transform for speech recognition.
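    As a rough illustration of the idea, the sketch below builds an orthonormal polynomial basis over the samples of one windowed speech frame and projects the frame onto it, playing the role the FFT plays in a conventional spectral front end. The Gram-Schmidt (QR) construction and the names tchebichef_basis and dtt_spectrum are assumptions made for this sketch; practical DTT implementations evaluate the discrete Tchebichef polynomials through their three-term recurrence instead.

        import numpy as np

        def tchebichef_basis(n_samples, order):
            # Orthonormal polynomial basis on n_samples points, built by QR
            # orthogonalisation of monomials (illustrative stand-in for the
            # discrete Tchebichef polynomials; points rescaled for stability).
            x = np.linspace(-1.0, 1.0, n_samples)
            vander = np.vander(x, order, increasing=True)   # columns 1, x, x^2, ...
            q, _ = np.linalg.qr(vander)                     # orthonormal columns
            return q.T                                      # rows = basis polynomials

        def dtt_spectrum(frame, order=32):
            # Project one windowed frame onto the polynomial basis, analogous
            # to taking an FFT magnitude spectrum.
            basis = tchebichef_basis(len(frame), order)
            return np.abs(basis @ frame)

        # Toy usage: a synthetic vowel-like frame of 256 samples at 8 kHz.
        fs = 8000
        t = np.arange(256) / fs
        frame = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 900 * t)
        spectrum = dtt_spectrum(frame * np.hamming(256))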

    Construction of Hilbert Transform Pairs of Wavelet Bases and Gabor-like Transforms

    We propose a novel method for constructing Hilbert transform (HT) pairs of wavelet bases based on a fundamental approximation-theoretic characterization of scaling functions--the B-spline factorization theorem. In particular, starting from well-localized scaling functions, we construct HT pairs of biorthogonal wavelet bases of L^2(R) by relating the corresponding wavelet filters via a discrete form of the continuous HT filter. As a concrete application of this methodology, we identify HT pairs of spline wavelets of a specific flavor, which are then combined to realize a family of complex wavelets that resemble the optimally-localized Gabor function for sufficiently large orders. Analytic wavelets, derived from the complexification of HT wavelet pairs, exhibit a one-sided spectrum. Based on the tensor-product of such analytic wavelets, and, in effect, by appropriately combining four separable biorthogonal wavelet bases of L^2(R^2), we then discuss a methodology for constructing 2D directional-selective complex wavelets. In particular, analogous to the HT correspondence between the components of the 1D counterpart, we relate the real and imaginary components of these complex wavelets using a multi-dimensional extension of the HT--the directional HT. Next, we construct a family of complex spline wavelets that resemble the directional Gabor functions proposed by Daugman. Finally, we present an efficient FFT-based filterbank algorithm for implementing the associated complex wavelet transform. Comment: 36 pages, 8 figure
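    The one-sided-spectrum property of analytic wavelets mentioned above is easy to verify numerically. The sketch below uses PyWavelets and SciPy (not the paper's B-spline factorization construction): it forms psi + j*H{psi} for an off-the-shelf orthogonal wavelet and measures how much energy remains at negative frequencies. SciPy's hilbert builds the analytic signal by zeroing negative frequencies directly, so the spectrum is one-sided essentially by construction; the point of the paper is to obtain the same pairing between two genuine wavelet filter banks.

        import numpy as np
        import pywt
        from scipy.signal import hilbert

        # Sample a real orthogonal wavelet and form its analytic counterpart
        # psi + j * H{psi}; its spectrum is (numerically) one-sided.
        _, psi, _ = pywt.Wavelet("db8").wavefun(level=8)
        psi_analytic = hilbert(psi)

        spectrum = np.fft.fft(psi_analytic)
        freqs = np.fft.fftfreq(len(psi))
        neg_energy = np.sum(np.abs(spectrum[freqs < 0]) ** 2)
        pos_energy = np.sum(np.abs(spectrum[freqs > 0]) ** 2)
        print(f"negative/positive frequency energy ratio: {neg_energy / pos_energy:.2e}")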

    Application of invariant moments for crowd analysis

    The advancement of technology such as CCTV has improved the effectiveness of crowd monitoring. However, the drawback of using CCTV is that the observer might miss information, because monitoring crowds through a CCTV system is very laborious and cannot be performed for all cameras simultaneously. Integrating image processing techniques into the CCTV surveillance system could therefore give numerous key advantages, and is in fact the only way to deploy effective and affordable intelligent video security systems. For crowd monitoring, this approach can provide automated crowd analysis, which may help to prevent incidents and accelerate responses. One appropriate image processing technique is moment invariants. Moments of an individual object have been used widely and successfully in many applications such as pattern recognition, object identification and image reconstruction. However, until now, moments have not been widely used for groups of objects such as crowds. A new method, Translation Invariant Orthonormal Chebyshev Moments, is proposed. It is used to estimate crowd density and compared with two other methods, the Grey Level Dependency Matrix and the Minkowski Fractal Dimension. The extracted features are classified into density ranges using a Self-Organizing Map, and the classification results are compared to determine which method performs best for vision-based crowd density measurement. The Grey Level Dependency Matrix gives slightly better performance than the Translation Invariant Orthonormal Chebyshev Moments; however, the latter requires less computational resources.
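    For context, the sketch below shows the kind of texture descriptor the Grey Level Dependency Matrix approach relies on: a grey-level co-occurrence matrix reduced to a few scalar features (contrast, energy, homogeneity) that could then be fed to a Self-Organizing Map for density classification. It is a generic NumPy illustration with assumed parameters, not the thesis implementation.

        import numpy as np

        def gldm_features(img, levels=8, dx=1, dy=0):
            # Grey-level co-occurrence matrix for displacement (dx, dy) plus a few
            # classic texture features; img must already be quantised to `levels`
            # integer grey levels.
            glcm = np.zeros((levels, levels), dtype=float)
            a = img[:img.shape[0] - dy, :img.shape[1] - dx]
            b = img[dy:, dx:]
            for i, j in zip(a.ravel(), b.ravel()):
                glcm[i, j] += 1
            glcm /= glcm.sum()
            i_idx, j_idx = np.indices(glcm.shape)
            contrast = np.sum(glcm * (i_idx - j_idx) ** 2)
            energy = np.sum(glcm ** 2)
            homogeneity = np.sum(glcm / (1.0 + np.abs(i_idx - j_idx)))
            return np.array([contrast, energy, homogeneity])

        # Toy usage: quantise a random frame to 8 grey levels and extract features.
        rng = np.random.default_rng(1)
        frame = (rng.random((120, 160)) * 8).astype(int)
        print(gldm_features(frame))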

    Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?

    Suppose we are given a vector f in R^N. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ_2) metric? Or more exactly, suppose we are interested in a class F of such objects--discrete digital signals, images, etc.; how many linear measurements do we need to recover objects from this class to within accuracy ε? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal f ∈ F decay like a power-law (or if the coefficient sequence of f in a fixed basis decays like a power-law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. Comment: 39 pages; no figures; to appear. Bernoulli ensemble proof has been corrected; other expository and bibliographical changes made, incorporating referee's suggestion
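    A minimal numerical illustration of this setting: a k-sparse vector in R^n is measured with a random Gaussian matrix and recovered by ℓ_1-regularised least squares, solved here with plain iterative soft thresholding (ISTA). The solver and all parameter choices are assumptions made for this sketch; the paper itself concerns when such recovery is possible, not any particular algorithm.

        import numpy as np

        def ista(A, y, lam=0.05, n_iter=500):
            # Iterative soft thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1,
            # a standard convex surrogate for sparse recovery.
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                g = x - (A.T @ (A @ x - y)) / L    # gradient step
                x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
            return x

        # Toy usage: recover a 10-sparse vector in R^256 from 80 Gaussian measurements.
        rng = np.random.default_rng(0)
        n, m, k = 256, 80, 10
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        y = A @ x_true
        x_hat = ista(A, y, lam=0.01, n_iter=2000)
        print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))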

    Wavelet transforms and their applications to MHD and plasma turbulence: a review

    Wavelet analysis and compression tools are reviewed and different applications to the study of MHD and plasma turbulence are presented. We introduce the continuous and the orthogonal wavelet transform and detail several statistical diagnostics based on the wavelet coefficients. We then show how to extract coherent structures out of fully developed turbulent flows using wavelet-based denoising. Finally, some multiscale numerical simulation schemes using wavelets are described. Several examples of analyzing, compressing and computing one-, two- and three-dimensional turbulent MHD or plasma flows are presented. Comment: Journal of Plasma Physics, 201
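    The coherent-structure extraction mentioned above amounts to thresholding orthogonal wavelet coefficients. The sketch below (PyWavelets, 1D, with a fixed universal threshold rather than the iterative threshold selection often used in the turbulence literature) splits a noisy signal into a coherent part and an incoherent remainder.

        import numpy as np
        import pywt

        def wavelet_split(signal, wavelet="coif2", level=6):
            # Hard-threshold the orthogonal wavelet coefficients of a 1D signal
            # and reconstruct the coherent part; the residual is the incoherent part.
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate
            thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))   # universal threshold
            kept = [coeffs[0]] + [pywt.threshold(c, thresh, mode="hard")
                                  for c in coeffs[1:]]
            coherent = pywt.waverec(kept, wavelet)[:len(signal)]
            return coherent, signal - coherent

        # Toy usage on a synthetic bursty signal with additive noise.
        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 1.0, 4096)
        clean = np.exp(-((t - 0.5) / 0.02) ** 2) * np.sin(2 * np.pi * 200 * t)
        noisy = clean + 0.1 * rng.standard_normal(t.size)
        coherent, incoherent = wavelet_split(noisy)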