490 research outputs found

    Self-similar prior and wavelet bases for hidden incompressible turbulent motion

    This work is concerned with the ill-posed inverse problem of estimating turbulent flows from the observation of an image sequence. From a Bayesian perspective, a divergence-free isotropic fractional Brownian motion (fBm) is chosen as a prior model for instantaneous turbulent velocity fields. This self-similar prior accurately characterizes the second-order statistics of velocity fields in incompressible isotropic turbulence. Nevertheless, the associated maximum a posteriori estimate involves a fractional Laplacian operator which is delicate to implement in practice. To deal with this issue, we propose to decompose the divergence-free fBm on well-chosen wavelet bases. As a first alternative, we design wavelets that act as whitening filters. We show that these filters are fractional Laplacian wavelets composed with the Leray projector. As a second alternative, we use a divergence-free wavelet basis, which implicitly takes into account the incompressibility constraint arising from the physics. Although the latter decomposition involves correlated wavelet coefficients, we are able to handle this dependence in practice. Based on these two wavelet decompositions, we finally provide effective and efficient algorithms to approach the maximum a posteriori. An intensive numerical evaluation proves the relevance of the proposed wavelet-based self-similar priors. Comment: SIAM Journal on Imaging Sciences, 201

    Multidimensional Wavelets and Computer Vision

    This report deals with the construction and the mathematical analysis of multidimensional nonseparable wavelets and their efficient application in computer vision. In the first part, the fundamental principles and ideas of multidimensional wavelet filter design, such as the question of the existence of good scaling matrices and sensible design criteria, are presented and extended in various directions. Afterwards, the analytical properties of these wavelets are investigated in some detail. It will turn out that they are especially well-suited to represent (discretized) data as well as large classes of operators in a sparse form, a property that directly yields efficient numerical algorithms. The final part of this work is dedicated to the application of the developed methods to the typical computer vision problems of nonlinear image regularization and the computation of optical flow in image sequences. It is demonstrated how the wavelet framework leads to stable and reliable results for these problems of generally ill-posed nature. Furthermore, all the algorithms are of order O(n), leading to fast processing.

    Geometric deep learning: going beyond Euclidean data

    Many scientific fields study data with an underlying structure that is a non-Euclidean space. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions), and are natural targets for machine learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems in computer vision, natural language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure, and in cases where the invariances of these structures are built into the networks used to model them. Geometric deep learning is an umbrella term for emerging techniques attempting to generalize (structured) deep neural models to non-Euclidean domains such as graphs and manifolds. The purpose of this paper is to overview different examples of geometric deep learning problems and present available solutions, key difficulties, applications, and future research directions in this nascent field.
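One concrete instance of the graph-domain models this survey covers is a graph convolution layer with symmetric degree normalization (Kipf-Welling style). The sketch below is an illustrative example of that family, not an algorithm taken from the paper itself:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolution layer with symmetric normalization
    (an illustrative Kipf-Welling-style sketch):
    H' = ReLU( D^{-1/2} (A + I) D^{-1/2} H W )."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```

A key property of such layers, and a recurring theme in the survey, is permutation equivariance: relabeling the graph's nodes permutes the output features in the same way.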

    Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

    Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast to O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
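The two-stage scheme described in this abstract (random sampling to capture the range, then a deterministic factorization of the reduced matrix) can be sketched in a few lines. This is the basic variant without power iterations; the oversampling parameter is a common default, not a value prescribed by the paper.

```python
import numpy as np

def randomized_range_finder(A, k, oversample=10, rng=None):
    """Stage A: random sampling identifies a subspace that captures
    most of the action of A (a sketch of the basic scheme)."""
    rng = np.random.default_rng(rng)
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)     # orthonormal basis for range(A @ Omega)
    return Q

def randomized_svd(A, k, **kw):
    """Stage B: compress A to the subspace, then factor the small
    reduced matrix deterministically."""
    Q = randomized_range_finder(A, k, **kw)
    B = Q.T @ A                        # reduced (k + oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]
```

For a matrix of exact rank k, the sampled subspace captures the range almost surely, so the truncated factorization reconstructs the input to machine precision.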

    A Panorama on Multiscale Geometric Representations, Intertwining Spatial, Directional and Frequency Selectivity

    The richness of natural images makes the quest for optimal representations in image processing and computer vision challenging. The latter observation has not prevented the design of image representations, which trade off between efficiency and complexity, while achieving accurate rendering of smooth regions as well as reproducing faithful contours and textures. The most recent ones, proposed in the past decade, share a hybrid heritage highlighting the multiscale and oriented nature of edges and patterns in images. This paper presents a panorama of the aforementioned literature on decompositions in multiscale, multi-orientation bases or dictionaries. They typically exhibit redundancy to improve sparsity in the transformed domain and sometimes its invariance with respect to simple geometric deformations (translation, rotation). Oriented multiscale dictionaries extend traditional wavelet processing and may offer rotation invariance. Highly redundant dictionaries require specific algorithms to simplify the search for an efficient (sparse) representation. We also discuss the extension of multiscale geometric decompositions to non-Euclidean domains such as the sphere or arbitrary meshed surfaces. The etymology of panorama suggests an overview, based on a choice of partially overlapping "pictures". We hope that this paper will contribute to the appreciation and apprehension of a stream of current research directions in image understanding. Comment: 65 pages, 33 figures, 303 references


    Cardiac motion estimation using covariant derivatives and Helmholtz decomposition

    The investigation and quantification of cardiac movement is important for the assessment of cardiac abnormalities and treatment effectiveness. Therefore, we consider new methods, free of the aperture problem, to track cardiac motion from 2-dimensional MR tagged images and corresponding sine-phase images. Tracking is achieved by following the movement of scale-space maxima, yielding a sparse set of linear features of the unknown optic flow vector field. Interpolation/reconstruction of the velocity field is then carried out by minimizing an energy functional which is a Sobolev norm expressed in covariant derivatives (rather than standard derivatives). These covariant derivatives are used to express prior knowledge about the velocity field in the variational framework employed. They are defined on a fiber bundle whose sections coincide with vector fields. Furthermore, the optic flow vector field is decomposed into a divergence-free and a rotation-free part, using our multi-scale Helmholtz decomposition algorithm that combines diffusion and Helmholtz decomposition in a single non-singular analytic kernel operator. Finally, we combine this multi-scale Helmholtz decomposition with vector field reconstruction (based on covariant derivatives) in a single algorithm and present some experiments on cardiac motion estimation. Further experiments on phantom data with ground truth show that both the inclusion of covariant derivatives and the inclusion of the multi-scale Helmholtz decomposition improve the optic flow reconstruction.

    AM-FM methods for image and video processing

    This dissertation is focused on the development of robust and efficient Amplitude-Modulation Frequency-Modulation (AM-FM) demodulation methods for image and video processing (there is currently a patent pending that covers the AM-FM methods and applications described in this dissertation). The motivation for this research lies in the wide number of image and video processing applications that can significantly benefit from this research. A number of potential applications are developed in the dissertation. First, a new, robust, and efficient formulation for instantaneous frequency (IF) estimation, the variable-spacing local quadratic phase (VS-LQP) method, is presented. VS-LQP produces much more accurate results than current AM-FM methods. At significant noise levels (SNR < 30 dB), for single-component images, the VS-LQP method produces better IF estimation results than methods using a multi-scale filterbank. At low noise levels (SNR > 50 dB), VS-LQP performs better when used in combination with a multi-scale filterbank. In all cases, VS-LQP outperforms the Quasi-Eigen Approximation algorithm by significant amounts (up to 20 dB). New least-squares reconstructions using AM-FM components from the input signal (image or video) are also presented. Three different reconstruction approaches are developed: (i) using AM-FM harmonics, (ii) using AM-FM components extracted from different scales, and (iii) using AM-FM harmonics with the output of a low-pass filter. The image reconstruction methods provide perceptually lossless results with image quality index values greater than 0.7 on average. The video reconstructions produced frame-by-frame image quality index values above 0.7 using AM-FM components extracted from different scales. An application of the AM-FM method to retinal image analysis is also shown. This approach uses the instantaneous frequency magnitude and the instantaneous amplitude (IA) information to provide image features. 
The new AM-FM approach produced an ROC area of 0.984 in classifying Risk 0 versus Risk 1, 0.95 in classifying Risk 0 versus Risk 2, 0.973 in classifying Risk 0 versus Risk 3, and 0.95 in classifying Risk 0 versus all images with any sign of diabetic retinopathy. An extension of the 2D AM-FM demodulation methods to three dimensions is also presented. New AM-FM methods for motion estimation are developed. The new motion estimation method provides three motion estimation equations per channel filter (AM and IF motion equations and a continuity equation). Applications of the method in motion tracking, trajectory estimation, and continuous-scale video searching are demonstrated. For each application, we discuss the advantages of the AM-FM methods over current approaches.
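The IA and IF quantities at the heart of AM-FM demodulation are easiest to see in one dimension: IA is the magnitude of the analytic signal and IF is the derivative of its unwrapped phase. The sketch below is the textbook analytic-signal estimator, not the dissertation's VS-LQP method.

```python
import numpy as np

def am_fm_demodulate(x, fs=1.0):
    """1-D AM-FM demodulation via the analytic signal (a textbook
    sketch, not the VS-LQP estimator from the dissertation):
    IA = |z|, IF = d(phase of z)/dt for the analytic signal z."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)                 # spectral mask for the analytic signal
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    z = np.fft.ifft(X * h)          # analytic signal
    ia = np.abs(z)                  # instantaneous amplitude
    phase = np.unwrap(np.angle(z))
    inst_f = np.gradient(phase) * fs / (2 * np.pi)  # instantaneous frequency (Hz)
    return ia, inst_f
```

For a pure cosine at frequency f0 sampled over an integer number of periods, this recovers IA = 1 and IF = f0 essentially exactly.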