
    A Bayesian fusion model for space-time reconstruction of finely resolved velocities in turbulent flows from low resolution measurements

    The study of turbulent flows calls for measurements with high resolution both in space and in time. We propose a new approach to reconstruct High-Temporal-High-Spatial resolution velocity fields by combining two sources of information that are well resolved either in space or in time: Low-Temporal-High-Spatial (LTHS) and High-Temporal-Low-Spatial (HTLS) resolution measurements. In the framework of co-conception between sensing and data post-processing, this work extensively investigates a Bayesian reconstruction approach using a simulated database. A Bayesian fusion model is developed to solve the inverse problem of data reconstruction. The model uses a Maximum A Posteriori estimate, which yields the most probable field given the measurements. A DNS of a wall-bounded turbulent flow at moderate Reynolds number is used to validate the approach and assess its performance. Low-resolution measurements are subsampled in time and space from the fully resolved data, and the reconstructed velocities are compared to the reference DNS to estimate the reconstruction errors. The model is also compared to conventional methods such as Linear Stochastic Estimation and cubic spline interpolation. Results show the superior accuracy of the proposed method in all configurations. Further investigation of the model's performance over various ranges of scales demonstrates its robustness. Numerical experiments also make it possible to estimate the expected maximum information level corresponding to the limitations of experimental instruments.
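
    As a rough illustration of the MAP idea behind such a fusion (not the paper's actual model), the sketch below stacks two hypothetical linear measurement operators, H_spatial and H_temporal, into a single noise-weighted least-squares problem whose minimiser is the Gaussian MAP estimate under a flat prior; all names and the linear-Gaussian setting are assumptions of this example.

        import numpy as np

        def map_fusion(y_spatial, H_spatial, y_temporal, H_temporal,
                       sigma_spatial=1.0, sigma_temporal=1.0):
            # Gaussian MAP fusion of two linear measurements of the same field z:
            #   argmin_z ||y_s - H_s z||^2 / sigma_s^2 + ||y_t - H_t z||^2 / sigma_t^2
            # i.e. the solution of a stacked, noise-weighted least-squares problem.
            A = np.vstack([H_spatial / sigma_spatial, H_temporal / sigma_temporal])
            b = np.concatenate([y_spatial / sigma_spatial, y_temporal / sigma_temporal])
            z_map, *_ = np.linalg.lstsq(A, b, rcond=None)
            return z_map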

    Frequency Analysis of Gradient Estimators in Volume Rendering

    Gradient information is used in volume rendering to classify and color samples along a ray. In this paper, we present an analysis of the theoretically ideal gradient estimator and compare it to some commonly used gradient estimators. A new method is presented to calculate the gradient at arbitrary sample positions, using the derivative of the interpolation filter as the basis for the new gradient filter. As an example, we discuss the use of the derivative of the cubic spline. Comparisons with several other methods are demonstrated. Computational efficiency can be realized since parts of the interpolation computation can be leveraged in the gradient estimation.
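
    A minimal 1-D sketch of the idea (the paper works with 3-D volume data): the derivative of the reconstructed signal at an arbitrary position is obtained by convolving the samples with the derivative of the interpolation filter, here the derivative of the uniform cubic B-spline. The function names below are illustrative only.

        import numpy as np

        def cubic_bspline_deriv(t):
            # Derivative of the uniform cubic B-spline kernel (support |t| < 2).
            at = np.abs(t)
            d = np.zeros_like(at, dtype=float)
            inner = at < 1.0
            outer = (at >= 1.0) & (at < 2.0)
            d[inner] = 1.5 * at[inner] ** 2 - 2.0 * at[inner]
            d[outer] = -0.5 * (2.0 - at[outer]) ** 2
            return d * np.sign(t)

        def gradient_at(samples, x):
            # d/dx of the reconstructed 1-D signal at arbitrary position x:
            # convolve the samples with the derivative of the interpolation filter.
            idx = np.arange(len(samples))
            weights = cubic_bspline_deriv(x - idx)
            return float(np.dot(weights, samples))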

    Optimising Spatial and Tonal Data for PDE-based Inpainting

    Some recent methods for lossy signal and image compression store only a few selected pixels and fill in the missing structures by inpainting with a partial differential equation (PDE). Suitable operators include the Laplacian, the biharmonic operator, and edge-enhancing anisotropic diffusion (EED). The quality of such approaches depends substantially on the selection of the data that is kept. Optimising this data in the domain and codomain gives rise to challenging mathematical problems that are addressed in our work. In the 1D case, we prove results that provide insights into the difficulty of this problem, and we give evidence that a splitting into spatial and tonal (i.e. function value) optimisation hardly deteriorates the results. In the 2D setting, we present generic algorithms that achieve a high reconstruction quality even if the specified data is very sparse. To optimise the spatial data, we use a probabilistic sparsification, followed by a nonlocal pixel exchange that avoids getting trapped in bad local optima. After this spatial optimisation we perform a tonal optimisation that modifies the function values in order to reduce the global reconstruction error. For homogeneous diffusion inpainting, this comes down to a least squares problem for which we prove that it has a unique solution. We demonstrate that it can be found efficiently with a gradient descent approach that is accelerated with fast explicit diffusion (FED) cycles. Our framework allows the desired density of the inpainting mask to be specified a priori. Moreover, it is more generic than other data optimisation approaches for the sparse inpainting problem, since it can also be extended to nonlinear inpainting operators such as EED. This is exploited to achieve reconstructions with state-of-the-art quality. We also give an extensive literature survey on PDE-based image compression methods.
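
    For the homogeneous diffusion case the reconstruction step itself is easy to sketch: the stored pixels are kept fixed while the remaining pixels relax towards the discrete Laplace solution. The toy below uses plain Jacobi iterations with periodic boundaries instead of the FED-accelerated solver described above; the names and parameter values are illustrative only.

        import numpy as np

        def homogeneous_diffusion_inpainting(mask, values, n_iter=5000):
            # mask   : bool array, True where a pixel value was stored
            # values : float array, read only where mask is True
            u = np.where(mask, values, values[mask].mean())  # neutral initialisation
            for _ in range(n_iter):
                # Jacobi step: replace each pixel by the mean of its four neighbours
                avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                              np.roll(u, 1, 1) + np.roll(u, -1, 1))
                u = np.where(mask, values, avg)              # re-impose the stored data
            return u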

    Single Frame Image Super Resolution Using Learned Directionlets

    In this paper, a new directionally adaptive, learning-based, single-image super resolution method using a multiple-direction wavelet transform, called Directionlets, is presented. The method uses directionlets to effectively capture directional features and to extract edge information along different directions from a set of available high-resolution images. This information is used as the training set for super-resolving a low-resolution input image: the Directionlet coefficients at finer scales of its high-resolution image are learned locally from this training set, and the inverse Directionlet transform recovers the super-resolved high-resolution image. Simulation results show that the proposed approach outperforms standard interpolation techniques such as cubic spline interpolation as well as standard wavelet-based learning, both visually and in terms of mean squared error (MSE). The method also gives good results with aliased images.
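
    Directionlets have no standard library implementation, so the sketch below substitutes an ordinary separable wavelet (via PyWavelets) and a single global nearest neighbour for the local, direction-adaptive learning described above; train_pairs, the Haar wavelet and the periodization mode are all assumptions of this example, not the paper's method.

        import numpy as np
        import pywt  # PyWavelets; a separable wavelet stands in for directionlets here

        def example_based_wavelet_sr(lr_image, train_pairs, wavelet="haar"):
            # train_pairs: list of (lr_train, hr_train) pairs, lr_train shaped like lr_image
            # and hr_train twice its size, so the coarse band of hr_train matches lr_image.
            dists = [np.linalg.norm(lr_image - lr) for lr, _ in train_pairs]
            _, hr_best = train_pairs[int(np.argmin(dists))]

            # fine-scale detail coefficients "learned" from the most similar HR example
            _, details = pywt.dwt2(hr_best, wavelet, mode="periodization")

            # inverse transform: LR input as the coarse band plus the borrowed details
            # (assumes lr_image is already on the scale of the approximation band)
            return pywt.idwt2((lr_image, details), wavelet, mode="periodization")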

    Super-resolution of 3D Magnetic Resonance Images by Random Shifting and Convolutional Neural Networks

    Enhancing resolution is a permanent goal in magnetic resonance (MR) imaging, in order to keep improving diagnostic capability and registration methods. Super-resolution (SR) techniques are applied at the post-processing stage, and their use and development have progressively increased in recent years. In particular, example-based methods have mostly been proposed in recent state-of-the-art works. In this paper, a combination of a deep-learning SR system and a random shifting technique to improve the quality of MR images is proposed, implemented and tested. The model was compared to four competitors: cubic spline interpolation, non-local means upsampling, low-rank total variation, and a three-dimensional convolutional neural network trained with patches of high-resolution (HR) brain images (SRCNN3D). The newly proposed method showed better results in Peak Signal-to-Noise Ratio, Structural Similarity index, and Bhattacharyya coefficient, with computation times at the same level as those of these up-to-date methods. When applied to downsampled MR structural T1 images, the new method also yielded better qualitative results, both in the restored images and in the images of residuals.
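
    The random shifting component lends itself to a small sketch: super-resolve several slightly shifted copies of the volume, undo the shifts on the fine grid, and average. Here sr_model is a placeholder for any trained upsampling network (e.g. an SRCNN3D-like CNN); the offsets, interpolation order and plain averaging are assumptions of this example, not the paper's exact procedure.

        import numpy as np
        from scipy.ndimage import shift as nd_shift

        def sr_with_random_shifting(volume, sr_model, scale=2, n_shifts=8, seed=0):
            # sr_model: callable mapping a low-resolution volume to one upsampled by `scale`
            rng = np.random.default_rng(seed)
            acc = np.zeros([s * scale for s in volume.shape])
            for _ in range(n_shifts):
                offset = rng.uniform(-0.5, 0.5, size=volume.ndim)  # random sub-voxel shift
                shifted = nd_shift(volume, offset, order=3, mode="nearest")
                hr = sr_model(shifted)                             # super-resolve the copy
                acc += nd_shift(hr, -offset * scale, order=3, mode="nearest")
            return acc / n_shifts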
