
    Most Likely Separation of Intensity and Warping Effects in Image Registration

    This paper introduces a class of mixed-effects models for the joint modeling of spatially correlated intensity variation and warping variation in 2D images. Both sources of variation are modeled as random effects, resulting in a nonlinear mixed-effects model that enables simultaneous estimation of the template and the model parameters by maximization of the likelihood function. We propose a fitting algorithm that alternates between estimation of the variance parameters and image registration, which avoids the estimation bias in the template that arises when registration is treated as a preprocessing step. We apply the model to datasets of facial images and 2D brain magnetic resonance images to illustrate simultaneous estimation and prediction of intensity and warp effects.

    A Self Organization-Based Optical Flow Estimator with GPU Implementation

    This work describes a parallelizable optical flow estimator that uses a modified batch version of the Self-Organizing Map (SOM). This gradient-based estimator handles the ill-posedness of motion estimation via a novel combination of regression and a self-organization strategy. The aperture problem is explicitly modeled using an algebraic framework that partitions the motion estimates obtained from regression into two sets: one (set Hc) containing high-confidence estimates and another (set Hp) containing low-confidence estimates. The self-organization step uses a specifically designed training set (Q = Hc) and initial weight set (W = Hc ∪ Hp). It is shown that with this choice of training and initial weight sets, the interpolation of flow vectors is achieved primarily through the regularization property of the SOM. Moreover, the computationally involved step of finding the winner unit in the SOM simplifies to indexing into a 2D array, making the algorithm parallelizable and highly scalable. To preserve flow discontinuities at occlusion boundaries, we design an anisotropic neighborhood function for the SOM that uses a novel OFCE residual-based distance measure. A multi-resolution (pyramidal) approach is used to estimate large motions. Because the algorithm is scalable, with a sufficient number of computing cores (for example, on a GPU) the estimator can run in real time. Error metrics are computed against the ground-truth motion available in the Middlebury database.
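    A minimal sketch of the central simplification: when each SOM unit sits at a pixel and its weight is that pixel's flow vector, the winner lookup is plain 2D array indexing, and one batch update reduces to a neighborhood-weighted average of the high-confidence set Hc. The isotropic Gaussian neighborhood, grid size, and synthetic flow below are assumptions, not the paper's anisotropic design:

```python
import numpy as np

H, W, sigma = 16, 16, 2.0
flow = np.zeros((H, W, 2))
conf = np.zeros((H, W), dtype=bool)
# Fake a sparse high-confidence set Hc: a uniform translation of (1, -0.5).
conf[::4, ::4] = True
flow[conf] = (1.0, -0.5)

ys, xs = np.mgrid[0:H, 0:W]
cy, cx = np.nonzero(conf)
# One batch update: every unit (pixel) pulls toward the neighborhood-weighted
# mean of the confident flow samples; the "winner" of each sample is simply
# its own grid position, so no search is needed.
d2 = (ys[..., None] - cy) ** 2 + (xs[..., None] - cx) ** 2
h = np.exp(-d2 / (2 * sigma ** 2))            # Gaussian neighborhood function
flow = (h @ flow[conf]) / h.sum(axis=-1, keepdims=True)

print(flow[7, 7])  # interpolated flow at a low-confidence pixel
```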

    Bayesian Estimation of Turbulent Motion


    Recent Progress in Image Deblurring

    This paper comprehensively reviews recent developments in image deblurring, including non-blind/blind and spatially invariant/variant deblurring techniques. These techniques share the objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally derive an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must provide high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how they handle ill-posedness, a crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite considerable progress, image deblurring, especially in the blind case, remains limited by complex application conditions that make the blur kernel hard to obtain and often spatially variant. This review provides a holistic understanding of and deep insight into image deblurring. An analysis of the empirical evidence for representative methods, practical issues, and a discussion of promising future directions are also presented. Comment: 53 pages, 17 figures
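    As a concrete instance of the non-blind, spatially invariant case, a few lines of Wiener deconvolution show how a noise-to-signal term regularizes the otherwise ill-posed frequency-domain inverse. The kernel, test image, and noise level are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0   # latent sharp image
k = np.zeros((64, 64)); k[:3, :3] = 1.0 / 9.0       # known 3x3 box kernel
K = np.fft.fft2(k)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * K))
blurred += 0.01 * rng.standard_normal(img.shape)     # observation noise

# Wiener deconvolution: the nsr term keeps the inverse from exploding at
# frequencies where the kernel response |K| is small (the ill-posed part).
nsr = 1e-2
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * np.conj(K)
                                / (np.abs(K) ** 2 + nsr)))
print(np.abs(restored - img).mean())  # mean restoration error
```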

    Bayesian inference of models and hyper-parameters for robust optic-flow estimation

    Selecting optimal models and hyper-parameters is crucial for accurate optic-flow estimation. This paper provides a solution to this problem in a generic Bayesian framework. The method is based on a conditional model linking the image intensity function, the unknown velocity field, hyper-parameters, and the prior and likelihood motion models. Inference is performed at each of the three levels of this hierarchical model by maximization of marginalized a posteriori probability distribution functions. In particular, the first level is used to achieve motion estimation in a classical a posteriori scheme. By marginalizing out the motion variable, the second level enables inference of regularization coefficients and of the hyper-parameters of the non-Gaussian M-estimators commonly used in robust statistics. The last level of the hierarchy is used to select the likelihood and prior motion models conditioned on the image data. The method is evaluated on image sequences of fluid flows and on sequences from the Middlebury database. Experiments show that applying the proposed inference strategy yields better results than manually tuning the smoothing parameters or discontinuity-preserving cost functions of state-of-the-art methods.
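    The second-level idea, inferring a hyper-parameter with the unknowns integrated out, can be sketched on a toy ridge-regression problem, where the marginal likelihood p(y | alpha) is Gaussian and available in closed form. The polynomial data, known noise level, and grid search below are illustrative assumptions, not the paper's optic-flow model:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 40)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(x.size)
Phi = np.vander(x, 8)                     # degree-7 polynomial features
beta = 1.0 / 0.1 ** 2                     # noise precision, assumed known

def log_evidence(alpha):
    # With w ~ N(0, alpha^-1 I) marginalized out analytically,
    # p(y | alpha) = N(0, beta^-1 I + alpha^-1 Phi Phi^T).
    C = np.eye(len(y)) / beta + Phi @ Phi.T / alpha
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + y @ np.linalg.solve(C, y))

alphas = np.logspace(-4, 4, 81)
best = alphas[np.argmax([log_evidence(a) for a in alphas])]
print(best)  # evidence-optimal regularization weight, no hand tuning
```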

    Global optimization methods for full-reference and no-reference motion estimation with applications to atherosclerotic plaque motion and strain imaging

    Pixel-based motion estimation using optical flow models has been extensively researched during the last two decades. The driving force of this research field is the range of applications that can be developed from the motion estimates: image segmentation, compression, activity detection, object tracking, pattern recognition, and, more recently, non-invasive biomedical applications such as strain imaging all require the estimation of accurate velocity fields. The majority of the research in this area has focused on improving the theoretical and numerical framework of the optical flow models. This effort has resulted in increased method complexity with a growing number of motion parameters, and the standard approach of setting these parameters heuristically has become a major source of estimation error. This dissertation focuses on the development of reliable motion estimation based on global parameter optimization methods. Two strategies have been developed. In full-reference optimization, the assumption is that a video training set with realistic motion simulations (or ground truth) is available. Global optimization is used to calculate the best motion parameters, which can then be used on a separate set of testing videos. This approach helps establish bounds on what motion estimation methods can achieve. In no-reference optimization, the true displacement field is not available. By optimizing for the agreement between different motion estimation techniques, the no-reference approach closely approximates the best (optimal) motion parameters. The results obtained with the newly developed global no-reference optimization approach agree closely with those produced by the full-reference approach. Moreover, the no-reference approach calculates velocity fields of higher quality than published results for benchmark video sequences. Unreliable velocity estimates are identified using new confidence maps based on the disagreement between methods. Thus, the no-reference global optimization method can provide reliable motion estimation without the need for realistic simulations or access to ground truth. The methods developed in this dissertation are applied to ultrasound videos of carotid artery plaques. The velocity estimates are used to analyze plaque motion and to produce novel non-invasive elasticity maps that can help in the identification of vulnerable atherosclerotic plaques.
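    The no-reference idea can be sketched with two toy 1D shift estimators: lacking ground truth, the smoothing window of a gradient-based estimator is chosen to minimize its disagreement with an independent correlation-based estimator over a batch of signal pairs. Both estimators and the data are illustrative assumptions, not the dissertation's methods:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 4 * np.pi, 400)

def grad_shift(a, b, smooth):
    # Gradient-based (optical-flow-style) estimate of a small shift after
    # box-smoothing both signals; b(x) ~ a(x - shift), so a - b ~ shift * a'.
    k = np.ones(smooth) / smooth
    a, b = np.convolve(a, k, "same"), np.convolve(b, k, "same")
    g = np.gradient(a)
    return np.sum(g * (a - b)) / np.sum(g * g)

def corr_shift(a, b):
    # Independent, parameter-free correlation-based estimate.
    return max(range(-5, 6), key=lambda s: np.dot(np.roll(a, s), b))

pairs = []
for _ in range(20):
    a = np.sin(t + rng.uniform(0, np.pi)) + 0.05 * rng.standard_normal(t.size)
    b = np.roll(a, 2) + 0.05 * rng.standard_normal(t.size)  # true shift = 2
    pairs.append((a, b))

# No-reference criterion: mean disagreement between the two estimators.
windows = [1, 3, 5, 9, 15, 25, 51]
disagree = [np.mean([abs(grad_shift(a, b, w) - corr_shift(a, b))
                     for a, b in pairs]) for w in windows]
best = windows[int(np.argmin(disagree))]
print(best, min(disagree))
```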

    Bayesian reconstruction of the cosmological large-scale structure: methodology, inverse algorithms and numerical optimization

    We address the inverse problem of cosmic large-scale structure reconstruction from a Bayesian perspective. For a linear data model, a number of known and novel reconstruction schemes, which differ in terms of the underlying signal prior, data likelihood, and numerical inverse extra-regularization schemes, are derived and classified. The Bayesian methodology presented in this paper unifies and extends the following methods: Wiener filtering, Tikhonov regularization, ridge regression, maximum entropy, and inverse regularization techniques. The inverse techniques considered here are asymptotic regularization, the Jacobi, steepest descent, Newton-Raphson, and Landweber-Fridman methods, and both linear and non-linear Krylov methods based on Fletcher-Reeves, Polak-Ribiere, and Hestenes-Stiefel conjugate gradients. The structures of the current highest-performing algorithms are presented in an operator formulation, which permits one to exploit the power of fast Fourier transforms. Using such an implementation of the generalized Wiener filter in the novel ARGO software package, the different numerical schemes are benchmarked on 1-, 2-, and 3-dimensional problems including structured white and Poissonian noise, data windowing, and blurring effects. A novel numerical Krylov scheme is shown to be superior in terms of performance and fidelity. These fast inverse methods will ultimately enable the application of sampling techniques to explore complex joint posterior distributions. We outline how the space of the dark-matter density field, the peculiar velocity field, and the power spectrum can be investigated jointly by a Gibbs-sampling process. Such a method can be applied to correct for the redshift distortions of the observed galaxies and for time-reversal reconstructions of the initial density field. Comment: 40 pages, 11 figures
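    The operator scheme can be sketched in 1D: the Wiener-filter system (S^-1 + R^T N^-1 R) s = R^T N^-1 d is solved with plain conjugate gradients, applying S^-1 through FFTs so that no matrix is ever formed. The power spectrum, mask, and noise level are toy assumptions, not the ARGO implementation:

```python
import numpy as np

rng = np.random.default_rng(4)
n, noise_var = 128, 0.1 ** 2
k = np.fft.fftfreq(n) * n
Pk = 1.0 / (1.0 + np.abs(k)) ** 2             # assumed signal power spectrum
mask = np.ones(n); mask[40:60] = 0.0           # data window R (unobserved gap)

signal = np.real(np.fft.ifft(np.fft.fft(rng.standard_normal(n)) * np.sqrt(Pk)))
signal /= signal.std()
data = mask * (signal + 0.1 * rng.standard_normal(n))

def apply_A(s):
    # Left-hand-side operator: S^-1 applied in Fourier space, plus the
    # (diagonal) inverse-noise term restricted to observed pixels.
    s_prior = np.real(np.fft.ifft(np.fft.fft(s) / Pk))
    return s_prior + mask * s / noise_var

b = mask * data / noise_var                    # right-hand side R^T N^-1 d
s = np.zeros(n); r = b - apply_A(s); p = r.copy(); rs = r @ r
for _ in range(1000):                          # plain conjugate gradients
    Ap = apply_A(p)
    step = rs / (p @ Ap)
    s += step * p
    r -= step * Ap
    rs_new = r @ r
    if np.sqrt(rs_new) < 1e-9 * np.linalg.norm(b):
        break
    p = r + (rs_new / rs) * p
    rs = rs_new

print(np.linalg.norm(apply_A(s) - b))          # residual of the linear solve
```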

    SALT2: using distant supernovae to improve the use of Type Ia supernovae as distance indicators

    We present an empirical model of the spectro-photometric evolution of Type Ia supernovae with time. The model is built using a large data set including light curves and spectra of both nearby and distant supernovae, the latter observed by the SNLS collaboration. We derive the average spectral sequence of Type Ia supernovae and their main variability components, including a color variation law. The model allows us to measure distance moduli in the spectral range 2500-8000 A with calculable uncertainties, including those arising from the variability of spectral features. Thanks to the use of high-redshift SNe to model the rest-frame UV spectral energy distribution, we are able to derive improved distance estimates for SNe Ia in the redshift range 0.8<z<1.1. The model can also be used to improve spectroscopic identification algorithms and to derive photometric redshifts of distant Type Ia supernovae. Comment: Accepted for publication in A&A. Data and source code available at: http://supernovae.in2p3.fr/~guy/salt
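    For reference, the SALT2 model flux is commonly written as x0 * (M0(p, lambda) + x1 * M1(p, lambda)) * exp(c * CL(lambda)); the sketch below evaluates that functional form with made-up Gaussian surfaces standing in for the trained components. Only the algebraic structure follows SALT2; every surface and number here is an assumption:

```python
import numpy as np

phase = np.linspace(-15, 40, 56)             # days from B-band maximum
wave = np.linspace(2500, 8000, 100)          # angstroms, the fitted range
P, L = np.meshgrid(phase, wave, indexing="ij")

# Made-up stand-ins for the trained mean sequence M0, first variability
# component M1, and color law CL (all hypothetical, for shape only).
M0 = np.exp(-(P / 20.0) ** 2) * np.exp(-((L - 4500.0) / 2000.0) ** 2)
M1 = 0.1 * (P / 20.0) * M0                   # toy stretch-like component
CL = (L - 4300.0) / 3700.0                   # toy color law

def salt2_like_flux(x0, x1, c):
    # Evaluate the SALT2-style flux surface for one supernova's parameters.
    return x0 * (M0 + x1 * M1) * np.exp(c * CL)

f = salt2_like_flux(x0=1.0, x1=0.5, c=-0.1)
print(f.shape)  # one rest-frame flux value per (phase, wavelength) pair
```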