
    Large deviations for stochastic flows of diffeomorphisms

    A large deviation principle is established for a general class of stochastic flows in the small noise limit. This result is then applied to a Bayesian formulation of an image matching problem, and an approximate maximum likelihood property is shown for the solution of an optimization problem involving the large deviations rate function. Comment: Published at http://dx.doi.org/10.3150/09-BEJ203 in the Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
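    The "small noise limit" claim can be made concrete with the generic form of a large deviation principle. The display below is the standard heuristic statement with rate function I and noise level ε; it is not the paper's specific rate function:

```latex
% Standard small-noise large deviation heuristic: the probability that
% the flow X^eps lands in a set A decays exponentially, with the
% exponent governed by the rate function I.
\mathbb{P}\bigl(X^{\varepsilon} \in A\bigr)
  \approx \exp\!\Bigl(-\tfrac{1}{\varepsilon}\,\inf_{x \in A} I(x)\Bigr),
  \qquad \varepsilon \to 0 .
```

    Under this heuristic, the most probable flow consistent with the observed data is the one minimizing I, which is why minimizing the rate function in the Bayesian image-matching formulation behaves like approximate maximum likelihood estimation.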

    Dense Depth Maps from Epipolar Images

    Recovering three-dimensional information from two-dimensional images is the fundamental goal of stereo techniques. The problem of recovering depth (three-dimensional information) from a set of images is essentially the correspondence problem: given a point in one image, find the corresponding point in each of the other images. Finding potential correspondences usually involves matching some image property. If the images are from nearby positions, they will vary only slightly, simplifying the matching process. Once a correspondence is known, solving for the depth is simply a matter of geometry. Real images are composed of noisy, discrete samples, so the calculated depth will contain error. This error is a function of the baseline, the distance between the images: longer baselines result in more precise depths. This leads to a conflict: short baselines simplify the matching process but produce imprecise results; long baselines produce precise results but complicate the matching process. In this paper, we present a method for generating dense depth maps from large sets (thousands) of images taken from arbitrary positions. Long baseline images improve the accuracy; short baseline images and the sheer number of images greatly simplify the correspondence problem, removing nearly all ambiguity. The algorithm presented is completely local and, for each pixel, generates a distribution of evidence over depth and surface normal. In many cases, the distribution contains a clear and distinct global maximum: the location of this peak determines the depth, and its shape can be used to estimate the error. The distribution can also be used to perform a maximum likelihood fit of models directly to the images. We anticipate that the ability to perform maximum likelihood estimation from purely local calculations will prove extremely useful in constructing three-dimensional models from large sets of images.
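    The local, per-pixel evidence computation described above can be sketched as a plane-sweep over depth hypotheses. The following is a minimal illustration assuming rectified cameras and brightness constancy; the function name, the Gaussian photometric score, and the omission of surface normals are simplifications, not the paper's method:

```python
import numpy as np

def depth_evidence(ref, others, baselines, focal, depths):
    """Accumulate per-pixel matching evidence over candidate depths.

    ref       : (H, W) grayscale reference image, float
    others    : list of (H, W) images at horizontal offsets from ref
    baselines : offset of each image (same units as depth)
    focal     : focal length in pixels
    depths    : 1-D array of candidate depths

    Returns an (H, W, len(depths)) evidence volume; the argmax over
    the last axis gives a depth estimate, and the sharpness of the
    peak indicates the expected error.
    """
    H, W = ref.shape
    cols = np.arange(W)
    evidence = np.zeros((H, W, len(depths)))
    for k, z in enumerate(depths):
        for img, b in zip(others, baselines):
            # Under rectified geometry, a point at depth z shifts by
            # disparity d = f * b / z between the two views.
            d = focal * b / z
            src = np.clip(np.round(cols - d).astype(int), 0, W - 1)
            diff = ref - img[:, src]
            evidence[:, :, k] += np.exp(-diff ** 2)  # photometric agreement
    return evidence

# Usage: the depth map is the per-pixel argmax of the evidence volume.
# depths = np.linspace(1.0, 10.0, 64)
# E = depth_evidence(ref, imgs, baselines, f, depths)
# depth_map = depths[E.argmax(axis=2)]
```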

    An In-scene parameter estimation method for quantitative analysis

    This thesis describes the development of a general in-scene parameter estimation method for quantitative image evaluation. The Maximum Likelihood Ratio (MLR) estimator uses samples from a selected population of known objects in the image to estimate one or more unknown parameters. The estimate is based on statistically matching the population sample residuals to their simulated distribution, with the match characterized by the likelihood ratio function. To compute the likelihood ratio, stochastic simulation is employed to estimate the density of the residuals. The likelihood ratio of the actual residuals against this simulated density forms a surface that is then numerically maximized to find the parameter estimate. This in-scene method may be applied to parameter estimation in many types of aircraft and satellite images. The MLR estimation method is applied to an aerial, thermal infrared heat-loss study to estimate the bias error in the calculation of heat flow. The estimation is shown to substantially improve the prediction of rooftop heat flow for a set of validation structures.
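    The estimator's structure is easy to see in a grid-search sketch: for each candidate parameter, simulate the residual population, estimate its density, and score the actual residuals against that density. The snippet below uses a Gaussian kernel density estimate; all names are illustrative, and the thesis's physical simulation model is left abstract:

```python
import numpy as np
from scipy.stats import gaussian_kde

def mlr_estimate(observed_residuals, simulate_residuals, thetas, n_sim=5000):
    """Sketch of a Maximum Likelihood Ratio-style estimator.

    observed_residuals : residuals of the known in-scene objects
    simulate_residuals : callable(theta, n) -> n residuals simulated under theta
    thetas             : 1-D array of candidate parameter values
    """
    log_lik = np.empty(len(thetas))
    for i, theta in enumerate(thetas):
        sims = simulate_residuals(theta, n_sim)
        density = gaussian_kde(sims)  # simulated residual density
        # Likelihood surface: observed residuals scored against the
        # simulated density (small constant guards against log(0)).
        log_lik[i] = np.sum(np.log(density(observed_residuals) + 1e-300))
    return thetas[np.argmax(log_lik)]  # numerically maximized
```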

    Improved Techniques for Maximum Likelihood Estimation for Diffusion ODEs

    Diffusion models have exhibited excellent performance in various domains. The probability flow ordinary differential equation (ODE) of diffusion models (i.e., the diffusion ODE) is a particular case of continuous normalizing flows (CNFs), which enables deterministic inference and exact likelihood evaluation. However, the likelihood estimates obtained with diffusion ODEs still fall far short of the state-of-the-art likelihood-based generative models. In this work, we propose several improved techniques for maximum likelihood estimation of diffusion ODEs, covering both training and evaluation. For training, we propose velocity parameterization and explore variance reduction techniques for faster convergence. We also derive an error-bounded high-order flow matching objective for finetuning, which improves the ODE likelihood and smooths its trajectory. For evaluation, we propose a novel training-free truncated-normal dequantization to fill the training-evaluation gap that commonly exists in diffusion ODEs. Building upon these techniques, we achieve state-of-the-art likelihood estimation results on image datasets (2.56 on CIFAR-10, 3.43/3.69 on ImageNet-32) without variational dequantization or data augmentation. Comment: Accepted at ICML 2023.
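    Of these techniques, the truncated-normal dequantization is the easiest to illustrate. The sketch below replaces the usual uniform dequantization noise with zero-mean normal noise truncated to the quantization bin; the sigma and scaling choices are illustrative guesses, not the paper's values:

```python
import numpy as np
from scipy.stats import truncnorm

def truncated_normal_dequantize(x_uint8, sigma=0.002, half_bin=0.5 / 255.0):
    """Dequantize 8-bit pixels with truncated-normal instead of
    uniform noise, so samples concentrate near each bin's center.
    sigma and half_bin are illustrative, not the paper's choices."""
    x = x_uint8.astype(np.float64) / 255.0          # scale to [0, 1]
    a, b = -half_bin / sigma, half_bin / sigma      # standardized bin edges
    noise = truncnorm.rvs(a, b, scale=sigma, size=x.shape)
    return x + noise
```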

    Kinematics-based tracking of cells and fluorescent beads using feature vectors

    Tracking cells or fluorescent beads in images of deforming or developing biological systems is a central challenge in biomechanics. In the former case, the objective is often to find the same cell in a tissue or on a Petri dish that has been imaged before and after time in an incubator. In the latter case, the objective is often to estimate mechanical tractions based upon the displacement of fluorescent beads embedded in a defined extracellular matrix. A great number of techniques exist for this purpose, and all face challenges in matching cells and beads from one image to the next and in identifying mismatches. Here, we present a simple, fast, and effective technique for matching cells and beads using “feature vectors” that connect a cell or bead to a set of its nearest neighbors. A generalized feature vector deformation gradient tensor is defined that enables the use of standard kinematics to estimate the maximum likelihood matches between cells or beads in image pairs. We describe the strengths and limitations of the approach and present examples of its application.
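    The matching idea can be sketched in a few lines: characterize each cell or bead by the geometry of its nearest neighbors and pair points across frames whose neighborhoods agree. The sorted-distance signature below is a crude, rotation-invariant stand-in for the paper's feature vectors; all names are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def neighbor_signature(points, k=8):
    """Sorted distances from each point to its k nearest neighbors:
    a simple summary of local neighborhood geometry."""
    dists, _ = cKDTree(points).query(points, k=k + 1)
    return dists[:, 1:]  # drop column 0, the point itself

def match_points(pts_a, pts_b, k=8):
    """Match each point in frame A to the frame-B point with the most
    similar neighbor signature; under small deformations the local
    neighborhood geometry is nearly preserved."""
    sig_a, sig_b = neighbor_signature(pts_a, k), neighbor_signature(pts_b, k)
    cost = np.linalg.norm(sig_a[:, None, :] - sig_b[None, :, :], axis=2)
    return cost.argmin(axis=1)  # for each pts_a, an index into pts_b
```

    Once candidate matches are accepted, a local deformation gradient can be fit to the matched neighbor offsets by least squares, which is where the paper's kinematic machinery and its maximum likelihood criterion enter.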

    New History Matching Methodology for Two Phase Reservoir Using Expectation-Maximization (EM) Algorithm

    The Expectation-Maximization (EM) Algorithm is a well-known method for maximum likelihood estimation and can be used to infer missing values in a data set. The EM Algorithm has been used extensively in Electrical and Electronics Engineering as well as in the biometrics industry for image processing, but it has seen very little use in the Oil and Gas industry, especially for history matching. History matching is a non-unique matching of the oil rate, water rate, gas rate, and bottom hole pressure data of a producing well (known as a Producer), as well as the bottom hole pressure and liquid injection of an injecting well (known as an Injector), achieved by adjusting reservoir parameters such as permeability, porosity, Corey exponents, compressibility factor, and other pertinent reservoir parameters. The EM Algorithm is a statistical method that guarantees convergence and is particularly useful when the likelihood function is a member of the exponential family. On the other hand, the EM Algorithm can be slow to converge and, depending on the starting values, may converge only to a local optimum of the observed-data log likelihood. In this research, our objective is to develop an algorithm that can successfully match historical production data given sparse field data. Our approach is to update the permeability multiplier, thereby updating the permeability of each unobserved grid cell that contributes to production at one or more producing wells. The EM Algorithm is utilized to optimize the permeability multiplier of each contributing unobserved grid cell.
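    For readers new to EM, the E-step/M-step structure the abstract relies on is easiest to see on a toy model. The two-component Gaussian mixture below is purely illustrative; the history-matching application replaces this likelihood with one linking permeability multipliers to simulated production data:

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=100):
    """Two-component Gaussian mixture fit by EM (illustration only)."""
    mu = np.percentile(x, [25.0, 75.0])             # rough starting values
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])                       # mixing weights
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per sample.
        pdf = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
              / (sigma * np.sqrt(2.0 * np.pi))
        r = pi * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the weighted samples.
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    return mu, sigma, pi
```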

    Efficient Exact Inference in Planar Ising Models

    We give polynomial-time algorithms for the exact computation of lowest-energy (ground) states, worst margin violators, log partition functions, and marginal edge probabilities in certain binary undirected graphical models. Our approach provides an interesting alternative to the well-known graph cut paradigm in that it does not impose any submodularity constraints; instead, we require planarity to establish a correspondence with perfect matchings (dimer coverings) in an expanded dual graph. We implement a unified framework while delegating complex but well-understood subproblems (planar embedding, maximum-weight perfect matching) to established algorithms for which efficient implementations are freely available. Unlike graph cut methods, we can perform penalized maximum-likelihood as well as maximum-margin parameter estimation in the associated conditional random fields (CRFs), and employ marginal posterior probabilities as well as maximum a posteriori (MAP) states for prediction. Maximum-margin CRF parameter estimation on image denoising and segmentation problems shows our approach to be efficient and effective. A C++ implementation is available from http://nic.schraudolph.org/isinf/. Comment: Fixed a number of bugs in v1; added 10 pages of additional figures, explanations, proofs, and experiments.
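    The quantities listed above (log partition function, marginal edge probabilities) can be computed by brute force on tiny models, which is handy for testing any implementation of the planar matching construction. The sketch below enumerates all spin configurations; it is exponential in the number of spins and serves only as a correctness baseline, not as the paper's polynomial-time algorithm:

```python
import numpy as np
from itertools import product

def ising_brute_force(edges, weights, n):
    """Exact log partition function and edge marginals by enumeration.

    edges   : list of (i, j) index pairs
    weights : coupling theta_ij per edge (energy is -theta_ij * s_i * s_j)
    n       : number of +/-1 spins (keep small: cost is 2**n)
    """
    Z = 0.0
    edge_agree = np.zeros(len(edges))
    for spins in product([-1, 1], repeat=n):
        w = np.exp(sum(th * spins[i] * spins[j]
                       for (i, j), th in zip(edges, weights)))
        Z += w
        for k, (i, j) in enumerate(edges):
            if spins[i] == spins[j]:
                edge_agree[k] += w
    return np.log(Z), edge_agree / Z  # log Z and P(s_i == s_j) per edge

# Usage on a 3-spin triangle:
# logZ, marg = ising_brute_force([(0, 1), (1, 2), (0, 2)], [0.5, -0.3, 0.2], 3)
```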