
    Line search multilevel optimization as computational methods for dense optical flow

    We evaluate the performance of different optimization techniques developed in the context of optical flow computation with different variational models. In particular, building on truncated Newton (TN) methods, which have been an effective approach for large-scale unconstrained optimization, we develop efficient multilevel schemes for computing the optical flow. More precisely, we compare the performance of a standard unidirectional multilevel algorithm, called multiresolution optimization (MR/OPT), to a bidirectional multilevel algorithm, called full multigrid optimization (FMG/OPT). The FMG/OPT algorithm treats the coarse-grid correction as an optimization search direction and eventually scales it using a line search. Experimental results on different image sequences using four models of optical flow computation show that the FMG/OPT algorithm outperforms both the TN and MR/OPT algorithms in terms of computational work and quality of the optical flow estimation.
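    The core FMG/OPT idea described above — prolonging a coarse-grid correction to the fine grid and treating it as a line-searched descent direction — can be sketched as follows. This is a minimal illustration on a stand-in quadratic energy, not the authors' implementation; the `prolong` operator, the grids, and the objective are all simplified assumptions.

    ```python
    import numpy as np

    def prolong(u_coarse):
        """Linear interpolation from a coarse grid to a grid twice as fine."""
        n = len(u_coarse)
        u_fine = np.zeros(2 * n - 1)
        u_fine[::2] = u_coarse
        u_fine[1::2] = 0.5 * (u_coarse[:-1] + u_coarse[1:])
        return u_fine

    def backtracking_line_search(f, grad, u, d, alpha=1.0, rho=0.5, c=1e-4):
        """Scale the (prolonged) coarse-grid correction d so it decreases f."""
        f0 = f(u)
        slope = np.dot(grad(u), d)  # directional derivative along d
        while f(u + alpha * d) > f0 + c * alpha * slope:
            alpha *= rho
        return alpha

    # Stand-in smooth energy (the real case is a variational optical-flow energy).
    f = lambda u: 0.5 * np.sum((u - 1.0) ** 2)
    grad = lambda u: u - 1.0

    u_fine = np.zeros(5)                   # current fine-grid iterate
    d_coarse = np.array([1.0, 1.0, 1.0])   # correction computed on the coarse grid
    d_fine = prolong(d_coarse)             # treat it as a fine-grid search direction
    alpha = backtracking_line_search(f, grad, u_fine, d_fine)
    u_fine = u_fine + alpha * d_fine       # accepted, scaled correction
    ```

    The line search is what distinguishes this from a plain multigrid correction step: a coarse correction that overshoots on the fine grid is damped rather than applied blindly.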

    Graph Spectral Image Processing

    The recent advent of graph signal processing (GSP) has spurred intensive studies of signals that live naturally on irregular data kernels described by graphs (e.g., social networks, wireless sensor networks). Though a digital image contains pixels that reside on a regularly sampled 2D grid, if one can design an appropriate underlying graph connecting pixels with weights that reflect the image structure, then one can interpret the image (or image patch) as a signal on a graph, and apply GSP tools for processing and analysis of the signal in the graph spectral domain. In this article, we overview recent graph spectral techniques in GSP specifically for image and video processing. The topics covered include image compression, image restoration, image filtering and image segmentation.
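    The pipeline sketched in the abstract — build a pixel graph whose weights reflect image structure, then filter the signal in the graph spectral domain — can be illustrated on a toy four-pixel signal. The Gaussian edge-weight kernel and the ideal low-pass filter below are illustrative choices, not taken from the article.

    ```python
    import numpy as np

    # Four pixels on a path graph; edge weights reflect intensity similarity,
    # so the weak edge across the jump nearly disconnects the two halves.
    x = np.array([0.0, 0.1, 0.9, 1.0])           # pixel intensities (graph signal)
    w = np.exp(-np.abs(np.diff(x)) ** 2 / 0.1)   # similar pixels -> strong edges

    # Combinatorial graph Laplacian L = D - W.
    W = np.zeros((4, 4))
    for i, wi in enumerate(w):
        W[i, i + 1] = W[i + 1, i] = wi
    L = np.diag(W.sum(axis=1)) - W

    # Graph Fourier transform: eigenvectors of L play the role of frequencies.
    lam, U = np.linalg.eigh(L)        # eigenvalues ascending = low to high freq.
    x_hat = U.T @ x                   # forward GFT
    h = (lam < 1.0).astype(float)     # ideal low-pass graph filter
    x_smooth = U @ (h * x_hat)        # filter in the graph spectral domain
    ```

    Because the graph encodes the image structure, the low-pass filter smooths within each flat region while preserving the jump between them — the behaviour that makes graph filters attractive for edge-aware image restoration.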

    Fast Variance Prediction for Iteratively Reconstructed CT with Applications to Tube Current Modulation.

    X-ray computed tomography (CT) is an important, widely-used medical imaging modality. A primary concern with the increasing use of CT is the ionizing radiation dose incurred by the patient. Statistical reconstruction methods are able to improve noise and resolution in CT images compared to traditional filtered backprojection (FBP) based reconstruction methods, which allows for a reduced radiation dose. Compared to FBP-based methods, statistical reconstruction requires greater computational time and the statistical properties of the resulting images are more difficult to analyze. Statistical reconstruction has parameters that must be correctly chosen to produce high-quality images. The variance of the reconstructed image has been used to choose these parameters, but this has previously been very time-consuming to compute. In this work, we use approximations to the local frequency response (LFR) of CT projection and backprojection to predict the variance of statistically reconstructed CT images. Compared to the empirical variance derived from multiple simulated reconstruction realizations, our method is as accurate as the currently available methods of variance prediction while being computable for thousands of voxels per second, faster than these previous methods by a factor of over ten thousand. We also compare our method to empirical variance maps produced from an ensemble of reconstructions from real sinogram data. The LFR can also be used to predict the power spectrum of the noise and the local frequency response of the reconstruction. Tube current modulation (TCM), the redistribution of X-ray dose in CT between different views of a patient, has been demonstrated to reduce dose when the modulation is well-designed. TCM methods currently in use were designed assuming FBP-based image reconstruction. We use our LFR approximation to derive fast methods for predicting the SNR of linear observers of a statistically reconstructed CT image. Using these fast observability and variance prediction methods, we derive TCM methods specific to statistical reconstruction that, in theory, potentially reduce radiation dose by 20% compared to FBP-specific TCM methods.
    PhD dissertation, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/111463/1/smschm_1.pd
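    The principle behind the speed-up — predicting output variance from a frequency response instead of from an ensemble of noisy realizations — can be illustrated on a 1-D toy filter. This is not the dissertation's LFR model for CT projection/backprojection; the filter `h` and the white-noise level are stand-in assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 256
    h = np.array([0.25, 0.5, 0.25])  # stand-in for a reconstruction's local response
    sigma = 1.0                      # white measurement-noise standard deviation

    # Frequency-domain prediction: var(output) = (sigma^2 / n) * sum(|H(w)|^2),
    # which by Parseval equals sigma^2 * sum(h^2) -- no noise realizations needed.
    H = np.fft.fft(h, n)
    var_pred = sigma**2 * np.sum(np.abs(H) ** 2) / n

    # Empirical variance from many realizations: the slow Monte Carlo route
    # that frequency-response-based prediction replaces.
    samples = [np.convolve(sigma * rng.standard_normal(n), h, mode="same")[n // 2]
               for _ in range(4000)]
    var_emp = np.var(samples)
    ```

    The prediction costs one FFT per local response, while the empirical route costs thousands of full filtering passes — the same trade-off, at vastly larger scale, that motivates the LFR approach for iterative CT reconstruction.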

    On Semidefinite Relaxations for Matrix-Weighted State-Estimation Problems in Robotics

    In recent years, there has been remarkable progress in the development of so-called certifiable perception methods, which leverage semidefinite, convex relaxations to find global optima of perception problems in robotics. However, many of these relaxations rely on simplifying assumptions that facilitate the problem formulation, such as an isotropic measurement noise distribution. In this paper, we explore the tightness of the semidefinite relaxations of matrix-weighted (anisotropic) state-estimation problems and reveal the limitations lurking therein: matrix-weighted factors can cause convex relaxations to lose tightness. In particular, we show that the semidefinite relaxations of localization problems with matrix weights may be tight only for low noise levels. We empirically explore the factors that contribute to this loss of tightness and demonstrate that redundant constraints can be used to regain tightness, albeit at the expense of real-time performance. As a second technical contribution of this paper, we show that the state-of-the-art relaxation of scalar-weighted SLAM cannot be used when matrix weights are considered. We provide an alternate formulation and show that its SDP relaxation is not tight (even for very low noise levels) unless specific redundant constraints are used. We demonstrate the tightness of our formulations on both simulated and real-world data.
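    A standard way to check the tightness discussed above is to inspect the rank of the SDP solution: the relaxation is tight exactly when the solution matrix is (numerically) rank one, in which case the global minimizer can be read off its leading eigenpair. A minimal sketch, with hand-constructed solution matrices standing in for the output of an actual SDP solver:

    ```python
    import numpy as np

    def is_tight(Z, tol=1e-6):
        """Rank-one test for an SDP solution Z: tight relaxations return a
        matrix whose eigenvalues are all ~0 except the largest, and the
        estimate x is recovered from the leading eigenpair (up to sign)."""
        lam, V = np.linalg.eigh(Z)             # eigenvalues in ascending order
        rank_one = lam[-2] / lam[-1] < tol     # all but the top eigenvalue ~ 0
        x = np.sqrt(lam[-1]) * V[:, -1]
        return rank_one, x

    # A rank-one solution (tight case) vs. a perturbed, higher-rank one
    # (the kind of solution matrix-weighted factors can produce at high noise).
    x_true = np.array([1.0, 2.0, -1.0])
    tight, x = is_tight(np.outer(x_true, x_true))
    loose, _ = is_tight(np.outer(x_true, x_true) + 0.1 * np.eye(3))
    ```

    The eigenvalue-ratio test is also how the loss of tightness shows up in practice: as noise grows, mass leaks into the second eigenvalue and the recovered `x` stops being a certified global optimum.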

    Confidence driven TGV fusion

    We introduce a novel model for spatially varying variational data fusion, driven by point-wise confidence values. The proposed model allows for the joint estimation of the data and the confidence values based on the spatial coherence of the data. We discuss the main properties of the introduced model as well as suitable algorithms for estimating the solution of the corresponding biconvex minimization problem and their convergence. The performance of the proposed model is evaluated considering the problem of depth image fusion by using both synthetic and real data from publicly available datasets.
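    The alternating structure typical of such biconvex problems — update the fused estimate with confidences fixed, then the point-wise confidences with the estimate fixed — can be sketched as below. The quadratic data step and Gaussian confidence rule are illustrative assumptions, not the paper's model, which additionally includes the TGV spatial regularizer omitted here.

    ```python
    import numpy as np

    def fuse(depth_maps, n_iters=10, sigma=0.5):
        """Alternating-minimization sketch of confidence-driven fusion:
        each half-step solves one convex subproblem of a biconvex objective."""
        d = np.stack(depth_maps)                    # shape (n_maps, n_pixels)
        c = np.ones_like(d)                         # point-wise confidences
        for _ in range(n_iters):
            # Data step: confidence-weighted average of the inputs.
            u = (c * d).sum(0) / (c.sum(0) + 1e-12)
            # Confidence step: down-weight inputs that disagree with the fusion.
            r = np.abs(d - u)
            c = np.exp(-((r / sigma) ** 2))
        return u, c

    # Two consistent depth maps and one with a gross outlier at pixel 1.
    maps = [np.array([1.00, 1.00, 1.0]),
            np.array([1.02, 0.98, 1.0]),
            np.array([1.00, 5.00, 1.0])]
    u, c = fuse(maps)   # the outlier's confidence collapses toward zero
    ```

    Because the confidences are re-estimated from the data residuals, the outlier is suppressed without any manually supplied weights — the joint-estimation property the abstract highlights.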

    Using the Sharp Operator for edge detection and nonlinear diffusion

    In this paper we investigate the use of the sharp function known from functional analysis in image processing. The sharp function gives a measure of the variations of a function and can be used as an edge detector. We extend the classical notion of the sharp function for measuring anisotropic behaviour and give a fast anisotropic edge detection variant inspired by the sharp function. We show that these edge detection results are useful to steer isotropic and anisotropic nonlinear diffusion filters for image enhancement.
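    A discrete analogue of the sharp function — at each pixel, the largest mean oscillation of the signal over local windows — already acts as a simple edge detector. The window radii and the 1-D setting below are illustrative choices, not the paper's anisotropic variant.

    ```python
    import numpy as np

    def sharp_operator(f, radii=(1, 2, 3)):
        """Discrete sketch of the sharp maximal function: for each sample, the
        largest mean absolute deviation from the local mean over centred
        windows of the given radii."""
        n = len(f)
        out = np.zeros(n)
        for i in range(n):
            for r in radii:
                lo, hi = max(0, i - r), min(n, i + r + 1)
                win = f[lo:hi]
                osc = np.mean(np.abs(win - win.mean()))  # mean oscillation
                out[i] = max(out[i], osc)
        return out

    # A step edge: the sharp operator responds near the jump and vanishes
    # on flat regions, which is exactly the behaviour an edge detector needs.
    f = np.array([0.0] * 5 + [1.0] * 5)
    edge = sharp_operator(f)
    ```

    Feeding such a response into the diffusivity of a nonlinear diffusion filter slows smoothing across detected edges while allowing it in flat regions, which is the steering role described in the abstract.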