
    Non-linear optimization for robust estimation of vanishing points

    A new method for robust estimation of vanishing points is introduced in this paper. It is based on the MSAC (M-estimator Sample and Consensus) algorithm and on the definition of a new distance function between a vanishing point and a given orientation. Apart from its robustness, our method is a flexible and efficient solution, since it works with different types of image data, and its iterative nature makes better use of the available information to obtain more accurate estimates. The key contribution of the work is the proposed distance function, which makes the error independent of the position of a hypothesized vanishing point and therefore allows working with points at infinity. In addition, the estimation is guided by a non-linear optimization process that enhances the accuracy of the system. The robustness of our proposal, compared with other methods in the literature, is shown with a set of tests carried out on both synthetic data and real images. The results show that our approach obtains excellent levels of accuracy and is robust against the presence of large amounts of outliers, outperforming other state-of-the-art approaches.
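
    The abstract does not give the exact form of the distance function, but one way to obtain an orientation-based error that stays bounded regardless of where the hypothesized vanishing point lies is sketched below. The function name, the homogeneous-coordinate handling of points at infinity and the segment representation are assumptions of this sketch, not the paper's implementation; inside MSAC, such per-segment errors would be truncated at an inlier threshold when scoring each hypothesis.

```python
import numpy as np

def vp_orientation_error(vp_h, midpoint, seg_dir):
    """Angular error between an observed segment direction and the direction
    from the segment midpoint towards a hypothesized vanishing point.

    vp_h     : (3,) homogeneous vanishing point; vp_h[2] == 0 means a point at infinity
    midpoint : (2,) segment midpoint in image coordinates
    seg_dir  : (2,) unit direction of the observed segment
    """
    x, y, w = vp_h
    if abs(w) < 1e-12:
        # Point at infinity: the direction towards it is its own (x, y) part.
        to_vp = np.array([x, y], dtype=float)
    else:
        to_vp = np.array([x / w, y / w]) - np.asarray(midpoint, dtype=float)
    to_vp /= np.linalg.norm(to_vp)
    # Angle between the two directions, folded to [0, pi/2] since lines are unoriented.
    c = abs(float(np.dot(to_vp, np.asarray(seg_dir, dtype=float))))
    return float(np.arccos(np.clip(c, 0.0, 1.0)))
```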

    Rectification from Radially-Distorted Scales

    This paper introduces the first minimal solvers that jointly estimate lens distortion and affine rectification from repetitions of rigidly transformed coplanar local features. The proposed solvers incorporate lens distortion into the camera model and extend accurate rectification to wide-angle images that contain nearly any type of coplanar repeated content. We demonstrate a principled approach to generating stable minimal solvers by the Gröbner basis method, which is accomplished by sampling feasible monomial bases to maximize numerical stability. Synthetic and real-image experiments confirm that the solvers give accurate rectifications from noisy measurements when used in a RANSAC-based estimator. The proposed solvers demonstrate superior robustness to noise compared to the state of the art. The solvers work on scenes without straight lines and, in general, relax the strong assumptions on scene content made by the state of the art. Accurate rectifications on imagery taken with narrow focal length to near fish-eye lenses demonstrate the wide applicability of the proposed method. The method is fully automated, and the code is publicly available at https://github.com/prittjam/repeats
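
    As a rough illustration of how such minimal solvers are typically consumed, the sketch below wires a hypothetical `minimal_solver` and `residual_fn` into a plain RANSAC loop; both names, the sample size and the fixed threshold are placeholders for illustration, and the actual solvers and scoring live in the linked repository.

```python
import random

def ransac_rectify(features, minimal_solver, residual_fn, sample_size,
                   threshold, n_iters=1000):
    """Generic RANSAC loop of the kind a minimal solver plugs into.

    minimal_solver : maps a minimal sample of coplanar repeated features to a list of
                     candidate rectification + distortion models (minimal problems
                     generally have several roots)
    residual_fn    : scores one feature against a candidate model
    """
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        sample = random.sample(features, sample_size)
        for model in minimal_solver(sample):
            inliers = [f for f in features if residual_fn(model, f) < threshold]
            if len(inliers) > len(best_inliers):
                best_model, best_inliers = model, inliers
    return best_model, best_inliers
```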

    Radially-Distorted Conjugate Translations

    This paper introduces the first minimal solvers that jointly solve for affine rectification and radial lens distortion from coplanar repeated patterns. Even with imagery from moderately distorted lenses, plane rectification using the pinhole camera model is inaccurate or invalid. The proposed solvers incorporate lens distortion into the camera model and extend accurate rectification to wide-angle imagery, which is now common from consumer cameras. The solvers are derived from constraints induced by the conjugate translations of an imaged scene plane, which are integrated with the division model for radial lens distortion. The hidden-variable trick with ideal saturation is used to reformulate the constraints so that the solvers generated by the Gröbner basis method are stable, small and fast. Rectification and lens distortion are recovered from either one conjugately translated affine-covariant feature or two independently translated similarity-covariant features. The proposed solvers are used in a RANSAC-based estimator, which gives accurate rectifications after a few iterations. The proposed solvers are evaluated against the state of the art and demonstrate significantly better rectifications on noisy measurements. Qualitative results on diverse imagery demonstrate high-accuracy undistortions and rectifications. The source code is publicly available at https://github.com/prittjam/repeats
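
    For concreteness, the one-parameter division model referenced in the abstract maps a radially distorted point back towards its undistorted position as sketched below; the distortion center and the sign convention for `lam` are assumptions of this sketch.

```python
import numpy as np

def undistort_division(x_d, lam, center=(0.0, 0.0)):
    """Undistort an image point under the one-parameter division model:
    x_u = c + (x_d - c) / (1 + lam * r^2), where r is the distance of x_d
    from the distortion center c.
    """
    c = np.asarray(center, dtype=float)
    d = np.asarray(x_d, dtype=float) - c
    r2 = float(d @ d)
    return c + d / (1.0 + lam * r2)
```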

    Bispectrum Inversion with Application to Multireference Alignment

    We consider the problem of estimating a signal from noisy circularly-translated versions of itself, called multireference alignment (MRA). One natural approach to MRA could be to estimate the shifts of the observations first, and then infer the signal by aligning and averaging the data. In contrast, we consider a method based on estimating the signal directly, using features of the signal that are invariant under translations. Specifically, we estimate the power spectrum and the bispectrum of the signal from the observations. Under mild assumptions, these invariant features contain enough information to infer the signal. In particular, the bispectrum can be used to estimate the Fourier phases. To this end, we propose and analyze a few algorithms. Our main methods consist of non-convex optimization over the smooth manifold of phases. Empirically, in the absence of noise, these non-convex algorithms appear to converge to the target signal with random initialization. The algorithms are also robust to noise. We then suggest three additional methods, based on frequency marching, semidefinite relaxation and integer programming. The first two provably recover the phases exactly in the absence of noise. In the high-noise regime, the invariant-features approach to MRA results in stable estimation if the number of measurements scales like the cube of the noise variance, which is the information-theoretic rate. Additionally, it requires only one pass over the data, which is important at low signal-to-noise ratio when the number of observations must be large.
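
    A minimal sketch of the shift-invariant features mentioned above (mean, power spectrum, bispectrum), averaged over the observations, is given below; debiasing of the noise terms and the subsequent phase recovery from the bispectrum are omitted.

```python
import numpy as np

def mra_invariants(observations):
    """Estimate shift-invariant features from noisy, circularly shifted copies
    of a signal: the mean DFT, the power spectrum and the bispectrum, each
    averaged over observations (noise bias terms ignored in this sketch).

    observations : (m, n) array, one noisy shifted copy per row.
    """
    X = np.fft.fft(observations, axis=1)              # (m, n) DFTs
    mean_dft = X.mean(axis=0)                         # only the DC term is shift-invariant
    power = (np.abs(X) ** 2).mean(axis=0)             # |X[k]|^2 is shift-invariant
    # Bispectrum B[k1, k2] = X[k1] * X[k2] * conj(X[k1 + k2]), averaged over observations.
    n = X.shape[1]
    k = np.arange(n)
    B = (X[:, :, None] * X[:, None, :] *
         np.conj(X[:, (k[:, None] + k[None, :]) % n])).mean(axis=0)
    return mean_dft, power, B
```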

    Penalized Composite Quasi-Likelihood for Ultrahigh-Dimensional Variable Selection

    In high-dimensional model selection problems, penalized least-squares approaches have been extensively used. This paper addresses the question of both robustness and efficiency of penalized model selection methods and proposes a data-driven weighted linear combination of convex loss functions, together with a weighted L_1-penalty. The method is completely data-adaptive and does not require prior knowledge of the error distribution. The weighted L_1-penalty is used both to ensure the convexity of the penalty term and to ameliorate the bias caused by the L_1-penalty. In the setting where the dimensionality is much larger than the sample size, we establish a strong oracle property of the proposed method, which possesses both model selection consistency and estimation efficiency for the true non-zero coefficients. As specific examples, we introduce a robust composite L_1-L_2 method and an optimal composite quantile method, and evaluate their performance on both simulated and real data examples.
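
    As a rough sketch of the kind of objective described above, the function below combines an L_1 and an L_2 loss on the residuals with a weighted L_1 penalty on the coefficients; the fixed `loss_weights`, `penalty_weights` and `lam` inputs are illustrative placeholders for the paper's data-driven, adaptively weighted choices.

```python
import numpy as np

def composite_objective(beta, X, y, loss_weights=(0.5, 0.5),
                        penalty_weights=None, lam=1.0):
    """Composite objective: a weighted combination of convex losses on the
    residuals (here L1 and L2, i.e. a composite L1-L2 example) plus a
    weighted L1 penalty on the coefficients.
    """
    r = y - X @ beta
    w1, w2 = loss_weights
    loss = w1 * np.abs(r).mean() + w2 * (r ** 2).mean()
    if penalty_weights is None:
        penalty_weights = np.ones_like(beta)
    return loss + lam * np.sum(penalty_weights * np.abs(beta))
```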

    Robust Estimation and Wavelet Thresholding in Partial Linear Models

    This paper is concerned with a semiparametric partially linear regression model with unknown regression coefficients, an unknown nonparametric function for the non-linear component, and unobservable Gaussian distributed random errors. We present a wavelet thresholding based estimation procedure to estimate the components of the partial linear model by establishing a connection between an l_1-penalty based wavelet estimator of the nonparametric component and Huber's M-estimation of a standard linear model with outliers. Some general results on the large sample properties of the estimates of both the parametric and the nonparametric part of the model are established. Simulations and a real example are used to illustrate the general results and to compare the proposed methodology with other methods available in the recent literature.
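
    The l_1-penalised wavelet step can be made concrete with the soft-thresholding operator, which is the closed-form minimizer of a scalar l_1-penalised least-squares problem; the one-level Haar transform below is only an illustration of that thresholding step under an assumed threshold `t`, not the paper's full estimation procedure for the partial linear model.

```python
import numpy as np

def soft_threshold(c, t):
    """Soft-thresholding: minimizer of 0.5*(c - theta)^2 + t*|theta|,
    i.e. the proximal map of the l1 penalty."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def haar_denoise(y, t):
    """One-level Haar wavelet denoising (len(y) must be even)."""
    y = np.asarray(y, dtype=float).reshape(-1, 2)
    approx = (y[:, 0] + y[:, 1]) / np.sqrt(2.0)   # scaling coefficients
    detail = (y[:, 0] - y[:, 1]) / np.sqrt(2.0)   # wavelet coefficients
    detail = soft_threshold(detail, t)            # shrink only the details
    rec = np.empty(y.size)
    rec[0::2] = (approx + detail) / np.sqrt(2.0)
    rec[1::2] = (approx - detail) / np.sqrt(2.0)
    return rec
```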