74,485 research outputs found

    An iterative Euclidean algorithm

    Get PDF

    Randomized Extended Kaczmarz for Solving Least-Squares

    Full text link
    We present a randomized iterative algorithm that converges exponentially in expectation to the minimum Euclidean norm least-squares solution of a given linear system of equations. The expected number of arithmetic operations required to obtain an estimate of given accuracy is proportional to the squared condition number of the system multiplied by the number of non-zero entries of the input matrix. The proposed algorithm is an extension of the randomized Kaczmarz method that was analyzed by Strohmer and Vershynin. Comment: 19 pages, 5 figures; code is available at https://github.com/zouzias/RE
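    As a rough illustration of the update scheme described above, the NumPy sketch below implements a randomized extended Kaczmarz loop with norm-proportional row and column sampling. The function name, iteration budget, and the tiny test system are illustrative, not the authors' reference code (which is at the repository linked in the comment):

```python
import numpy as np

def randomized_extended_kaczmarz(A, b, num_iters=10000, seed=0):
    """Sketch of a randomized extended Kaczmarz iteration.

    Maintains z (an estimate of the component of b orthogonal to range(A))
    and x (the least-squares iterate).  Rows and columns are sampled with
    probability proportional to their squared Euclidean norms.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A**2, axis=1)
    col_norms = np.sum(A**2, axis=0)
    row_probs = row_norms / row_norms.sum()
    col_probs = col_norms / col_norms.sum()

    x = np.zeros(n)
    z = b.astype(float).copy()
    for _ in range(num_iters):
        # Column step: push z toward the orthogonal complement of range(A).
        j = rng.choice(n, p=col_probs)
        z -= (A[:, j] @ z / col_norms[j]) * A[:, j]
        # Row step: Kaczmarz projection onto the hyperplane A_i x = b_i - z_i.
        i = rng.choice(m, p=row_probs)
        x += ((b[i] - z[i] - A[i] @ x) / row_norms[i]) * A[i]
    return x

# Tiny usage example on an overdetermined system.
A = np.random.default_rng(1).standard_normal((50, 5))
b = np.random.default_rng(2).standard_normal(50)
x_rek = randomized_extended_kaczmarz(A, b)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x_rek - x_ls))   # should be small
```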

    A new ADMM algorithm for the Euclidean median and its application to robust patch regression

    Full text link
    The Euclidean Median (EM) of a set of points Ω in a Euclidean space is the point x minimizing the (weighted) sum of the Euclidean distances of x to the points in Ω. While there exists no closed-form expression for the EM, it can nevertheless be computed using iterative methods such as the Weiszfeld algorithm. The EM has classically been used as a robust estimator of centrality for multivariate data. It was recently demonstrated that the EM can be used to perform robust patch-based denoising of images by generalizing the popular Non-Local Means algorithm. In this paper, we propose a novel algorithm for computing the EM (and its box-constrained counterpart) using variable splitting and the method of augmented Lagrangians. The attractive feature of this approach is that the subproblems involved in the ADMM-based optimization of the augmented Lagrangian can be resolved using simple closed-form projections. The proposed ADMM solver is used for robust patch-based image denoising and is shown to exhibit faster convergence than an existing solver. Comment: 5 pages, 3 figures, 1 table. To appear in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, April 19-24, 201
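    For context, here is a minimal NumPy sketch of the classical Weiszfeld iteration mentioned above, not the ADMM solver proposed in the paper. The safeguard `eps`, the iteration count, and the sample points are illustrative assumptions:

```python
import numpy as np

def weiszfeld(points, weights=None, num_iters=100, eps=1e-9):
    """Weiszfeld iteration for the (weighted) Euclidean median.

    points  : (n, d) array of points in Euclidean space
    weights : optional (n,) array of positive weights
    """
    points = np.asarray(points, dtype=float)
    n, _ = points.shape
    w = np.ones(n) if weights is None else np.asarray(weights, dtype=float)

    x = np.average(points, axis=0, weights=w)   # start from the weighted mean
    for _ in range(num_iters):
        d = np.linalg.norm(points - x, axis=1)
        d = np.maximum(d, eps)                  # guard against division by zero
        inv = w / d
        x = (inv[:, None] * points).sum(axis=0) / inv.sum()
    return x

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
print(weiszfeld(pts))   # the outlier at (10, 10) pulls the median only slightly
```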

    Self-Calibration of Cameras with Euclidean Image Plane in Case of Two Views and Known Relative Rotation Angle

    Full text link
    The internal calibration of a pinhole camera is given by five parameters that are combined into an upper-triangular 3×3 calibration matrix. If the skew parameter is zero and the aspect ratio is equal to one, then the camera is said to have a Euclidean image plane. In this paper, we propose a non-iterative self-calibration algorithm for a camera with Euclidean image plane in case the remaining three internal parameters (the focal length and the principal point coordinates) are fixed but unknown. The algorithm requires a set of N ≥ 7 point correspondences in two views and also the measured relative rotation angle between the views. We show that the problem generically has six solutions (including complex ones). The algorithm has been implemented and tested both on synthetic data and on a publicly available real dataset. The experiments demonstrate that the method is correct, numerically stable and robust. Comment: 13 pages, 7 eps-figures
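    The calibration-matrix structure referred to above can be written down directly. The sketch below, with a made-up focal length and principal point, shows the three remaining unknowns when the skew is zero and the aspect ratio is one:

```python
import numpy as np

def calibration_matrix(f, u0, v0):
    """Upper-triangular calibration matrix for a camera with a Euclidean image
    plane: zero skew and unit aspect ratio leave three unknowns, the focal
    length f and the principal point (u0, v0)."""
    return np.array([[f,   0.0, u0],
                     [0.0, f,   v0],
                     [0.0, 0.0, 1.0]])

K = calibration_matrix(f=800.0, u0=320.0, v0=240.0)   # illustrative values
# A pixel (in homogeneous coordinates) maps to a normalized ray via K^{-1}.
pixel = np.array([400.0, 300.0, 1.0])
print(np.linalg.solve(K, pixel))
```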

    Fuzzy clustering with Minkowski distance

    Get PDF
    Distances in the well-known fuzzy c-means algorithm of Bezdek (1973) are measured by the squared Euclidean distance. Other distances have been used in fuzzy clustering as well. For example, Jajuga (1991) proposed to use the L_1-distance, and Bobrowski and Bezdek (1991) also used the L_infty-distance. For the more general case of the Minkowski distance and the case of using a root of the squared Minkowski distance, Groenen and Jajuga (2001) introduced a majorization algorithm to minimize the error. One of the advantages of iterative majorization is that it is a guaranteed descent algorithm, so that every iteration reduces the error until convergence is reached. However, their algorithm was limited to the case of a Minkowski parameter between 1 and 2, that is, between the L_1-distance and the Euclidean distance. Here, we extend their majorization algorithm to any Minkowski distance with Minkowski parameter greater than (or equal to) 1. This extension also includes the case of the L_infty-distance. We also investigate how well this algorithm performs and present an empirical application.
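    A compact sketch of the baseline fuzzy c-means with squared Euclidean distances (the Bezdek setting cited at the start of the abstract) is given below; the Minkowski/majorization extension discussed in the paper would replace the centroid update. The cluster count, fuzzifier, and toy data are illustrative:

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, num_iters=100, seed=0):
    """Baseline fuzzy c-means with squared Euclidean distances.

    X : (n, d) data, c : number of clusters, m : fuzzifier (> 1).
    Returns (centroids, memberships) where memberships is (n, c).
    """
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)           # membership rows sum to one

    for _ in range(num_iters):
        W = U ** m
        centroids = (W.T @ X) / W.sum(axis=0)[:, None]
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        d2 = np.maximum(d2, 1e-12)
        # Membership update: u_ik proportional to d2_ik^{-1/(m-1)}.
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centroids, U

X = np.vstack([np.random.default_rng(1).normal(0, 1, (50, 2)),
               np.random.default_rng(2).normal(5, 1, (50, 2))])
centers, U = fuzzy_c_means(X, c=2)
print(centers)   # roughly one centroid near (0, 0) and one near (5, 5)
```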

    Estimating a Polya frequency function_2

    Full text link
    We consider non-parametric maximum likelihood estimation in the class of Polya frequency functions of order two, viz. densities with a concave logarithm. This is a subclass of unimodal densities that is fairly rich in general. The NPMLE is shown to be the solution to a convex programming problem in Euclidean space, and an algorithm is devised similar to the iterative convex minorant algorithm of Jongbloed (1999). The estimator achieves Hellinger consistency when the true density is a PFF_2 itself. Comment: Published at http://dx.doi.org/10.1214/074921707000000184 in the IMS Lecture Notes Monograph Series (http://www.imstat.org/publications/lecnotes.htm) by the Institute of Mathematical Statistics (http://www.imstat.org)
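    As a rough illustration of the convex-programming view (not the iterative convex minorant algorithm itself), the sketch below fits a log-concave density on a grid with cvxpy: it maximizes the average log-density at the data minus the integral of the density, with concavity imposed through second differences. The grid size, helper name, and data are assumptions, and an exponential-cone-capable solver must be installed for cvxpy:

```python
import numpy as np
import cvxpy as cp

def logconcave_npmle_grid(x, grid_size=200):
    """Discretized illustration of the log-concave (PFF_2) MLE as a convex
    program over the log-density phi evaluated on a uniform grid."""
    x = np.sort(np.asarray(x, dtype=float))
    t = np.linspace(x[0], x[-1], grid_size)             # grid over the data range
    dt = t[1] - t[0]
    idx = np.searchsorted(t, x).clip(0, grid_size - 1)  # nearest-grid assignment

    phi = cp.Variable(grid_size)                         # log-density on the grid
    loglik = cp.sum(phi[idx]) / len(x)                   # average log-likelihood
    integral = dt * cp.sum(cp.exp(phi))                  # Riemann-sum normalization penalty
    concave = phi[2:] - 2 * phi[1:-1] + phi[:-2] <= 0    # concavity of phi
    cp.Problem(cp.Maximize(loglik - integral), [concave]).solve()
    return t, np.exp(phi.value)

rng = np.random.default_rng(0)
t, density = logconcave_npmle_grid(rng.normal(size=200))
print(density.max(), (t[1] - t[0]) * density.sum())      # unimodal peak, total mass near 1
```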

    Differentially Private Distributed Optimization

    Full text link
    In the distributed optimization and iterative consensus literature, a standard problem is for N agents to minimize a function f over a subset of Euclidean space, where the cost function is expressed as a sum ∑_i f_i. In this paper, we study the private distributed optimization (PDOP) problem with the additional requirement that the cost functions of the individual agents should remain differentially private. The adversary attempts to infer information about the private cost functions from the messages that the agents exchange. Achieving differential privacy requires that any change of an individual's cost function results only in unsubstantial changes in the statistics of the messages. We propose a class of iterative algorithms for solving PDOP, which achieve differential privacy and convergence to the optimal value. Our analysis reveals the dependence of the achieved accuracy and the privacy level on the parameters of the algorithm. We observe that to achieve ε-differential privacy, the accuracy of the algorithm is of the order O(1/ε²).
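    A toy sketch of the message-perturbation idea is shown below: agents with private quadratic costs exchange Laplace-perturbed states over a fully connected network and take diminishing gradient steps on their own costs. The noise scale, decay rate, and step-size schedule are illustrative assumptions, not the algorithm or privacy calibration from the paper:

```python
import numpy as np

def private_distributed_optimization(targets, num_iters=500,
                                     noise_scale=0.5, decay=0.98, seed=0):
    """Toy sketch: N agents minimize sum_i (x - a_i)^2 / 2 by averaging
    Laplace-perturbed neighbor states and taking diminishing gradient steps.
    Only the perturbed states are shared; larger noise (more privacy) leaves
    a less accurate final answer, echoing the trade-off described above."""
    rng = np.random.default_rng(seed)
    a = np.asarray(targets, dtype=float)
    x = np.zeros_like(a)                                  # local estimates
    for k in range(num_iters):
        step = 1.0 / (k + 2)                              # diminishing step size
        scale = noise_scale * decay ** k                  # decaying Laplace noise
        shared = x + rng.laplace(scale=scale, size=x.shape)  # perturbed messages
        x = shared.mean() - step * (x - a)                # consensus + local gradient
    return x

est = private_distributed_optimization([1.0, 2.0, 3.0, 10.0])
print(est.round(2))   # estimates cluster near the optimum, mean(a) = 4.0
```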