18 research outputs found

    A new ADMM algorithm for the Euclidean median and its application to robust patch regression

    The Euclidean Median (EM) of a set of points Ω in a Euclidean space is the point x minimizing the (weighted) sum of the Euclidean distances of x to the points in Ω. While there exists no closed-form expression for the EM, it can nevertheless be computed using iterative methods such as the Weiszfeld algorithm. The EM has classically been used as a robust estimator of centrality for multivariate data. It was recently demonstrated that the EM can be used to perform robust patch-based denoising of images by generalizing the popular Non-Local Means algorithm. In this paper, we propose a novel algorithm for computing the EM (and its box-constrained counterpart) using variable splitting and the augmented Lagrangian method. The attractive feature of this approach is that the subproblems involved in the ADMM-based optimization of the augmented Lagrangian can be solved using simple closed-form projections. The proposed ADMM solver is used for robust patch-based image denoising and is shown to exhibit faster convergence than an existing solver.
    Comment: 5 pages, 3 figures, 1 table. To appear in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, April 19-24, 2015.
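
    As a rough sketch of the variable-splitting idea described above (the paper's exact formulation and solver details may differ), one standard ADMM treatment introduces z_i = x - p_i for each point p_i, so that the z-update becomes a closed-form vector shrinkage and the x-update a simple average. Function and variable names below are illustrative, not taken from the paper.

        import numpy as np

        def euclidean_median_admm(points, weights=None, rho=1.0, iters=200):
            """Weighted Euclidean median of the rows of `points` via a simple ADMM splitting.

            Splitting: minimize sum_i w_i * ||z_i||_2 subject to z_i = x - p_i.
            A sketch of one standard splitting, not necessarily the paper's exact solver.
            """
            p = np.asarray(points, dtype=float)
            n, d = p.shape
            w = np.ones(n) if weights is None else np.asarray(weights, dtype=float)
            x = p.mean(axis=0)              # initialize at the centroid
            z = np.zeros((n, d))            # splitting variables z_i ~ x - p_i
            u = np.zeros((n, d))            # scaled dual variables
            for _ in range(iters):
                # z-update: prox of (w_i/rho)*||.||_2, i.e. block soft-thresholding
                v = x - p + u
                norms = np.linalg.norm(v, axis=1, keepdims=True)
                z = np.maximum(1.0 - (w[:, None] / rho) / np.maximum(norms, 1e-12), 0.0) * v
                # x-update: quadratic subproblem whose minimizer is an average
                x = np.mean(p + z - u, axis=0)
                # dual ascent on the constraint residual x - p_i - z_i
                u = u + (x - p) - z
            return x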

    A parallel implementation of 3D Zernike moment analysis

    Zernike polynomials are a well-known set of functions that find many applications in image and pattern characterization because they allow one to construct shape descriptors that are invariant to translations, rotations, and scale changes. The concepts behind them can be extended to higher-dimensional spaces, making them also suitable for describing volumetric data. They have been used less than their properties might suggest due to their high computational cost. We present a parallel implementation of 3D Zernike moment analysis, written in C with CUDA extensions, which makes it practical to employ Zernike descriptors in interactive applications, yielding a performance of several frames per second on voxel datasets of about 200³ in size. In our contribution, we describe the challenges of implementing 3D Zernike analysis on a general-purpose GPU, including how to deal with the numerical inaccuracies caused by the high precision demands of the algorithm, and how to handle the high volume of input data so that it does not become a bottleneck for the system.
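
    For context, the usual 3D Zernike construction behind these descriptors (following the standard Novotni-Klein formulation, which the abstract does not spell out) combines a radial polynomial with spherical harmonics, Z_{nl}^m(r, θ, φ) = R_{nl}(r) Y_l^m(θ, φ), and defines the moments of a voxelized shape f supported in the unit ball as

        \Omega_{nl}^{m} \;=\; \frac{3}{4\pi} \int_{\|\mathbf{x}\| \le 1} f(\mathbf{x})\, \overline{Z_{nl}^{m}(\mathbf{x})}\, d\mathbf{x},
        \qquad
        F_{nl} \;=\; \Big( \sum_{m=-l}^{l} \big|\Omega_{nl}^{m}\big|^{2} \Big)^{1/2}.

    The norms F_{nl} are invariant under rotations, while translation and scale invariance come from normalizing the object into the unit ball before the moments are computed.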

    Master index of Volumes 21–30


    Computing medians and means in Hadamard spaces

    The geometric median as well as the Fréchet mean of points in a Hadamard space are important in both theory and applications. Surprisingly, no algorithms for their computation have hitherto been known. To address this issue, we use a splitting version of the proximal point algorithm for minimizing a sum of convex functions and prove that this algorithm produces a sequence converging to a minimizer of the objective function, which extends a recent result of D. Bertsekas (2001) to Hadamard spaces. The method is quite robust: not only does it yield algorithms for the median and the mean, but it also applies to various other optimization problems. We moreover show that another algorithm for computing the Fréchet mean can be derived from the law of large numbers due to K.-T. Sturm (2002). In applications, computing medians and means is probably most needed in tree space, an instance of a Hadamard space invented by Billera, Holmes, and Vogtmann (2001) as a tool for averaging phylogenetic trees; it turns out, however, that tree space can also be used to model numerous other tree-like structures. Since there now exists a polynomial-time algorithm for computing geodesics in tree space, due to M. Owen and S. Provan (2011), we obtain efficient algorithms for computing medians and means which can be directly used in practice.
    Comment: Corrected version. Accepted in SIAM Journal on Optimization.
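
    As a rough illustration of the law-of-large-numbers approach mentioned above: starting from one random sample, repeatedly move toward a freshly drawn sample along the geodesic, with step size 1/(k+1) at step k. The sketch below uses straight-line segments as the geodesic, which is only a Euclidean stand-in (and there simply reproduces a running average); in tree space the geodesic step would come from the Owen-Provan algorithm.

        import numpy as np

        def inductive_mean(samples, n_draws=10000, seed=0):
            """Sketch of a Sturm-style inductive estimate of the Frechet mean.

            geodesic(a, b, t) returns the point a fraction t of the way from a to b;
            here it is the Euclidean segment, a placeholder for the Hadamard-space geodesic.
            """
            rng = np.random.default_rng(seed)
            pts = np.asarray(samples, dtype=float)

            def geodesic(a, b, t):
                return (1.0 - t) * a + t * b   # replace with the space's actual geodesic

            x = pts[rng.integers(len(pts))].copy()
            for k in range(1, n_draws):
                y = pts[rng.integers(len(pts))]         # draw a point uniformly at random
                x = geodesic(x, y, 1.0 / (k + 1))       # move 1/(k+1) of the way toward it
            return x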

    Network Lasso: Clustering and Optimization in Large Graphs

    Convex optimization is an essential tool for modern data analysis, as it provides a framework to formulate and solve many problems in machine learning and data mining. However, general convex optimization solvers do not scale well, and scalable solvers are often specialized to work only on a narrow class of problems. Therefore, there is a need for simple, scalable algorithms that can solve many common optimization problems. In this paper, we introduce the network lasso, a generalization of the group lasso to a network setting that allows for simultaneous clustering and optimization on graphs. We develop an algorithm based on the Alternating Direction Method of Multipliers (ADMM) to solve this problem in a distributed and scalable manner, which allows for guaranteed global convergence even on large graphs. We also examine a non-convex extension of this approach. We then demonstrate that many types of problems can be expressed in our framework. We focus on three in particular: binary classification, predicting housing prices, and event detection in time series data. Comparing the network lasso to baseline approaches on these tasks, we show that it is both a fast and accurate method for solving large optimization problems.
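
    To make the objective concrete: the network lasso problem is min over {x_i} of Σ_i f_i(x_i) + λ Σ_{(j,k)∈E} w_jk ||x_j - x_k||_2, where the edge penalty pulls neighboring nodes toward a common value and thereby induces the clustering behavior. Below is a sketch of the closed-form edge subproblem that arises when ADMM is applied to this penalty; the names and the exact bookkeeping are illustrative and may differ from the paper's derivation.

        import numpy as np

        def edge_update(a, b, lam_w, rho):
            """Minimize  lam_w*||z1 - z2||_2 + (rho/2)*(||z1 - a||^2 + ||z2 - b||^2).

            This is the per-edge ADMM subproblem for the penalty lam_w*||x_j - x_k||_2,
            with a and b standing in for the (dual-adjusted) copies held by nodes j and k.
            """
            gap = np.linalg.norm(a - b)
            if gap == 0.0:
                return a.copy(), b.copy()              # copies already agree
            theta = max(1.0 - lam_w / (rho * gap), 0.5)
            z1 = theta * a + (1.0 - theta) * b
            z2 = (1.0 - theta) * a + theta * b
            return z1, z2

    When lam_w is large relative to rho*||a - b||, theta clamps at 1/2 and both copies fuse to the midpoint of a and b; this fusing of neighboring variables is what produces clusters in the solution.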